Bibliographic Data

AUTHOR(S) Natalya N. Bazarova, Aarushi Bhandari, Dominic DiFranzo, Marie Ozanne
AFFILIATION(S) Cornell University School of Industrial and Labor Relations; Lehigh University, P.C. Rossin College of Engineering and Applied Science, Bethlehem, PA, USA
YEAR 2022
TYPE Article
JOURNAL Big Data & Society
ISSN 2053-9517
E-ISSN 2053-9517
PUBLISHER Sage Publications Ltd
DOI 10.1177/20539517221115666
ADDED ON 2025-08-18

Abstract

This study examines how the visibility of a content moderator and the ambiguity of moderated content influence perceptions of the moderation system in a social media environment. In a two-day pre-registered experiment conducted in a realistic social media simulation, participants encountered moderated comments that were either unequivocally harsh or ambiguously worded, and the source of moderation was either unidentified or attributed to other users or an automated system (AI). The results show that when comments were moderated by an AI rather than by other users, participants perceived less accountability in the moderation system and had less trust in the moderation decision, especially for ambiguously worded harassment as opposed to clear harassment cases. However, no differences emerged in perceived moderation fairness, objectivity, or participants' confidence in their understanding of the moderation process. Overall, our study demonstrates that users tend to question the moderation decision and system more when an AI moderator is visible, which highlights the complexity of effectively managing the visibility of automatic content moderation in the social media environment.