Bibliographic Data

AUTHOR(S) Tarleton Gillespie
AFFILIATION(S) Cornell University School of Industrial and Labor Relations
YEAR 2022
TYPE Article
JOURNAL Social Media + Society
ISSN 2056-3051
E-ISSN 2056-3051
DOI 10.1177/20563051221117552
CITATIONS 8
ADDED ON 2025-08-18

Abstract

Public debate about content moderation has overwhelmingly focused on removal: social media platforms deleting content and suspending users, or opting not to do so. However, removal is not the only available remedy. Reducing the visibility of problematic content is becoming a commonplace element of platform governance. Platforms use machine learning classifiers to identify content they judge misleading enough, risky enough, or offensive enough that, while it does not warrant removal according to the site guidelines, it does warrant demotion in algorithmic rankings and recommendations. In this essay, I document this shift and explain how reduction works. I then raise questions about what it means to use recommendation as a means of content moderation.
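The "reduction" mechanism the abstract describes can be sketched as a ranking step: a classifier flags borderline content, and instead of deleting it, the ranking score is multiplied down so the item surfaces less often. This is a minimal illustrative sketch, not the article's or any platform's actual implementation; all field names, the threshold, and the demotion factor are hypothetical.

```python
def apply_reduction(items, threshold=0.8, demotion=0.5):
    """Demote, rather than remove, items a classifier flags as borderline.

    `items` is a list of dicts with a relevance `score` and a classifier
    `borderline` probability (hypothetical names). Flagged items stay on
    the platform but are multiplied down in the ranking.
    """
    ranked = []
    for item in items:
        score = item["score"]
        if item["borderline"] >= threshold:
            score *= demotion  # reduce visibility instead of deleting
        ranked.append((score, item["id"]))
    ranked.sort(reverse=True)
    return [item_id for _, item_id in ranked]

items = [
    {"id": "a", "score": 0.9, "borderline": 0.95},  # flagged: demoted
    {"id": "b", "score": 0.7, "borderline": 0.10},  # not flagged
]
print(apply_reduction(items))  # → ['b', 'a']
```

The key design point the essay highlights is that the content in this scheme is never removed: the intervention happens entirely inside the recommendation layer, which is what makes it less visible to users and harder to contest.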
