Bibliographic Data

AUTHOR(S) J. Zhang, C. Shen, H. Xue, Magdalena Wojcieszak
AFFILIATION(S) Department of Communication, University of California, Davis
YEAR 2024
TYPE Article
JOURNAL Human Communication Research
ISSN 0360-3989
E-ISSN 1468-2958
PUBLISHER Oxford University Press
DOI 10.1093/hcr/hqae007
CITATIONS 1
ADDED ON 2025-08-18

Abstract

Fact-checking labels have been widely accepted as an effective misinformation correction method. However, there is limited theoretical understanding of fact-checking labels' impact. This study theorizes that language intensity influences fact-checking label processing and tests this idea through a multi-method design. We first rely on a large-scale observational dataset of fact-checking labels from 7 U.S. fact-checking organizations (N = 33,755) to examine the labels' language intensity, and then use a controlled online experiment in the United States (N = 656) to systematically test the causal effects of fact-checking label intensity (low, moderate, or high) and fact-checking source (professional journalists or artificial intelligence) on the perceived credibility of, and intention to engage with, fact-checking messages. We found that two-thirds of existing labels were intense. Such high-intensity labels had null effects on messages' perceived credibility, yet decreased engagement intention, especially when labels were attributed to AI. Using more intense labels may therefore not be an effective fact-checking approach.