Bibliographic Data

AUTHOR(S) Étienne Ollion, Samuel Coavoux, Patrick Präg, Julien Boelaert, Ivaylo Petev
AFFILIATION(S) CREST, École Polytechnique, Institut Polytechnique de Paris, Paris, France; CREST, ENSAE, Institut Polytechnique de Paris, Paris, France; CERAPS, Faculté des sciences juridiques, politiques et sociales, Université de Lille, France; CREST, CNRS, Institut Polytechnique de Paris, Paris, France
YEAR 2025
TYPE Article
JOURNAL Sociological Methods & Research
ISSN 0049-1241
E-ISSN 1552-8294
PUBLISHER SAGE Publications
DOI 10.1177/00491241251330582
CITATIONS 2
ADDED ON 2025-08-18

Abstract

Generative artificial intelligence (AI) is increasingly presented as a potential substitute for humans, including as research subjects. However, there is no scientific consensus on how closely these in silico clones can emulate survey respondents. While some defend the use of these 'synthetic users,' others point toward social biases in the responses provided by large language models (LLMs). In this article, we demonstrate that these critics are right to be wary of using generative AI to emulate respondents, but probably not for the right reasons. Our results show (i) that to date, models cannot replace research subjects for opinion or attitudinal research; (ii) that they display a strong bias and a low variance on each topic; and (iii) that this bias randomly varies from one topic to the next. We label this pattern 'machine bias,' a concept we define, and whose consequences for LLM-based research we further explore.
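The "machine bias" pattern described above lends itself to a simple quantitative reading: for each topic, bias is the gap between the model's mean answer and the human benchmark mean, while variance measures how tightly the model's answers cluster. A minimal sketch of that computation in Python, using synthetic and purely illustrative numbers (the topic names, response scale, and distributions are assumptions for illustration, not data or code from the paper):

```python
import numpy as np

# Synthetic, illustrative data only -- NOT results from the paper.
# Continuous scores on a 1-5 scale stand in for survey answers:
# repeated "LLM" responses per topic versus a human benchmark sample.
rng = np.random.default_rng(0)
topics = ["immigration", "climate", "taxation"]

# Hypothetical pattern matching the abstract: on each topic the model
# clusters tightly (low variance) around a point that may sit far from
# the human mean (strong bias), and that point shifts arbitrarily from
# one topic to the next.
llm = {t: rng.normal(loc=rng.uniform(1, 5), scale=0.3, size=200) for t in topics}
human = {t: rng.normal(loc=3.0, scale=1.2, size=200) for t in topics}

for t in topics:
    bias = llm[t].mean() - human[t].mean()  # deviation from the human mean
    print(f"{t:11s} bias={bias:+.2f}  "
          f"LLM var={llm[t].var():.2f}  human var={human[t].var():.2f}")
```

Under these assumptions, the printout shows a large, topic-dependent bias alongside an LLM variance much smaller than the human one, which is the pattern the authors label "machine bias."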
