Bibliographic Data

AUTHOR(S) Laura Stafford, Claire Preston, Alexandra C. Pike
AFFILIATION(S) Department of Psychology, University of York, UK
YEAR 2024
TYPE Article
JOURNAL International Journal of Qualitative Methods
E-ISSN 1609-4069
PUBLISHER SAGE Publications Inc.
DOI 10.1177/16094069241286417
CITATIONS 1
ADDED ON 2025-08-18

Abstract

Large language models (LLMs), one application of artificial intelligence, experienced a surge in users between 2022 and 2023. During this time, we were conducting online focus groups in which participants insisted on responding via the chat box feature. Several of these chat box responses raised our concern that they were LLM-generated. Of the 42 participants who typed a chat box response during a focus group, we identified 9 as potentially providing LLM-generated answers, and we present their responses with the highest similarity score to an LLM answer. Given the growth and improvement of LLMs, we believe this issue is likely to increase in frequency. In response, in this article we reflect on (1) strategies to prevent participants from using LLMs, (2) indicators that LLMs may be in use, (3) the fallibility of identifying LLM-generated responses, (4) philosophical frameworks that may permit LLM responses to be incorporated into analyses, and (5) procedures researchers may follow to evaluate the influence of LLM responses on their results.
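The abstract refers to a similarity score between each suspect response and an LLM answer but does not specify the metric used. As a non-authoritative illustration only, the minimal Python sketch below scores typed responses against a reference LLM answer using TF-IDF cosine similarity; the metric choice, variable names, and example texts are assumptions, not the authors' method.

    # Hypothetical sketch (not the authors' procedure): score each typed
    # chat box response against a reference answer generated by an LLM,
    # using TF-IDF cosine similarity. All texts and names are illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    llm_answer = (
        "A reference answer obtained by prompting an LLM with the same "
        "focus group question posed to participants."
    )
    participant_responses = [
        "Typed chat box response from participant A ...",
        "Typed chat box response from participant B ...",
    ]

    # Fit one shared TF-IDF vocabulary over all texts, then compute the
    # cosine similarity of each response to the LLM reference answer.
    vectors = TfidfVectorizer().fit_transform([llm_answer] + participant_responses)
    scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

    # Rank responses by similarity; a high score flags a candidate for
    # closer manual review, not proof of LLM use.
    for score, response in sorted(zip(scores, participant_responses), reverse=True):
        print(f"{score:.2f}  {response[:60]}")

Because paraphrase and shared topic vocabulary inflate lexical overlap, a score like this can only flag responses for inspection, which is consistent with the article's point about the fallibility of identifying LLM-generated responses.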

Tools