Recognition of AI Hallucinations in an Expert Declaration in an AI Misinformation Case

In this statement, I aim to clarify the origin of certain citation errors in my recent expert declaration concerning the psychological and social effects of deepfakes. I detail my qualifications in this domain, the context in which the declaration was produced, my research methodology, and how AI tools influenced my process. These errors, while unfortunate, do not diminish the scientific validity of the evidence or the opinions presented. My expertise rests on more than 15 studies related to AI and communication, including a highly cited foundational paper on AI-Mediated Communication and a special issue on the social implications of deepfakes, which establish my credentials to address the intricate dynamics of technology, trust, and misinformation.

The urgency of my work increased significantly following the launch of ChatGPT in 2022, which spurred advancements in tools that facilitate deepfake creation. I have kept abreast of the field’s rapid developments, as evidenced by the notable citation rate of our special issue. Since late 2022, I have published five peer-reviewed papers exploring the nexus of AI, trust, and communication. My role as a co-founder of the Journal of Online Trust and Safety underscores my commitment to analyzing misinformation and deepfakes, and my position as a professor reinforces my focus on the intersection of language and technology in graduate education.

My approach to drafting the declaration involved three key phases: surveying the relevant literature, analyzing findings, and drafting the document itself. I first revisited existing research on deepfakes to ensure a comprehensive understanding, using tools such as Google Scholar and GPT-4o for literature identification and analysis. Deepfake research spans multiple disciplines, and these resources allowed me to synthesize my prior knowledge with the latest findings. I then conducted a meticulous review to structure my insights, which formed the rigorous backdrop against which I prepared the declaration.

During the drafting stage, GPT-4o became central to both outlining content and suggesting citations. It was in this phase that the citation errors, often termed “hallucinations,” occurred. Specifically, the placeholder notes I had inserted as reminders to add sources later were interpreted by GPT-4o as instructions to generate citations. As a result, non-existent references were inserted into two paragraphs that I had intended to revisit for proper sourcing. This misunderstanding stemmed from my reliance on GPT-4o as a generative AI research tool during drafting.

Upon reviewing the declaration, two citation errors became apparent: a claim about the influence of deepfake videos was misattributed to Hwang et al. (2023) when it should have been credited to Vaccari & Chadwick (2020), and a second claim mistakenly cited De Keersmaecker & Roets (2023) instead of Hancock & Bailenson (2021). Both correct citations provide robust empirical support for the arguments in question, illustrating how deepfakes undermine trust and exploit cognitive biases. Correcting these errors has been the focus of my reflection, so that the integrity of the research presented is upheld.

In closing, I reaffirm the validity of the substantive arguments in my declaration despite the mistakes that were made. The citations that do appear accurately already substantiate the claims regarding the impacts of deepfakes, with substantial backing from esteemed publications in the field. The essence of my findings and conclusions remains intact, as the scientific literature, including Hancock & Bailenson (2021) and Vaccari & Chadwick (2020), complements and affirms the core tenets of my declaration regarding deepfakes and their implications for trust and credibility in media.
