AI Hallucinations Evident in Misinformation Expert’s Court Filing Defending Anti-Deepfake Legislation
Minnesota recently enacted a law intended to combat misleading AI-generated deepfakes that could manipulate election outcomes. The legislation is now being challenged on First Amendment grounds in Kohls v. Ellison. To defend the law, the government submitted an expert declaration from a scholar affiliated with the Stanford Internet Observatory who specializes in the intersection of AI and misinformation. The declaration highlights the difficulty of verifying deepfakes, noting that the technologies used to create them can produce convincing likenesses of individuals’ appearances and voices. One study cited in the declaration reports that even when people are aware of deepfakes, they frequently struggle to distinguish authentic from altered content, a problem aggravated by the rapid spread of such material on social media platforms.
The case took an unexpected turn when the plaintiffs filed a memorandum to exclude the expert declaration, arguing that the cited studies do not exist, including one on the influence of deepfake videos on political behavior. Upon investigation, the article attributed to Hwang et al. proved to be nonexistent; rather than a genuine citation, it appears to be a “hallucination” generated by an AI model such as ChatGPT. The reference carried a plausible title and named a real-sounding journal, but neither the DOI (Digital Object Identifier) link nor the corresponding publication could be found. The episode illustrates how AI models can fabricate convincing yet erroneous scholarly references.
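For readers who want to run this kind of check themselves, the sketch below shows one way to test programmatically whether a cited DOI actually resolves to a registered publication, using the public Crossref REST API. It is a minimal, illustrative script, not part of the court record; the DOI string and function name are placeholders chosen for this example.

```python
import requests

CROSSREF_WORKS = "https://api.crossref.org/works/"

def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Check whether Crossref has a metadata record for the given DOI.

    A 404 from the works endpoint means no such DOI is registered,
    which is the typical signature of a fabricated ("hallucinated") citation.
    """
    resp = requests.get(CROSSREF_WORKS + doi, timeout=timeout)
    if resp.status_code == 404:
        return False
    resp.raise_for_status()
    return True

if __name__ == "__main__":
    # Placeholder DOI for illustration only; substitute the DOI from the
    # citation being checked.
    suspect_doi = "10.1234/nonexistent.2023.001"
    print("registered" if doi_exists(suspect_doi) else "no record found")
```

A registered DOI is only a first filter, of course: a real record must also match the cited authors, title, and journal before the reference can be trusted.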
Moreover, searches on common engines such as Google and Bing turned up no trace of the asserted citation, and scholarly platforms such as Google Scholar likewise returned nothing related to the supposed study. This raises serious questions about the reliability of AI-generated content, particularly when it is used in legal proceedings. The phenomenon of “hallucination” in AI, where a model generates content that appears accurate but is unfounded, poses significant challenges not only to researchers but also to legal practitioners who rely on expert testimony and declarations in court.
The problems with the declaration extend beyond that initial citation. Another reference, purportedly concerning cognitive processes in the acceptance of misinformation, also could not be verified when cross-referenced. These fabrications highlight the peril of relying on AI outputs without thorough verification. Researchers and legal professionals alike should exercise caution when using AI-generated citations or AI-assisted expert declarations, particularly in contexts that demand a high degree of accuracy and credibility.
The flaws in the expert declaration offer a cautionary tale about the intersection of artificial intelligence and legal process. As the plaintiffs challenged the law on First Amendment grounds, it became evident that some of the evidence offered in its defense may rest on fictitious or misleading sources. The episode underscores the need for continual diligence in evaluating sources of information, particularly as the technology evolves and the capacity for misinformation grows.
In conclusion, the complexity of AI-generated content, as illustrated by this Minnesota case, calls for a reevaluation of how such material is used in both academic and legal contexts. The case demonstrates the need for stringent checks when relying on technology that can fabricate information, since the stakes in political and legal arenas are extraordinarily high. As the government seeks to defend its anti-deepfake law, the legal system must contend not only with the intricacies of AI itself but also with the authenticity of the scholarly references offered in the law’s support. The episode may well prompt future guidelines on the use of AI in an era when misinformation can easily undermine democratic processes and foundational legal principles.