A federal lawsuit challenging Minnesota’s “Use of Deep Fake Technology to Influence an Election” law has brought artificial intelligence into the spotlight. In a recent development, attorneys opposing the law have raised concerns that an affidavit submitted in its defense may contain AI-generated content. According to the Minnesota Reformer, the affidavit, prepared by Stanford Social Media Lab founding director Jeff Hancock at the request of Attorney General Keith Ellison, includes references to non-existent studies—errors indicative of text generated by a large language model (LLM) such as ChatGPT.
One citation in Hancock’s affidavit refers to a 2023 study titled “The Influence of Deepfake Videos on Political Attitudes and Behavior,” allegedly published in the Journal of Information Technology & Politics. The Minnesota Reformer, however, found no record of such a study in that journal or in any other academic publication. Another cited source, “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance,” also appears to be fictional.
The revelation has prompted criticism from attorneys representing the plaintiffs in the case, including Minnesota state Representative Mary Franson and Christopher Kohls, a conservative YouTuber known as Mr. Reagan. In a court filing, the plaintiffs’ lawyers stated, “The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT.” They added that the presence of such fabricated citations undermines the credibility of the entire affidavit, especially since it lacks clear methodology or analytical rigor.
Jeff Hancock has not responded to requests for comment, including one from The Verge, leaving unanswered questions about how these fabricated citations made their way into the document. The incident raises concerns about the use of generative AI tools in preparing official or expert submissions for legal proceedings, especially when the technology can “hallucinate,” or produce false but plausible-sounding information.
The Minnesota law at the center of the lawsuit seeks to address the growing concern over deepfake technology being used to manipulate elections. Deepfakes, powered by advanced AI, can create hyper-realistic but false audio and video content, posing a significant risk to political integrity. Critics argue that such technology can spread disinformation, sway public opinion, and undermine trust in democratic processes. However, the lawsuit challenges the law on free speech grounds, asserting that its provisions are overly broad and could suppress legitimate expression.
The affidavit by Hancock was intended to support the state’s argument by providing academic insights into the psychological and social impacts of deepfake technology on political attitudes and behavior. However, the inclusion of non-existent sources has cast doubt on its reliability. The plaintiffs’ legal team argues that these flaws reflect a lack of thoroughness in the affidavit’s preparation and call into question the broader claims it makes in defense of the law.
This controversy highlights a growing issue in the intersection of AI and the legal system: the reliance on AI-generated content without adequate verification. Large language models like ChatGPT are powerful tools capable of generating sophisticated text based on prompts. While they can aid in drafting, summarizing, and researching, they are prone to fabricating information, particularly when asked to provide citations or specific details. These “hallucinations” can be difficult to detect without careful fact-checking, especially in contexts where the reader might assume accuracy.
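To illustrate the kind of fact-checking that can catch a fabricated reference, the sketch below, a hypothetical example not drawn from the case, looks up a cited title through Crossref’s public REST API and flags it for manual review if no closely matching work is indexed. The function name, the matching heuristic, and the use of the Python requests library are illustrative assumptions; a real review would also check authors, year, venue, and other databases before concluding that a citation is fabricated.

```python
# Hypothetical sketch: check whether a cited title appears in the Crossref index.
# Assumes the third-party "requests" library; absence of a match is only a flag
# for manual review, not proof that a citation was fabricated.
import requests

def find_in_crossref(title: str, rows: int = 5) -> list[dict]:
    """Return candidate works from Crossref whose records may match `title`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Crossref stores titles as lists; keep the first title and the DOI for review.
    return [
        {"title": (item.get("title") or [""])[0], "doi": item.get("DOI")}
        for item in items
    ]

if __name__ == "__main__":
    cited = "The Influence of Deepfake Videos on Political Attitudes and Behavior"
    candidates = find_in_crossref(cited)
    exact = [c for c in candidates if c["title"].strip().lower() == cited.lower()]
    if exact:
        print("Possible match found:", exact[0])
    else:
        print("No exact title match in Crossref; citation needs manual review.")
```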
The incident underscores the need for stricter protocols when using AI in official or legal contexts. Experts warn that as AI tools become more prevalent, there is a growing risk of unintentional errors or deliberate misuse. The legal profession, in particular, faces challenges in balancing the efficiency offered by AI with the need for accuracy and accountability.
In this case, the implications extend beyond the affidavit itself. If the court finds the document unreliable, it could weaken the state’s defense of the deepfake law, potentially influencing the outcome of the lawsuit. More broadly, the incident raises questions about the preparedness of policymakers, legal professionals, and academics to navigate the complexities introduced by AI-generated content.
As AI continues to evolve, it is increasingly clear that its integration into sensitive areas like law, governance, and academia requires rigorous oversight. Verification processes, ethical guidelines, and clear standards for AI use will be crucial in preventing similar incidents and ensuring that AI serves as a tool for progress rather than a source of confusion or controversy.
In conclusion, the use of AI-generated text in Jeff Hancock’s affidavit has brought unexpected scrutiny to Minnesota’s deepfake law and its defense. While the law aims to address a legitimate concern about the misuse of technology, this case illustrates the challenges of relying on emerging AI tools in high-stakes scenarios. It serves as a cautionary tale for professionals and institutions as they grapple with the implications of AI in their work.