Jeff Hancock, a prominent misinformation expert and founder of the Stanford Social Media Lab, has come under scrutiny after admitting he used ChatGPT to help organize citations for a legal document. The AI introduced citation inaccuracies, commonly referred to as “hallucinations,” prompting critics to question the reliability of the entire filing. Despite the errors, Hancock maintains that the mistakes do not undermine the document’s substantive arguments.
The affidavit in question was submitted in support of Minnesota’s “Use of Deep Fake Technology to Influence an Election” law, which is being challenged in federal court. The challengers are Christopher Kohls, a conservative YouTuber known as Mr. Reagan, and Minnesota state Rep. Mary Franson. Attorneys for Kohls and Franson flagged Hancock’s filing for citing non-existent sources, calling it “unreliable” and asking the court to exclude it from the case.
In a follow-up declaration filed last week, Hancock acknowledged using ChatGPT while drafting the affidavit but clarified that the tool did not write its substance. “I wrote and reviewed the substance of the declaration and stand firmly behind every claim made in it,” Hancock stated. He emphasized that each claim is supported by the most recent scholarly research in the field and reflects his expert opinion on how AI affects misinformation and its broader societal implications.
Hancock explained the source of the errors: he had used a combination of Google Scholar and GPT-4o to identify articles potentially relevant to the affidavit, aiming to merge his existing knowledge with new academic research. While he was using GPT-4o to compile a citation list, the tool introduced two citations to works that do not exist and attributed incorrect authors to a third.
“I did not intend to mislead the Court or counsel,” Hancock wrote in his latest filing, expressing sincere regret for any confusion the citation errors caused. Even so, he reiterated his confidence in the document’s substantive points, asserting that they are unaffected by the citation issues.
The affidavit and subsequent revelations have drawn attention to the increasing use of AI tools like ChatGPT in professional and academic settings. While these tools can streamline tasks such as organizing citations, their propensity for generating incorrect or fabricated information raises questions about their reliability in high-stakes contexts like legal filings. Hancock’s experience serves as a cautionary tale about the importance of thorough verification when using AI-generated outputs.
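One practical safeguard is to verify machine-suggested references against a bibliographic database before filing. The minimal sketch below is an illustration of that kind of check, not a description of Hancock’s actual workflow: it queries the public Crossref REST API (api.crossref.org) to confirm that each reference’s DOI resolves to a real record. The citation list is hypothetical sample data, and the premise that every reference carries a DOI is an assumption of the example.

```python
# Minimal sketch: flag AI-suggested references whose DOIs do not resolve
# in Crossref. The citation list below is hypothetical sample data.
import requests

CROSSREF = "https://api.crossref.org/works/"

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI (HTTP 200)."""
    try:
        resp = requests.get(CROSSREF + doi, timeout=10)
        return resp.status_code == 200
    except requests.RequestException:
        return False  # network failure: treat as unverified

# Hypothetical references an LLM might suggest; all entries are
# placeholders except the first, which is a real Nature paper.
citations = {
    "Array programming with NumPy": "10.1038/s41586-020-2649-2",
    "A study that may not exist": "10.0000/placeholder.12345",
}

for title, doi in citations.items():
    verdict = "resolves" if doi_exists(doi) else "DOES NOT RESOLVE - check manually"
    print(f"{title}: {verdict}")
```

Note that a check like this catches only wholly fabricated records: a hallucinated citation can also attach a real DOI to the wrong title or authors, so the returned metadata still needs a human comparison.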
Critics argue that the citation inaccuracies undermine the credibility of Hancock’s affidavit, particularly given its role in a legal dispute involving the regulation of deep fake technology. They contend that reliance on AI without rigorous oversight compromises the integrity of expert testimony and, by extension, the legal process. However, Hancock and his supporters maintain that the affidavit’s central arguments remain sound, supported by valid and well-researched evidence.
The controversy surrounding Hancock’s affidavit also highlights the challenges experts face at the intersection of emerging AI technologies and traditional professional standards. AI tools offer real gains in productivity and efficiency, but their limitations demand careful verification and clear accountability for errors.
Hancock’s admission and subsequent clarification reflect an effort to address these issues transparently. His acknowledgment of the citation errors and expression of regret underscore the need for vigilance in ensuring accuracy, especially when presenting expert opinions in a legal context. At the same time, his defense of the affidavit’s substantive points signals his confidence in the underlying research and conclusions.
As the case over Minnesota’s deep fake law continues, the court will decide whether to exclude Hancock’s affidavit because of the citation inaccuracies. Whatever the outcome, the incident is a reminder of the need to balance innovation with responsibility, particularly in areas as consequential as misinformation and legal proceedings.
In conclusion, AI tools like ChatGPT and GPT-4o can be valuable aids, but their output must be checked thoroughly so that errors do not compromise the credibility of professional work. Hancock’s experience illustrates both the benefits and the pitfalls of integrating AI into expert-driven fields, and why human verification remains indispensable.