Quality and Veracity of Communications Expert Witness' Declaration Called into Question on Account of AI Use

A YouTuber and a Minnesota state representative filed a lawsuit claiming that the state’s new law criminalizing the use of deepfakes to influence elections violates their First Amendment right to free speech.

Jeff Hancock, the founding director of Stanford’s Social Media Lab, submitted his expert opinion earlier this month. His opinion included a reference to a study that purportedly found “even when individuals are informed about the existence of deepfakes, they may still struggle to distinguish between real and manipulated content.” But the plaintiffs’ attorney contended that the study Hancock cited—titled “The Influence of Deepfake Videos on Political Attitudes and Behavior” and published in the Journal of Information Technology & Politics—did not actually exist.

Communications Expert Witness

Jeff Hancock is the founding director of the Stanford Social Media Lab and the Harry and Norman Chandler Professor of Communication at Stanford University. Professor Hancock and his group work on understanding psychological and interpersonal processes in social media. The team specializes in using computational linguistics and experiments to understand how the words we use can reveal psychological and social dynamics, such as deception and trust, emotional dynamics, intimacy and relationships, and social support.

Discussion by the Court

Plaintiffs argued that the study was a “hallucination” generated by an AI large language model such as ChatGPT, and that a declaration that is even partly fabricated is unreliable.

The citation bears the hallmarks of an artificial intelligence (AI) “hallucination,” suggesting that at least this citation was generated by a large language model like ChatGPT. Plaintiffs did not know how the hallucination wound up in Hancock’s declaration, but they argued that it calls the entire document into question, especially because much of its commentary contains no methodology or analytic logic whatsoever.

Moreover, Plaintiffs alleged that neither the title of the purported article nor even a snippet of it appears anywhere on the internet as indexed by Google and Bing, the most commonly used search engines. Searching Google Scholar, a specialized search engine for academic papers and patent publications, likewise reveals no article matching the citation’s description: a paper authored by “Hwang” that includes the term “deepfake.”
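This kind of existence check is easy to automate against a scholarly-metadata index. The sketch below (not part of the court record) queries the public Crossref REST API for the cited title and reports whether any indexed work matches it exactly; the citation_exists helper and its exact-title heuristic are illustrative assumptions, and a negative result shows only that Crossref has no matching record, not definitive proof that a work does not exist.

```python
import requests

def citation_exists(title: str, rows: int = 5) -> bool:
    """Ask the public Crossref API whether any indexed work's title
    exactly matches `title` (case-insensitive). Illustrative only:
    absence from Crossref is suggestive, not conclusive."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    wanted = title.strip().lower()
    # Crossref returns each work's title as a (possibly empty) list.
    return any(
        wanted == (item.get("title") or [""])[0].strip().lower()
        for item in items
    )

if __name__ == "__main__":
    cited = ("The Influence of Deepfake Videos on "
             "Political Attitudes and Behavior")
    print("Found in Crossref:", citation_exists(cited))
```

The plaintiffs relied on Google, Bing, and Google Scholar; Crossref is simply one more index that a diligent expert, or opposing counsel, could consult before citing or challenging a source.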

The existence of a fictional citation that Hancock (or his assistants) did not even bother to click through and verify calls into question the quality and veracity of the entire declaration.

Key Takeaway:

A well-published academic from Stanford was accused of spreading AI-generated misinformation despite being retained to testify in favor of a law designed to keep AI-generated misinformation out of elections. The irony was not lost on anyone.

Case Details:

Case Caption: Kohls et al. v. Ellison et al.
Docket Number: 0:24-cv-3754
Court: United States District Court, District of Minnesota
