Reimagining National Security
Deepfakes in Court: How Judges Can Proactively Manage Alleged AI-Generated Material in National Security Cases
Abhishek Dalal
Pritzker School of Law, Northwestern University.
Chongyang Gao
Ph.D. Candidate at Northwestern University.
Hon. Paul W. Grimm (ret.)
David F. Levi Professor of the Practice of Law & Director, Bolch Judicial Institute, Duke Law School.
Maura R. Grossman
Research Professor, David R. Cheriton School of Computer Science, University of Waterloo & Adjunct Professor, Osgoode Hall Law School, York University.
Daniel W. Linna Jr.
Senior Lecturer and Director of Law and Technology Initiatives, Pritzker School of Law & McCormick School of Engineering, Northwestern University.
Chiara Pulice
Department of Computer Science & Buffett Institute for Global Affairs, Northwestern University.
V.S. Subrahmanian
Walter P. Murphy Professor of Computer Science, Buffett Faculty Fellow at the Buffett Institute for Global Affairs, Northwestern University.
Hon. John Tunheim
United States District Court for the District of Minnesota.

With the widespread availability of Artificial Intelligence (AI) tools, particularly Generative AI capable of producing text, audio, video, imagery, or combinations of these, it is inevitable that national security trials will involve evidentiary issues raised by AI-generated material. We must confront two possibilities: first, that proffered evidence is AI-generated and not authentic, and second, that genuine evidence is falsely alleged to be fabricated. These are not challenges of a far-off future; they are already here. Judges will increasingly need to establish best practices to manage a potential deluge of such evidentiary disputes. Our suggested approach illustrates how judges can protect the integrity of jury deliberations in a manner consistent with the current Federal Rules of Evidence and relevant case law.