Identifying AI-Generated Evidence and Holding Counsel Accountable
Summary
This article from JD Supra discusses the increasing prevalence of AI-generated evidence in legal proceedings and provides guidance for attorneys on how to identify and authenticate such evidence. It emphasizes the importance of critical evaluation, metadata analysis, and expert consultation to ensure the integrity of evidence and hold counsel accountable for its use.
What changed
This article addresses the growing challenge of AI-generated evidence in legal practice, offering practical advice for attorneys to identify and authenticate it. It highlights concerns regarding the origin and integrity of AI-generated materials, such as photographs, videos, and documents, and suggests methods for assessment. These include trusting professional instincts when evidence appears too polished or inconsistent with known facts, using depositions to compare digital evidence with a witness's authentic behavior, and examining metadata for creation details.
The article stresses that while metadata can be a crucial indicator, it can also be manipulated. Therefore, it recommends retaining technical experts, such as digital forensic scientists or data analysts, when authenticity remains in question, as courts increasingly rely on such expertise. The piece implicitly calls for increased diligence from legal professionals to maintain evidentiary standards and professional responsibility in the face of evolving AI technologies.
What to do next
- Develop protocols for scrutinizing AI-generated evidence for authenticity.
- Train legal staff on techniques for metadata analysis and identifying inconsistencies.
- Identify and vet potential expert witnesses for digital forensics and AI analysis.
Source document (simplified)
March 27, 2026
How to Identify AI-generated Evidence and Hold Counsel Accountable
Artificial intelligence (AI) has become a part of nearly every industry, and the legal field is no exception. More specifically, AI-generated evidence is constantly evolving, and it is important for attorneys to keep learning about it so that we can stay informed, prepared, and, more importantly, ahead of the curve.
AI-generated evidence consists of a variety of data, including but not limited to photographs, videos, and other documents or materials that are developed or altered through AI technology to analyze data or create new content. While this form of evidence can be an innovative tool, there is widespread, valid concern about its use. Unlike tangible, physical evidence, AI-generated materials have no clear point of origination. This raises concerns about authenticating the validity of AI-generated evidence and the integrity of that evidence.
To determine whether evidence is AI-generated, there are a few steps we can take to guide our assessment. One step is to trust our instincts. Attorneys are trained to evaluate the credibility, consistency, and plausibility of all evidence and information presented to them while investigating and developing their case strategy. That same instinct should be applied when reviewing potential AI-generated evidence. If a document, text, image, or video appears too polished, or inconsistent with the surrounding facts, it may warrant closer scrutiny. There is something to be said for thinking a piece of evidence is “too good to be true.” There is no harm in following up on the validity of a piece of evidence if something about it has raised questions or given pause.
For example, consider a scenario where opposing counsel produces a video allegedly depicting a plaintiff speaking in a measured, articulate manner with calculated pauses and minimal emotion. Yet during deposition testimony, the attorney sees that the same plaintiff speaks rapidly with an accent, displays natural hesitation, or is animated. Such discrepancies between the evidence produced and a witness's real-world presentation should raise immediate concerns about manipulation or artificially generated activity.
Depositions are therefore a critical investigative tool, not only for factual development, but also for evaluating whether the evidence aligns with the witness’s authentic behavior, speech patterns, and demeanor. Attorneys may also investigate whether there are any pre-suit recorded statements taken by a reputable third-party, such as an insurance company, of plaintiffs or relevant fact witnesses to further assess any discrepancies with proffered digital evidence.
Once our suspicions arise, the next step is to obtain the underlying metadata associated with the evidence. Metadata functions as a digital fingerprint, often revealing the creation date and time and the device or software used to generate the file.
For example, if a party claims a text message was sent five years earlier, metadata may show that the file was actually created recently, and with what software. In many cases, metadata can serve as the smoking gun.
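To make the idea concrete, here is a minimal sketch, not forensic tooling, of the kind of check described above: pulling a file's filesystem timestamp with Python's standard library and comparing it against the age a party claims. The function name `flag_recent_file` and its thresholds are hypothetical choices for illustration; real forensic review goes well beyond filesystem timestamps.

```python
import os
import tempfile
from datetime import datetime, timedelta

def flag_recent_file(path, claimed_age_years=5, tolerance_days=365):
    """Flag a file whose modification time is far newer than its claimed age.

    Filesystem timestamps are only one signal: they can be edited, and
    copying a file often resets them. A flag here warrants follow-up by a
    digital forensics expert, not a conclusion on its own.
    """
    mtime = datetime.fromtimestamp(os.stat(path).st_mtime)
    claimed_date = datetime.now() - timedelta(days=365 * claimed_age_years)
    # True when the file post-dates the claimed date by more than the tolerance.
    return mtime - claimed_date > timedelta(days=tolerance_days)

# Demo: a file written just now but claimed to be five years old.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"Claimed to be a five-year-old message.")
    path = f.name

fresh_is_flagged = flag_recent_file(path)
print(fresh_is_flagged)  # prints True
os.remove(path)
```

As the article notes next, this is just one piece of the puzzle: a clean timestamp does not prove authenticity, and a suspicious one does not prove fabrication.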
Nevertheless, attorneys must remain aware and vigilant that, in the age of AI, metadata can also be manipulated. It is just one piece of the puzzle that can be used to evaluate the authenticity of proffered evidence.
If authenticity remains in question, retaining an appropriate expert is critical. Courts increasingly rely on technical experts to interpret complex digital evidence, particularly where AI tools may be involved. Relevant experts may include digital forensic scientists, data analysts and/or cybersecurity professionals.
Courts have begun to confront the admissibility of AI-generated or allegedly AI-manipulated evidence. In Huang v. Tesla [1], a California state court rejected an objection to video evidence premised on the generalized claim that the footage “could have been” a deepfake. The court made clear that the mere possibility of AI manipulation is insufficient to exclude evidence. Instead, the court determined that parties must present concrete, technically grounded proof demonstrating that the presented evidence is inauthentic or unreliable. This ruling shows that challenges to AI-related evidence must be supported by specific facts, expert analysis, or forensic evidence.
This approach is consistent with longstanding authentication requirements under Rule 901 of the Federal Rules of Evidence. Rule 901 requires only that the proponent produce evidence “sufficient to support a finding that the item is what the proponent claims it is.” The standard is intentionally low: courts do not demand absolute proof of authenticity, but rather a prima facie showing through testimony of a witness with knowledge, distinctive characteristics, metadata, or evidence describing the process or system that produced the item. Once that threshold is met, the burden shifts to the opponent to demonstrate a genuine issue as to authenticity. In the AI context, courts are making clear that hypothetical concerns about deepfakes do not, by themselves, defeat admissibility.
Similarly, attorneys must grapple with the potential consequences of the improper use of AI-generated evidence in their cases and the importance of identifying and asserting improper use by their adversaries. AI use presents risks that we cannot ignore, such as hallucinations in case law citations.
In Mendones v. Cushman & Wakefield, Inc. [2], the Superior Court of California, Alameda County dismissed the case with prejudice after it was discovered that pro se plaintiffs had used deepfake videos and altered photographs as exhibits to their motion for summary judgment. As previously discussed, the subject videos showed witness testimony in an unnatural manner, with unsynchronized mouth movements and other identifiable issues. Evaluation of the photographs revealed false data, such as altered screenshotted messages and a security guard superimposed into an image.
In the Order, the court stated that it “remains suspicious of the other evidentiary submissions, but it does not have the time, funding, or technical expertise to determine the authenticity of Plaintiffs’ statements or conduct a forensic analysis.” This point leads into a deeper discussion about how improper use of altered, false, and/or distorted AI-generated evidence puts further burden on court time and resources. In response, courts have begun imposing monetary and non-monetary sanctions for AI-generated hallucinations in legal briefs. Recently, an Eastern District of Pennsylvania federal judge sanctioned two attorneys after they filed a brief that included such AI-hallucinated citations.
While sanctions are one tool courts use to send a strong message throughout the legal community about the consequences of these serious acts of misconduct, attorneys must also remember the professional rules of conduct and the oath of fidelity, honesty, and lawful practice that they are obligated to abide by.
This is not to say that use of AI-generated materials is strictly prohibited. We know that this ever-developing technology will continue to be a part of ongoing legal practice. However, it is important that all attorneys stay apprised of the rules, protocol and guidelines as outlined by the courts for use of AI in their legal practice and hold their adversaries accountable to uphold the same standards.
Since AI is here to stay, attorneys must approach it as a tool, never as the final product. AI cannot replace the professional judgment, ethical obligations, and strategic analysis essential to competent representation. There are many nuances that attorneys, as humans, discover in their cases that AI technology can never replicate. Thus, in this age of technology, attorneys should remember that this human aspect of practice is a strength, and we must accordingly hold ourselves and our adversaries accountable.
[1] https://www.thomsonreuters.com/en-us/posts/ai-in-courts/deepfakes-evidence-authentication/
[2] Mendones v. Cushman et al Decision
DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.
Attorney Advertising. © 2026 Weber Gallagher Simpson Stapleton Fires & Newby LLP
Written by: Gabrielle Outlaw and Felicia Romain, Weber Gallagher Simpson Stapleton Fires & Newby LLP