Civil Procedure Guidance on AI-Generated Fake Legal Authorities
Summary
Civil Procedure News (The White Book Service) issued guidance on AI-generated fake legal authorities in litigation, citing Ayinde [2025] EWHC 1383 and Munir [2026] UKUT 81, including the rare show cause notice issued in Ayinde. The guidance warns that uploading client information to public AI tools risks breaching legal professional privilege and confidentiality. Courts expect qualified legal professionals to verify all AI-generated citations before submission.
What changed
Civil Procedure News published guidance addressing two critical AI-related risks in litigation: fake legal authorities generated by AI tools, and breaches of client confidentiality through use of public AI platforms. The guidance cites cases in which advisers cited a non-existent Court of Appeal judgment (Ayinde) and cited a real judgment for a proposition it did not support (Munir). In Ayinde, the Tribunal issued a rare show cause notice and noted that referral to the Immigration Advice Authority may be warranted to prevent false material causing public expense.
Legal professionals must implement verification procedures for all AI-generated citations before filing. Any use of public AI tools to process client information risks waiving legal professional privilege and breaching confidentiality duties. Failure to check documents wastes opponents' time and may result in substantial costs awards in judicial review proceedings. The guidance extends these concerns to expert witnesses using AI tools. Compliance is immediate for ongoing matters.
What to do next
- Verify all AI-generated legal citations against primary sources before filing court submissions
- Do not upload client information or case materials to public AI tools without assessing privilege and confidentiality implications
- Update internal AI usage policies to require qualified solicitor supervision of all AI-assisted document preparation
Archived snapshot
Apr 3, 2026: GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.
Civil Procedure Guidance on AI and “Fake Authorities”
31 March 2026 by Rosalind English
In two weeks’ time my interview with Jacob Turner and Michael Workman on the Judicial Taskforce’s draft Statement and Consultation on AI and private law will come out on Law Pod UK. In the meantime, a short note on the guidance on this subject in Civil Procedure News, put out by The White Book Service (Issue 3/2026, 11 March 2026).
The guidance quotes the notorious cases of R (Ayinde) v London Borough of Haringey [2025] EWHC 1383 and R (Munir) v Secretary of State for the Home Department (AI hallucinations) [2026] UKUT 81. Both cases involved the “incautious” use of AI in ways that could result in the loss of privilege through uploading information to an AI tool that is open to the public.
And of course there is the use of fake authorities. In the Ayinde case the UT issued a rare show cause notice, requiring an explanation of why the grounds of appeal to the Tribunal had cited one Court of Appeal judgment that could be found nowhere on BAILII, and another Court of Appeal judgment which, while it did exist, was not authority for the proposition it was said to support.
Had the immigration adviser in question not referred himself to the Immigration Advice Authority, the Tribunal would have so referred Mr Mohammed in order to “stop false material coming before the Tribunal which leads to considerable public expense due to the need to address the problem”.
With regard to the second case, the Tribunal observed that it would be
“easy to think that this is a case about the naïve use of generative AI, but it is not merely that: it is principally about supervision and the obligation to ensure that the tribunal is not misled. It matters not how citation errors come about. Whether they are inserted by a hapless trainee or by ChatGPT is really neither here nor there; the point is that the qualified legal professional with conduct of the matter is expected to ensure that such documents are checked, that errors are identified, and that only accurate documents are sent to the tribunal…. Failure to check is also wasteful of an opponent’s time, thereby potentially leading (in judicial review proceedings) to large awards of costs.”
As the authors at Civil Procedure note,
“This case raises continuing concerns about the use of fake authorities, notwithstanding the Divisional Court’s guidance in Ayinde. It also, apparently for the first time, raises concerns about the use of open AI tools by lawyers in ways that can result in breaches of client confidentiality and loss of legal professional privilege concerning information uploaded to such tools. It ought to be apparent that the risk of such breaches is not confined to lawyers but might also arise through the use of AI tools by, for instance, expert witnesses”.
Tune in for our next episode on AI and Private Law, and the proposals for circumventing problems of liability and causation thrown up by the autonomy, capacity and self-teaching nature of generative AI.
About this page
Source document text, dates, docket IDs, and authority are extracted directly from Civil Procedure News.
The plain-English summary, classification, and "what to do next" steps are AI-generated from the original text. Cite the source document, not the AI analysis.