Title: AI and the Risks of False Citations in Legal Documents — Lessons from the Ayinde Case
In an era where generative AI tools are reshaping the legal landscape, a cautionary judgment from the Divisional Court in Ayinde, R (On the Application Of) v Qatar National Bank QPSC & Anor [2025] EWHC 1383 (Admin) sends a stark warning to practitioners: rely on AI without proper verification, and you may face wasted costs orders, regulatory referrals, or worse.
The Rise of AI in Legal Drafting — and the Dangers
AI-driven tools promise significant time savings in legal research and drafting, but their outputs are only as reliable as the care with which they are verified. The case of Ayinde v Qatar National Bank, considered alongside earlier proceedings involving the same claimant and the London Borough of Haringey, highlights the consequences of placing blind trust in such technology.
In both cases, fabricated case citations, presumed to have been generated by AI, were placed before the court. The cited cases simply did not exist. As the Court noted, this kind of error risks misleading the judiciary, wasting court time, and undermining trust in the profession.
The legal representatives involved were referred to professional regulators, with serious scrutiny of their conduct, supervision, and competence. Though the Court ultimately declined to initiate contempt proceedings, the message was unambiguous: “This court’s decision not to initiate contempt proceedings in respect of Ms Forey is not a precedent. Lawyers who do not comply with their professional obligations in this respect risk severe sanction.”
Summary of the Ayinde Judgment: What Went Wrong
The claimant’s legal team issued judicial review proceedings in a housing matter. The defendant local authority was debarred from defending because of procedural failings, and it later sought a wasted costs order against the claimant’s solicitors and counsel, citing the false authorities in the pleadings.
Neither counsel nor the instructing solicitor could explain how the fabricated citations were produced. The junior lawyer, Ms Forey, denied using generative AI tools and claimed to have sourced the cases from various websites, but the Court found this explanation lacked coherence: no trace of the cited cases could be found online or in any legal database.
The Divisional Court found:
- Ms Forey failed to provide a credible or verifiable source for the false authorities.
- Even if generative AI was not directly used, the material clearly had not been checked against authoritative legal sources.
- Supervision of junior staff was inadequate, both at the law centre and chambers.
- A Legal Aid certificate was in place — heightening the need for professional diligence.
While the Court recognised the difficult working context and Ms Forey’s junior status, it concluded that the threshold for contempt had been met but declined to proceed, taking the view that the broader issues of training and supervision were better addressed by the regulators.
Risks of Using Generative AI in Legal Practice
The case illustrates three specific risks:
1. Hallucinated Case Law
Generative AI tools, when prompted to produce legal arguments or summaries, can “hallucinate” case law — that is, generate plausible but entirely fictional authorities. These may include invented party names, case citations, or even fictitious judicial reasoning.
2. False Sense of Reliability
Legal practitioners may assume that AI-generated material is accurate, especially when it appears confidently written or includes references. However, as the Court noted, even “Google summaries” can be powered by generative AI and are not a substitute for verified sources.
3. Regulatory and Ethical Breaches
Placing false material before the court, whether negligently or recklessly, breaches a lawyer’s duty to the court and may trigger:
- Wasted costs orders.
- Referrals to the Bar Standards Board or Solicitors Regulation Authority.
- Contempt proceedings in serious cases.
Minimising the Risks: Practical Safeguards
AI can still play a valuable role in legal practice, provided its use is disciplined and transparent. Practitioners should:
✅ Verify All Sources Manually
Never rely solely on AI-generated case law or quotations. Always cross-check every citation against authoritative databases such as the following (a simple first-pass format check is sketched after the list):
- BAILII
- The National Archives’ Find Case Law service
- Westlaw, LexisNexis, or Justis
- Law Reports in the Inn libraries
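As a purely illustrative aid, and not a substitute for the databases above, the minimal Python sketch below shows the kind of first-pass sanity check a team might script: does a citation even parse as a UK neutral citation, and is its year plausible? The court codes, regular expression, and cut-off year are simplifying assumptions, and a citation that passes can still be entirely fictitious.

```python
import re
from datetime import date

# Simplified pattern for UK neutral citations such as "[2025] EWHC 1383 (Admin)".
# Illustrative only: it covers a handful of court codes and will not match every
# legitimate format (for example, "[2024] EWCA Civ 123" has a different shape).
NEUTRAL_CITATION = re.compile(
    r"\[(?P<year>\d{4})\]\s+"                   # year in square brackets
    r"(?P<court>UKSC|UKPC|EWHC|UKUT|UKFTT)\s+"  # court code (non-exhaustive)
    r"(?P<number>\d+)"                          # sequential judgment number
    r"(?:\s+\((?P<division>[A-Za-z]+)\))?"      # optional division, e.g. (Admin)
)

def first_pass_check(citation: str) -> bool:
    """Return True if the citation is well-formed and plausibly dated.

    A pass means only that the citation *could* exist. It must still be
    verified against BAILII, the National Archives, Westlaw or LexisNexis.
    """
    match = NEUTRAL_CITATION.fullmatch(citation.strip())
    if match is None:
        return False
    year = int(match.group("year"))
    # Neutral citations were introduced in 2001; reject impossible years.
    return 2001 <= year <= date.today().year

if __name__ == "__main__":
    for c in ["[2025] EWHC 1383 (Admin)", "[1897] EWHC 12 (Ch)", "Smith v Jones, page 42"]:
        verdict = "plausible, verify manually" if first_pass_check(c) else "failed format check"
        print(f"{c}: {verdict}")
```

Even this much automation only weeds out the obviously malformed. As Ayinde shows, the decisive step remains a human being opening the judgment in an authoritative source and reading it.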
✅ Include Proper Citations with Links or References
If AI is used to assist in drafting, ensure every citation is verifiable. Add links or full references in drafts to aid manual checking.
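Building on the sketch above, the fragment below guesses a candidate BAILII link for an EWHC citation so that a checker can open it in one click. The URL layout is an assumption about how BAILII organises its archive, and a generated link proves nothing until the judgment behind it has actually been opened and read.

```python
import re

# Narrowed version of the earlier citation pattern: EWHC citations with an
# explicit division, e.g. "[2025] EWHC 1383 (Admin)".
EWHC_CITATION = re.compile(
    r"\[(?P<year>\d{4})\]\s+EWHC\s+(?P<number>\d+)\s+\((?P<division>[A-Za-z]+)\)"
)

def candidate_bailii_url(citation: str):
    """Guess a BAILII URL for an EWHC citation (assumed URL layout).

    Returns None when the citation does not parse. The link is a
    convenience for the human checker, not proof that the case exists.
    """
    match = EWHC_CITATION.fullmatch(citation.strip())
    if match is None:
        return None
    return (
        "https://www.bailii.org/ew/cases/EWHC/"
        f"{match.group('division')}/{match.group('year')}/{match.group('number')}.html"
    )

print(candidate_bailii_url("[2025] EWHC 1383 (Admin)"))
# Expected: https://www.bailii.org/ew/cases/EWHC/Admin/2025/1383.html
```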
✅ Disclose the Use of AI Where Appropriate
For internal supervision and professional integrity, junior lawyers or pupils should be encouraged to disclose when they have used AI tools, allowing supervisors to exercise proper oversight.
✅ Maintain Supervision and Training Standards
As highlighted in Ayinde, proper supervision of junior staff is essential. Chambers and law centres must ensure:
- Clear protocols for review of draft pleadings.
- Specific training about the risks of AI use.
- Regular updates on technology and professional responsibility.
✅ Keep AI-Assisted Research as a Secondary Tool
Treat AI-generated research as a starting point, not a definitive source. It can help brainstorm arguments, but it must not replace doctrinal research or legal reasoning based on binding precedent.
A Warning, Not Just a Lesson
The Divisional Court’s judgment does more than admonish individual lawyers. It sets a benchmark for how the judiciary will respond to the growing integration of AI into legal practice. As the Court concluded:
“Our overarching concern is to ensure that lawyers clearly understand the consequences (if they did not before) of using artificial intelligence for legal research without checking that research by reference to authoritative sources.”
This is not a green light for relaxed standards because “everyone uses AI now.” On the contrary, the legal profession must uphold the highest standards of accuracy, candour, and accountability — even, and especially, in the digital age.
Final Thoughts
Legal technology is here to stay. But with its power comes the responsibility to wield it carefully. The Ayinde judgment is a timely reminder that shortcuts, however tempting, can lead to significant professional and legal consequences.
For now, the safe use of AI in law depends on human judgment, rigorous checking, and proper training. And as the courts have made clear — failure to comply with these standards will not be excused by inexperience or technological naivety.
Ayinde, R (On the Application Of) v Qatar National Bank QPSC & Anor [2025] EWHC 1383 (Admin)