Legal Eye – a leading provider of risk and compliance solutions – is highlighting the heightened need for robust AI governance within law firms following the Divisional Court’s recent judgment in Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin).

In a strongly worded decision, the Court criticised the use or suspected use of generative AI tools that resulted in fictitious case law, fake citations and misstatements of law being put before the Court. The judgment also makes clear that professional liability, ethical duties and the integrity of the justice system are directly at stake when inaccurate or fabricated material is submitted.

Significantly, the Court emphasised that individuals in leadership roles – including managing partners, heads of chambers and senior managers – must ensure that everyone providing legal services understands:

  • The limits of AI tools
  • The duty to verify outputs
  • The heightened obligation of oversight
  • The serious ramifications of submitting fictitious material to the court.

The Court also warned that in future Hamid-style hearings, judges may examine whether leadership responsibilities relating to AI oversight and training have been fulfilled.

How Legal Eye is helping law firms use AI safely

At a time when the use of generative AI in legal practice is rapidly increasing, the ruling reinforces the need for firms to demonstrate clear oversight, risk management and verification processes whenever AI is used.

Legal Eye has long supported firms in putting appropriate governance around new technologies. For example, its AI policy for law firms, last updated in September 2025 and available through the Legal Eye Policy Store for £195 + VAT, sets out a structured framework to help firms implement AI in a controlled and compliant manner. The policy addresses:

  • The acceptable use of AI tools across the organisation
  • Requirements for human oversight, verification and accountability
  • Confidentiality, privilege and data-protection safeguards
  • Procedures for identifying and managing inaccuracies, hallucinations and bias
  • Expectations around staff training and awareness
  • Alignment with guidance from the SRA, CLC, CILEx and the ICO.

The policy is intended for firms already engaging with AI technologies as well as those planning to introduce them.

Why an AI policy matters

The Ayinde and Al-Haroun cases highlight how generative AI can produce material that appears credible but is entirely false, and how quickly such content can enter formal documents if not checked. Regulators including the SRA, CLC and CILEx have been clear that innovation is welcome only where firms can show that standards of supervision, competence and client protection are maintained.

A formal AI policy helps firms to manage risks consistently and evidence compliance with their regulatory duties.

Paul Saunders, Managing Director at Legal Eye, commented: “The High Court judgment makes it clear that AI oversight is now a core professional responsibility. The risks associated with misuse extend far beyond inaccuracies – they go to the heart of public confidence in the justice system. Legal Eye’s AI policy and consultancy services are designed to give firms the clarity and controls they need to use AI responsibly while maintaining compliance and protecting their reputation.”

For more information, or to discuss how Legal Eye can support your firm with AI governance and compliance, please contact bestpractice@legal-eye.co.uk or call 020 3051 2049.
