A Royal take on Artificial Intelligence in Courts
UK Courts Issue New Guidance on Artificial Intelligence Use in Legal Practice
On 31 October 2025, the UK judiciary issued updated guidance on the use of Artificial Intelligence (AI) in the courts. The guidance, addressed to judicial office holders but also relevant to barristers and solicitors, outlines key considerations and limitations for AI in legal practice.
Key Definitions
Algorithms are defined broadly as any set of instructions used to perform tasks such as calculations and data analysis, typically via a computer or smart device. AI itself is defined as any computer system capable of performing tasks that would normally require human intelligence. The definition is deliberately wide: any system whose output would ordinarily call for human judgment falls within its scope.
Guidance for Practitioners
Before using any AI tool, practitioners should have a basic understanding of its capabilities and limitations. While tools such as ChatGPT can help locate material a user would already recognize as correct, the UK judiciary warns against relying on them to uncover new information that cannot be independently verified. Judicial notes and law firm memoranda should not be drafted with AI as a primary source; instead, AI may serve as a means of non-definitive confirmation.
Practitioners are also reminded to treat all public AI tools as potentially public-facing, much like blockchain logs: anything entered into them could become accessible externally. At the same time, the courts now assume that AI may have played a role in argumentation, presentations, and depositions throughout litigation.
Recommended and Non-Recommended Tasks
The guidance identifies tasks well suited to AI, such as summarizing large bodies of text, drafting presentations, and prioritizing emails. Conversely, legal research and legal analysis are not recommended, as current public chatbots do not produce convincing legal reasoning or argumentation.
Identifying AI Usage
Potential signs that AI has been used include unfamiliar citations, sometimes from foreign jurisdictions, and residual AI output left embedded in the text, such as prompt-rejection language or refusal phrases along the lines of “as an AI model, I can’t…”.
Conclusion
The UK judiciary’s guidance provides a pragmatic framework for the responsible use of AI in legal practice. It emphasizes that while AI can support efficiency, human oversight and critical judgment remain essential. This guidance not only benefits English legal practitioners but may also serve as a reference for legal professionals internationally seeking to integrate AI responsibly.