May 2025
The Rise of AI Hallucination in Legal Research
Navigating the Challenges and Implications of AI-Generated Errors in Legal Analysis
By Apeksha Jain
The impact of Artificial Intelligence (AI) on the legal profession is no longer a question of if, but of how.
According to the 2024 Thomson Reuters Future of Professionals report, 42% of professionals believe the rise of AI will transform their field within the next five years. This signals a growing recognition that AI's influence is not a distant prospect; it is already on the horizon.
At this time, the dominant value of AI lies in its ability to enhance both efficiency and speed. The report notes that AI has the potential to save professionals up to 4 hours per week, which equates to about 200 hours annually. By automating routine tasks like document drafting, information summarization, and legal research, AI creates opportunities for law firms to improve access to justice, adopt value-based billing models, and reallocate resources toward more strategic work.
However, integrating AI into the legal field comes with its own set of challenges. Justice Myers’ decision in Ko v. Li, 2025 ONSC 2766, released on May 6, 2025, underscores the importance of using AI responsibly and ethically in legal practice. The ruling serves as a reminder that while AI has immense potential, its application must be carefully managed to avoid false claims and misrepresentations.
Counsel for the applicant submitted a factum in support of a motion to set aside a divorce order, citing several cases. On review, however, Justice Myers encountered multiple problems: some citations led to 404 error pages, others referred to cases wholly unrelated to the matter at hand, and several authorities were misrepresented, cited for propositions that were either incorrect or directly contradicted by the decisions themselves.
Though it has not been confirmed whether generative AI was used to draft the factum, Justice Myers noted a “similar pattern” to submissions prepared using tools like ChatGPT. The matter has been set for a scheduling conference to determine whether counsel will face contempt charges.
In his reasons, Justice Myers offered a stern reminder:
“All lawyers have duties to the court, their client, and to the administration of justice. It is the lawyer’s duty to faithfully represent the law to the court… It is the lawyer’s duty to use technology, conduct research, and prepare court documents competently.”
This case is an example of a growing concern in the legal community: AI hallucinations. Hallucinations occur when large language models (LLMs), the technology behind generative AI platforms such as ChatGPT, produce outputs that are nonsensical or altogether inaccurate.
At SorbaraLAW, we recognize the promise AI holds for improving legal workflows, but we also believe that its use must be guided by caution, professionalism, and an understanding of its limitations. Based on recent developments and our own experience, we recommend the following key principles for lawyers using AI:
1. AI is a Tool, Not a Substitute for Legal Judgment
AI can help with formatting or basic drafting, but it cannot replace the legal reasoning, analysis, and contextual understanding that come with professional training and experience. Lawyers must retain full authorship of, and oversight over, any legal document submitted to the court.
2. Never Input Confidential Client Data into Public AI Systems
Entering sensitive or identifiable client information into generative AI tools may expose that data to unintended storage or misuse. These systems often “learn” from inputs, creating risks of future data leakage or algorithmic bias.
3. Use Trusted Legal Research Tools
Generative AI is not a reliable source for legal research. Unlike established platforms like CanLII, Westlaw, and LexisNexis, AI tools may fabricate cases, misquote judgments, or miss legal nuance. Lawyers must rely on trusted sources to find and interpret case law.
AI can, should, and will be a valuable tool for professionals, but its use must be strategic and thoughtful. For example, one thing you should never do is rely on AI for legal advice: doing so is unreliable at best and likely unethical. While AI has the potential to streamline certain services, it should be harnessed to make legal processes more efficient and accessible, not used as a substitute for professional legal services.
Take legal research, for instance. It can be an incredibly time-consuming task, but AI can certainly help speed things up. That said, there are important nuances to keep in mind. If you’re using AI for research, it’s crucial that you ask for high-level information that doesn’t require precise legal citations or case law references. AI can help identify patterns or summarize concepts, but the final analysis, particularly for specific legal citations, should always involve human oversight to ensure accuracy and compliance with legal standards.
In the end, AI should be viewed as a tool that enhances human expertise, not as a replacement for it. It can increase efficiency, reduce costs, and make legal services more accessible, but only if used with caution and within the right context.
Thank you for reading!