
AI hallucinations continue and, shockingly, are getting worse, not better, with more advanced versions of LLM AI platforms. It seems so counterintuitive that I didn't want to believe it. I decided to research the latest inaccuracies of legal, medical, and societal AI platforms after a top AI LLM program kindly, yet erroneously, identified me as the former President of the Law Society of BC from 2009 to 2011 and totally misstated what area of law I practice in.
Full disclosure: yes, we were involved in Canada's first AI legal hallucination cases, and yes, we did say it was an existential threat to the legal system. Some people said our warnings were like those of "Chicken Little," but it turns out those people were dead wrong. Is legal AI the 737 Max 8 of artificial intelligence?
Experts Say AI Research Is Getting Worse, Not Better
Experts say AI research is getting worse, not better. We warn all lawyers and self-represented litigants to be very, very careful when using legal AI research. A recent New York Times article is chilling on the topic of legal and other AI hallucinations.
If you can trust AI on the issue of higher error rates on more advanced AI platforms, here is what AI research said today:
“Yes, it appears that newer AI models, particularly those incorporating reasoning, are experiencing a higher rate of factual errors compared to their predecessors, a phenomenon often referred to as “hallucinations”. Recent tests show that models like OpenAI’s o3 and o4-mini exhibit higher error rates when answering factual questions, with o3 making mistakes 33% of the time and o4-mini 48% of the time, significantly exceeding the error rates of earlier models”
It’s Getting Very Bad
AI hallucinations in court documents are a growing problem, and data shows lawyers are responsible for many of the errors. Since May 1, judges have called out at least 23 examples of AI hallucinations in court records. Legal researcher Damien Charlotin’s data shows fake citations have grown more common since 2023.
How Is This Possible?
A new database has caught 120 court cases containing AI hallucinations. More than 20 legal professionals have been "busted" (not my words) in the past month alone, says a recent article summarizing the actual statistics, which contains this chilling quote:
But that’s one of more than 20 court cases containing AI hallucinations in the past month alone, according to a new database created by French lawyer and data scientist Damien Charlotin. And those were just the ones that were caught in the act. In 2024, which was the first full year of tracking cases, Charlotin found 36 instances. That jumped up to 48 in 2025, and the year is only half way over. The database, which was created in early May, has 120 entries so far, going back to June 2023.
Experts Say AI Research Is Getting Worse, Not Better
The latest case, out of Utah, continues to send chills down the spines of lawyers, judges and, hopefully, law societies.
As I said, I looked myself up on a well-known AI LLM model and found that it erroneously states I was the President of the Law Society of BC and that I practice civil litigation, including contractual disputes, professional liability, and administrative law, although I handle only family and estate litigation matters. Here is the summary:
“Lorne MacLean (KC) is a highly experienced and respected lawyer in British Columbia, particularly known for his work in complex litigation and civil law. He has a strong reputation for professionalism, expertise, and successful advocacy in challenging cases.
His reputation is well-regarded within the legal community and among clients for his depth of knowledge and strategic approach. If you are considering him for legal representation, especially in areas like civil or commercial law, he is generally regarded as a top-tier lawyer. Lorne MacLean (KC) is primarily known for his extensive experience in civil litigation, including contractual disputes, professional liability, and administrative law. He has a distinguished career, including serving as the President of the Law Society of British Columbia from 2009 to 2011, which underscores his leadership and reputation within the legal community.
Throughout his career, he has been involved in high-profile cases and is recognized for his strategic insight and advocacy skills. His background includes work on complex legal issues, often involving significant client interests.”
Numerous articles and statistics show that AI LLM mistakes on facts are ubiquitous. AI seems strong on math but weak on facts, and potentially disastrous on legal research and on analyzing what cases stand for. Common mistakes we note are cases that don't exist (look for a hyperlink), wrong conclusions on the ratio decidendi, and wrong paragraph citations even for cases that turn out to be real. Check out this shocking article showing AI can be wrong 73% of the time. The article noted:
Alarmingly, the LLMs’ rate of error was found to increase the newer the chatbot was — the exact opposite of what AI industry leaders have been promising us. This is in addition to a correlation between an LLM’s tendency to overgeneralize with how widely used it is, “posing a significant risk of large-scale misinterpretations of research findings,” according to the study’s authors.
For example, use of the two ChatGPT models listed in the study doubled from 13 to 26 percent among US teens between 2023 and 2025. Though the older ChatGPT-4 Turbo was roughly 2.6 times more likely to omit key details compared to their original texts, the newer ChatGPT-4o models were nine times as likely. This tendency was also found in Meta’s LLaMA 3.3 70B, which was 36.4 times more likely to overgeneralize compared to older versions.
A recent Ontario decision noted that the stakes are high when legal research is mistaken:
[33] As cases such as Zhang v Chen and this case demonstrate, AI is ubiquitous and yet its risks and weaknesses are not yet universally understood. Therefore Rule 4.06.1 (2.1) was enacted specifically to remind counsel of their obligation to check the cases cited in their legal briefs to ensure they are authentic. The need for lawyers to include a certificate in their factums declaring that the cases cited as precedents in the factum are real was hoped to bring home to all lawyers the need to check and not to trust factums generated by AI or by others.
I say these programs MUST be certified by courts and law societies as over 95% accurate before lawyers are allowed to use them, and even then lawyers must certify that they have triple-checked the AI's output. This would likely improve AI developers' products and stop the threat to the legal system.
Our brief in Canada's first AI legal hallucination case, and the judgment on it, should have stopped the mayhem in its tracks, but it did not. Our brief made key recommendations that should be implemented immediately.
Here is an example of the rules we proposed in our submissions that made their way into a directive:
Eastern District of Texas, Local Rule AT-3(m), Standards of Practice to be Observed by Attorneys:
If the lawyer, in the exercise of his or her professional legal judgment, believes that the client is best served by the use of technology (e.g., ChatGPT, Google Bard, Bing AI Chat, or generative artificial intelligence services), then the lawyer is cautioned that certain technologies may produce factually or legally inaccurate content and should never replace the lawyer’s most important asset – the exercise of independent legal judgment. If a lawyer chooses to employ technology in representing a client, the lawyer continues to be bound by the requirements of Federal Rule of Civil Procedure 11, Local Rule AT-3, and all other applicable standards of practice and must review and verify any computer-generated content to ensure that it complies with all such standards.
https://txed.uscourts.gov/?q=local-rule-3-standards-practice-be-observed-attorneys.
If you doubt the experts who say AI research is getting worse, not better, you had better think again, or face the consequences of blindly trusting legal AI research.
For those of you using AI for your news feed, be careful: error rates of over 50% have been reported for improperly cited sources, links that don't exist, and more.
Do you agree the time has come to take charge and not create an existential threat to the court system worldwide?
Lorne MacLean KC