
Flaws in legal AI research, plagiarism claims against people using legal AI, and a potential, if not guaranteed, marked decline in the skill set of lawyers, articled students and law students who blindly use AI should send chills through the legal industry.
A new article in Canadian Lawyer Magazine, interviewing our own Fraser MacLean, offers suggestions to help legal AI research advance and stay safe for lawyers, judges and clients who want a healthy legal outcome. Can lawyers sue new AI legal research platforms in a class action for continued legal hallucinations? Stand by, because we guarantee it will happen.
Other articles, including a recent Walrus article, say the quickest way to lose a legal case is to rely on fake legal research: citing cases that don't exist or incorrectly describing the ratio of a case.
Lawsuits may well result against law firms using legal AI research platforms when a firm does not really understand why or how its AI tool makes the decisions it makes, and is therefore unable to explain them to clients and the court. That is not uncommon with AI legal research technology, since it is essentially a "black box" that can more or less train itself, typically relies on unknown and very complicated calculations, and, once trained, operates with little or no human intervention. How do lawyers, the public and developers respond to reports of errors in an AI application? The concern is this: how do "humans in the loop" supervise any form of AI, including legal AI, if we don't understand the algorithms?
Best Legal AI Research Practices for Lawyers
In this brave new world, new AI legal research issues are surfacing. Legal pioneers who are tech savvy need to take charge to stop an existential threat to the legal system.
We have been following a plethora of AI court legal research mayhem incidents for the past several months. Did you know there are now well over 120 incidents where fake cases were put before courts around the world? We were sold on AI being more efficient, increasing access to justice and more. More disturbingly, what is the negative impact on lawyers' legal skills? We are among a group of lawyers who are now extremely concerned that the quality of legal arguments will decline. You also need to be aware of legal plagiarism lawsuits, where lawyers and databases sue for having their data and arguments hijacked by others.
Our analogy is that we are in a gold rush, where many fake claims will be staked about how good artificial intelligence is, and where several years of due diligence by both users and developers of AI programs will be needed to make sure we don't make the legal system worse. Here is a great BIV article where Telus faces legal action for overselling AI.

The new English case of Frederick Ayinde, R (on the application of) v The London Borough of Haringey [2025] EWHC 1040 (Admin) provides sage advice that mirrors what we recommended to the BC Supreme Court in Canada's first legal AI hallucination fake family law research case:
3. The referrals arise out of the actual or suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked, so that false information (typically a fake citation or quotation) is put before the court. The facts of these cases raise concerns about the competence and conduct of the individual lawyers who have been referred to this court. They raise broader areas of concern however as to the adequacy of the training, supervision and regulation of those who practice before the courts, and as to the practical steps taken by those with responsibilities in those areas to ensure that lawyers who conduct litigation understand and comply with their professional and ethical responsibilities and their duties to the court.
The use of artificial intelligence in court proceedings
4. Artificial intelligence is a powerful technology. It can be a useful tool in litigation, both civil and criminal. It is used for example to assist in the management of large disclosure exercises in the Business and Property Courts. A recent report into disclosure in cases of fraud before the criminal courts has recommended the creation of a cross-agency protocol covering the ethical and appropriate use of artificial intelligence in the analysis and disclosure of investigative material. Artificial intelligence is likely to have a continuing and important role in the conduct of litigation in the future.
5. This comes with an important proviso however. Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained. As Dias J said when referring the case of Al-Haroun to this court, the administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it and on their professionalism in only making submissions which can properly be supported.
6. In the context of legal research, the risks of using artificial intelligence are now well known. Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT, are not capable of conducting reliable legal research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.
[1] Disclosure in the Digital Age, Independent Review of Disclosure and Fraud Offences, Jonathan Fisher KC, recommendation 2 and paragraphs 430-433. The appendix to this judgment contains examples from different jurisdictions of material being put before a court that is generated by an artificial intelligence tool, but which is erroneous.
7. Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example). Authoritative sources include the Government’s database of legislation, the National Archives database of court judgments, the official Law Reports published by the Incorporated Council of Law Reporting for England and Wales and the databases of reputable legal publishers.
8. This duty rests on lawyers who use artificial intelligence to conduct research themselves or rely on the work of others who have done so. This is no different from the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister for example, or on information obtained from an internet search.
9. We would go further however. There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence. For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.
Notably, English courts have issued a number of directives on the ethical use of AI, and they are properly taking a stricter approach than Canada to stop the existential threat to the legal system posed by fake or inaccurate case analyses being inadvertently adopted by courts or arbitrators, or by lawyers settling cases.
If you need proper legal advice from properly trained "human in the loop" family lawyers, call us.
#ailegalhallucinations #cba #canadianlawyermagazine #wsj #globeandmail #vancouversun #TLABC #biv #sharpmagazine #nationalpost