Stopping Fake BC AI Legal Cases

Canada’s first case of stopping fake BC AI legal cases occurred in late 2023 and early 2024, when MacLean Law lawyers Fraser MacLean and Lorne MacLean KC successfully blocked fake legal cases generated by ChatGPT from being relied upon by the BC Supreme Court, after they and their client were put to the expense of exposing the fake cases erroneously created by ChatGPT.

We set out below an extract of what we argued in the BC Supreme Court, which may be useful to lawyers, Law Societies, and the Courts in Canada. We presented a recent and chilling Stanford University study to Justice Masuhara of the BC Supreme Court that showed an astonishing 75% error rate in AI creating wholly fake cases or grossly inaccurate analyses of cases. We also cited US and UK cases where similar AI fake case disasters occurred and led to fines and sanctions against lawyers. The recommendations we made in submissions to the Court are mirrored in a recent New Jersey Court directive.

The case sent shockwaves through the legal community around the world.

We welcome a newly issued BC Court of Appeal direction concerning lawyers verifying the accuracy of any materials they file, including AI-generated legal research:

“The Registrar’s Filing Directive has been updated to reiterate parties’ responsibility to ensure the accuracy and authenticity of materials filed with the Court, particularly given various litigation aids and artificial intelligence tools.”

Stopping Fake AI Legal Cases
Fraser MacLean and Lorne MacLean KC on Global News Canada

Abuse of Process and Costs Arguments

Many lawyers and members of the public were unaware that our client’s initial application, to dismiss the application to remove the children from BC to a non-Hague state as an abuse of process and to be made whole on the costs wasted in exposing the error, was initially made against the Respondent, as US cases had approved. We know top lawyers acting properly for their client would have done exactly what MacLean Law did, and their clients would expect nothing less.

The judge found there were unfortunate delays in admitting who really caused the error until the day before the hearing commenced, when it finally became known that the error was by the father’s counsel alone. The judge pressed both counsel to confirm that the father, the client, was not at fault; rather, it was the father’s lawyer. At that point, the costs claim focused on the lawyer, either as special costs or, in the alternative, as costs under the BC Supreme Court Family Rules, under which a lawyer who has wasted the other party’s money on legal costs can be found personally liable. In the end, the multi-millionaire father lost his application to take the children to China, given our client’s fear of non-return; he had to pay our client’s legal costs on the merits, and his lawyer had to pay the legal costs for the time the wife’s lawyers wasted in discovering the cases were fake.

Success Followed $30,000 a Month Support Win By Fraser MacLean

This success followed Fraser MacLean’s win a month earlier, when he obtained roughly $30,000 a month in spousal and child support for our client; the father paid this overdue sum as part of our win on the travel and costs issues.

Fraser MacLean Of MacLean Law Warns Of The Dangers

Fraser MacLean, the tenacious five-year call family lawyer who discovered the fake legal cases, speaks to Global TV News about the impact of the judgment.

This “first case of its kind” in Canada has generated a firestorm of academic and public commentary about how this mistake could possibly have happened and what should be done to make sure it never happens again.

Stopping Fake BC AI Legal Cases
Fraser MacLean and Lorne MacLean KC

Lawyers have a professional obligation to make sure that the information they submit to the court is not deceptive or misleading (BC Law Society Rule 6.1). Unrepresented parties do not have that duty, so courts must ensure that they, too, are governed by strict AI legal research rules, given that many family law cases involve self-represented parties.

Courts have warned of the dangers of using artificial intelligence in legal briefs and in judgments. In March 2023, Chief Justice Hinkson issued a directive to the court, the first in Canada, which was circulated to the Canadian Judicial Council:

I accept the recommendation and ask you to refrain from using ChatGPT or other like platforms until further notice.

The basis for the recommendation is that the platforms are still in a test or early use phase; can produce entirely false information; are unable to identify or reveal the data sources upon which they base their responses; can influence outcomes or create perceptions of a user unrelated to a proceeding; may be using information which is proprietary; may not protect the privacy of a user; and raises ethical questions as to whether decisions are a judge’s alone.

I have asked the committee to continue to review developments in this area and provide relevant updates.

We are pleased to see that the BC Law Society made recent updates in response to our win:

Code amendments regarding technological competence

The effective use of technology has become an essential element of responsible legal practice. While the Code of Professional Conduct for British Columbia already provides extensive guidance on standards for lawyer competence, the impact of technology on the contemporary delivery of legal services merits specific guidance for lawyers on the competent use of technology. Commentaries 4.1 and 4.2 have been added to BC Code rule 3.1-2 to address this issue and bring the Code more closely in line with the Federation of Law Societies’ Model Code of Professional Conduct.

Canada Stopping Fake BC AI Legal Cases

Some US Courts and the Federal Court of Canada have provided strict guidance in this area. Presently, new rules on AI lawyer competency and lawyer ethical duties are overdue and need to be implemented to prevent an existential threat to the legal system in Canada. The BC Law Society has said it will implement guidelines in the near future.

Lawyers, judges, and humanity must question whether ChatGPT, Google Bard, and the like simply make mistakes, whether they create whatever the user is looking for out of crass corporate greed, or whether they are testing the human race to see if it can keep up and what they can get away with; if the latter, we are dealing with something far more malevolent.

As officers of the court, lawyers have a gatekeeping duty to protect the integrity of the courts and public trust in the profession. Courts and Law Societies have a duty to ensure lawyers and self-represented parties do not hijack the court system with erroneous arguments and case law.

Amy Salyzyn, a University of Ottawa law professor, suggests law societies and the courts should make the following changes:

  1. The amended competence rule would require lawyers to understand and use “relevant” technology that is “reasonably available” to them;
  2. The amended confidentiality rule would require lawyers to make “reasonable efforts” to prevent unauthorized or inadvertent disclosure of confidential client information; and
  3. The proposed new due diligence rule would require “reasonable steps” be taken to ensure that legal technology being used is consistent with a lawyer’s ethical duties.
  4. MacLean Law urges Courts and Law Societies to add a fourth suggestion: whenever AI is used in relation to legal documents filed in court, lawyers should identify for the court what portion, if any, of the documents or arguments was created with the assistance of artificial intelligence, and for what purpose. Further, and critically, the lawyer should provide an undertaking that they have applied direct human oversight to verify that any such artificially generated legal work is 100% accurate. It is critical there is always a “human in the loop”.

If we don’t take immediate steps to stop fake AI case deception and error, we can expect potential legal mayhem, including judges no longer being able to trust submissions from counsel or from self-represented parties who may use AI to help level the playing field, forcing judges to verify every case themselves. We questioned how many fake cases had already slipped through the court system and whether a review was needed.

Dangers Of Not Stopping Fake BC AI Legal Cases

In Morgan v. Cmty. Against Violence, a decision of the United States District Court for the District of New Mexico, the court states at page 9:

Quite obviously, many harms flow from such deception—including wasting the opposing party’s time and money, the Court’s time and resources, and reputational harms to the legal system (to name a few).

We can do no better than to refer to the summary of harms set out by United States District Judge Castel in the second paragraph of the now infamous Mata case out of New York (page 7 of 27 of the Lexis version of the judgment):

Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.

Obviously, the damage that flows from AI hallucinations in the court system includes:

  1. The potential for flawed or erroneous decision-making by the courts;
  2. The wasting of an opposing party’s time and the needless increase in their legal costs;
  3. The concurrent wasting of the court’s resources and tax dollars; and
  4. The reputational harms to the legal system as a whole because people may well doubt the legitimacy of decisions if they do not believe real cases were used.

How Can We Protect The Legal System By Stopping Fake BC AI Legal Cases?

Like Oppenheimer’s atomic bomb, which ended a war but has threatened civilization ever since, powerful AI legal technology left unsupervised creates an existential threat to the legal system.

The real fear is that many fake cases have already made their way into the Canadian and other legal systems and no one even knows. We hope no mistakes in legal decisions have occurred because unknown fake cases were relied upon on a busy courtroom day. This is a chilling thought. Think of HAL 9000 in 2001: A Space Odyssey as the worst case, the next time someone wants to sell you on an AI platform to do your legal thinking for you.

At the same time, with proper “human in the loop” supervision, there is hope that “human and machine” can coexist, and that we can lower legal costs for Canadians and improve the quality of legal services. Lawyers will be expected to drive costs down through the use of accurate and reliable technology and to be able to ferret out “fake legal news”.

There is a legal AI sweet spot for Courts, lawyers, and the public, but we need Law Societies and the Courts to make education mandatory, not optional, for lawyers, and to ensure self-represented parties adhere to the same strict standards.