
When AI Goes Wrong in Court: The R. v. Chand Wake-Up Call
A Canadian criminal case exposes dangerous gaps in AI oversight that could reshape legal practice forever.
September 22, 2025
The Fake Citations That Shocked Ontario's Courts
Picture this: You're a judge reviewing a criminal case, and something feels off about the legal precedents cited. After investigation, you discover they're completely fabricated, created by artificial intelligence and passed off as legitimate law.
This isn't science fiction. It happened in Ontario's R. v. Chand case, where Justice Joseph F. Kenkel had to order a complete refiling after uncovering AI-generated fake citations. The defence lawyer's AI tool had essentially "made up" legal precedents that don't exist.
Welcome to the new frontier of legal malpractice in the age of ChatGPT and generative AI.
AI Hallucinations: A Global Legal Crisis
The Chand case isn't isolated. A CBC investigation revealed 137 similar incidents worldwide in which AI hallucinations (false information presented as fact) have infiltrated court proceedings. From New York to London, lawyers are submitting briefs with non-existent cases, fake quotes, and fabricated legal standards.
But here's what makes Canada's situation particularly troubling: we're flying blind without proper AI governance while our legal system becomes an unwitting testing ground for unvetted technology.
Where Canada's AI Oversight Falls Short
Legal Practice Without Safety Rails
Law professor Amy Salyzyn from the University of Ottawa puts it bluntly: these AI errors could cause actual miscarriages of justice. When fake precedents influence real judicial decisions, people's lives hang in the balance.
Yet the Law Society of Ontario's current AI guidelines rely on voluntary compliance—clearly insufficient given cases like Chand.
Government AI Use Lacks Transparency
Here's a startling fact: Canada's federal government operates nearly 300 AI projects. These systems predict tax case outcomes, sort visa applications, and make decisions affecting millions of Canadians. Most operate without robust oversight or public transparency about how they work or what could go wrong.
Healthcare AI Operates in Regulatory Shadows
Canadian hospitals increasingly use AI for medical diagnosis and treatment planning. Unlike pharmaceutical drugs, these AI systems face minimal regulatory scrutiny before deployment. The potential for diagnostic errors or treatment recommendations based on biased data remains largely unaddressed.
Employment Discrimination Through AI
Recruitment AI systems can perpetuate hiring bias, potentially violating human rights legislation. Current accountability measures can't adequately identify or prevent these discriminatory practices.
The Policy Vacuum That Enables Crisis
Canada's Artificial Intelligence and Data Act (AIDA) was supposed to fix these problems. Proposed in 2022, it promised risk-based regulation matching safety requirements to AI's potential for harm.
Instead, AIDA died on the order paper when Parliament was prorogued in January 2025, after years of criticism that it was both toothless and vague.
Why This Matters Beyond Courtrooms
Professional Competence in Question
The Chand case raises uncomfortable questions about what lawyers should charge clients when AI does the research in minutes. If artificial intelligence performs the bulk of legal work, what exactly are clients paying for? Professional ethics haven't caught up with technological reality.
International Competitive Disadvantage
Canada once led global AI development. Now we're watching the EU implement comprehensive AI regulation while we debate basic oversight principles. Silicon Valley companies are establishing AI governance standards faster than our government can draft legislation.
Public Trust at Risk
Every AI failure in high-stakes environments (courts, hospitals, government services) erodes public confidence in both technology and institutions. Without proactive governance, we're setting ourselves up for systemic breakdowns.
What Canada Must Do Now
Mandatory AI Disclosure: High-risk sectors should immediately require disclosure when AI assists decision-making, especially in legal, medical, and government applications.
Professional Body Action: Law societies, medical colleges, and engineering associations need enforceable AI competency standards—not just suggestions.
Government Transparency: Federal departments must publish detailed inventories of their AI systems, including risk assessments and accountability measures.
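To make that recommendation concrete, here is a minimal sketch (written in Python purely for illustration; the field names are assumptions, not an existing federal standard) of what one machine-readable entry in such an inventory might look like:

# Illustrative only: a hypothetical schema for one entry in a public AI-system inventory.
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemRecord:
    department: str              # federal department operating the system
    system_name: str             # plain-language name of the AI system
    purpose: str                 # what decisions the system informs or makes
    decision_role: str           # "advisory" or "automated"
    risk_level: str              # e.g. "low", "moderate", "high"
    human_review: bool           # must a human review the output before it takes effect?
    accountability_contact: str  # office responsible for errors and appeals

record = AISystemRecord(
    department="Example Department",        # hypothetical
    system_name="Visa Application Triage",  # hypothetical
    purpose="Sorts incoming applications into processing streams",
    decision_role="advisory",
    risk_level="high",
    human_review=True,
    accountability_contact="ai-governance@example.gc.ca",  # placeholder address
)

print(json.dumps(asdict(record), indent=2))

Even a simple public record like this would let Canadians see which systems touch their files, how risky they are, and who answers when something goes wrong.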
Long-Term Structural Reform
Sector-Specific Regulation: Instead of waiting for omnibus federal legislation, implement targeted rules for legal practice, healthcare, and public administration.
Cross-Industry Learning: Create mechanisms for sharing AI governance lessons across sectors to prevent repeating costly mistakes.
Proactive Verification Systems: Develop tools and processes to validate AI outputs before they influence critical decisions.
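As a sketch of what such a verification step could look like, the snippet below (Python, purely illustrative; the list of known cases stands in for an authoritative source such as CanLII or a commercial research database, whose real APIs are not shown here) refuses to let an AI-drafted filing proceed until every cited case can be confirmed:

# Illustrative sketch: block filing until every citation in an AI-drafted brief
# is confirmed against an authoritative source. "known_cases" is a stand-in for
# a real legal database lookup.
known_cases = {
    "R. v. Example, 2020 ONCA 000",    # hypothetical entries for demonstration
    "Smith v. Jones, 2018 BCSC 000",
}

def verify_citations(citations, authoritative_source):
    """Return the citations that could not be confirmed."""
    return [c for c in citations if c not in authoritative_source]

draft_citations = [
    "R. v. Example, 2020 ONCA 000",
    "R. v. Imaginary, 2023 ONSC 999",  # a hallucinated case the check should catch
]

unverified = verify_citations(draft_citations, known_cases)
if unverified:
    raise SystemExit(f"Do not file: unverified citations: {unverified}")
print("All citations confirmed; the draft may proceed to human review.")

The design point is the gate itself: no AI-assisted output reaches a judge, a patient, or an applicant until a human or a verified source has confirmed the facts it rests on.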
The Real Cost of Doing Nothing
In a British Columbia case, Justice Masuhara noted that "the integrity of the justice system requires no less" than competent, responsible use of AI. But integrity isn't limited to courtrooms; it extends to every area where AI makes decisions affecting human lives.
The R. v. Chand case should terrify us not because a lawyer used AI poorly, but because it reveals how unprepared we are for AI's integration into critical systems.
We can't uninvent generative AI. We can't stop its adoption across legal practice, healthcare, and government. But we can choose whether to govern it responsibly or stumble blindly toward more systemic failures.