
    AI in the Courtroom: Why Lawyers Are Getting Sanctioned for Fake Citations

Diogo Almeida
April 14, 2026 · 9 min read

Two attorneys examine an AI-generated legal filing flagged for fabricated case citations, with a gavel and scales of justice visible on the desk — illustrating the growing ethics crisis facing lawyers who rely on unverified AI output in court submissions.

    Attorneys across the United States are facing professional sanctions — fines, mandatory training, and referrals to state bar associations — after submitting court filings that cite cases that do not exist. The citations were generated by artificial intelligence tools. The most recent high-profile example involves a Phoenix attorney disciplined by a federal judge in April 2026 after AI-fabricated case references were found in legal filings tied to a workplace discrimination lawsuit against the NBA's Phoenix Suns. Legal ethics experts say the problem is growing faster than the profession's response to it.

    What Happened in the Phoenix Suns Case

    The case, Montes v. Suns Legacy Partners LLC et al., involved a former Phoenix Suns employee who alleged sexual harassment and retaliation. The attorneys representing the plaintiff submitted court motions that the opposing counsel flagged for problematic legal citations. A federal judge reviewed the filings and identified at least 18 fabricated references — citations to cases that do not exist in any court record.

    Attorney Sheree Wright, who represented the plaintiff, acknowledged the problem in court. Three separate motions contained questionable citations, all traced to the same staff member. The judge ordered Wright to cover a portion of the opposing attorneys' fees and to complete additional training on the ethical use of AI. A copy of the ruling was forwarded to the Arizona State Bar and to all district and magistrate judges in Arizona.

Sean Harrington, an attorney and director of ASU's AI and Legal Tech Studio, told reporters he was not surprised. He noted that more than 1,200 cases of AI hallucinations in court filings have been documented worldwide — and those are only the ones that have come to light. "We've seen serious sanctions, we haven't seen anyone get disbarred because of it, but we have seen people get suspended," Harrington said.

    An AI hallucination occurs when a generative AI tool produces output that is plausible-sounding but factually false. In a legal context, this means the AI generates case citations — complete with party names, court names, and docket numbers — that refer to cases that were never decided, or in some instances, never filed at all.

    The American Bar Association's Formal Opinion 512, issued on July 29, 2024, addressed this risk directly. The opinion noted that generative AI tools currently lack the ability to understand the meaning of the text they produce or evaluate it in context, and may therefore combine otherwise accurate information in unexpected ways to produce false results. Uncritical reliance on that output — without independent verification — can result in misleading representations to courts.

    The problem is not theoretical. It is recurring, it is accelerating, and it is reaching courts at every level of the federal and state judiciary.

    A Pattern of Sanctions Across the Country

The Phoenix Suns case is the latest in a growing national pattern. In February 2025, a federal district court sanctioned three lawyers from Morgan & Morgan for citing AI-generated fake cases in motions in limine. Of the nine cases cited in the motions, eight were non-existent. The attorney who drafted the motions admitted it was their first time using the firm's in-house AI tool in that capacity and that they had failed to verify the output before filing; that attorney was sanctioned $3,000, and the two attorneys who co-signed the filing without drafting it were each sanctioned $1,000.

    In a separate case, a federal court in Oregon ordered a lawyer to pay $109,700 in sanctions and costs for AI-generated errors in court filings — potentially a record penalty at the time. The landmark Mata v. Avianca, Inc. case, decided in the Southern District of New York, first brought widespread public attention to the issue when attorneys were sanctioned for submitting a brief containing fabricated judicial decisions generated by ChatGPT. A California judge later fined two law firms $31,000 for submitting a brief containing AI-generated fake citations that were not reviewed prior to filing.

A researcher at HEC Paris who tracks AI-related court sanctions worldwide had documented more than 1,200 such cases as of early 2026, approximately 800 of them in U.S. courts. The rate is still increasing.

    What the ABA's Formal Opinion 512 Requires

ABA Formal Opinion 512 established the current ethical framework governing how attorneys must use generative AI tools. It applies the existing Model Rules of Professional Conduct to the specific challenges posed by AI-generated content. The opinion is not itself binding on state bars, but it provides the professional standard that courts and disciplinary bodies reference when evaluating attorney conduct.

The opinion addresses six areas of ethical obligation: competence, confidentiality, client communication, candor toward tribunals, supervision, and fee reasonableness. Under Model Rule 1.1 (Competence), attorneys must understand the capabilities and limitations of any AI tool they use and keep that understanding current as the technology evolves. Under Model Rule 3.3 (Candor Toward the Tribunal), attorneys must carefully review all AI output to ensure that representations made to a court are not false or misleading — and must correct any such representations previously made.

    Model Rule 5.3 (Supervision of Nonlawyer Assistance) further requires managerial and supervisory attorneys to establish clear policies on AI use and ensure that all staff, including paralegals and legal assistants, comply with professional conduct rules when using these tools. The Phoenix Suns case illustrates precisely what happens when that supervisory responsibility is not exercised: a staff member's unsupervised use of AI flowed directly into filed court documents, and the attorney of record bore the legal and professional consequences.

    Why Attorneys Keep Making This Mistake

    The persistence of AI hallucination sanctions — despite years of high-profile cases and extensive bar guidance — reflects several structural pressures within legal practice. Attorneys face heavy caseloads, tight deadlines, and increasing pressure to reduce the cost of legal services. Generative AI tools promise faster research and drafting. The temptation to trust the output without verification is real, particularly when the citations appear detailed, authoritative, and credible.

    The problem is compounded by delegation. In many of the documented sanctions cases, the attorney of record did not personally use the AI tool — a paralegal, law clerk, or junior staff member did. The attorney then signed and filed the document without independently checking the citations. Under the ABA's supervisory rules and under the professional conduct standards of every U.S. jurisdiction, that is not a defense. The filing attorney remains fully responsible for what is submitted to the court under their signature.

    There is also a competence gap. Many attorneys who adopted AI tools early did so without adequate training on their limitations. Generative AI tools are not legal research databases. They do not retrieve verified case law from indexed court records the way Westlaw or LexisNexis do. They generate text — and that text can include entirely fictional citations that are formatted to look exactly like real ones.

    What Attorneys Must Do to Avoid Sanctions

    The professional standard is clear: every citation generated by an AI tool must be independently verified in a primary legal database before it is included in any court filing. This is not optional, and it is not delegable without adequate supervision.


    Attorneys who use AI tools for drafting briefs or motions should implement a citation verification step as a mandatory part of their workflow — separate from the drafting process and completed by a person with access to verified legal databases. Firms should establish written AI use policies that specify how AI-generated content must be reviewed before submission to any tribunal.
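To make that verification step concrete, here is a minimal sketch, in Python, of what the automatable half of such a workflow could look like: a script that pulls everything resembling a case citation out of a draft and turns it into a checklist for a human reviewer. The regex pattern, the citation_checklist helper, and the sample draft text are illustrative assumptions rather than a production tool; no script can tell a real case from a fabricated one, so every item it flags must still be confirmed by a person in Westlaw, LexisNexis, or the court's own records.

```python
import re

# Deliberately simplified pattern for citations shaped like
# "Party v. Party, 123 Rptr. 456 (Court Year)". Real reporter
# abbreviations vary far more than this; the pattern only finds
# candidate citations and proves nothing about whether they exist.
PARTY = r"[A-Z][\w.'&\-]*(?:,? [A-Z][\w.'&\-]*)*"
CITATION_PATTERN = re.compile(
    PARTY + r" v\. " + PARTY                          # party names
    + r",\s+\d+\s+[A-Z][A-Za-z. ]*\d*[a-z]*\s+\d+"    # volume, reporter, page
    + r"(?:\s+\([^)]*\d{4}\))?"                       # optional court and year
)

def citation_checklist(draft_text: str) -> list[str]:
    """Return every string in the draft that merely LOOKS like a citation.

    This only finds candidates; it cannot distinguish a real case from a
    fabricated one. Each item must be verified by a person in a primary
    legal database before the document is filed.
    """
    return [m.group(0).strip() for m in CITATION_PATTERN.finditer(draft_text)]

if __name__ == "__main__":
    draft = (
        "Plaintiff relies on Mata v. Avianca, Inc., 678 F. Supp. 3d 443 "
        "(S.D.N.Y. 2023), among other authorities."
    )
    for i, cite in enumerate(citation_checklist(draft), start=1):
        print(f"[ ] {i}. Verify in a primary database: {cite}")
```

Open-source citation extractors (for example, the Free Law Project's eyecite) handle far more citation formats than a hand-rolled pattern, but the design point is the same: the script only produces a to-verify list, and a person with access to a verified legal database closes the loop.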

    Beyond verification, ABA Formal Opinion 512 advises that attorneys consider whether client disclosure of AI use is required under Model Rule 1.4. Depending on the jurisdiction and the nature of the representation, clients may have a right to know that AI tools are being used in the preparation of their legal documents. Some courts have already implemented local rules requiring attorneys to disclose generative AI use in filed documents — a trend that is accelerating nationally.

    What This Means for Clients

    For consumers seeking legal representation, the AI hallucination problem has a direct practical implication: not all attorneys using AI tools are using them responsibly. A brief containing fabricated citations is not just an ethical violation — it can damage your case. If opposing counsel identifies the hallucinated citations before you do, the credibility of your entire filing is undermined. If the court identifies them, the consequences can include sanctions, adverse rulings, and reputational damage to your legal team.

When evaluating an attorney, it is reasonable to ask how they use AI tools in their practice and what verification procedures they follow. Responsible use of AI in legal work is not a red flag — it is an increasingly standard part of efficient legal practice. Irresponsible use, however, is a documented source of professional sanctions and case damage that clients ultimately bear.

Frequently Asked Questions

What is an AI hallucination in a legal context?

    An AI hallucination in a legal filing is a case citation or legal reference generated by an AI tool that does not correspond to any real court decision. The citation may appear detailed and credible but refers to a case that was never decided or filed.

    What happened in the Phoenix Suns AI citation case?

In April 2026, a federal judge sanctioned attorney Sheree Wright after finding at least 18 fabricated case citations in filings submitted in a workplace discrimination lawsuit against the Phoenix Suns. The judge ordered Wright to cover a portion of the opposing attorneys' fees and to complete AI ethics training, and forwarded the ruling to the Arizona State Bar.

    Can an attorney be disbarred for submitting AI-generated fake citations?

    As of early 2026, no attorney has been disbarred solely for submitting AI-generated fake citations, but attorneys have been suspended. Sanctions have included fines, mandatory training, fee disgorgement, and referrals to state bar disciplinary bodies. The severity of consequences depends on the jurisdiction and the judge.

    What does ABA Formal Opinion 512 say about AI use?

    ABA Formal Opinion 512, issued in July 2024, establishes that attorneys' duties of competence, candor toward tribunals, client communication, confidentiality, supervision, and fee reasonableness all apply to the use of generative AI tools. Attorneys cannot delegate professional responsibility to an AI tool and must independently verify all AI-generated content before submitting it to a court.

    Are attorneys required to disclose AI use in court filings?

    Disclosure requirements vary by jurisdiction. Some federal courts have adopted local rules requiring attorneys to disclose whether generative AI was used in preparing filed documents. ABA Formal Opinion 512 advises attorneys to evaluate whether client disclosure is required under their duty of communication, depending on the scope of AI use in the representation.

    Who is responsible if a paralegal or staff member submits AI-generated fake citations?

    The attorney of record who signs and files the document remains professionally responsible, regardless of which staff member used the AI tool. ABA Model Rules 5.1 and 5.3 require supervising attorneys to establish policies ensuring that all staff comply with professional conduct rules, including verification of AI-generated content.

    How can an attorney verify AI-generated case citations?

    Every case citation generated by an AI tool should be independently verified in a primary legal database such as Westlaw or LexisNexis before it is included in any court filing. This verification should confirm that the case exists, that it says what the AI claims it says, and that it has not been overruled or limited by subsequent decisions.

    Is using AI for legal research allowed?

    Yes. AI tools are permitted in legal practice, and many jurisdictions encourage their responsible use to improve efficiency and access to justice. The ethical obligation is not to avoid AI, but to use it competently — which means understanding its limitations, supervising its use, and verifying its output before relying on it in client matters or court submissions.

    How many AI hallucination sanction cases have occurred in U.S. courts?

    As of early 2026, researchers tracking AI-related court sanctions globally have documented more than 1,200 such cases, with approximately 800 originating from U.S. courts. Experts note this figure reflects only cases that became publicly known, and the actual number is likely higher. The rate of documented cases is still increasing.

    What should I ask my attorney about AI use?

    You can reasonably ask your attorney whether they use AI tools in your matter, what those tools are used for, and what verification procedures they follow before submitting AI-assisted work to a court. Attorneys who use AI responsibly should be able to answer these questions clearly and without hesitation.

    This content is for general informational purposes only, is not legal advice, and does not create an attorney-client relationship. Joy Coleman is licensed in Georgia and New Jersey. Readers should consult a qualified attorney licensed in their jurisdiction for advice specific to their situation.

    If you have concerns about how your attorney is handling your case, find a qualified attorney on AttorneyReview.com. Not sure where to start? Use our Get Matched feature to connect with a vetted attorney suited to your legal needs.

    Need an Attorney?

    Get matched with pre-screened attorneys in your area. Free consultation, no obligation.

    Get Matched Free
100% Free · No Obligation · Confidential

Legal information only — not legal advice. No attorney-client relationship is formed. Laws vary by jurisdiction. Deadlines are strict. Don't wait. If you have a potential case, contact an attorney immediately.
