John Roach, Esq. | February 2, 2026 | Attorney Tips
AI for Personal Injury Lawyers: Harness the Power
Artificial intelligence has changed how I run my solo personal injury practice — not by replacing the legal judgment that wins cases, but by compressing the time it takes to do the work that supports that judgment. Research that once took hours, deposition summaries that used to require a paralegal, demand letter drafts that started from a blank page — these tasks look different now. This post covers how I use AI tools in my practice at the Law Office of John J. Roach, the prompting techniques that produce useful outputs, the ethical guardrails every attorney must maintain, and the practical limits that every attorney who uses these tools honestly will acknowledge.
I handle catastrophic car accident cases, traumatic brain injury cases, and complex UIM arbitrations as a solo practitioner. I am not a large firm with an army of associates and paralegals. AI tools — used carefully and verified thoroughly — help me compete at a level that the complexity of my cases demands. If you handle similar cases and are considering referrals, visit my attorney referrals page.

What AI Actually Does Well in a Personal Injury Practice
The most useful applications of AI in my practice fall into four categories: summarization, drafting, research, and strategy brainstorming. Each has meaningful limits that an honest practitioner should understand before deploying these tools on client matters.
Summarization. AI tools can summarize long depositions, voluminous medical records, and multi-volume discovery productions accurately and quickly. In a serious TBI case with two years of treating records from multiple providers, the ability to generate a chronological summary of treatment, symptoms, and functional limitations in minutes — rather than hours — is genuinely valuable. I use these summaries as starting points, not final work product. I verify every significant fact against the source document before it appears in a brief or mediation statement.
Drafting. AI produces useful first drafts of demand letters, discovery responses, deposition outlines, and jury instructions when given specific, detailed prompts. “First draft” is the operative term. AI-generated legal documents require substantive attorney review, factual verification, and significant editing before they are ready to serve a client. The value is in eliminating the blank page — not in eliminating the attorney’s role in the document.
Legal research. AI legal research tools can identify relevant case law and statutory authority quickly. However, AI hallucination — the generation of plausible-sounding but fictitious citations — is a documented and ongoing problem that has resulted in court sanctions against attorneys who filed briefs with AI-generated fake citations. Every case citation produced by an AI tool must be verified in Westlaw, Lexis, or the official reporter before it appears in any filing. No exceptions.
Strategy brainstorming. This is where I find AI most useful as a thinking partner. Prompting an AI model to play devil’s advocate — to argue the defense’s strongest case theory against my facts — surfaces weaknesses in my own analysis that I might otherwise miss. Prompting it to suggest jury instruction arguments, damages presentation approaches, or deposition cross-examination outlines gives me starting material that I then evaluate and refine with actual legal judgment.
Prompting Techniques That Produce Better Outputs
The quality of AI output is almost entirely determined by the quality of the prompt. Vague inputs produce vague, generic outputs. Specific, structured prompts with clear context and defined objectives produce outputs that are actually useful. Here is the framework I use:
Assign a role. Begin by telling the AI what expertise to apply. “You are an experienced California personal injury trial attorney specializing in traumatic brain injury cases” produces more targeted output than a generic prompt with no role assignment.
Define the objective precisely. “Draft a demand letter” is too vague. “Draft a demand letter to CSAA summarizing a UIM claim involving a client who suffered a mild TBI with post-concussive syndrome, two years of treatment, and a neuropsychological diagnosis of cognitive impairment. The letter should be 1,000–1,500 words, organized chronologically, and emphasize the permanence of the cognitive deficits and their impact on daily functioning” gives the model what it needs to produce useful output.
Provide anonymized case context. Strip all identifying information before inputting case facts — names, dates of birth, addresses. Substitute placeholders. Then give the model the relevant facts: mechanism of injury, diagnosis, treatment course, functional limitations, expert opinions. The more specific the factual input, the more tailored the output.
Use iterative loops. Treat AI drafting as a multi-step process. Step one: research and summarize the relevant legal standard. Step two: draft an outline. Step three: draft the argument using the outline. Step four: identify the three strongest counterarguments and suggest responses. Each step builds on the last and produces better work product than a single “draft this brief” prompt.
Include an escape clause. Add language such as: “If you need additional information to complete this task accurately, ask me before proceeding.” This prevents the model from filling gaps with fabricated facts or unsupported assumptions.
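The framework above — role, objective, anonymized facts, length constraints, escape clause — can be sketched as a simple prompt-assembly helper. This is an illustrative sketch only: the function name and structure are my own invention, not any tool's API, and the facts shown are placeholders.

```python
def build_prompt(role, objective, case_facts, word_range=None):
    """Assemble a structured prompt: role assignment, precise objective,
    anonymized case facts, and an escape clause that tells the model to
    ask before filling gaps with assumptions."""
    sections = [f"Role: {role}", f"Objective: {objective}"]
    if word_range:
        sections.append(f"Length: {word_range[0]}-{word_range[1]} words")
    sections.append("Case facts (anonymized):\n" +
                    "\n".join(f"- {fact}" for fact in case_facts))
    sections.append("If you need additional information to complete this "
                    "task accurately, ask me before proceeding.")
    return "\n\n".join(sections)

prompt = build_prompt(
    role="experienced California personal injury trial attorney "
         "specializing in traumatic brain injury cases",
    objective="Draft a demand letter to [INSURER] summarizing a UIM claim",
    case_facts=[
        "[CLIENT] suffered a mild TBI with post-concussive syndrome",
        "Two years of treatment across multiple providers",
    ],
    word_range=(1000, 1500),
)
print(prompt)
```

For the iterative-loop technique, the same helper can be called once per step — research, outline, draft, counterarguments — feeding each output into the next prompt's case facts.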
Spotting AI-Generated Content in Opposing Filings
As AI use becomes more common in legal practice, identifying AI-generated content in opposing filings is a useful skill. Common markers include: vague generalities that avoid case-specific facts; overuse of transitional phrases like “furthermore,” “moreover,” and “it is important to note”; repetitive list structures in groups of three; symbolic or grandiose language not anchored in specific citations or facts; and an overall tone of confident authority unsupported by verifiable sources.
When I identify these patterns in an opposing brief or expert report, it signals a potential lack of careful attorney review. Always verify citations in opposing filings that show these markers. AI hallucination occasionally makes it into filed documents — and when it does, it creates significant credibility problems for opposing counsel.
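The transitional-phrase marker is the easiest to check mechanically. The sketch below counts occurrences of a few such phrases; the phrase list and the idea of a raw count are illustrative choices of mine, not a validated detector, and a high count only flags a filing for closer human review.

```python
# Phrases that, in volume, can suggest machine-generated prose.
# This list is illustrative, not exhaustive or authoritative.
MARKERS = ["furthermore", "moreover", "it is important to note"]

def marker_count(text):
    """Count case-insensitive occurrences of common AI transitional
    phrases in a block of text."""
    lower = text.lower()
    return sum(lower.count(marker) for marker in MARKERS)

brief = ("Furthermore, the evidence is clear. Moreover, it is important "
         "to note that, furthermore, liability is undisputed.")
print(marker_count(brief))  # prints 4
```

A rough heuristic like this never substitutes for reading the brief and verifying its citations — it only tells you where to look first.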
Ethical Obligations When Using AI in Legal Practice
The California State Bar and bar associations across the country have issued guidance on attorney use of AI tools. The core professional obligations are unchanged, but their application to AI requires specific attention.
Competence — Rule 1.1. This requires attorneys to maintain the legal knowledge, skill, and thoroughness necessary for competent representation. In the AI context, this means understanding the capabilities and limitations of the tools you use — including the risk of hallucination, the absence of real-time legal updates in many tools, and the need to verify all AI-generated content before use in client matters.
Confidentiality — Rule 1.6. Before inputting any case information into an AI tool, review the tool’s terms of service and data retention policies. Many general-purpose AI tools retain and use input data for model training. Anonymize all identifying information before inputting case facts — substitute placeholders for client names, birthdates, addresses, and other identifying details. Never input raw, unredacted client information into any tool whose data handling practices have not been thoroughly reviewed.
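The placeholder-substitution step can be partially scripted. The sketch below is a minimal toy, assuming a known client name and simple numeric date patterns; the function name, regex, and sample text are all hypothetical, and a real matter needs a vetted redaction workflow with human review, not this.

```python
import re

def redact(text, client_name):
    """Substitute placeholders for a known client name and for
    numeric dates (e.g. 03/15/1978) before text leaves the office.
    A toy sketch: it misses nicknames, addresses, spelled-out dates,
    and every other identifier a real redaction pass must catch."""
    text = text.replace(client_name, "[CLIENT]")
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)
    return text

# "Jane Doe" is a fictitious example, not a real client.
note = "Jane Doe, DOB 03/15/1978, treated at Mercy from 04/02/2024."
redacted = redact(note, "Jane Doe")
print(redacted)
# → "[CLIENT], DOB [DATE], treated at Mercy from [DATE]."
```

Even with scripting, an attorney should read the anonymized text before it is submitted to any tool — automation reduces, but does not eliminate, the confidentiality risk.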
Candor to the tribunal — Rule 3.3. This directly prohibits submitting AI-generated citations that have not been verified. The attorney, not the AI, is responsible for the accuracy of every citation and factual statement in every filing. An increasing number of courts have standing orders requiring disclosure of AI use in filings — check local rules before submitting AI-assisted work product.
Supervision. If you use AI tools in a firm setting with associates or staff, you are responsible for supervising their use. Establish clear office policies on when AI tools may be used, what must be verified, and how outputs must be reviewed before use in client matters.
The Honest Limits of AI in Personal Injury Practice
AI does not know your client. It does not know the mediator’s tendencies, the judge’s preferences, or the defense attorney’s patterns across a decade of cases in the same courthouse. It cannot evaluate witness credibility, read a deposition transcript for tone and hesitation, or make the judgment call about when to push in cross-examination and when to sit down. These things — the core of trial lawyering — are not AI tasks.
The risk of overreliance is real. Attorneys who delegate too much to AI tools without maintaining their own substantive engagement with the case produce worse work — not better. The goal is to use AI to free up attorney time for the judgment-intensive work that only attorneys can do: case strategy, witness preparation, negotiation, and trial. Used that way, it is genuinely valuable. Used as a substitute for attorney judgment, it is a liability.

If you have questions about how I handle complex personal injury litigation or want to discuss a potential referral, visit my attorney referrals page or call me at (415) 851-4557. I handle every significant case personally and work on contingency.
Frequently Asked Questions: AI in Personal Injury Law Practice
Can attorneys ethically use AI tools in legal practice?
Yes, with appropriate safeguards. The California State Bar has issued guidance confirming that AI tools may be used in legal practice, subject to the existing duties of competence, confidentiality, and candor to the tribunal under the California Rules of Professional Conduct. Attorneys must understand the limitations of the tools they use, verify all AI-generated content before use in client matters, protect client confidential information, and comply with any court-specific disclosure requirements regarding AI use in filings.
What is AI hallucination, and why does it matter?
AI hallucination refers to the generation by AI tools of plausible-sounding but fabricated information — including case citations, statutory references, and factual claims that do not exist or are materially inaccurate. Several attorneys have faced court sanctions for filing briefs containing AI-generated citations that did not exist. Every case citation produced by an AI tool must be verified in an authoritative legal database before it appears in any court filing, brief, or demand letter.
How should attorneys protect client confidentiality when using AI tools?
Before inputting any case information into an AI tool, review the tool’s terms of service and data retention policies — many general-purpose AI tools retain input data for model training. Anonymize all identifying information by substituting placeholders for client names, birthdates, and addresses. Never input raw, unredacted client information into any AI tool whose data handling practices have not been thoroughly reviewed and confirmed to comply with attorney confidentiality obligations.
What are the most useful applications of AI in a personal injury practice?
The most useful AI applications in personal injury practice are: summarization of long depositions and medical records; first-draft generation of demand letters, discovery responses, and deposition outlines; legal research as a starting point (with mandatory verification of all citations); and strategy brainstorming — including prompting the AI to argue the defense’s strongest case theory to identify weaknesses in your own analysis. All AI outputs require substantive attorney review and verification before use in client matters.
Do courts require disclosure of AI use in filings?
Yes — an increasing number of federal and state courts have issued standing orders or local rules requiring attorneys to disclose whether AI tools were used in preparing filings and to certify that all AI-generated content has been reviewed for accuracy. Check the local rules and any standing orders of the specific court before submitting AI-assisted work product. Failure to comply with disclosure requirements where they exist can result in sanctions.
Disclaimer: This blog post is for informational purposes only and does not constitute legal advice. Consult a licensed attorney for advice specific to your situation.