April 5, 2026

AI Ethics for California Attorneys: What Rule 1.1 Means in 2026

California Rule 1.1 now includes technological competence. What solo and small firm attorneys need to know about using AI ethically in their practice.

Cedent Team

The competence question has changed

California Rule of Professional Conduct 1.1 requires attorneys to provide competent representation. For decades, that meant knowing the law and understanding your client’s situation. In 2026, it means something more.

The ABA’s Comment 8 to Model Rule 1.1 states that competent representation includes “keeping abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”

California adopted this principle. If you’re using AI in your practice - or choosing not to - Rule 1.1 requires that you understand what you’re doing and why.

What the duty of technological competence requires

You must understand the tools you use

If you’re using an AI tool to draft a motion, calculate a deadline, or fill out a form, you need to understand:

  • Where the AI gets its information
  • How confident it is in its output
  • What it might get wrong
  • How to verify its work before it reaches the court or opposing counsel

This doesn’t mean you need to understand how the AI model works internally. It means you need to understand its capabilities, its limitations, and your review process.

You must supervise AI output

The duty of competence extends to supervision. Using AI to draft an FL-150 doesn’t relieve you of the obligation to review every field before filing. If the AI populated a field incorrectly and you signed the declaration, the ethical responsibility is yours - not the AI’s.

This is why any AI tool you use for legal work should have an approval step. Output goes to a staging area. You review it. You approve or reject. Then - and only then - does it leave your desk.
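
As a rough sketch of that pattern - the names here are hypothetical, not any particular tool’s API - the gate looks something like this: nothing leaves staging unless an attorney has explicitly approved it.

    from dataclasses import dataclass

    @dataclass
    class DraftOutput:
        """An AI-generated draft held in staging until the attorney acts on it."""
        description: str
        content: str
        status: str = "pending"   # pending -> approved or rejected

    def approve(draft: DraftOutput) -> None:
        draft.status = "approved"     # an explicit attorney decision, recorded on the draft

    def reject(draft: DraftOutput) -> None:
        draft.status = "rejected"     # rejected drafts never leave staging

    def release(draft: DraftOutput) -> str:
        """The only path out of staging; refuses anything not explicitly approved."""
        if draft.status != "approved":
            raise PermissionError("Attorney approval is required before this output is sent or filed.")
        return draft.content

    draft = DraftOutput("FL-150 income and expense declaration", "...")
    approve(draft)            # happens only after you have reviewed every field
    document = release(draft)

However a tool implements it, the test is the same: there should be no code path from AI output to a filed document or a sent email that does not pass through your approval.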

You must protect client data

AI tools that process client information must meet the same confidentiality standards as any other system in your practice. Before using any AI service with client data, you should understand:

  • Where the data is stored
  • Whether it’s used to train the AI model
  • Who has access to it
  • What happens to it when you stop using the service

Attorney-client privilege can be waived by inadequate data protection. This isn’t hypothetical - it’s a practical concern that Rule 1.6 (Confidentiality) reinforces alongside Rule 1.1.

The flip side: not using AI may also be a competence issue

Here’s the part that gets less attention. The duty of technological competence cuts both ways.

If AI tools can help you catch deadline errors, identify missing documents, or flag contradictions in your case file - and you choose not to use them - a future disciplinary inquiry might ask whether that choice was itself a failure of competence.

We’re not there yet. No California attorney has been disciplined for failing to adopt AI. But the trajectory of the duty of technological competence points in that direction, and attorneys who understand the tools available to them will be better positioned.

Practical guidelines for ethical AI use

1. Maintain human oversight on every output

Never allow an AI tool to send an email, file a document, or modify case data without your explicit review and approval. This isn’t just good practice - it’s what Rule 1.1 requires.

2. Verify the source of every AI claim

If your AI tool tells you a hearing deadline is June 22, verify that the calculation is correct for your county. If it fills an FL-150 field with an income figure, confirm it matches your client’s actual documentation.
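
One way to make that check routine is to recompute the deadline independently rather than eyeballing the tool’s answer. The sketch below is purely illustrative: the 16-court-day figure and the holiday dates are placeholders, and the correct notice period and court calendar depend on your county, the motion type, and the method of service.

    from datetime import date, timedelta

    # Placeholder holiday calendar - substitute your county court's actual holidays.
    COURT_HOLIDAYS = {date(2026, 5, 25), date(2026, 9, 7)}

    def is_court_day(d: date) -> bool:
        return d.weekday() < 5 and d not in COURT_HOLIDAYS

    def service_deadline(hearing: date, court_days: int = 16) -> date:
        """Count backward from the hearing date by the required number of court days."""
        current, counted = hearing, 0
        while counted < court_days:
            current -= timedelta(days=1)
            if is_court_day(current):
                counted += 1
        return current

    # Compare this against the date the AI gave you before relying on either one.
    print(service_deadline(date(2026, 7, 15)))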

The tools that help you do this most easily are the ones that show you where each piece of information came from - with a link to the source document and a confidence indicator.
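
Under the hood, that transparency amounts to a small provenance record attached to each populated value. A hypothetical shape, purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class PopulatedField:
        """One value the AI placed on a form, with enough context to verify it."""
        field_name: str        # e.g. "Gross monthly income"
        value: str             # what the AI entered
        source_document: str   # the document the value was drawn from
        source_page: int       # where in that document to look
        confidence: float      # the tool's own confidence, used to triage review

    entry = PopulatedField(
        field_name="Gross monthly income",
        value="$6,250",
        source_document="client-paystub-2026-03.pdf",
        source_page=1,
        confidence=0.92,
    )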

3. Understand the AI’s limitations

Every AI tool has boundaries. Some handle deadline calculation well but struggle with nuanced legal research. Some are good at form population but poor at drafting legal argument. Know what your tool does well and what it doesn’t.

4. Document your review process

If you’re using AI in your practice, document how you review its output. What do you check? How do you verify? This creates a record of competent supervision that protects you if questions arise later.
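
The documentation doesn’t need to be elaborate. A simple append-only log, one entry per reviewed output, is enough to show what you checked and what you decided. A minimal sketch with hypothetical field names:

    import json
    from datetime import datetime, timezone

    def log_review(output_id: str, checks: list[str], decision: str, reviewer: str,
                   path: str = "ai-review-log.jsonl") -> None:
        """Append one record per reviewed AI output: what was checked, by whom, and the outcome."""
        record = {
            "output_id": output_id,
            "reviewed_at": datetime.now(timezone.utc).isoformat(),
            "reviewer": reviewer,
            "checks": checks,
            "decision": decision,   # "approved", "corrected", or "rejected"
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_review(
        output_id="fl-150-draft-2026-04-12",
        checks=["income figures match paystubs", "deadline recomputed", "current form revision used"],
        decision="approved",
        reviewer="attorney initials",
    )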

5. Keep client data within secure systems

Use AI tools that are transparent about data handling. Avoid pasting client information into general-purpose AI chatbots that may use your input for training. Choose tools designed for legal work with appropriate confidentiality protections.

The standard is reasonable

Rule 1.1 doesn’t require you to become an AI expert. It requires you to understand the technology you use, supervise its output, and make informed decisions about when AI helps your clients and when it doesn’t.

For solo and small firm attorneys, the practical question is: does this tool make me more reliable, more thorough, and more responsive - while keeping me in control of every decision that matters?

If the answer is yes, and you’ve done your diligence on how it works and how it protects client data, you’re meeting the standard. If the answer is no, or you’re not sure, that’s worth investigating before the next time you need to calculate a deadline or fill out a form under pressure.