ABA Formal Opinion 512 AI Compliance: What Every Attorney Needs to Know
ABA Formal Opinion 512 does not invent new rules. It reminds lawyers that the old rules already cover generative AI — and that most legal AI products fail the Rule 1.6 confidentiality test by architecture, not by accident.
I have read ABA Formal Opinion 512 maybe thirty times since it came out. The first read is a relief — the Standing Committee on Ethics and Professional Responsibility did not invent a new rulebook for AI. It did something subtler and more durable: it reminded the profession that the Model Rules we already have — Competence, Confidentiality, Supervision, Candor, and the rest — cover generative AI just fine, and that most vendors hoping to sell AI to lawyers will fail them.
The goal of this post is practical ABA Formal Opinion 512 AI compliance guidance. I will walk through the Model Rule obligations Opinion 512 draws on, translate them into a vendor checklist, and name the architectural question most firms are not asking their AI providers. The advice applies whether you run a solo practice or a 500-attorney firm.
What ABA Formal Opinion 512 actually does
Opinion 512, issued in July 2024, is the first formal ethics opinion from the ABA Standing Committee on Ethics and Professional Responsibility that squarely addresses generative AI in legal practice. It is not a new rule. It is an interpretive opinion that maps the existing Model Rules of Professional Conduct onto generative AI workflows.
Opinion 512 reaches several Model Rules. The three that matter most to vendor selection — and to ABA Formal Opinion 512 AI compliance at the firm level — are 1.1 (Competence), 1.6 (Confidentiality of Information), and 5.3 (Responsibilities Regarding Nonlawyer Assistance). The opinion also reaches billing (Rule 1.5), candor to the tribunal (Rule 3.3), and supervisory responsibilities for partners and managers (Rule 5.1). I will focus on the three that drive architecture.
You can find the Model Rules themselves in the ABA Model Rules index. If you have not read them in the last twelve months, start there. The opinion is a gloss. The rules are the binding text.
Rule 1.1 (Competence): you are accountable for what your AI tool produces
Model Rule 1.1 requires competent representation. The 2012 amendment to Comment 8 extended that to include “the benefits and risks associated with relevant technology.” Opinion 512 applies the principle squarely: lawyers have to understand, at a working level, how their AI tool produces its output, what its known failure modes are, and whether it is accurate enough for the task at hand.
In practice, that means you need two things before you use an AI tool on client work. First, a basic architectural understanding — who built the model, where it runs, how citations are produced, how they are verified. Second, a number. Not an adjective. A measured accuracy or hallucination rate you can audit.
Aewita ran 800 consecutive queries in internal testing and observed zero hallucinated outputs — statistically, a one-sided 95% upper bound on the true rate of roughly 0.4% (the rule of three: 3/800). Ask your current vendor for their number. If they won’t give you one, that’s your answer.
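That kind of bound is worth being able to check yourself, because it is exactly the sort of number Rule 1.1 asks you to audit. A minimal sketch (standard library only; the function names are mine, not any vendor's) of the exact zero-failure bound and the rule-of-three approximation:

```python
def upper_bound_zero_failures(n: int, confidence: float = 0.95) -> float:
    """Exact one-sided upper bound on the true failure rate p after
    observing 0 failures in n independent trials.

    With 0 failures, the Clopper-Pearson bound reduces to solving
    (1 - p)^n = 1 - confidence for p.
    """
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n)


def rule_of_three(n: int) -> float:
    """Back-of-envelope approximation: 95% upper bound is about 3 / n."""
    return 3.0 / n


n = 800
print(f"exact 95% upper bound: {upper_bound_zero_failures(n):.4%}")  # about 0.37%
print(f"rule of three:         {rule_of_three(n):.4%}")              # 0.375%
```

The point for vendor diligence: zero observed failures never means a zero failure rate, and the sample size determines how small a rate the test can actually support.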
Every citation Aewita produces is independently verified against the source before it reaches you. That is a Rule 1.1 design choice, and it is the standard I want to see from every legal AI vendor in the market.
Rule 1.6 (Confidentiality): the subprocessor problem most vendors have
Rule 1.6(a) prohibits a lawyer from revealing information relating to the representation absent informed consent or a listed exception. Rule 1.6(c) requires lawyers to make reasonable efforts to prevent the unauthorized disclosure of, or unauthorized access to, client information.
Here is the architectural problem. Most legal AI products on the market — including some of the best-known ones — do not build their own AI. They wrap a third-party AI provider’s model with retrieval, a UI, and workflow templates, then sell it to lawyers. Your prompt — your document, your client’s confidential information — travels to the third-party provider’s infrastructure for processing, returns as a completion, and is rendered in the vendor’s interface. The vendor will tell you the provider has a no-training enterprise agreement and 30-day retention. That is true. It is also not the same thing as the data never leaving your vendor’s boundary.
Rule 1.6 is a reasonable-efforts standard. What counts as reasonable depends on the sensitivity of the information, the cost of additional safeguards, and the client’s instructions. Large clients — governments, regulated enterprises, financial institutions, hospital systems — increasingly publish outside-counsel guidelines that list approved subprocessors. Adding a commercial AI provider to that list triggers a review. Sometimes a denial.
Aewita’s architecture enforces confidentiality by design — your data path terminates inside the platform. We built the AI. We host the AI. Every prompt, every document, every citation check stays inside infrastructure we operate. There is no third-party AI provider to add to your DPA. That is what we mean by compliance by architecture, not by terms of service. Read more on our security page.
Rule 5.3 (Nonlawyer Assistance): you have to supervise your vendors
Model Rule 5.3 requires lawyers to make reasonable efforts to ensure that nonlawyer assistants — including outside vendors — behave consistently with the lawyer’s professional obligations. Comment 3 to Rule 5.3, added in 2012, expressly extends this to outsourced services including cloud providers and third-party software.
Opinion 512 draws the implication plainly. If your AI vendor hands client data to a third-party AI provider, you are supervising two vendors, not one. You have to read two sets of enterprise terms, track two security-posture changes, and respond to two breach notifications. That supervision is possible. It is just meaningfully harder — and in a bar-disciplinary review, harder to defend.
A single-vendor architecture reduces that supervisory surface to one. Not zero. One. That is why the Rule 5.3 analysis, done honestly, points toward vendors that built and host their own AI.
The ABA Opinion 512 vendor checklist
Here is the practical compliance checklist I walk firms through. Each item names the Model Rule obligation, translates it into plain English, and names what to ask the vendor. Use it on any AI platform under evaluation, including Aewita.
1. Who built the model, and who hosts it?
Obligation: Rule 1.6 confidentiality, Rule 5.3 supervision. Ask: does my prompt data leave your infrastructure at any point, and is the answer produced by an AI your company built, or by an AI provided by a third party? Aewita: we built and host our own frontier reasoning model. No third-party AI provider in the data path.
2. What is the published hallucination rate?
Obligation: Rule 1.1 competence. Ask: what is your measured hallucination rate, with confidence interval and sample size? Aewita: under 0.4% at 95% confidence, measured on 800 consecutive queries with zero hallucinated outputs in internal testing.
3. Is every citation independently verified?
Obligation: Rule 1.1 competence, Rule 3.3 candor. Ask: when the AI says a case stands for a proposition, is that citation resolved against the actual primary source before it reaches me? Aewita: every citation is independently verified. Unresolvable citations are flagged, not silently rendered.
4. Is the platform grounded in retrieved primary sources?
Obligation: Rule 1.1 competence, Rule 3.3 candor. Ask: does the model generate answers from its training data alone, or is every answer grounded in retrieved primary sources? Aewita: grounded in retrieved primary sources — every U.S. court opinion from 1700 to today, every federal statute, and every state statute for all 50 states and the District of Columbia.
5. What data is retained, for how long, and who can see it?
Obligation: Rule 1.6 confidentiality. Ask: where are prompts stored, for how long, and which employees have access to them? Can I enforce zero retention for a specific matter? Aewita: the full answer lives on our security page, including enterprise retention controls.
6. Who is your subprocessor for the AI inference step?
Obligation: Rule 1.6 confidentiality, Rule 5.3 supervision. Ask: name every third party that touches my prompt or my documents in the AI inference path. Aewita: none. The data path terminates inside Aewita.
7. Do you support informed-consent workflows for client-confidential inputs?
Obligation: Rule 1.6(a) informed consent. Ask: if I want to opt out of certain kinds of logging for a specific matter, or require a record of informed consent before enabling a feature, does the product support that? Aewita: yes, with matter-level controls.
8. What happens when the AI is wrong?
Obligation: Rule 1.1 competence. Ask: when the model is uncertain or a citation cannot be resolved, does the product tell me, and how visibly? Aewita: unresolved citations are flagged explicitly; confidence is surfaced at the answer level.
9. Who is accountable if a breach occurs?
Obligation: Rule 5.3 supervision. Ask: is there a single notification path, or do I have to chase two vendors in a multi-party data path? Aewita: one vendor, one boundary, one notification path.
10. Can the product be used without a procurement cycle?
Obligation: practical, not regulatory. Ask: can a partner start a trial without an enterprise MSA, or do I need committee approval to type one prompt? Aewita: 14-day free trial, credit-card checkout, no seat minimum.
If you run through those ten items with a vendor and any answer is “we will have to get back to you,” that is also an answer. See our comparison page for how Aewita maps against the best-known products in this category.
What Opinion 512 means for general-purpose AI tools
Opinion 512 does not categorically prohibit consumer and general-purpose AI chat tools for legal work. It does require lawyers to perform the Rule 1.6 analysis honestly. Consumer-grade AI chat tools route inputs through a third-party AI provider whose retention and training policies are written for the mass market, not for privileged legal practice. The provider may offer a business-tier version with stronger controls, but that tier is not, by default, what you are using when you paste a witness statement into a free web chat interface.
The honest answer to “can I use a general-purpose AI chat tool for client work” is: not unless you have done the specific confidentiality analysis for that product, that plan tier, that data classification, and that client’s outside-counsel guidelines. In most situations, a purpose-built legal AI platform with architectural confidentiality controls is the right tool.
Why firms are moving toward single-vendor, self-hosted legal AI
The trend line in our customer conversations is clear. Firms that were previously evaluating AI wrappers built on third-party providers are increasingly asking about architecture. Not because the wrappers are bad. Because the ABA Formal Opinion 512 AI compliance analysis is simpler when there is one vendor in the boundary, and the client conversations are shorter when the DPA has one subprocessor line, not two.
That is the product choice we made at Aewita. We built our own frontier reasoning model. We host it on infrastructure we operate. The retrieval layer, the inference step, and the citation verifier all run inside the same boundary. There is nothing to subprocess out. You can read more about the team and the thesis on our about page, and about the research surface specifically on our research product page.
How to use this in a firm-wide AI policy
If your firm is drafting or revising an AI use policy, Opinion 512 gives you a clean structure. Anchor each policy section to a Model Rule. Require each approved tool to have a documented answer to each of the ten vendor-checklist questions above. Revisit the list quarterly; this market changes.
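One way to make that quarterly review routine is to keep the checklist answers in a machine-readable form per tool. The sketch below is purely illustrative — the structure, names, and sample answers are mine, not a product feature of Aewita or anyone else — but it shows the shape of a policy record that flags undocumented items automatically:

```python
# Hypothetical firm-side record: each checklist question keyed to the
# Model Rule(s) it enforces. Illustrative only, not any vendor's schema.
CHECKLIST = {
    1:  ("Who built and hosts the model?",          ["1.6", "5.3"]),
    2:  ("Published hallucination rate?",           ["1.1"]),
    3:  ("Every citation independently verified?",  ["1.1", "3.3"]),
    4:  ("Grounded in retrieved primary sources?",  ["1.1", "3.3"]),
    5:  ("Data retention and access?",              ["1.6"]),
    6:  ("Subprocessors in the inference path?",    ["1.6", "5.3"]),
    7:  ("Informed-consent workflows?",             ["1.6"]),
    8:  ("Behavior when the AI is wrong?",          ["1.1"]),
    9:  ("Breach accountability and notification?", ["5.3"]),
    10: ("Usable without a procurement cycle?",     []),  # practical, not regulatory
}


def missing_answers(tool_answers: dict) -> list:
    """Return checklist item numbers that lack a documented answer."""
    return [q for q in CHECKLIST if not tool_answers.get(q, "").strip()]


# Example: an evaluation in progress, with most items still undocumented.
answers = {1: "Single vendor; self-hosted model.", 3: "Yes; unresolved citations flagged."}
print(missing_answers(answers))  # items still owed a documented answer
```

An empty list is the bar for approving a tool; anything else is an open diligence item for the next quarterly pass.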
The specific Aewita commitments — self-hosted model, every citation independently verified against primary sources, under 0.4% hallucination rate at 95% confidence, every U.S. case 1700 to today plus federal and state statutes — are written to map one-to-one to the Opinion 512 obligations. Your policy can cite our architecture as evidence for the Rule 1.6 reasonable-efforts analysis. That is what compliance by architecture, not by terms of service, means in practice.
Opinion 512 did not make legal AI harder. It made the vendors who took architecture seriously easier to pick.
See how our architecture satisfies Rule 1.6.
Read our security practices — or start a 14-day trial and evaluate the product against your firm’s AI policy.