# Aewita vs. Harvey AI: the subprocessor question
Two legal AI platforms. Two very different architectures. The question is not which model is smarter. It is which one lets your firm say “yes” when a client asks who actually sees the prompt.
## The architectural fork
Harvey AI and Aewita start from opposite ends of the same problem. Both want to give lawyers a research and drafting copilot. Both index case law. Both promise hallucination control. But the two platforms answer one question very differently: when a partner types a privileged prompt, where does that prompt go?
Harvey AI runs on OpenAI. The company rose to prominence through an early partnership with Allen & Overy as an OpenAI-powered research layer, and it has raised heavily from the OpenAI Startup Fund and Sequoia. Harvey’s product sits on top of OpenAI’s API, with Harvey’s own retrieval and workflow layers around it. That means every prompt your associate writes, every clause in a draft merger agreement, every note from a witness interview, is processed by a third-party model provider. Harvey has enterprise agreements with OpenAI that restrict training and retention, and those agreements are real. But the prompt still leaves Harvey’s infrastructure.
Aewita is structured differently. We self-host a frontier reasoning model on infrastructure we control. No call goes to OpenAI. None to Anthropic. None to Google. The retrieval, the inference, and the citation verifier all run inside the Aewita boundary. See our security architecture for the full diagram.
The question is not whether OpenAI is trustworthy. It is whether your client’s outside-counsel guidelines let you add OpenAI as a subprocessor in the first place.
## Why this matters for Rule 1.6
ABA Model Rule 1.6(c) requires lawyers to make reasonable efforts to prevent the unauthorized disclosure of client information. “Reasonable” is not a fixed number. It depends on the sensitivity of the information, the cost of additional safeguards, and the client’s instructions. When clients—especially regulated enterprises, governments, and financial institutions—publish outside-counsel guidelines that list approved subprocessors, those guidelines tend to be specific. Adding OpenAI as a subprocessor often triggers a formal review. Sometimes a flat no.
Aewita’s architecture is designed so there is no third-party LLM subprocessor to add. That doesn’t make Harvey wrong. It makes the two products appropriate for different client portfolios.
## Pricing transparency
Harvey does not publish pricing. Every engagement is an enterprise contract with seat minimums, annual commitments, and a procurement cycle. Firms we’ve spoken with report widely varying figures depending on modules, seats, and term, so we won’t invent a number here. What is publicly known is that Harvey’s model is AmLaw-200-first: the sales motion is built for large firms with a head of knowledge management, a security committee, and a procurement team.
Aewita is $99 per month or $720 per year. No seat minimum. No annual commit required. 14-day free trial. Cancel anytime. One attorney can adopt Aewita without a committee meeting. A 40-person firm can add 40 seats without a negotiation.
| | Aewita | Harvey AI |
|---|---|---|
| List price | $99/mo, $720/yr | Not published |
| Seat minimum | None | Yes (enterprise) |
| Annual commit | Optional | Typically required |
| Free trial | 14 days, full product | Pilot by contract |
| Procurement | Credit card | MSA + DPA |
## Coverage
Aewita indexes every U.S. case from 1665 to today, every federal statute, and every state statute. That is a deliberate choice: we don’t want a lawyer to run a 1923 chain-of-title question and hit an empty result. Seventeenth-century colonial cases still come up in land disputes, municipal boundary claims, and historic trust matters. They are rare, and that is precisely why any serious research tool has to have them.
Harvey’s public descriptions of its corpus are less specific. It covers U.S. federal and state case law with depth, and it also covers several international jurisdictions we do not. If you are a cross-border firm with heavy UK or EU mandates, Harvey’s multi-jurisdiction footprint is a real advantage. If you are a U.S.-only firm and you want a bright-line guarantee of domestic completeness, Aewita’s corpus is the stronger fit.
## Hallucination stance
This is the part most comparisons skip. Aewita publishes a measured hallucination rate: under 0.3% at a 95% confidence interval, based on a structured evaluation of 800 queries across our 22 practice areas. Every citation that our model emits runs through a verifier that resolves it against our case and statute index before the answer is rendered. Unverifiable citations are flagged; they don’t just appear in green text.
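For readers who want to audit a number like this rather than take it on faith, the standard tool is a binomial confidence interval over the evaluation set. The sketch below is ours, not Aewita’s published methodology; it uses the Wilson score interval, and the failure count passed in is a hypothetical input.

```python
import math

def wilson_interval(failures: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided ~95% Wilson score interval for a proportion.

    Generic statistics sketch: `failures` and `n` are hypothetical
    inputs, not the vendor's actual evaluation data.
    """
    p = failures / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return max(0.0, center - half), center + half

# Example: 1 flagged answer out of 800 evaluated queries (hypothetical).
lo, hi = wilson_interval(failures=1, n=800)
print(f"95% CI: {lo:.4%} to {hi:.4%}")
```

The point of asking a vendor for this computation is that the interval, not just the point estimate, is what an auditor can check against the evaluation transcript.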
Harvey has not, to our knowledge, published a comparable number. The company talks about accuracy qualitatively, and it has customer quotes from AmLaw 100 firms. Those quotes are meaningful; they are not the same thing as a measured rate. We think every legal AI vendor should publish one.
> “Accurate” is a feeling. “Under 0.3% at 95% CI across 800 queries” is a number you can audit.
## Feature parity: where each is stronger
### Where Harvey is stronger
- Brand recognition inside BigLaw. If your managing partner wants a tool that shows up on the front page of the Financial Times, Harvey has that distribution.
- Cross-border coverage. Harvey’s UK, EU, and commonwealth corpora are more mature than ours today.
- Enterprise deployment muscle. Harvey has a customer-success org built for 500-lawyer rollouts.
### Where Aewita is stronger
- Self-hosted model. Zero third-party LLM subprocessors. See /security.
- No seat minimum. A solo litigator can buy the same product as a 200-lawyer firm.
- Playbooks. Firm-specific drafting patterns that the model actually follows. See /product/playbooks.
- Research depth. Every U.S. case 1665–today; full federal and state statutory coverage. See /product/research.
- MCP. A Model Context Protocol endpoint at mcp.aewita.com lets your existing tools talk to Aewita as a first-class data source.
- Published hallucination number. Under 0.3% at 95% CI, measured.
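On the MCP point: MCP messages are JSON-RPC 2.0, so the first two requests any client sends have a well-defined shape. The payloads below follow the public MCP specification; whether mcp.aewita.com serves them over SSE or streamable HTTP, and the client name shown, are our assumptions, not documented Aewita behavior.

```python
import json

# MCP speaks JSON-RPC 2.0. The handshake starts with "initialize",
# after which the client can enumerate tools with "tools/list".
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # a published MCP protocol revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},  # hypothetical
    },
}
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

print(json.dumps(initialize, indent=2))
print(json.dumps(list_tools))
```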
## Who should pick which
We’ll be honest. If your firm is a top-25 AmLaw shop, your knowledge-management leadership is already deep in a Harvey conversation, and your clients are comfortable with OpenAI as a subprocessor, Harvey is a reasonable choice. It is a serious product backed by serious capital, and its team knows this market.
If your firm treats privilege as a first-class architectural requirement—if you have a government practice, a defense practice, a health-care practice, a deal practice with clients who won’t let their drafts touch a third-party LLM—Aewita is the better fit. Same if you are a solo or a mid-size firm that wants a real product without a six-month procurement cycle.
The question we’d encourage you to ask your vendor, whichever you choose: who else sees my prompt, and what’s their retention policy in writing? A good answer names every party. A great answer names one.
## The Rule 5.3 angle most firms miss
Rule 1.6 gets most of the attention in AI-vendor discussions. Rule 5.3 deserves more. Model Rule 5.3 requires lawyers to make reasonable efforts to ensure that nonlawyer assistants, including outside vendors, behave in a way consistent with the lawyer’s professional obligations. The comments added in 2012 expressly extend the rule to outsourced services, including cloud providers and third-party software.
What Rule 5.3 practically requires is that the lawyer understand the vendor’s conduct well enough to supervise it. When a legal AI product runs on OpenAI, the lawyer is implicitly supervising two vendors, not one. That supervision is possible; it is just meaningfully harder. You need to read two sets of enterprise terms, track two security posture updates, and respond to two breach-notification flows. Aewita’s single-vendor architecture reduces that surface to one. Not zero. One.
## Verifiability, not just accuracy
There is another architectural difference worth naming. Aewita’s citation verifier is a separate process from the language model. The model proposes an answer. The verifier resolves every citation against our statute and case index. If a citation cannot be resolved, it is not rendered as a green link; it is flagged to the lawyer explicitly. This is a patent-pending design filed alongside our retrieval system and inference pipeline filings.
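A minimal version of that verify-then-flag boundary looks like the sketch below. The regex, the toy index, and the flag format are our illustration only; Aewita’s actual verifier, corpus, and citation grammar are not public.

```python
import re

# Toy citation index standing in for a real case/statute index.
# A production verifier would resolve against a database, not a set.
KNOWN_CITATIONS = {
    "410 U.S. 113",
    "17 U.S.C. § 107",
}

# Deliberately narrow pattern for U.S. Reports and U.S. Code cites;
# a real system needs a full citation grammar and normalization.
CITATION_RE = re.compile(r"\b\d+\s+U\.S\.(?:C\.)?\s*§?\s*\d+\b")

def verify(answer: str) -> str:
    """Replace unresolvable citations with an explicit flag.

    Resolvable citations pass through unchanged; anything the index
    cannot confirm is marked rather than silently rendered.
    """
    def check(match: re.Match) -> str:
        cite = match.group(0)
        return cite if cite in KNOWN_CITATIONS else f"[UNVERIFIED: {cite}]"
    return CITATION_RE.sub(check, answer)

print(verify("See 410 U.S. 113 and 999 U.S. 999."))
# → See 410 U.S. 113 and [UNVERIFIED: 999 U.S. 999].
```

The design choice the sketch illustrates is the separation itself: the verifier runs after generation and can be audited independently of the model that produced the text.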
Harvey has a citation-checking layer too. It is less public about the boundary between the model and the verifier, and it relies partly on OpenAI for the generative step. Both approaches can produce accurate answers most of the time. The Aewita design is easier to audit: you can point to the verifier, examine its corpus, and inspect its decisions. Auditability is different from accuracy, and we think it is the more durable property.
For a side-by-side matrix across all the categories above, see our full comparison page, and our research page for the retrieval architecture in more detail.
## See the difference yourself
14 days free. Real access to the real product.