Architectural privilege

Built on our AI. Not someone else's.

Rule 1.6 confidentiality isn't promised — it's architecturally enforced. Your data path terminates inside Aewita.

No subprocessor. None.

Every other legal AI platform — Harvey, CoCounsel, Westlaw AI, Legora — runs on a third-party LLM. When you paste a privileged memo into one of them, that memo crosses an organizational boundary into OpenAI, Anthropic, or Google. The vendor's terms of service tell you what happens next. Ours don't, because there isn't a next.

Aewita self-hosts its own frontier reasoning model. Your query hits our infrastructure. Our infrastructure answers. The work never leaves.

Never trains on client data.

Not as a preference that could be toggled. As a build constraint. The production inference stack has no write path to the training pipeline. There is no switch to flip, reversible or otherwise.

ABA Model Rules, mapped.

Rule 1.1 — Competence

Verify without leaving.

Every answer traces to its primary source. A lawyer's duty to verify is satisfied in-line.

Rule 1.6 — Confidentiality

No third party.

Client files never cross to an external model. The only DPA is with us — because there is no one else in the loop.

Rule 5.3 — Supervision

Auditable by default.

Logs show what the AI did, what it cited, and what a human approved. The responsible attorney can review every step.
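
An audit entry of the kind described above might look like the following sketch. Every field name here is illustrative, not Aewita's actual log schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-log entry. Field names are illustrative only,
# not Aewita's actual schema.
entry = {
    # when the step happened
    "timestamp": datetime(2026, 1, 15, 14, 3, 22, tzinfo=timezone.utc).isoformat(),
    "actor": "ai",                       # who acted: "ai" or "human"
    "action": "draft_answer",            # what the AI did
    "citations": [                       # what it cited (primary sources)
        {"source": "17 U.S.C. § 107", "span": "fair use factors"},
    ],
    "approved_by": "jdoe@firm.example",  # which human signed off
    "matter_id": "2026-0042",
}

print(json.dumps(entry, indent=2))
```

A supervising attorney reviewing such a trail can answer all three questions at once: the action, the citation behind it, and the human who approved it.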

Compliance roadmap.

  • SOC 2 Type II — audit in progress; report expected Q3 2026.
  • HIPAA BAA — available on the Attorney plan.
  • FIPS 140-2 validated cryptography throughout.
  • Regional isolation — U.S.-only inference for U.S. customers. No data transits outside the country.


Every other legal AI tool asks you to trust a subprocessor. Aewita doesn't have one.

Read the full security brief.

Or jump straight in — the architecture speaks for itself.