Rule 1.6 confidentiality isn't promised — it's architecturally enforced. Your data path terminates inside Aewita.
Every other legal AI platform — Harvey, CoCounsel, Westlaw AI, Legora — runs on a third-party LLM. When you paste a privileged memo into one of them, that memo crosses an organizational boundary into OpenAI, Anthropic, or Google. The vendor's terms of service tell you what happens next. Ours don't, because there isn't a next.
Aewita self-hosts its own frontier reasoning model. Your query hits our infrastructure. Our infrastructure answers. The work never leaves.
Not as a preference that could be toggled. As a build constraint. The production inference stack does not write to the training pipeline. There is no reversible switch.
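The "no write path" claim can be pictured as code structure rather than policy. Here is a minimal sketch that assumes nothing about Aewita's actual stack; every name in it (`InferenceService`, `answer`, the toy model) is hypothetical:

```python
# Hypothetical illustration only: these names are not Aewita's API.
# The point: the serving object holds no attribute, handle, or callback
# that could route a query into a training pipeline, so "don't train on
# user data" is structural, not a runtime flag someone could flip.

class InferenceService:
    """Answers queries; constructed without any training-data sink."""

    def __init__(self, model):
        self._model = model  # read-only weights; nothing else is held

    def answer(self, query: str) -> str:
        # The query is consumed and the answer returned; no copy is
        # written anywhere, because there is nowhere to write it.
        return self._model(query)


toy_model = lambda q: f"answer to: {q}"   # stand-in for the hosted model
service = InferenceService(toy_model)
print(service.answer("What does Rule 1.6 require?"))
```

Removing the capability, rather than disabling it, is what makes the guarantee auditable: a reviewer can confirm the sink is absent instead of trusting that a switch stays off.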
Every answer traces to its primary source. A lawyer's duty to verify is satisfied in-line.
Client files never cross to an external model. The only DPA is with us — because there is no one else in the loop.
Logs show what the AI did, what it cited, and what a human approved. The responsible attorney can review every step.
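A log carrying those three facts per step can be sketched as a simple record type. This is a hypothetical shape, not Aewita's schema; every field name below is illustrative:

```python
from dataclasses import dataclass, asdict

# Hypothetical audit-record shape; field names are illustrative,
# not Aewita's actual log schema.
@dataclass(frozen=True)  # frozen: an audit entry should be immutable
class AuditRecord:
    action: str       # what the AI did
    citation: str     # the primary source it relied on
    approved_by: str  # the human who signed off
    timestamp: str    # when, as an ISO-8601 string

record = AuditRecord(
    action="drafted summary of indemnification clause",
    citation="Master Services Agreement, Section 9.2",
    approved_by="responsible_attorney@example.com",
    timestamp="2025-01-15T10:30:00+00:00",
)
print(asdict(record))
```

Because the record is frozen, entries cannot be mutated after the fact; appending new records is the only operation, which is what lets the responsible attorney treat the trail as reviewable evidence.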
Every other legal AI tool asks you to trust a subprocessor. Aewita doesn't have one.
Jump straight in: the architecture speaks for itself.