MCP for law firms: what it is and why it changes the DMS question
MCP is the first real standard for connecting AI assistants to firm data. It changes what you should ask a legal AI vendor, and changes which vendors can honestly answer.
Every BigLaw CIO has lived through the same meeting. A vendor demos a new AI tool. The demo is good. Then someone asks the question. "How do we connect it to NetDocuments?" The vendor says they are working on it. The pilot stalls. Six months later the firm is still stitching together integrations that never quite work.
That meeting is about to change. A standard called MCP makes it possible for any compliant AI assistant to talk to any compliant firm data system, without custom integration work on either side. This piece explains what MCP is, why it matters for your DMS, what Aewita ships at mcp.aewita.com, and what your IT team should look for before calling any of this production-ready.
What MCP is
MCP stands for Model Context Protocol. Anthropic published it as an open standard in late 2024. It solves a specific problem: AI assistants are useful in proportion to the context they can see, and every vendor was building bespoke, incompatible ways to feed them that context.
The shortest analogy is USB. Before USB, every printer, scanner, and camera needed its own cable, its own driver, and its own connector on the back of the computer. USB standardized the handshake. Any device that spoke USB could talk to any computer that spoke USB. Innovation shifted up the stack, because the plumbing was solved.
MCP does the same thing for AI assistants. It is a protocol with three roles. A host is the AI application, for example Claude Desktop, an IDE, or an internal agent, and it runs an MCP client for each connection. A server is a program that exposes tools, resources, and prompts to the host. The transport carries messages between them as JSON-RPC. Any host that speaks MCP can connect to any server that speaks MCP, and the assistant gains a new capability without a custom integration.
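Concretely, every message on the wire is JSON-RPC 2.0. Here is a minimal sketch of what a host sends when it invokes a server tool; `tools/call` is the method name from the MCP spec, while the tool name and arguments are hypothetical, not from any real server:

```python
import json

# A JSON-RPC 2.0 request as used by MCP: the "tools/call" method
# invokes a named tool on the server. The tool name and arguments
# below are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_matters",
        "arguments": {"query": "indemnity clause", "limit": 5},
    },
}

# Serialized for the transport (stdio or HTTP, depending on the setup).
wire_message = json.dumps(request)
print(wire_message)
```

The server replies with a JSON-RPC response carrying the tool's result; the host never needs to know how the server produced it.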
The protocol is open, versioned, and maintained publicly. It is not an Anthropic-only standard. OpenAI, Google, and a growing list of independent tools have all shipped MCP support. This is the first piece of AI plumbing that multiple frontier labs have agreed on, which is why it matters more than the usual announcement.
Why it matters for DMS integration
Before MCP, every legal AI vendor had to build a custom NetDocuments integration, a custom iManage integration, a custom SharePoint integration, a custom Worldox integration, and so on for every firm's stack. Each integration was a contract negotiation, a credentialing exercise, a security review, and an ongoing maintenance burden. None of those integrations worked with anyone else's AI.
The practical effect: AI at most firms has been siloed. The tool the associates use for research cannot see the matter files. The tool that reviews contracts cannot see the playbook. The tool that summarizes depositions cannot see the case strategy memo. Everything lives in its own walled garden, and getting data across the walls costs months.
MCP changes the model. A DMS vendor exposes an MCP server. An AI vendor exposes an MCP client. Any compliant client can talk to any compliant server, subject to the firm's authentication and authorization. The integration becomes a configuration question instead of an engineering project.
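What "a configuration question" looks like in practice: most MCP clients are pointed at a server through a small config entry rather than any code. The sketch below builds an `mcpServers`-style entry of the kind several clients use; the exact key names vary by client, so treat this shape as illustrative and check your client's documentation:

```python
import json

# Illustrative client-side configuration pointing an MCP client at a
# remote server. The "mcpServers" key pattern is common across several
# clients, but the precise schema is client-specific.
config = {
    "mcpServers": {
        "aewita": {
            "url": "https://mcp.aewita.com",
        }
    }
}

print(json.dumps(config, indent=2))
```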
That is the promise. The reality in 2026 is that most legal AI vendors have not shipped MCP support yet. Not Harvey. Not CoCounsel. Not Westlaw AI. Not Legora. Some have announced it. None has a production MCP endpoint that a firm can point a third-party assistant at today.
What Aewita ships at mcp.aewita.com
We publish an MCP server at mcp.aewita.com. It exposes your firm's Aewita workspace (the library, the matters, the drafting history, the playbooks you have configured) to any MCP-compatible client your firm wants to use.
In practice this means you can point Claude at your Aewita workspace and ask questions that combine Claude's reasoning with Aewita's retrieval and citation stack. You can do the same with any internal agent your firm builds. You can do it with any other MCP client that complies with the spec.
The important detail: when Claude or another client calls our MCP server, the retrieval and citation verification still run inside Aewita. Your DMS content is not shipped out to a third party. The external client sees only the answer Aewita returns, which has already passed through our citation verifier. The under-0.3% hallucination guarantee we describe at /security applies to any answer that comes out, regardless of which client asked the question.
This is a deliberate design choice. MCP gives you a universal plug. It does not automatically give you a universal standard of answer quality. We made Aewita the verification layer, so the citation guarantees travel with the data.
A concrete scenario
An associate is running a matter called Kline. They want to move fast. They open Claude, which their IT team has already connected to the firm's Aewita MCP server. They type:
"Claude, summarize the last five depositions in the Kline matter and flag anything inconsistent with our standard indemnity clause."
What happens under the hood:
- Claude recognizes the request involves firm data. It calls the Aewita MCP server.
- Aewita authenticates the call against your firm's credentials, checks the associate's matter permissions, and confirms they have access to Kline.
- Aewita retrieves the deposition transcripts and the firm's standard indemnity clause from the matter and the playbook library.
- Aewita produces the summary and the inconsistency analysis. The citation verifier checks every quoted passage against the underlying transcript.
- Aewita returns the verified answer. Claude presents it to the associate, with the Aewita citations inline.
The associate gets a Claude-native interface. The firm gets Aewita-grade retrieval and verification. The DMS gets one well-audited integration point instead of twenty.
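The server-side sequence above can be pictured as plain Python. Every name and data structure here is a hypothetical stand-in for the steps described, not the Aewita codebase:

```python
# Hypothetical sketch of the request path: authenticate, scope-check,
# retrieve, verify citations, return only the verified answer.

MATTER_ACL = {"Kline": {"associate@firm.com"}}          # matter -> allowed users
TRANSCRIPTS = {"Kline": ["depo_1.txt", "depo_2.txt"]}   # matter -> documents

def handle_tool_call(user: str, matter: str, question: str) -> dict:
    # 1. Authenticate and enforce matter-level permissions on the server.
    if user not in MATTER_ACL.get(matter, set()):
        raise PermissionError(f"{user} has no access to matter {matter}")
    # 2. Retrieve the relevant documents from the workspace.
    docs = TRANSCRIPTS[matter]
    # 3. Produce the answer, then check every citation against a source.
    answer = {
        "summary": f"Answer to {question!r} over {len(docs)} documents",
        "citations": docs,
    }
    assert all(c in docs for c in answer["citations"]), "unverified citation"
    # 4. Only the verified answer travels back to the external client.
    return answer

result = handle_tool_call(
    "associate@firm.com", "Kline", "flag indemnity inconsistencies"
)
print(result["summary"])
```

The external client sees only `result`; the transcripts themselves never leave the server's environment.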
Why most legal AI will not ship MCP
MCP is easy to announce and hard to ship responsibly. The reason has to do with how most legal AI is actually built.
If your inference runs on an OpenAI or Anthropic API call, exposing an MCP server is straightforward. It also does not solve your real problem. Your real problem is that privileged client data is being sent to a third-party model provider every time the model answers a question. MCP does not fix that. It just adds another lane for the data to travel on. The privacy question is upstream of the protocol.
If your inference is self-hosted and runs inside your own infrastructure, MCP is a genuine advantage, because the data never leaves your control. That is the posture Aewita runs: self-hosted frontier reasoning model, no API calls to OpenAI or Anthropic or Google in the inference path. The model runs in our environment, under our security controls, under our audit logs. That is why we can responsibly ship an MCP endpoint.
Vendors whose entire product is an OpenAI wrapper can ship an MCP server in a week. They should not, at least not as a primary integration story, because the security story underneath does not hold up. Whether they do anyway is the question to ask.
What to look for in an MCP implementation
If you are evaluating an MCP endpoint from any legal AI vendor, four things matter.
Authentication. The MCP server should require real authentication: OAuth 2.0 or an equivalent bearer-token flow tied to your firm's identity provider, not a shared API key in a config file.
Per-matter scoping. The server should honor your existing matter-level access controls. A user who cannot see a matter in the DMS must not be able to query it through MCP. The scoping has to be server-enforced, not client-trusted.
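The difference between server-enforced and client-trusted scoping fits in a few lines. In this hypothetical sketch, the server derives the user from the authenticated session and consults its own access-control list; it deliberately ignores any access claim carried in the request body:

```python
# Hypothetical sketch: server-enforced scoping. The server consults its
# own ACL and never trusts an access claim supplied by the client.

ACL = {"matter-201": {"user-7"}, "matter-305": {"user-7", "user-9"}}

def authorize(authenticated_user: str, matter: str, request_body: dict) -> bool:
    # Anything like request_body["i_can_see_this"] is ignored on purpose:
    # a buggy or malicious client could set it to whatever it wants.
    return authenticated_user in ACL.get(matter, set())

assert authorize("user-9", "matter-305", {"i_can_see_this": True})
assert not authorize("user-9", "matter-201", {"i_can_see_this": True})
```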
Audit logging. Every tool invocation, every resource access, every response should be logged in a format your security and compliance teams can review. You want to be able to answer the question "what did the AI see and say, on this matter, on this date" in under a minute.
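A minimal shape for such an audit record, with matter, user, tool, and timestamp as first-class fields, makes the "what did the AI see and say" question a one-line filter. Field names here are illustrative, not a documented Aewita schema:

```python
import json
from datetime import datetime, timezone

# Illustrative audit record for one MCP tool invocation. The point is
# that user, matter, tool, and timestamp are all filterable fields.
def audit_record(user, matter, tool, arguments, response_digest):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "matter": matter,
        "tool": tool,
        "arguments": arguments,
        "response_sha256": response_digest,
    }

log = [
    audit_record("associate@firm.com", "Kline", "summarize_depositions",
                 {"limit": 5}, "ab12")
]

# "What did the AI see and say, on this matter, on this date?"
today = datetime.now(timezone.utc).date().isoformat()
hits = [r for r in log
        if r["matter"] == "Kline" and r["timestamp"].startswith(today)]
print(json.dumps(hits, indent=2))
```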
Data-egress controls. The server's documentation should be explicit about what data leaves your environment and where it goes. With Aewita self-hosted, the answer is that client data does not leave the Aewita deployment. With other architectures, the answer is more complicated, and you should insist on seeing it written down.
More detail on how we implement each of these lives at /security and /product/integrations.
What this means for BigLaw IT
For the first time, a firm has a serious reason to stop custom-integrating individual AI tools and start selecting DMS-plus-MCP platforms instead. The right posture is:
- Pick a DMS with a credible MCP roadmap, or a legal AI platform that already sits in front of your DMS and exposes MCP itself.
- Pick a legal AI platform with its own MCP server and a citation and accuracy story that travels through it.
- Let your attorneys use whichever MCP-compatible client fits their workflow, because the standardized handshake means you are no longer locked into one vendor's UI.
The consolidation is coming quickly. Firms that wait for their current AI vendor to announce MCP will be a year behind firms that picked an MCP-native platform in 2026.
A short comparison of how MCP support lines up across the major legal AI platforms is at /compare. If you want to see the MCP server in action against your own matters and your own DMS, that is the 14-day trial.
Plug Claude, or any MCP client, into your firm's data
14 days free. Real access to the real product.