Case study — Litigation

40 attorneys. Two matters. Six weeks.

A mid-market litigation boutique put Aewita on two active matters with three associates. They measured research cycle time on novel questions, citation traceability, and partner comfort with associate output. Here is what they reported.


Illustrative case study based on early-pilot patterns. Named-customer stories coming as pilots graduate.

The firm.

A commercial-litigation boutique. Roughly forty attorneys across two offices. Bread-and-butter work: complex commercial disputes, trade-secret matters, securities defense. Strong regional reputation. Partners who actually try cases.

They do not have a research department. Associates do research. Partners supervise. Westlaw is the incumbent. Talk of practical AI for legal work had been circulating around the firm for a year. Two partners had tried a general-purpose chatbot on a brief and been burned by a fabricated citation.

The problem.

Associates were spending, by partner estimate, north of 40% of billable hours on research. Partners were not reading raw case text; they were reading summaries. Two related failures kept recurring.

First, Westlaw output was over-summarized. Associates were over-relying on headnotes and Key Numbers. A few misreads slipped into internal memos. Partners caught them — but the catching itself took time.

Second, on novel questions — the ones that actually matter — associates were spending days triangulating across sources. A question about how a specific Delaware Chancery doctrine had been applied in an analogous commercial context could eat most of a week.

The managing partner wanted a tool that would answer novel questions faster, surface the actual underlying text, and never invent a citation. He had read enough about AI hallucinations to be skeptical of any vendor promising those three things together.

Pilot scope.

Six weeks. Two active matters — one securities-defense motion, one complex commercial dispute. Three mid-level associates. One supervising partner per matter. A simple set of success criteria the firm wrote up front.

  • Can associates answer novel research questions faster than on prior matters of similar shape?
  • Is every cited authority traceable to the underlying text?
  • Do partners catch any hallucinated or misstated citations?
  • Does the firm's comfort level with associate output go up, down, or sideways?

The firm self-hosted nothing. They used Aewita through the standard workspace. ABA Model Rule 1.6 confidentiality concerns were addressed by architecture — no client data leaves Aewita's controlled environment, and the model itself is self-hosted rather than routed to a third-party API. That conversation was over in one call with the firm's general counsel.

What they did differently.

Associates asked questions the way they would ask a senior associate. Not a keyword search. Actual questions. "In Delaware Chancery, how have courts treated fiduciary-duty claims against a controlling stockholder when the controller is a private-equity fund with a staggered exit?" Get an answer with the cases. Click a citation. Land on the paragraph.

The three associates kept time logs on each novel research task — start, stop, total minutes. The supervising partners logged every revision they made to associate memos, flagging anything that looked like a citation issue.

"The thing that got me was the traceability. Every citation, one click, I'm reading the paragraph. It's not a summary of the case. It's the actual language. That's the first AI tool I've used that made me want to read more of what it produced, not less."

— A partner at the firm, supervising one of the pilot matters

What they reported.

The partners told us research cycle time on novel questions was cut roughly in half across the two pilot matters. That's a range, not a single number — some questions saw far more reduction, some saw less. The associates' own logs converged on the same rough picture.

Every cited authority in Aewita's output was traceable to the underlying text. The partners did not catch a single hallucinated or misstated citation across six weeks. That is consistent with Aewita's published hallucination rate of under 0.3% (at a 95% confidence interval), and with the firm's own independent spot-checks.

Partner comfort went up. The firm described it as "we are reading the same underlying text the associate is reading" — which meant supervision became about reasoning, not about citation verification.

By the numbers — reported
  • ~50% reduction in research cycle time on novel questions (associate logs, six weeks).
  • 0 hallucinated or misstated citations caught in partner review of Aewita-generated work product.
  • 100% of cited authorities traceable to underlying primary source in one click.
  • 2 matters, 3 associates, 6 weeks — pilot scope.

What changed about the workflow.

By week four, the firm was building playbooks. Motion-to-dismiss structure in commercial disputes — the senior partner's house style, captured as a repeatable sequence. A first-year could start there instead of from a blank page. The partner still edits the draft. But the starting point is three days closer to the final.

The associates changed how they worked too. Less time in Westlaw, more time in Aewita. More questions asked, because the cost of asking another question dropped. The partners noted that the associates' memos were longer on analysis and shorter on throat-clearing recitation of the law.

What didn't change.

Partners still read every brief. Associates still verified every citation they used — Aewita surfaces the source, but the lawyer's duty to check it is not transferable. The firm did not lay off any research associates. They redeployed the hours toward more matters.

Next step.

The firm is rolling Aewita out across both offices. They are prioritizing litigation first, then their smaller corporate practice. The managing partner told us the pilot changed his mind about a question he had been asking for two years — whether AI belonged in a firm that tries cases. His framing now is that it belongs in the firm; the question is how quickly it can be adopted without skipping the supervision layer.


Run your own pilot.

Fourteen days free. No seat minimum. $99/mo or $720/yr after.