Recently, Tom Guhin, Pramata’s Chief Solutions Architect, wrote an article laying out the biggest mistakes he sees companies make as they begin to explore using generative AI and large language models for contract management and analysis. In his article, Guhin also debunks many common myths and misconceptions, and explains what strategic businesses should be thinking about and doing instead. We sat down with Tom to discuss some key takeaways, but if you’re looking for a deeper dive, read the full article now.
Q: You’ve been having the same conversation with legal tech buyers for months. What’s the question everyone’s asking?
Guhin: First and foremost, people are wondering if they should go with a legal tech vendor that’s built its own “Legal AI Model,” or one that uses the big public models like GPT and Claude. I hear this at least three times a week. It feels like a smart, strategic question—after all, wouldn’t a model built specifically for legal work be better than a general-purpose one?
Q: And what’s your answer?
Guhin: That they’re asking the wrong question entirely. It’s like evaluating cars by only looking at engine specs while ignoring whether the transmission works or the brakes function. The AI model is just one component in a much larger system. Ultimately, it doesn’t really matter which AI “engine” you’re using if the rest of the system isn’t built to support your goals.
Q: Does that mean companies should be going with purpose-built, proprietary AI models that were made just to work with contracts?
Guhin: It’s actually the opposite, and here’s why: Today’s situation is very similar to the debate over public cloud vs. private cloud ten years ago. Companies spent millions building private infrastructure, convinced they were making the safe, strategic choice. Meanwhile, their competitors moved to AWS and Azure and gained access to continuous innovation that no internal IT team could match. This is analogous to today’s debate over whether legal teams (or anyone, really) will get the best results from AI that’s built specifically for contract work or from general models that are built and backed by some of the world’s most innovative companies. The leading AI labs are investing billions of dollars annually in R&D. Any legal tech company building a proprietary model is competing against this with a fraction of the resources. The math doesn’t work.
Q: What do you say to people who try to convince you that a legal-specific large language model would perform better?
Guhin: That’s intuitive, but it misses how modern AI actually works. Today’s leading models aren’t just trained on legal data—they’re trained on essentially all human knowledge, including vast amounts of legal content. When GPT-4 gets better at reasoning, it gets better at legal reasoning. When Claude improves language understanding, it improves contract analysis. This is why it’s the smart move to work with legal tech that can plug in various existing LLMs to do the parts of the job they’re best at. With that said, it’s important to remember two things: First, Legal AI that can leverage the strengths of GPT and Claude (among other commercially available LLMs) doesn’t mean “feed your entire portfolio of contracts into ChatGPT and ask it questions.” Second, when it comes to Legal AI, the “engine” is less important than the “architecture.”
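To make that “plug in various existing LLMs” idea concrete, here’s a minimal sketch of one way a task router can send each job to the model best suited for it. The task names, model IDs, and the call_model helper are illustrative placeholders, not Pramata’s actual implementation:

```python
# Hypothetical sketch: route each contract task to the commercial
# model chosen for it. All names below are illustrative placeholders.

# Map each task type to a (hypothetical) model choice.
TASK_ROUTES = {
    "summarize_agreement": "gpt-4o",      # long-document summarization
    "extract_clauses": "claude-sonnet",   # structured clause extraction
    "classify_doc_type": "gpt-4o-mini",   # cheap, high-volume labeling
}

def call_model(model_id: str, prompt: str) -> str:
    """Stand-in for a real API client (OpenAI, Anthropic, etc.)."""
    return f"[{model_id}] response to: {prompt[:40]}..."

def run_task(task: str, contract_text: str) -> str:
    """Look up the routed model and send it a task-specific prompt."""
    model_id = TASK_ROUTES[task]
    prompt = f"Task: {task}\n\nContract:\n{contract_text}"
    return call_model(model_id, prompt)

print(run_task("extract_clauses", "Payment due net 45 days from invoice..."))
```

The point of the pattern is that when any one of those underlying models improves, the system inherits the improvement by changing a single mapping, not by retraining a proprietary model.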
Q: And what exactly does that mean, that the architecture is more important than the engine?
Guhin: I’ve seen incredible AI demos that completely fall apart when you feed them real contracts—you know, the ones with OCR artifacts, three amendments, and references to exhibits that may or may not exist in your system. The AI wasn’t the problem. The platform architecture was. Think about it this way: your contracts aren’t demo-ready. They’re scattered across systems, stored in different formats, and organized in ways that confuse AI systems. The best AI in the world can’t extract insights from garbage data. So when I talk about architecture, I’m talking about how the system handles data cleanup, organizes document relationships, manages context, and integrates with your existing workflows. A Ferrari engine means nothing if it’s connected to a broken transmission and faulty brakes.
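As a rough illustration of what that architecture work involves before any model is ever called, here’s a minimal sketch covering two of the steps Guhin names: cleaning noisy OCR text and linking amendments to their parent agreement so the AI sees one coherent context. The data structures and heuristics are hypothetical, not a real product’s pipeline:

```python
from dataclasses import dataclass

# Hypothetical sketch of pre-model "architecture" steps: clean noisy
# text, link amendments to the base agreement, assemble one context.

@dataclass
class ContractDoc:
    doc_id: str
    text: str
    amends: str | None = None  # doc_id of the agreement this amends

def clean(text: str) -> str:
    """Strip the kind of OCR noise (blank lines, stray page numbers)
    that confuses downstream models."""
    lines = [ln.strip() for ln in text.splitlines()]
    return "\n".join(ln for ln in lines if ln and not ln.isdigit())

def build_context(docs: list[ContractDoc], root_id: str) -> str:
    """Order the base agreement and its amendments into one context."""
    family = [d for d in docs if d.doc_id == root_id or d.amends == root_id]
    family.sort(key=lambda d: d.doc_id)  # base first, then amendments
    return "\n\n---\n\n".join(clean(d.text) for d in family)

docs = [
    ContractDoc("msa-001", "MASTER SERVICES AGREEMENT\n12\nTerm: 3 years"),
    ContractDoc("msa-001-a1", "AMENDMENT 1\nTerm extended to 5 years",
                amends="msa-001"),
]
print(build_context(docs, "msa-001"))
```

Only after this kind of assembly does the “engine” get involved, which is why a weaker model on well-prepared data routinely beats a stronger model on scattered, noisy documents.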
Q: What about everyone’s favorite topic: AI hallucinations? What kinds of mistakes are people making as they try to solve for that?
Guhin: The misconception is that AI reliability comes from making models more cautious. I keep hearing vendors say, “Our AI doesn’t hallucinate because it flags uncertainty instead of guessing.” That sounds responsible, but it’s actually useless in practice. You ask about payment terms and get back “I’m not confident, please review manually.” Congratulations, you’re back to doing the work yourself, which defeats the entire purpose. The real solution is to engineer away the uncertainty from the start. Clean and structure data upfront, provide precise context to AI models, implement verification layers throughout the process. When AI operates on a solid, well-organized foundation, hallucinations become rare because you’ve eliminated the conditions that cause them. It’s not about making AI more cautious—it’s about creating systems that give AI the right information to be confident and accurate.
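To illustrate what one of those verification layers might look like, here’s a minimal sketch that accepts an AI-extracted answer only if its supporting quote actually appears in the source contract. The field names and verify_extraction function are hypothetical, offered only to make the concept concrete:

```python
# Hypothetical sketch of a grounding check: before accepting an
# AI-extracted answer, confirm its quoted evidence exists in the
# source document. Names here are illustrative placeholders.

def verify_extraction(contract_text: str, answer: dict) -> dict:
    """Mark the answer verified only if its evidence quote is found
    verbatim (case-insensitively) in the source contract."""
    quote = answer.get("evidence", "")
    grounded = bool(quote) and quote.lower() in contract_text.lower()
    return {**answer, "verified": grounded}

contract = "Fees are payable net 45 days from the date of invoice."
extraction = {
    "field": "payment_terms",
    "value": "Net 45",
    "evidence": "payable net 45 days from the date of invoice",
}
print(verify_extraction(contract, extraction))
# {'field': 'payment_terms', 'value': 'Net 45', ..., 'verified': True}
```

An answer that fails a check like this gets rejected or retried rather than shown to the user, which is how a system stays both confident and accurate instead of hedging everything back to “please review manually.”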
____________
These insights barely scratch the surface of what legal teams need to know before making Contract AI decisions that will impact their organizations for years to come.
In his comprehensive analysis, Tom Guhin, Pramata’s Chief Solutions Architect, breaks down the specific components that separate successful Legal & Contract AI implementations from expensive disappointments, reveals the technical details behind reliable AI systems, and provides evaluation frameworks that help legal teams ask the right questions of potential vendors.
Whether you’re just starting to explore Contract AI or questioning decisions you’ve already made, Guhin’s deep dive offers the strategic guidance that comes from a decade of watching legal tech implementations succeed and fail.
Read the complete article here to discover why platform thinking beats model thinking—and how to avoid the costly mistakes most organizations are making right now.