The Power of Human-assisted AI in LegalTech

When considering legal technology, it’s important to understand what AI is good at and what it isn’t. Find out where humans (with law degrees!) fit in to maximize the success of your contract management solution.

Love it or hate it, artificial intelligence (AI) is here to stay. Scientists and entrepreneurs alike have long hoped AI technology would be a panacea for a wide range of human problems. With artificial intelligence, the thought was, we could automate every mundane task, remove human biases from decision making, and boost corporate profits by serving people ads for the exact things they’re ready to buy but didn’t even know they wanted.

In the years since AI first came into mainstream use, these prophecies haven’t exactly proven to be true. In fact, numerous high-profile AI projects have been scrapped precisely because AI couldn’t deliver on these overstated promises. 

Unfortunately, legal departments are among those who were sold a bill of goods about the miracles AI would perform. Many have bought into “the magic AI button,” and now we’re experiencing widespread frustration with legal tech because AI isn’t living up to the hype.

But the idea that we can replace people (particularly attorneys) with machines is a fundamentally flawed premise. In reality, AI is an extremely useful tool for all kinds of purposes. The key is knowing what AI’s good at and what it’s not, and then using it to supplement human intelligence – not replace it.

This is how Pramata works, so we wanted to dive more deeply into why this is the case and how we’ve tapped into a successful mix of human and artificial intelligence to bring attorneys and general counsel the solution they’ve been looking for.

What is artificial intelligence good at? 

AI may never become “more intelligent” than humans across all contexts, but there’s no question it’s already better and faster than us at some very specific processes. Here’s what AI is undeniably good at, and where we can leverage it to take the burden of tedious calculations and tasks off of humans.

AI is great at recognizing patterns 

This is actually a double-edged sword, because AI is definitely useful for recognizing patterns – particularly across data sets so large it would take teams of humans years to even review them. But we should acknowledge that because AI is so focused on rules and pattern recognition, it can also be easily fooled by something that matches the patterns it knows but isn’t what it appears to be.

As an overly simplistic example, an AI could be trained to classify anything with four legs and black and white stripes as a zebra. It might be 100 percent accurate at identifying zebras when asked to review 100,000 images of horses, donkeys, and zebras. And it could probably pull out the zebra images in a matter of seconds: certainly something no human could do. 

But what if some other images are thrown into the mix? It’s likely the AI would call “zebra” on a dog in a zebra costume or a person in a striped prison uniform on their hands and knees. No human would make that mistake, but AI does that kind of thing all the time. 

In some ways, the problem may be that AI is too good at pattern recognition, to the point that it trusts the patterns it has learned or been programmed to know over what we’d consider logic and common sense.

AI excels at following rules 

Artificial intelligence likes structure and rules. So, for tasks that require classifying large amounts of data based on hard and fast rules, AI will beat humans hands-down. If you give AI a million data points and rules for how to sort them, it’ll get the job done. This is known as rules-based AI, and it’s an entirely different beast from machine learning, in which a computer develops its own rules from the data and outcomes it sees, without explicit human instruction.
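
To make the distinction concrete, here’s a minimal, hypothetical sketch of rules-based sorting (the rules, thresholds, and field names are invented for illustration). Every decision traces back to a rule a human wrote, which is exactly what separates this approach from machine learning, where the rules are inferred rather than spelled out:

```python
# Hypothetical rules-based sorter: every classification comes from an
# explicit, human-written rule, not from patterns learned from data.

RULES = [
    # (field to inspect, condition, label to assign) -- first match wins
    ("amount", lambda v: v >= 1_000_000, "high-value"),
    ("amount", lambda v: v >= 100_000, "mid-value"),
    ("amount", lambda v: v >= 0, "low-value"),
]

def classify(record: dict) -> str:
    """Apply the rules in order; return the label of the first rule that matches."""
    for field, condition, label in RULES:
        if field in record and condition(record[field]):
            return label
    return "unclassified"  # nothing matched, so a human should take a look

# The sorter never tires and never deviates from its rules -- but it also
# never invents a rule it wasn't given.
contracts = [{"amount": 2_500_000}, {"amount": 45_000}, {"currency": "EUR"}]
print([classify(c) for c in contracts])  # ['high-value', 'low-value', 'unclassified']
```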

AI never tires of doing the same task over and over 

Unlike humans, AI doesn’t get bored, tired, or burned out. This makes it an excellent tool for doing the tedious parts of work, especially those based on repeatable processes and rules. Often, we can use AI to do some legwork before passing the results off to humans who then have a solid starting point on which to perform their analyses. 

AI can provide instant, on-demand engagement 

As much as we all groan when we think about our experience with chatbots, the AI technology behind them is getting better, and does serve a purpose. In many cases, AI can provide accurate answers to commonly asked questions without needing a human touch. Say you log into your internet provider’s website and you’re just looking for some quick answers to questions like: 

  • When’s my next bill due? 
  • Are there any promos available? 
  • How can I move my service to a new location?

Chances are, a chatbot can help and you won’t even miss talking with a live human. Chatbots are also fantastic at starting the conversation and routing you to a real person who can solve your problem, but no longer has to collect all the preliminary information about your account and your issue to get started. In this way, chatbots can take pressure off customer service teams and reduce customer wait times, without sacrificing the quality of service we receive. 

What is artificial intelligence bad at? 

If you own an Amazon Alexa device or ever ask Siri a question, you know the frustration of AI’s limitations. Even seemingly simple questions that you could look up online in two seconds can throw these virtual assistants for a loop.

“Hey Siri, what’s the price of gold today?” 

“I can’t get information about this commodity. Sorry about that.” 

Seriously, give it a try and see Siri stumped by the price of gold. Aside from being unable to look up some basic information, AI is weak in many of the areas that come most naturally to humans. These include what we refer to as “common sense,” applying learnings from one area to another, using context to make judgments, and inferring non-literal meaning.

One area where AI can fail in the world of legal contracts is when it’s been taught to recognize words and phrases, yet those words and phrases can mean very different things based on context. For example, an AI is taught to comb through contracts for the words “term of one year” and, if it sees that phrase, it’s told to extract the information that this contract has a one-year term.

That sounds like a convenient way for attorneys to search through thousands of contracts based on their term length. And it works fine until the AI comes across a contract that has a slightly different meaning, even with the same literal words. A contract might include the phrase “if the agreement has a term of one year or longer…” which doesn’t mean the contract term is one year, but the conditions are met for the AI to think it does. Without humans involved, this contract would be categorized as having a one-year term, and no one would know any different until a problem arose.

This is where a human-AI partnership can, and does, make a huge difference. For example, good AI won’t just categorize the line incorrectly; it will recognize the clause as an inexact match to its rule and flag it for human review.
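
To make this concrete, here is a purely illustrative sketch (with invented pattern names and clause text, not Pramata’s actual implementation) of how a naive phrase matcher misreads the clause above, and how an extraction step can instead flag the inexact match for human review:

```python
import re

# Illustrative sketch: a naive matcher treats every occurrence of
# "term of one year" as a one-year term, while a slightly more careful
# extractor notices context it can't interpret and flags the clause for
# human review instead of guessing.

NAIVE_PATTERN = re.compile(r"term of one year", re.IGNORECASE)

# Context that changes the meaning, e.g. a conditional ("if the agreement has
# a term of one year ...") or a range ("term of one year or longer").
AMBIGUOUS_CONTEXT = re.compile(
    r"(\bif\b.*term of one year|term of one year or (longer|more))",
    re.IGNORECASE,
)

def extract_term(clause: str) -> dict:
    if not NAIVE_PATTERN.search(clause):
        return {"term": None, "needs_review": False}
    if AMBIGUOUS_CONTEXT.search(clause):
        # The literal words matched, but the surrounding context doesn't fit
        # the rule -- route this one to a person.
        return {"term": None, "needs_review": True}
    return {"term": "1 year", "needs_review": False}

print(extract_term("This Agreement shall have a term of one year."))
# {'term': '1 year', 'needs_review': False}
print(extract_term("If the agreement has a term of one year or longer, notice is required."))
# {'term': None, 'needs_review': True}
```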

With most contract lifecycle management solutions on the market today, this process of reviewing the AI’s work, correcting it, and teaching it new rules falls onto an internal person or team. You know, the people at your company who already have their day-to-day jobs, aren’t well-versed in AI technology, and don’t particularly want to be doing this work.

In other cases, companies end up hiring an expensive and slow third party to help out, but this is usually a one-time project, not ongoing support as you execute contracts day to day.

Better Together: AI + Humans in LegalTech

Contracts, regulations, case law: all of these are incomplete records of their subject matter. A lawyer’s talent is the ability to read the plain text of a legal document and readily know what information they need to gather to supplement the existing material and arrive at the correct conclusion.

Unlike a lawyer, AI lacks the ability to interpret legal concerns, even when it has access to all the necessary information. Even more than that, AI doesn’t have the ability to know what it doesn’t know the way humans do. Attorneys constantly recognize gaps in their understanding and know where to go to find the answers. AI doesn’t have self-awareness and thus can’t decide to reference outside resources that are required to correctly interpret legal questions. 

However, there’s a limit to how much and how quickly any individual attorney can do. That’s where combining human and artificial intelligence gives us the best legal tech solution.

Recognizing patterns and performing repetitive tasks are completely different beasts from performing legal analysis. We believe humans (specifically, humans with law degrees) are still the best ones for the latter. While AI has its place in getting you there more efficiently, even experts at Google agree that keeping humans in the loop is mission critical:

“Humans in the loop is a core competency for an AI provider. They will be needed for the foreseeable future.” -Vinod Valloppillil, Head of Product, Google Cloud Language & Vision AI. 

Here’s how we combine AI with real humans to give Pramata users a true advantage:

Our AI flags new concepts for human review. 

Once a real person has reviewed and resolved a flagged concept, we program it back into the AI so it’s trained to recognize that concept next time. Our out-of-the-box AI has been trained on millions of contracts over more than 15 years.
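
As a purely conceptual illustration (invented names, not Pramata’s actual system), the feedback loop described above might look something like this: anything the AI can’t match confidently goes into a review queue, and once a reviewer resolves it, the decision is folded back into what the system knows so the same concept is handled automatically next time.

```python
# Conceptual sketch of a human-in-the-loop feedback cycle: unknown concepts
# are queued for review, and the reviewer's decision is remembered so future
# matches are automatic.

known_concepts = {"term of one year": "1-year term"}
review_queue = []

def process(phrase: str):
    """Return a label if the concept is known; otherwise flag it for review."""
    if phrase in known_concepts:
        return known_concepts[phrase]      # handled automatically
    if phrase not in review_queue:
        review_queue.append(phrase)        # flagged for a human reviewer
    return None

def apply_human_decision(phrase: str, label: str):
    """A reviewer resolves the flagged phrase; the system remembers it."""
    known_concepts[phrase] = label
    if phrase in review_queue:
        review_queue.remove(phrase)

process("initial term of twelve (12) months")                      # unknown -> queued
apply_human_decision("initial term of twelve (12) months", "1-year term")
print(process("initial term of twelve (12) months"))               # '1-year term'
```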

Our technology digitizes your contracts and performs rich text searches for key terms.

Then our Expert Assist team reviews each contract to tag it for concepts that exist in context but not in the literal phrasing the AI picks up on.

Our Expert Assist team is made up of actual experts.

We’re talking about lawyers, finance professionals, and data scientists who do the heavy lifting of getting clean data into the system where AI alone falls short. Typical problem areas for AI include amended terms, third-party paper, and poor-quality documents, so we’ve got people to handle any exceptions and data extraction that’s needed.

Although there’s no “magic AI button,” Pramata can make the experience feel like there is because our team does the work of checking the AI’s accuracy and course correcting as needed, so you don’t have to. 

This means when your contracts are managed in Pramata, you never have to spend time teaching the AI or reviewing the output. There’s no more stressing over accuracy (our output has a proven 99%+ accuracy rate). Instead, you can spend your time quickly getting to exactly what you need, sharing reports with your team, and being the all-around superhero your company needs on its legal team.

Give the human-AI partnership a chance 

We know a lot of attorneys who’ve implemented AI-based CLMs are having trouble with accuracy. You buy a contract lifecycle management solution to lighten your load, not to give you more places to fact-check!

Pramata’s different. The human-AI partnership has always been part of our DNA, which is why it comes integrated into our affordable solution for every customer. See the advantages of our unique human-AI partnership for yourself by scheduling a demo today.
