OpenAI’s GPT-4 is a multimodal large language model (LLM) that has changed the game for generative AI.
It’s part of a pretty magical new wave of generative technology, with implications ranging from producing captivating images to writing entire segments of complex code.
We’ve already seen generative AI make a splash by automating time-consuming manual processes for consumers and businesses. But how will it impact legal teams?
‘Assistive tool’
Attorneys spend a lot of time writing, reviewing, editing and negotiating clauses in documents—often getting caught up in the same clauses every time.
Just think about it: for every vendor you’re checking, are the payment terms in line? How much limitation of liability will they accept? Did they sneak in a non-solicitation clause?
And that is just the beginning. Not only will generative AI provide time savings in contexts like this one, but it will be transformative across the entire industry.
We’re already seeing a few real-world applications of generative AI in other legal areas.
For example, we’re seeing models that can be trained on existing contracts and legal playbooks to generate draft clause language, insert clauses from prior contracts, produce suggested redlines during contract negotiations and summarize clause language.
At Lexion, we’ve released our own AI Contract Assist tool that already helps with many of these tasks.
The models can also be used to provide a natural language interface to entire corpora of text.
Imagine asking your CLM: “What contracts can’t be assigned without written consent?” and getting a report delivered to you.
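To make this concrete, here is a minimal sketch of how such a question-answering interface could work, using OpenAI’s Python client. The contract names and excerpts below are hypothetical, and a production CLM would retrieve the relevant clauses from a full repository (for example, via embeddings search) rather than pasting everything into one prompt:

```python
# A minimal, illustrative sketch of natural-language Q&A over contract text.
# Assumptions: the OpenAI Python client (openai>=1.0) and the "gpt-4" model;
# the contract excerpts below are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Stand-in for a contract repository; a real CLM would retrieve these.
clauses = {
    "Acme MSA": (
        "Neither party may assign this Agreement without the prior "
        "written consent of the other party."
    ),
    "Globex SOW": (
        "Either party may assign this Agreement upon written notice "
        "to the other party."
    ),
}

question = "What contracts can't be assigned without written consent?"

# Pack the excerpts into the prompt, labeled so the model can cite them.
context = "\n".join(f"[{name}] {text}" for name, text in clauses.items())

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using only the contract excerpts provided, and "
                "cite contracts by their bracketed names."
            ),
        },
        {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
    ],
)

print(response.choices[0].message.content)
```

In practice, much of the engineering effort goes into the retrieval step: finding the handful of relevant clauses among thousands of contracts before the model ever sees the question.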
These kinds of AI applications dramatically accelerate tasks that previously took weeks, freeing legal teams to provide higher-level guidance and support to the business.
Like all new technology, generative AI is not without its flaws, and buyers should understand its limitations and security concerns.
What to consider
Legal professionals weighing the use of generative AI should get familiar with the technology and evaluate how their companies could implement it in a meaningful way.
Here are some considerations before jumping in:
● Prioritize data security: As with any vendor, ensure they are properly securing your data. In particular, check that they aren’t commingling your information in training models in a way that could result in a data leak. (Just think about what happened with Samsung and ChatGPT.)
● Consider IP issues: While contract clauses don’t usually raise copyright challenges, generative AI models can output verbatim text from their training data, which could infringe on IP. This is why many software companies still aren’t letting their engineers use certain generative AI tools, even though those tools could make them much more productive.
Author: Gaurav Oberoi