AI in the Legal Industry: What Every Texas Lawyer Needs to Know About ChatGPT and Generative AI
Generative AI has taken the world by storm. Countless industries are using AI tools to automate certain tasks and boost productivity. But is it a good fit for those in the legal industry?
Recently, Goldman Sachs estimated that generative AI could automate 44% of legal tasks in the U.S., along with 46% of administrative tasks. While this may sound terrifying to some and exciting to others, a future of “robot lawyers” is nowhere close to becoming our reality. As with most tools, generative AI still requires the oversight and input of a human to produce accurate, usable, and reliable results.
Whether you’re curious about how generative AI and ChatGPT can benefit your legal practice or you’d like to learn what to watch out for when using generative AI, this article is for you.
What Is Generative AI?
Generative artificial intelligence is a term used to describe AI that can produce a wide range of content based on user-input data points. Generative AI can create anything from text and images to audio and video.
Generative AI’s recent buzz can be attributed to OpenAI’s ChatGPT tool, which became available to the public on November 30, 2022. ChatGPT is a highly sophisticated “chatbot” that allows users to submit a prompt or ask a question, from which ChatGPT will produce a result.
It’s easy to see why ChatGPT has become a go-to for those in the legal field. There have already been dozens of instances where lawyers, judges, and other legal professionals have turned to ChatGPT to handle a wide range of legal duties. When used appropriately, ChatGPT can create briefs, draft legal memos, and assist with legal research.
ChatGPT vs Google: What’s the Difference?
The sudden prevalence of ChatGPT can be compared to Google. Whenever we need to do research or find answers, we go to Google and type in our query. This brings up thousands of results from websites across the internet, and from there we click through the results that seem most relevant.
ChatGPT does this hard work for you. You won’t have to click around and sift through various sites to find your answer after you submit your query. Instead, ChatGPT formulates a single response in the blink of an eye, drawing on patterns learned from the enormous body of text it was trained on.
The main issue is that many users take this first result and use it as their own, and most of the problems with ChatGPT stem from this. The results are meant to provide a template or guideline; they should not be the finished product. You can even correct ChatGPT and ask it to clarify or to focus on a certain aspect of the prompt.
It hasn’t all been sunshine and rainbows, however. In far too many instances, law professionals have experienced less-than-favorable outcomes after relying too much on ChatGPT. This past June, a New York judge fined two lawyers for submitting a ChatGPT-created legal brief. Their brief was full of fake cases, made-up citations, and erroneous details.
As such, it’s necessary to exercise caution when implementing AI tools in your practice.
The Most Common Uses for AI Tools in the Legal Industry
There’s no denying that AI tools are incredibly useful. That’s why ChatGPT’s user base has grown exponentially, reaching 100 million users within two months of its public release.
AI tools like ChatGPT have already become quite popular within the legal industry, being used for:
- Legal research
- Contract and document review
- Contract drafting
- Brief analysis
- Predictive analytics
- Jury selection
- Marketing assistance
- Video creation
ChatGPT, in particular, can be used to:
- Answer questions with well-established answers
- Explain complicated concepts in a simplified way
- Offer revisions for clarity
- Write letters
- Draft legal memos
- Assist with legal research
When used properly and monitored closely, ChatGPT can offer time-saving assistance to lawyers, but it also presents real risks. Simply put, ChatGPT should be treated as a tool to help you gather your thoughts or get moving when you feel stuck.
Pitfalls to Watch Out for When Using ChatGPT
To use ChatGPT to its fullest without landing in hot water, don’t let yourself get distracted by the excitement of automating these often tedious and time-consuming tasks. Take every result from ChatGPT as a suggestion, not as the end product. Dig deep, check facts, and revise as if your law practice depended on it.
Offloading important tasks to ChatGPT without double-checking or confirming validity is dangerous. Also in June, ChatGPT’s parent company OpenAI was sued for defamation after ChatGPT falsely accused a Georgia radio host of “defrauding and embezzling funds from a non-profit organization.” This may be only the first of many cases brought against OpenAI and ChatGPT, especially since the tool draws on the work of others without asking for permission or providing credit.
But let’s see what ChatGPT has to say about this topic. We asked ChatGPT itself, “What are some potential pitfalls of using AI and ChatGPT?” This was its response:
[Unedited ChatGPT Response]
- Bias and Discrimination: AI models like ChatGPT are trained on vast amounts of data from the internet, which can contain inherent biases present in society. As a result, the AI might generate biased or discriminatory responses, reinforcing stereotypes or propagating harmful information.
- Lack of Contextual Understanding: ChatGPT lacks a deep understanding of context, and its responses are based on statistical patterns in the data it was trained on. This can lead to nonsensical or inappropriate answers, especially when dealing with complex or sensitive topics.
- Misinformation and Fake News: Since AI models like ChatGPT generate text based on patterns in the data, they may inadvertently produce false or misleading information. This can contribute to the spread of misinformation or fake news if not carefully monitored.
- Privacy and Security Concerns: Using AI models involves sharing data with cloud-based services, raising privacy and security concerns. Confidential information might be inadvertently exposed if not adequately safeguarded.
- Lack of Accountability: AI models operate as "black boxes," making it challenging to trace how specific decisions or responses are generated. This lack of transparency can create accountability issues when AI is used in critical decision-making processes.
- Unintended Consequences: The deployment of AI can have unforeseen consequences, especially when used in real-world applications. Unexpected or unintended outcomes may arise, affecting individuals or systems in unforeseen ways.
- Overreliance on AI: Depending excessively on AI for critical tasks without human oversight or validation can lead to undue trust in the system's capabilities, potentially compromising safety, and reliability.
- Legal and Ethical Challenges: AI applications in certain contexts may raise legal and ethical challenges, especially when it comes to issues of responsibility, liability, and the potential for harm.
- Limited Creativity and Originality: AI models like ChatGPT are trained on existing data, so they can lack genuine creativity and originality. Responses might be predictable or derivative, limiting their value in certain creative or artistic endeavors.
- Resource Intensive: Training and running AI models like ChatGPT require substantial computational resources and energy, contributing to environmental concerns.
[End of ChatGPT Response]
As you can see, even ChatGPT is aware of its own shortcomings.
While the dangers of ChatGPT are clear, there’s no denying that it can serve as a beneficial tool for legal professionals. Just make sure you don’t leave anything up to chance. ChatGPT may help you reach your intended goals more easily, but it is not the be-all-end-all for legal work. Your input, oversight, and expertise are still required.
Overcoming the Ethical Dilemmas Lawyers May Face When Using Generative AI
There’s no denying that ChatGPT and AI tools are here to stay. As such, legal professionals and law firms alike must be aware of the implications and best uses for generative AI in the field.
It’s also crucial to understand the ethical dilemmas AI tools present. If used carelessly, AI tools may clash with a lawyer’s duty to provide competent representation. To get ahead of any issues, establish an internal policy at your firm regarding the use of AI, and adopt policies and procedures for your own practice as well. This way, you’ll be prepared for the inevitable changes to the legal industry.
Generative AI Is the Future — Prepare Yourself Today
This is just the tip of the iceberg when it comes to generative AI. Before long, ChatGPT’s current iteration will seem like child’s play when compared to what’s to come. Make sure you’re prepared for what the future holds because the genie of generative AI won’t be going back into the bottle.
Otto Nicli is part of the State Bar's Web team and serves as the blog writer for the Texas Bar Practice website. He also plays a part in marketing and video production. In his free time, he enjoys watching Top Chef with his wife, collecting records, reading, and going to shows.