(This article appeared in Journal 28,3, Fall 2023)

“We have only bits and pieces of information, but what we know for certain is that at some point in the early 21st Century all of mankind was united in celebration. We marveled at our own magnificence as we gave birth...to AI.”1  

These lines, spoken by the heroic Captain Morpheus in the 1999 movie The Matrix, tell the story of [spoiler alert, though arguably no such warning is needed for a 24-year-old movie because, well, if you haven’t seen it by now you probably aren’t planning to] the war between humans and machines driven by artificial intelligence. The Matrix itself turns out to be a computer program, created by the machines, that presents a fictitious reality to humans as a means of control. Morpheus utters this dreary description of mankind’s history while explaining to the newly freed Neo that the world he once knew is gone, and that he must now embrace the reality that artificial intelligence changed everything about human existence.

Fast-forward to today, what is now appropriately described as the early part of the 21st Century. On March 14, 2023, the fourth version of ChatGPT was released for public use, and with it came a barrage of comments, news reports, and articles spawning reactions ranging from extreme excitement to absolute dread. For the uninitiated, ChatGPT is a form of generative artificial intelligence (AI). (Think of ChatGPT as the Bitcoin of AI projects, i.e., it’s the most popular one out there, but hardly the only one.) “Generative artificial intelligence” is different from “semantic artificial intelligence,” which most readers will recognize they have employed quite often in their own lives and practices. Very broadly, semantic AI programs sort through and organize existing data sets or libraries of information. For example, when a lawyer uses an online legal research tool to find a relevant case on a particular topic or point of law, the lawyer is using a semantic AI program to sort through thousands (millions?) of cases and rank the results from most relevant to least relevant. Generative AI, on the other hand, goes beyond the already complex and sophisticated searching of semantic AI programs and actually creates/generates plain-language answers and analysis in response to prompts or questions provided by human users.

ChatGPT has gone through several iterations thus far and will continue to develop further in the coming months and years. The most recent version, however, made headlines due to its advances in those aforementioned “generative” AI capabilities. Most notably, it was recently reported that ChatGPT obtained a passing score on the Uniform Bar Exam, responding to the exam’s narrative essay prompts with apparently accurate and well-organized legal analysis.2 And just in case you were wondering, no, the program does not rely on an active Internet connection to function. Rather, the program was trained (educated?) by its programmers on a vast amount of text drawn from Internet databases, and it produced correct answers to bar exam questions in a “closed universe” scenario.

Are you scared yet?

Needless to say, these advances in AI are remarkable. And over the past weeks, the ethics staff has received a number of calls from lawyers asking how ChatGPT and the emergence of artificial intelligence impact a law practice. After all, ChatGPT has been shown to effectively complete tasks ranging from drafting college-level essays to preparing template contracts and wills. It has also been reported to be unreliable at times: In June 2023, a federal judge in New York fined two lawyers for relying upon ChatGPT to draft a legal argument that the lawyers then included in their brief filed with the court; the AI-generated analysis, it turned out, was full of fictitious case citations created by the program to support its argument, and the lawyers did nothing to review or validate the program’s work product.3 That instance led a federal judge in Texas to prohibit lawyers from using ChatGPT or other generative AI programs in drafting legal briefs; when filing a brief with the court, lawyers in that district must now also file a certificate attesting that their work product was not created by AI.4

The range of benefits and detriments is vast, but before your head explodes, take comfort in knowing that lawyers have been using AI for some time. (As noted before: has anyone out there ever searched for a relevant case on Westlaw or Lexis?) Our use of yet another new, evolving technological tool need not be met with fear or discouragement; we’ve adapted before, and we can adapt again. But there are a handful of ethical considerations to note as we venture into this new world of employing AI in a law practice. So, let’s dive in: What are the ethical considerations for a lawyer’s use of artificial intelligence in a law practice?

Answer:

The use of artificial intelligence in the legal profession raises several ethical considerations for lawyers. While AI can provide valuable support to legal practitioners, it is essential for lawyers to remain mindful of their professional obligations and ethical duties. Some of the key ethical considerations include:

1. Competence: Lawyers have an ethical duty to provide competent representation to their clients. As AI becomes more integrated into the practice of law, lawyers should maintain a basic understanding of AI technologies to leverage them effectively and appropriately. This includes knowing the strengths and limitations of AI tools and recognizing when human intervention is necessary to ensure the quality of legal services.

2. Supervision: Lawyers have a responsibility to supervise the work of nonlawyers—including AI systems—to ensure compliance with ethical obligations. This involves monitoring the AI’s performance, understanding the AI’s decision-making process, and validating its output to ensure accuracy and reliability.

3. Confidentiality: Lawyers must protect client confidentiality and maintain attorney-client privilege when using AI tools. This includes ensuring that any AI tools or systems used in the practice have adequate security measures in place to protect sensitive client information from unauthorized access or disclosure.

4. Communication: Lawyers have an ethical duty to keep their clients informed about the status of their matters and to communicate with them in a manner that allows the clients to make informed decisions. When using AI tools, lawyers should be transparent with clients about the extent of AI involvement in their work and should communicate any potential risks or limitations associated with the use of AI.

5. Diligence: Lawyers have an ethical obligation to act diligently in representing their clients. Relying solely on AI without human input or oversight may lead to mistakes or oversights, potentially resulting in inadequate representation. Lawyers must exercise due care when using AI tools to ensure they are fulfilling their ethical duties.

6. Avoiding Unauthorized Practice of Law (UPL): Lawyers must ensure that the use of AI does not result in the unauthorized practice of law by nonlawyers. While AI can assist with various tasks, lawyers should be cautious not to delegate tasks that require legal judgment or expertise to AI systems, as this may constitute UPL.

7. Billing: Lawyers have a responsibility to bill clients fairly and transparently. When incorporating AI tools into their practice, lawyers should consider how the use of AI affects billing practices and ensure that clients are not overcharged for services provided by AI.

8. Bias and fairness: AI systems may inadvertently perpetuate or exacerbate biases present in the data used for training the algorithms. Lawyers must be vigilant in identifying and addressing potential biases in AI tools to ensure they provide fair and unbiased legal services to their clients.

9. Responsibility and accountability: Lawyers must remain responsible and accountable for the work they produce, even when using AI tools. This includes acknowledging errors that may arise from the use of AI and taking corrective measures as needed.

By addressing these ethical considerations, lawyers can harness the potential of AI in their practice while maintaining their professional obligations and upholding the highest ethical standards.5

By the way, everything written above between the word “Answer” and the footnote ending the preceding paragraph was drafted by ChatGPT in response to the prompt, “What are the ethical considerations for a lawyer’s use of artificial intelligence in a law practice?”

Now are you REALLY scared?

Generally, I don’t disagree with the answers listed above, though I think the nuance of a lawyer’s professional responsibility when it comes to artificial intelligence needs a bit more exploration. (I also shudder at the idea of ChatGPT assigning personhood to itself when reflecting that lawyers “have a responsibility to supervise the work of nonlawyers, including AI systems[,]” but I digress.) To that end, the State Bar’s Ethics Committee intends to delve into this topic in the coming months. Whether the committee produces a new formal ethics opinion or some other guidance on this ever-evolving issue remains to be seen, but keep an eye out for future updates on the committee’s efforts regarding AI.

For those who are anxiously awaiting an answer on the interaction between a lawyer’s professional responsibility and AI, let’s cut to the chase: Nothing in the Rules of Professional Conduct prohibits a lawyer from using machine learning or artificial intelligence tools in a law practice. However, like other law practice resources, a lawyer must use these tools competently (Rule 1.1), ensure that confidentiality is preserved (Rule 1.6), and review/supervise the work product generated (similar to a lawyer’s duty of supervision per Rule 5.3). A lawyer needs to be particularly careful when using a public artificial intelligence tool (like ChatGPT) because any client-specific information provided to the public tool could be subsequently used or potentially revealed by the program, breaching the lawyer’s duty of confidentiality. Depending on the circumstances of the representation, a lawyer may also need to consult with a client prior to delegating certain tasks to an AI program or process, similar to a lawyer’s responsibilities when outsourcing legal support services to foreign assistants. See 2007 FEO 12. And, of course, a lawyer must be transparent with a client when billing for work assisted by AI. After all, AI may very well reduce a previous 60-minute task to six minutes (or less); in such a scenario, a lawyer must accurately and honestly bill based upon the time actually spent on the task, and any efficiencies created by the lawyer’s use of AI must be passed on to the client. See Rules 1.5, 7.1, and 8.4(c).

There is no way to un-ring this bell. The issues will incessantly evolve and grow in complexity, but the Ethics Committee and staff counsel will continue to explore the integration of AI into the legal profession. In the meantime, be careful out there: If you’re going to employ AI in your practice, be sure to do so competently and securely, and review the program’s work product as if it were done by a summer intern (there’s potential, yes, but it’s not quite there and may even be riddled with errors). Ultimately, every lawyer who relies upon AI will be responsible for its product and the implications thereof. And if this technology ever evolves into the equivalent of a first-year associate or higher, we can collectively “marvel at our own magnificence”...while also updating our resumes.

Brian Oten is the ethics counsel for the State Bar, and the director of the Legal Specialization and Paralegal Certification programs.

Endnotes
1. The Matrix (Warner Bros. 1999).
2. Debra Cassens Weiss, Latest Version of ChatGPT Aces Bar Exam with Score Nearing 90th Percentile, ABA Journal, March 16, 2023, bit.ly/3KBnl8o.
3. Lawyers Fined for Filing Bogus Case Law Created by ChatGPT, CBS News, June 23, 2023, cbsnews.com/news/chatgpt-judge-fines-lawyers-who-used-ai.
4. Texas Judge Bans Filings Solely Created by AI After ChatGPT Made Up Cases, CBS News, June 2, 2023, cbsnews.com/news/texas-judge-bans-chatgpt-court-filing.
5. “What are the ethical considerations for a lawyer’s use of artificial intelligence in their law practice?” ChatGPT-4, April 4, 2023, chat.openai.com/chat.
