Should Law Schools Teach ChatGPT?
posted by Neil E. Klingshirn | January 29, 2023 in Employment Law
ChatGPT is a pretrained large language model that uses deep learning to recognize, summarize, translate, predict, and generate text and other content. Because it is trained on a vast corpus of publicly available internet text, it can generate text that looks and sounds like the information found there.
ChatGPT generates convincing, conversational text. ChatGPT and other large language models (LLMs) do that by predicting, or guessing, how a human would respond to a prompt (e.g., “are non-competes enforceable in Ohio?”).
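To make that concrete, here is a minimal sketch of sending that same prompt to an OpenAI model through its API. It assumes the OpenAI Python SDK (v1 or later), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name; ChatGPT itself is the chat interface to models like these.

```python
# Minimal sketch: assumes the OpenAI Python SDK (v1+) and an API key in the
# OPENAI_API_KEY environment variable. The model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; use whichever you have access to
    messages=[
        {"role": "user", "content": "Are non-competes enforceable in Ohio?"},
    ],
)

# The "answer" is the model's prediction of how a human would respond.
print(response.choices[0].message.content)
```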
Prediction, though, is also the central weakness of ChatGPT and other LLMs. Their responses are, at best, good guesses. If trained long and well enough, their guesses can get very good. But no matter how much training they get, reducing the chance of a wrong answer to zero is, statistically speaking, almost impossible. That is why self-driving cars still crash in unpredictable ways. This weakness makes ChatGPT unsuitable for generating accurate, reliable answers in a knowledge domain like the law.
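A back-of-the-envelope calculation illustrates the point. The numbers below are assumptions chosen for illustration, not measured error rates, but they show how even a tiny per-step error rate compounds over a long answer:

```python
# Illustrative arithmetic only: assumed numbers, not measured error rates.
per_token_accuracy = 0.999  # assume each predicted token is right 99.9% of the time
answer_length = 500         # assume a long-form legal answer of about 500 tokens

# Treating token errors as independent, the answer is error-free only if
# every single prediction is correct.
p_error_free = per_token_accuracy ** answer_length
print(f"Chance the full answer is error-free: {p_error_free:.0%}")  # ~61%
```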
Should it be taught?
Since ChatGPT cannot be trusted to provide accurate, reliable information, should it be taught at all? I say yes, but the focus should be on identifying the legal tasks ChatGPT does and does not do well, and why. A lesson plan might, for example, grade ChatGPT on how accurately and reliably it performs the following tasks:
- Answering legal questions;
- Summarizing legal research;
- Interviewing witnesses;
- Drafting legal documents; and
- Writing persuasive arguments.
Based on my experience, I would suggest the following:
Answering legal questions, “C”
ChatGPT is not an accurate, reliable, or suitable tool for legal research now, and probably will not be in the future. It has not had access to case law or been trained on legal topics. Westlaw and Lexis are probably doing that training now, but even a fully trained LLM is still just guessing.
So how can it earn a “C”? ChatGPT knows as much about the law as anyone who has read everything on the internet, which is a lot. Therefore, it can make a pretty good guess about general legal topics. It will provide a solid answer on whether non-competes are enforceable in Ohio. However, it does not know what the Fair Competition doctrine is in a tortious interference claim.
ChatGPT can point you in the general direction of a legal answer, but don’t trust it. It does not know truth from fiction and can only provide a start.
Summarizing legal research, “B”
LLMs do a good job of summarizing. I asked ChatGPT to summarize the FTC’s proposed rule banning non-competes, and it did that well.

I only gave it a “B” because, again, ChatGPT is not trained in the legal domain. Once ChatGPT or other LLMs are trained on case law and legal analysis, they will likely do a great job of summarizing cases and finding specified fact patterns. However, other forms of AI already do that well, so it remains to be seen whether ChatGPT will unseat them.
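Summarization is also easy to script. The sketch below assumes the same SDK setup as the earlier example and a hypothetical local text file of the rule; a real script would have to split a document this long into chunks that fit the model's context window.

```python
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY, as above.
# "ftc_proposed_rule.txt" is a hypothetical local copy of the rule text;
# a document this long would need to be chunked to fit the context window.
from openai import OpenAI

client = OpenAI()
rule_text = open("ftc_proposed_rule.txt").read()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[
        {"role": "user",
         "content": "Summarize this proposed FTC rule in three paragraphs "
                    "for an employment lawyer:\n\n" + rule_text},
    ],
)
print(response.choices[0].message.content)
```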
Interviewing witnesses, “D”
ChatGPT is trained to answer questions, not ask them. That, however, is a matter of training. My guess is that, once trained for it, an LLM will get good at interviewing witnesses. Unless asked to evaluate answers for truth, which it cannot do, ChatGPT should be able to conduct an interview effectively and summarize what it learns.
Drafting legal documents, “C”
I asked ChatGPT to draft a non-compete agreement. It gave me a bare-bones document that addressed only the most basic topics found in a typical non-compete. I then asked it for other topics I should consider in a non-compete agreement. It listed most of the topics you see in non-competes. Nonetheless, it omitted some important ones, like a fee-shifting provision. In addition, ChatGPT is only guessing what a non-compete should look like and which topics it could cover. It cannot analyze the need for, or wisdom of, including a particular topic.
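That draft-then-follow-up workflow can be scripted as a two-turn conversation, carrying the first answer along as history so the model knows what it already drafted. As before, the SDK, model name, and prompts are assumptions, and the output is a starting point for a lawyer to review, not a finished agreement:

```python
# Two-turn sketch: assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # assumed model name

# Turn 1: ask for a first draft.
messages = [{"role": "user",
             "content": "Draft a non-compete agreement for an Ohio employer."}]
draft = client.chat.completions.create(model=MODEL, messages=messages)

# Turn 2: append the draft to the history, then ask what is missing.
messages.append({"role": "assistant",
                 "content": draft.choices[0].message.content})
messages.append({"role": "user",
                 "content": "What other topics should I consider including "
                            "in this non-compete agreement?"})
topics = client.chat.completions.create(model=MODEL, messages=messages)
print(topics.choices[0].message.content)
```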
Writing persuasive arguments, “C” to “B”, depending on the prompt.
ChatGPT is great at sounding human, and you can prompt it to use the tone of an advocate, a diplomat, or your cranky Uncle Jess. Therefore, it can argue. However, again, it has not been trained in the law, and it does not know your facts, so you have to give it the law and your facts in the prompt. Even then, what you get is just a good guess as to how a human would argue that law on those facts. Thus, even when trained in the law, an LLM’s legal arguments will only be as good as the facts and law in its prompt.
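One way to structure such a prompt is to set the tone in a system message and paste in the governing law and the facts yourself, since the model has neither. The sketch below makes the same SDK and model assumptions as the earlier examples, and the placeholders are hypothetical:

```python
# Sketch: assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY, as above.
from openai import OpenAI

client = OpenAI()

# Hypothetical placeholders: you must supply the actual law and facts.
governing_law = "<paste the statutes and case law you are relying on>"
case_facts = "<paste the facts of your case>"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[
        # The system message sets the advocate's tone described above.
        {"role": "system",
         "content": "You are a zealous but professional legal advocate."},
        {"role": "user",
         "content": "Using only the law and facts below, write a persuasive "
                    "argument that the non-compete is unenforceable.\n\n"
                    "LAW:\n" + governing_law + "\n\nFACTS:\n" + case_facts},
    ],
)
print(response.choices[0].message.content)
```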
Conclusion
Artificial intelligence, including ChatGPT, can summarize legal research and draft basic documents. ChatGPT and other LLMs will get better at that with training. They should also become effective at interviewing witnesses and writing arguments.
However, LLMs have limitations serious enough that relying on them alone, especially ChatGPT, would be malpractice. Given this, law schools should teach how to use artificial intelligence in the practice of law, with an emphasis on the limitations and suitability of various forms of AI for specific tasks.