UW Law efforts position students to work alongside rapidly expanding AI.
At the University of Wisconsin Law School, students don’t just learn about law as it is in the books. Sure, understanding precedent, legal history and everything else is important to a well-rounded legal education. But equally important is looking to the present and even the future of law. And right now, AI dominates that space. Over the last year, the Dean’s Office formed an AI Working Group to explore how AI could influence the legal industry and what its impact might be on future generations of lawyers and legal scholars.
Artificial intelligence, or AI for short, is most visible in the form of “generative AI” and large language models — algorithms that learn from oceans of data to approximate human communication and serve as the engine for chatbots such as ChatGPT — as well as related models that generate images and videos.
From coursework on whether and how to engage effectively with AI software to Continuing Legal Education (CLE) programs for alumni and others in the legal field, UW Law hopes to prepare audiences for whatever the future may hold. The 11-member committee has been examining AI’s perils, pitfalls and opportunities to determine how best to prepare students to succeed in using and working with this rapidly expanding tool.
Exploring the Value of AI
“Let’s be clear here, we don’t know what AI will look like in five or 10 years,” said Andrew Turner (referred to as A. Turner for the remainder of this piece), chair of the committee. “We’ve only been talking about AI for one year and we’ve already seen how much it has changed and adapted; we can’t feign certainty about the sort of progress it’s going to make in the next year, let alone into the future.”
The ultimate value of AI is still a mystery, but its impact is already acknowledged throughout the legal community.
According to the 2023 Future Ready Lawyer Report conducted by Wolters Kluwer, 43% of respondents saw AI as an opportunity, 25% viewed it as a threat and 26% viewed it as both an opportunity and a threat. In fact, some U.S. appeals courts have recently proposed requiring lawyers to certify their review of AI use in filings, and the U.S. District Court for the Northern District of Texas has stated that “all attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being.”
“We want to ensure we’re teaching our students to use the technology responsibly to advocate for their clients, but also to understand how the technology works, its limitations and how it is evolving.” — Andrew Turner
“It poses many questions and generates concerns because AI programs are starting to at least simulate things lawyers do, like draft contracts or give us answers to questions,” A. Turner said. “How good AI programs may become at these tasks and what that means for law practice and education are issues we’re grappling with.”
In fact, lawyers the world over have already begun adding AI to their toolboxes, using the technology for:
- research
- drafting, brainstorming and writing
- image and voice creation and duplication
- eDiscovery for sifting through Electronically Stored Information
- transcription and note-taking analysis
- summarization of cases, dockets and statutes
- and more.
And, while AI can’t replace human legal researchers, it can supplement their work, added Kristopher Turner (referred to as K. Turner throughout), associate director of public services, UW Law Library.
Perils and Pitfalls
Though AI companies are working hard to improve their programs and models, as of now, AI still has serious limitations that law students need to understand and manage.
For instance, most large language models have been trained on the universe of the internet, which is full of bias and misinformation. Because of this, they frequently produce outputs that reflect those pre-existing societal biases back to the user. In short, if you feed AI “garbage,” you’ll get interesting results because it doesn’t know what is garbage and what’s not, explained committee member and IT Director Eric Giefer. This is all the more problematic because those biased answers carry a sheen of credibility simply by virtue of being generated by advanced artificial intelligence models.
“But it’s really just its weakness and limitations on display,” explained Desmund Wu, who’s led some of the committee’s efforts in understanding and educating students on AI’s inherent bias and limitations.
Another issue is that large language models aren’t typically aware of their own limitations. When they lack a good answer to a question, they often simply make up credible-sounding information in what are referred to as “hallucinations.” These fabrications are often very convincing, which presents another major challenge to AI use in a legal context.
AI companies are investing heavily in solving these problems in various ways, including by training models on law-specific content.
Other perils and pitfalls the committee is highlighting include privacy issues (for example, companies reviewing and using information that users put into the AI), copyright concerns, the potential loss of originality and creativity, and the impact on students’ ability to learn core skills, among others.
Whether and to what extent AI companies can overcome these challenges is still an open question. Even members of the AI committee are split in their predictions about just how far AI is likely to progress in the coming years.
AI to Play a Growing Role in Legal Work
Despite these challenges, based on its research and conversations with leaders in the legal community, the committee predicts that AI will continue to play a growing role in legal work.
“In order to properly use AI, users have to understand the pitfalls around using AI so they don’t fall into them,” explained Wu. “Many attorneys have gotten in trouble for using AI to generate briefs that contained made-up case citations. When asked to submit the cases for the made-up citations, the AI then fabricated entire cases.”
As a result, the committee believes it is crucial for law students to learn to use AI responsibly and effectively without undermining the development of core writing, research and analysis skills.
Beginning last spring, 1Ls learned the core concepts of AI through various classes. As 2Ls and 3Ls, they have the opportunity to take those introductory concepts and learn how to use AI ethically and appropriately through courses like Advanced Legal Research, explained Bonnie Shucha, Law Library director.
“Students already know what generative AI is — but that’s just the beginning,” said Shucha. “We’re teaching students to critically engage with the technology — understanding its mechanics, ethical dimensions and its transformative impact on legal practice. Through hands-on exploration with various AI platforms, they learn to scrutinize the data, craft precise prompts, safeguard sensitive information and validate outcomes. This critical assessment of AI is essential in today’s legal landscape.”
Also being discussed is the ethical use of AI.
It was a good “fountain of discussion for the students,” explained K. Turner.
“For example, how do you balance a law office and billable hours with AI? It’s an ethical question that’s still unfolding as we speak,” he said.
Students are very interested in implementing AI in a fair and equitable way, K. Turner continued.
“We don’t know what the future is, but we’re trying to ensure our students are ready to tackle whatever that future might look like.” — Eric Giefer
A few areas they’re especially interested in exploring are how AI could improve access to legal services and how its use might widen divisions in the legal field, for example, if some law firms have access to the tool and others do not.
“We’re seeing a lot of interest and demand in this area from our students, and I can only imagine it will continue to grow,” said K. Turner.
While using AI appropriately does lessen the problem of hallucinations, it doesn’t eliminate it.
“AI is a wonderful tool, but human intervention is still needed at this point,” said K. Turner. “But as we’re talking, it was only introduced in the fall of 2022, and it’s changed dramatically. So, by next fall, we could be seeing huge advancements. It’s exciting but also intimidating maybe for those who aren’t used to being tech reliant.”
The committee has worked with faculty to develop both model and bespoke classroom policies around the use of AI. As of now, each faculty member has the freedom to craft a policy that works best for their subject area. For instance, a clinical program may be more permissive with AI use provided students disclose how they used it, whereas a substantive law professor might choose to prohibit it entirely for graded work or exams.
While the climate around AI is ever evolving, the Law School is working diligently to position students to work alongside AI in an otherwise unknown future.
“The only known thing is AI is not going away, but how will things look in five years?” said Giefer. “Think of MySpace. It was so popular, but it’s gone, but the concept is still with us today, just evolved. We don’t know what the future is, but we’re trying to ensure our students are ready to tackle whatever that future might look like.”
AI a Focus of UW–Madison’s New RISE Initiative
The new Wisconsin Research, Innovation and Scholarly Excellence (RISE) Initiative is designed to address complex challenges of importance to Wisconsin and the world. The Initiative, which launched in February, will focus first on artificial intelligence.
“UW–Madison has always upheld its responsibility to innovate for the public good, a principle no less central to the university’s mission now than it was when the Wisconsin Idea was articulated nearly 120 years ago,” said University of Wisconsin–Madison Chancellor Jennifer Mnookin. “As the grand challenges of the 21st century come into focus, so do the opportunities for a university with UW–Madison’s breadth and depth to muster interdisciplinary research, education and action to rise and meet them. That is what the RISE Initiative is designed to do.”
At a research institution like UW–Madison, AI is opening new frontiers for scholars working across various disciplines. Advancing AI with RISE Initiative support will put UW–Madison in a leadership position in a field that is rapidly expanding.
According to a UW–Madison news release, through strategic additions of people and resources, “RISE will build collaborative networks of faculty to tap high-priority research and development funding, make transformative discoveries with real-world impact and equip students with the knowledge and skills to extend that impact into their careers.”
The university expects to recruit between 120 and 150 faculty through the RISE Initiative over the next three to five years. According to Provost Charles Isbell, the strategically focused commitment of RISE will allow UW–Madison to accelerate the hiring of promising faculty and build on its existing strengths.
“Some of today’s students will become tomorrow’s leading AI innovators, and all of today’s students need to learn to use AI-powered tools creatively, effectively and ethically,” said Isbell. “RISE AI is not just about research. By adding resources and focusing on this major challenge, we will naturally be able to carry the energy into the classroom.”
Visit the Wisconsin RISE Initiative website for more information.