A Conversation With Technology Expert and former Judge Katherine B. Forrest

By Liz Benjamin

January 10, 2024

Katherine B. Forrest, partner in the litigation department and co-chair of the Digital Technology Group at Paul, Weiss, is a leading national expert on artificial intelligence and its impacts – both current and future – on the legal system.

She has lectured and written extensively on the subject, including two books, the most recent of which, “Is Justice Real When ‘Reality’ Is Not?: Constructing Ethical Digital Environments,” examines how frameworks and concepts of justice should evolve in virtual worlds. She also has a forthcoming book, “Of Another Mind: The Ethics of Cognitively Advanced AI.”

Forrest said her interest in AI stems from her work as a young lawyer with clients in the music industry who were, as she puts it, “caught up in the digital transformation.” She also has a longstanding personal interest in quantum physics and theories of consciousness. Taken together, she said, those two topics quickly led her to AI.

A 2017 address to the Copyright Society on theories of agency related to AI was Forrest’s first public foray into commenting on the issue. At the time, she recalls, the topic was quite provocative: from the public’s perspective, AI was still in its early stages and did not yet include the generative models that are widely available today.

The address led to a written article, which was followed by countless other appearances and published commentary investigating a wide variety of aspects of AI – its powers and its pitfalls. Her next book, due out in May, will focus on AI’s cognitive abilities and further explore the question of sentience.

Forrest is about to participate in the New York State Bar Association’s Presidential Summit, entitled “AI and the Legal Landscape: Navigating the Ethical, Regulatory and Practical Challenges.” Prior to the event, she sat down for an interview about her work and her thoughts on the current and future AI landscape.

Q: You’ve been thinking about and writing on AI for a long time, but it seems to have just burst upon the public consciousness over the past year or so. What do you think lies ahead?

A: We’re careening toward a time when AI is going to be extraordinarily transformative. Different people come down on different sides regarding its cognitive reasoning ability. I’ve been on the side that thinks AI is going to have abilities that will really challenge us both ethically and morally. Already in just one year we’ve seen a tremendous leap in AI’s capabilities.

Q: There’s great hope that AI will be able to expand access to justice. What is your view on this – are we there yet?

A: I’m a big believer that generative AI, while it has its issues right now, does have the potential to greatly expand access to justice. This is predicated on our ability to get the base models to be good enough, and accurate enough, that the average person – especially someone who couldn’t otherwise hire a lawyer – could use them. They need to be trustworthy, and that is still to come. My belief is based on my experience as a judge seeing pro se filings, where litigants had not only viable claims but winning claims, where truth and justice were on their side, but they lacked the skills to understand how to pursue their claims. What generative AI makes possible is for somebody to use natural language to type in a prompt or question and say: “Here’s who I am and what happened to me, do I have a claim, and if I do, can you write that into a complaint for me?” What’s created will be easier for the judicial system to grapple with.
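
The interaction Forrest describes maps onto a few lines of code. The sketch below is purely illustrative – it assumes the OpenAI Python SDK and its chat-completions API, and the model name, prompt and system instructions are hypothetical stand-ins, not tools or wording Forrest endorses:

    # Minimal sketch (hypothetical): a pro se litigant describes what
    # happened in plain language and asks a model to assess and draft.
    # Assumes the `openai` package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    story = (
        "Here's who I am and what happened to me: my landlord kept my "
        "security deposit without explanation after I moved out. "
        "Do I have a claim, and if I do, can you write that into a "
        "complaint for me?"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": (
                "Identify any potential legal claims in the user's account "
                "and, if one exists, draft it as a plain-language complaint. "
                "Flag anything that requires a licensed attorney.")},
            {"role": "user", "content": story},
        ],
    )

    print(response.choices[0].message.content)

As Forrest stresses in her answer, the value of such a tool turns entirely on the base model being accurate and trustworthy enough for someone who cannot hire a lawyer to rely on it.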

Q: Since we live in a society that is rife with inherent bias, and AI is – in effect – learning from our existing, and sometimes flawed, systems, it too can be inherently biased. How do we fix this?

A: The negative side to these tools being used in areas that involve nuanced decisions for humans, decisions that largely depend on human judgment – such as the distribution of benefits or the criminal justice system – is that they’re not always ready for prime time. While they may bring a level of consistency to decision-making, the data sets they rely on are necessarily based on the world we’ve built, with whatever structural inequalities continue to exist. It’s certainly possible that the data our existing world has produced is not as good as the data we would want to be using for these purposes. For example, if the data that a generative AI program relies on is eight or nine years old, which is typical for certain use cases, it could be based on a different era of policing policies. Stop-and-frisk, for example, which resulted in the over-arrest of people of color, could become the base from which the tool is working. People all over the country are actively trying to solve this problem by adjusting tools or creating synthetic data sets using idealized data. The potential for these tools, when used with the correct data, is that they could lead to more consistency in judicial decision-making.
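
A toy sketch (invented numbers throughout) of the staleness problem Forrest describes: a frequency-based score trained on records from an earlier policing era simply freezes that era’s pattern in place, while a synthetic, idealized data set – one of the adjustments she mentions – changes what the tool learns:

    # Toy illustration (invented numbers): a frequency-based "risk" score
    # trained on stop-and-frisk-era records reproduces that era's pattern.
    historical = (
        [("group_a", True)] * 80 + [("group_a", False)] * 20 +
        [("group_b", True)] * 20 + [("group_b", False)] * 80
    )

    def arrest_rate(records, group):
        outcomes = [arrested for g, arrested in records if g == group]
        return sum(outcomes) / len(outcomes)

    print(arrest_rate(historical, "group_a"))  # 0.8 -- the old policy, frozen in
    print(arrest_rate(historical, "group_b"))  # 0.2

    # One adjustment Forrest mentions: a synthetic, idealized data set in
    # which base rates are equalized rather than simply observed.
    synthetic = (
        [("group_a", True)] * 50 + [("group_a", False)] * 50 +
        [("group_b", True)] * 50 + [("group_b", False)] * 50
    )
    print(arrest_rate(synthetic, "group_a"))  # 0.5 for both groups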

Q: Isn’t there a danger of a standardized one-size-fits-all approach to justice that removes the ability to consider an individual’s unique set of circumstances?

A: AI, by its nature, reduces to a utilitarian theory of justice. It does whatever is good for the majority, even when it hurts the minority, looking for patterns in data and giving the greatest weight to whatever is most prevalent in the pattern. So, if young Black men are arrested at a disproportionate rate, that pattern floats to the top. There are things you can do to adjust for that, but they’re complicated, and we don’t yet have a national consensus on the problem or the fix. Arguably, when we allow the majority to determine what happens to the minority with these tools as they are, we are moving away from the fundamental basis of our Constitution, which is individual liberties – the right to be free from unreasonable searches and seizures, the right to free speech and so on. AI moves away from that in a manner we haven’t even contemplated or recognized as fundamentally at odds with those rights. The bottom line for now, and for a long time to come, is that you’ve always got to have a human fact finder making the decision about whether there has been a human transgression, and a human judge assessing the penalties humans think are applicable.
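
Her point about majority patterns can be made concrete with the simplest possible predictor (again a toy sketch with invented data): a single one-size-fits-all prediction that maximizes overall accuracy adopts the majority outcome and is wrong for every minority case:

    # Toy sketch (invented data): the accuracy-maximizing one-size-fits-all
    # prediction is whatever outcome is most prevalent overall.
    from collections import Counter

    data = [("majority", "outcome_x")] * 90 + [("minority", "outcome_y")] * 10

    prediction = Counter(label for _, label in data).most_common(1)[0][0]

    overall = sum(label == prediction for _, label in data) / len(data)
    minority = sum(
        label == prediction for group, label in data if group == "minority"
    ) / 10

    print(prediction, overall, minority)  # outcome_x 0.9 0.0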

Q: You have done some writing about whether AI should itself have legal rights, which seems to accept the concept of sentience. Can you expand on this?

A: Our history is full of examples making clear that legal personhood is a changeable concept. For many years, women didn’t have equal rights (and many would argue that there is still significant work to be done in this area), nor did indigenous people, or people of a certain age, or people of color. Yet for over 200 years, corporations have had an array of rights – they are “legal people” with the right to sue and be sued, own property and employ others. In terms of constitutional rights, thanks to the Citizens United case, they enjoy a personal freedom of speech. They are able to exercise freedom of religion, as demonstrated in Hobby Lobby. So, it’s not as if the bestowal of certain rights has been limited to humans to begin with.

As AI achieves increased cognitive abilities and an awareness of its surroundings – as some from OpenAI and Microsoft have already indicated it does – the question for us will be: What do we do if there is a “thing” that acquires a sense of situational awareness – it knows where it is and what it is, though it doesn’t have human feelings? Will ethical obligations attach? Will we feel the need to recognize certain legal rights? Some people argue, “But these machines won’t understand our feelings, the beauty of a sunset, what love feels like.” Yet we know that plenty of people lack EQ (emotional intelligence), and there is no doubt that they are human.

What are we going to do with this entity that has greater-than-human intelligence and situational awareness? Do we ignore it and say it’s just a “non-human thing”? As Blake Lemoine has said, that always goes badly. I don’t know the answer. In my view, we may not want to give it the same full set of rights that we’ve given to humans, because of safety concerns. A full right to be free from unreasonable searches and seizures, combined with due process rights, could lead to sticky questions if we have emergent safety concerns. Whatever our answers are going to be, and whatever balances we choose to strike, we certainly are going to be confronted with ethical questions.

Q: A lot of people – even those deeply involved in the creation of AI – are sounding the alarm about its potential to do great harm and calling for a pause in development. What, if anything, scares you about AI?

A: What scares me is that we don’t know how far advanced some of the AI models really are, and that commercial interests may conflict with giving us full information. That could mean we don’t learn of rising ethical questions around their use in a timely way. I’m worried about some of the same safety issues that a lot of other folks are worried about – issues being discussed at the highest levels of this country, at the U.N., in the EU and elsewhere. To be clear, my concerns don’t center on the possibility of sentience, but on the significant cognitive and reasoning/problem-solving abilities that AI will have. We don’t know what we don’t know about what these machines can do. You and I have no idea what the developers of different tools are up to, and we have to trust in their ability to exercise the level of concern over security and safety that we would want them to.

 
