Can Robots Be Sued? Q&A With Cory Fisher

The Shook AI Task Force's white paper uncovered some unease among in-house counsel about integrating artificial intelligence into their businesses. Shook Partner Cory Fisher wants to help, whether that means providing counsel on potential legal liabilities or reassuring clients that AI can be manageable. Listen below as he sheds additional light on the task force's findings.

Tell us the number one concern for GCs regarding AI, according to this survey.

It really isn’t a surprise: it came back as privacy and security. Sixty percent of those surveyed gave that response. If we look into why, it’s because AI is built on data. Data drives everything to do with artificial intelligence. When you have the volume of data needed for an effective artificial intelligence system, it’s stored somewhere, by someone, and it can reveal a great deal. Any time we have that large a mass of data, we are going to have concerns about the privacy and security associated with it.

What was next, in terms of the survey?

Forty-five percent of respondents said there appears to be a lack of regulation and standards associated with artificial intelligence, and that is a very understandable reaction from general counsel. Part of it is the whole enigma associated with artificial intelligence: it just sounds so different.

What we all really need to do is step back, take a breath and realize that the problems AI is being used to address are problems that have always existed. We might be changing the actors, the types of people associated with those problems, but we just need to look at it on a rational basis.

How will the use of AI impact general counsel?

One of the things we need to keep in mind is that a lot of the use and implementation of AI is being driven by the business side. The executives, the C-suite, are saying, “Hey, please find efficiencies. Find new ways to accomplish tasks we’ve always performed.”

There is this constant drive within a company or corporation to implement it. What general counsel need to do is recognize that this is moving forward and ask: what is my client, my company, my organization doing with AI? Start to look at it holistically, how it is all coming together, and consider what we need to be thinking about proactively rather than reactively.

Large law firms are developing AI practices now, but Shook’s AI focus takes a unique approach. Tell us about it.

You know, that’s true. A lot of task forces or industry groups look at it from a myopic, narrowly focused perspective. Shook is approaching it differently, with a multidisciplinary approach that brings together professionals in the intellectual property, privacy and data security, product liability, class action and public policy realms. For example, in our D.C. office, Cary Silverman is taking the lead on identifying how artificial intelligence impacts all aspects of clients and corporations.

Any final thoughts for general counsel?

One of the stumbling blocks I hear in conversations with general counsel is confusion. What we need to remember is that there are two sides to the AI coin. One side deals with artificial intelligence as a tool used by legal professionals: it drives efficiencies within the legal department and with outside counsel. The other side of the coin, and the one we are talking about most in this podcast, is when the client, corporation or organization implements AI. What are the resulting liabilities and challenges that a general counsel will have to face because someone chose to implement artificial intelligence?