
Who Bears the Risk? Agentic AI and the Widening Liability Gap in Agency and Contracts






Introduction


Agency law is facing challenges as artificial intelligence (AI) advances toward full autonomy. Traditionally, agency law relies on the notion of legal personhood, which allows an individual or entity to engage with the legal system. However, as AI evolves from generative AI to agentic AI, this traditional notion of personhood is coming under increasing strain. Because the current legal system is built on the assumption that an agent is a “legal person,” the rise of agentic AI raises the question: “Who bears the risk if AI is not a legal person?” Courts are beginning to grapple with this issue as it surfaces across different legal fields, and this analysis examines the emerging realities and gaps agency law must confront as AI advances.


Evolution from Generative AI to Agentic AI


Understanding the realities and gaps agency law is facing requires a brief examination of AI and its evolution.


At its simplest, AI refers to technology that can simulate traditional human functions, such as decision-making, creating, and reasoning, without significant human oversight. However, AI has grown significantly since machine intelligence was first discussed in Alan Turing’s paper “Computing Machinery and Intelligence.” Currently, AI takes two major forms: generative AI and agentic AI. The two differ significantly in their independence, reasoning, and decision-making.

On one hand, generative AI is capable of creating new content, such as text, images, and code, by learning patterns from large training datasets through machine learning models. These models are trained via self-supervised learning, in which the system learns linguistic structure by predicting the next word in a sequence. When a user submits an input, the model generates an output by selecting words based on that input and its learned patterns. Generative AI does not locate stored answers; rather, it constructs a response through sequential prediction.
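To make the sequential-prediction point concrete, consider a minimal sketch of next-word generation in Python. It substitutes a hand-written probability table for the neural network a real model would learn; all names and probabilities here are purely illustrative, not any actual system’s internals:

import random

# Toy next-word model: the "learned patterns" are hard-coded here for
# illustration; a real generative model learns them from training data.
LEARNED_PATTERNS = {
    ("the", "contract"): {"was": 0.5, "binds": 0.3, "between": 0.2},
    ("contract", "was"): {"signed": 0.6, "voided": 0.4},
}

def generate(prompt_words, steps=2):
    # Build the response one word at a time: sequential prediction,
    # not retrieval of a stored answer.
    words = list(prompt_words)
    for _ in range(steps):
        context = tuple(words[-2:])
        dist = LEARNED_PATTERNS.get(context)
        if dist is None:  # no learned pattern for this context
            break
        words.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(words)

print(generate(["the", "contract"]))  # e.g., "the contract was signed"

The key point for the legal analysis is that nothing here is looked up; each word is predicted from the one before it.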


On the other hand, agentic AI is not limited to generating outputs; it is capable of interpreting instructions and engaging in autonomous decision-making, such as determining the steps in a process, creating a plan, and executing it with minimal guidance. Like generative AI, agentic AI operates on large datasets using machine learning models, but it goes beyond sequential prediction by incorporating memory, reasoning, and decision frameworks to act on information retrieved from user inputs.
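By contrast, a minimal sketch of an agentic loop, again using purely hypothetical function names, shows how planning and memory wrap around the underlying model: the system decomposes a goal, acts step by step, and records outcomes that inform later steps:

# Toy agentic loop (all function names are hypothetical placeholders).
def plan(goal):
    # Reasoning step: decompose the goal into ordered sub-tasks.
    return [part.strip() for part in goal.split(",")]

def execute(task, memory):
    # Action step: a real agent would invoke external tools or APIs here.
    return f"completed '{task}' using {len(memory)} prior result(s)"

def run_agent(goal, max_steps=10):
    memory = []  # persists across steps, unlike one-shot generation
    for task in plan(goal)[:max_steps]:
        result = execute(task, memory)  # act with minimal human guidance
        memory.append(result)           # remembered outcomes shape later steps
    return memory

for entry in run_agent("draft contract, check indemnity clause, send for review"):
    print(entry)

It is this loop of planning, acting, and remembering, rather than any single output, that makes the system look like an “agent” in the legal sense discussed below.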


This evolution from supervised systems to autonomous decision-makers challenges the traditional notions of agency law.


Agency and the Liability Gap with Legal Personhood Status


Agency is the creation of a principal-agent relationship in which the principal gives the agent authority to act on the principal’s behalf. In this relationship, the agent can bind the principal to an agreement as long as the agreement falls within the agent’s actual authority or, in some cases, within authority a third party reasonably perceives the agent to have.

Historically, agency law required the agent to be a legal person, either an individual or a corporate entity. With the rise of agentic AI, however, applying agency law becomes difficult: AI lacks the status of legal personhood and is instead categorized as “property,” since the systems operate only through their owners’ access and control. This “liability gap” makes it challenging to determine whether AI has the legal authority to bind a principal and, if it does, which party bears the risk of liability. The problem is compounded by the “black box” nature of such systems, which makes it impossible for humans to explain or interpret how AI systems reach their decisions, further widening the liability gap.

This liability gap is creating challenges for legislatures and forcing courts to determine where to assign liability when AI systems are involved.


For example, in Mobley v. Workday, Inc., 740 F. Supp. 3d 796 (N.D. Cal. 2024), the United States District Court for the Northern District of California analyzed whether Workday, a third-party AI vendor, could be liable for employment discrimination under Title VII and other federal civil rights laws. The court allowed the plaintiff’s claim to move forward on the theory that Workday could be held liable as an agent of an employer when its AI hiring tools perform traditional hiring functions (i.e., screening resumes). While this case remains under review, the decision suggests that legal personhood may matter less to the liability analysis of AI systems than the functions the acting agent performs. Here, it was screening resumes; in another case, it may be entering into contracts.


Contracts in the Age of AI


As courts face more cases involving AI, parties will likely look to contractual language to resolve potential liability preemptively. Risk allocation at the drafting stage is more critical than ever: parties must be strategic in how they allocate risk and must anticipate the unpredictability and opacity of AI systems.


Indemnity Clauses

Indemnification clauses create a legal obligation between two parties, where one party agrees to take on the other party’s liability under specified contractual terms. However, even when interpretable AI systems are involved, making it easier for a user to understand a system’s decisions, it remains unclear how to pinpoint and classify the exact conditions that trigger an indemnity clause, such as who directly caused a harm and when that harm took place. Because indemnity clauses typically require a defined triggering event, the lack of one creates a gap the clause may not cover. If parties can more precisely define the situations or functions the system performs, they can likely reduce their potential exposure.


Data/IP Rights Clauses


AI systems rely heavily on the data they analyze and interpret. Contracts should therefore clarify data privacy and intellectual property rights to better outline and protect against potential liability. Ambiguity could lead to the unintentional use of confidential or private data, or of data covered by a party’s intellectual property rights, as in Andersen v. Stability AI, which involved various intellectual property claims surrounding Stability AI’s generative AI tool. Clear language on these rights would likely better position parties to avoid unnecessary liability in the ever-evolving world of AI.


Conclusion 


Even as AI systems become more interpretable and explainable, they will continue to challenge the traditional notions of agency law: their autonomous decision-making and lack of formal legal personhood create liability gaps that current legal frameworks are not designed to address. Legal frameworks are slow to catch up, judicial interpretation proceeds case by case, and legislation requires deliberation, and this lag widens the gap in real-time practice. In the meantime, parties will have to remain strategic in managing and allocating risk to avoid creating ambiguities in contract clauses and agency assignments.


*The views expressed in this article do not represent the views of Santa Clara University.


