
From Copilot to Decision-Maker: What Happens When AI Agents Enter Fiduciary Roles?

What is “agentic” AI? 

Recent advances in artificial intelligence have introduced a new category of AI tools often described as “agentic.” Earlier generations of AI were largely assistive: they helped users find information, summarize documents, and draft text, while leaving execution to the users themselves. Agentic tools go further by actually executing tasks on the user’s behalf.

Agentic tools are increasingly appearing across industries in the form of AI agents, particularly in fields like personal investing. The investing platform Public has developed an AI agent that translates user prompts into an investment strategy. The agent can also place stop-loss orders, buy protective puts, and move idle cash into higher-yielding assets such as bonds. Although Public insists that its AI agent does not make independent decisions, the agent’s ability to execute trades directly blurs the line between tool and advisor.

AI agents are also reshaping the legal industry. A new law firm called Crosby uses AI agents to review contracts before a human lawyer evaluates the output. This workflow has allowed Crosby to move away from the traditional billable hour, under which lawyers charge clients based on the amount of time spent working on a matter, and toward per-contract pricing. The shift aligns the firm’s financial incentives with those of its clients, who want deals closed faster.


Industry Application

Labor-intensive industries are deploying agentic AI tools to assist with expensive, highly repetitive tasks, particularly in portfolio management and contract review. Professionals spend significant time executing routine tasks in these areas, even though those tasks still require a degree of judgment. By automating both execution and parts of decision-making, AI agents offer companies a way to cut costs while increasing efficiency.

In the investment context, platforms like Public market their AI agents as tools that can streamline portfolio management. By executing trades, placing stop-loss orders, and reallocating idle funds, these systems reduce the amount of time and expertise required from the user. This creates a more accessible experience for investors while allowing firms to serve a larger number of users without expanding their advisory workforce. As a result, investment platforms can scale more easily, shifting from a labor-intensive model to one that more closely resembles software-based services.

A similar dynamic is present in legal services. Crosby’s use of AI agents to review contracts significantly reduces turnaround time for routine transactional work. Faster contract review is particularly valuable in high-volume deal environments, where delays can slow down business operations. 

In addition, Crosby’s move away from the billable hour toward a per-contract pricing model reflects how AI can reshape not only how legal work is performed, but also how it is priced. Fixed or per-task pricing structures may offer greater predictability for clients and align incentives toward efficiency rather than time spent. 

The broader takeaway is that agentic AI allows firms to increase output without a corresponding increase in headcount. Traditionally, both finance and law have been constrained by the number of professionals available to perform work. AI agents weaken that constraint by embedding execution into software, enabling companies to grow without scaling labor at the same rate. 

However, these benefits come with meaningful risks. Because AI agents operate in high-stakes environments, errors can be costly. A flawed investment decision or an overlooked contractual issue can lead not only to financial loss, but also to reputational damage for the company. As firms rely more heavily on AI-driven processes, the potential impact of mistakes increases, making trust and reliability central to the long-term viability of these business models.


What is a fiduciary duty? 

A fiduciary duty arises when one party places trust and confidence in another, and that other party accepts responsibility to act on the first party’s behalf. A fiduciary relationship requires the party who accepts that responsibility, often called the “agent,” to act in good faith and in the best interest of the party who placed the trust, often called the “principal.” Attorneys owe fiduciary duties to their clients, and stockbrokers with advisory power owe fiduciary duties to their investors. Therefore, if AI is making meaningful decisions inside fiduciary relationships, the law is bound to eventually question who is acting for whom. Companies like Public and Crosby do not want their AI tools to be treated as agents performing fiduciary duties, because that treatment would expose them to liability for breaching those duties.

Public’s strongest argument against fiduciary status is that its agents do not independently manage investors’ accounts. According to Public, the investor who uses the platform dictates investment strategy, reviews the workflow, and can stop or edit it at any time. Even so, the question remains: if the platform translates a broad prompt into real portfolio action, how much discretion is it effectively exercising?
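
To make the discretion question concrete, here is a minimal sketch, in Python, of how a human-in-the-loop control of the kind Public describes might be wired. Everything in it is hypothetical: the ProposedOrder schema, the propose_actions planner, and the approval gate are illustrative stand-ins, not Public’s actual implementation. The point is structural, showing where the user’s veto sits between planning and execution.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProposedOrder:
    """One action the agent wants to take (hypothetical schema)."""
    kind: str       # e.g., "stop_loss", "protective_put", "sweep_to_bonds"
    symbol: str
    quantity: float
    rationale: str  # the agent's explanation, shown to the user

def propose_actions(prompt: str) -> List[ProposedOrder]:
    """Stand-in for the planning step: translate a broad user prompt into
    concrete orders. This is where the system's discretion lives; the
    broader the prompt, the more the planner decides on its own."""
    return [
        ProposedOrder("stop_loss", "ACME", 100, "limit downside per prompt"),
        ProposedOrder("sweep_to_bonds", "GOVT", 5000, "idle cash earns more"),
    ]

def run_with_approval(prompt: str,
                      approve: Callable[[ProposedOrder], bool],
                      execute: Callable[[ProposedOrder], None]) -> None:
    """Human-in-the-loop gate: nothing executes without explicit sign-off."""
    for order in propose_actions(prompt):
        if approve(order):  # the user can stop or edit the workflow here
            execute(order)

run_with_approval(
    "Protect my tech positions and put idle cash to work",
    approve=lambda o: input(f"Execute {o.kind} on {o.symbol}? [y/N] ") == "y",
    execute=lambda o: print(f"Executed {o.kind}: {o.quantity} {o.symbol}"),
)

On this design, the fiduciary question maps onto a single point in the workflow: if the approval gate defaults to yes, or quietly disappears for orders deemed routine, the platform has shifted from executing instructions to exercising discretion.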

The SEC has stated that unlimited investment discretion can indicate a relationship “primarily advisory in nature,” though limited discretion may not. A relationship is primarily advisory when “such a level of discretion by a broker-dealer is so comprehensive and continuous that the provision of advice in such a context is not incidental to effecting securities transactions.” Therefore, the more comprehensively and continuously the AI appears to make decisions, rather than merely executing explicit user instructions, the harder it is to maintain that it is just a tool.

Crosby’s position is more precarious. If Crosby’s AI simply helps licensed attorneys work faster, a court would likely view it as permissible technology-assisted law practice. However, if the AI is performing legal analysis and reaching independent legal conclusions while the human lawyer merely rubber-stamps the result, the model starts to look legally vulnerable. That is, the AI agent could be at risk of engaging in the unauthorized practice of law, for which the supervising attorney could be disciplined and/or held liable.


Who bears responsibility for the AI’s mistakes? 

The company that deploys the AI is the most obvious party to bear responsibility for its agent’s mistakes. After all, it built and trained the system, designed the prompts and defaults, and marketed the product. Moreover, if the product is sold as competent enough to act on a user’s behalf, the company is, at a minimum, at risk of misrepresentation or false-advertising claims. The human professional may also be liable for individual negligence in overseeing the AI. In legal services, the lawyer who signs off on the work remains the clearest accountability point, because lawyers are subject to their jurisdiction’s rules of professional responsibility. In finance, exposure may turn on whether the service is framed as automation or as something akin to ongoing financial advice. Finally, companies will argue that the user, by agreeing to use the service, implicitly approves the workflows and strategies and, consequently, accepts the risks inherent in these tools. Yet as these tools grow more sophisticated and the line between human and AI decision-making blurs, companies like Public and Crosby may find assumption-of-risk arguments harder to sustain.


What does agentic AI mean for junior professionals? 

Agentic AI may also disrupt the traditional recruiting and training pipelines on which professions like law and finance have long depended. Firms like Salesforce have invested heavily in agentic AI while reducing hiring and laying off employees. Salesforce argues that entry-level roles are being reimagined around “orchestration” of AI agents rather than pure execution, with employers increasingly valuing judgment, adaptability, and critical evaluation of AI outputs. Others warn this shift may hollow out the junior pipeline itself. For instance, Microsoft executives worry that agentic AI is eroding the entry-level work through which young professionals traditionally learn. In high-trust industries, that erosion poses a longer-term risk to the quality of professional services. Firms may automate the very tasks that once trained junior lawyers, analysts, and consultants, even though those professions still depend on human judgment to supervise AI and bear responsibility when it fails.


Conclusion

AI can meaningfully reduce labor and streamline decision-making, but in finance and law, it cannot displace accountability. The ultimate responsibility still rests with a human actor who must stand behind the outcome. As a result, AI may shift how decisions are made, but not who answers for them. Efficiency may increase, but accountability remains firmly human.


*The views expressed in this article do not represent the views of Santa Clara University.

