
Why Venture Capital Firms See A Higher Investment Risk For AI Startups

Credit: Jeremy Barande | Flickr


When it comes to venture capital, the philosophy has always been to make big bets and capture huge returns. One of the largest returns in venture capital history came from eBay, where a $6.7 million investment grew to $26 billion in just two years, an annualized return of more than 6,000%. By comparison, the S&P 500 index averages around a 10% return per year.

Technological advancement has always been the focus of venture capital investors. Over the past few years, that focus has shifted specifically to artificial intelligence and its potential for outsized returns. AI remains a booming industry for Silicon Valley investors, but harsh realizations may begin to soften this market.

Artificial intelligence may be the biggest and most profitable market for venture capitalists since the creation of the internet. Like the early internet, however, the industry is full of unknowns. Litigation, social factors, and privacy concerns are still new frontiers for AI technology. Many of today's AI models are trained on works protected by U.S. copyright, and issues such as algorithmic discrimination and data privacy pose difficult questions for AI startups and their investors. These unknowns, along with market oversaturation, may soften the AI market for venture capitalists and future startups.

Legal Issues

Given the rapid adoption of AI and the uncertainty surrounding the legal issues it raises, AI-related lawsuits are on the rise. This article discusses recent AI litigation involving unlawful discrimination, privacy laws, and intellectual property rights.


On September 11, 2023, the United States Equal Employment Opportunity Commission (EEOC) and iTutorGroup Inc. entered into a settlement agreement resolving the EEOC's first-ever AI discrimination lawsuit.

iTutorGroup is a group of three integrated companies that provides English-language tutoring services to students in China. On May 5, 2022, the EEOC sued the companies for violating the Age Discrimination in Employment Act (ADEA), alleging discrimination in their hiring process. The complaint alleged that iTutorGroup programmed its tutor application software to automatically reject female applicants aged fifty-five or older and male applicants aged sixty or older.

The alleged discriminatory software was discovered when an applicant submitted two otherwise identical applications with different birth dates: one listing her real age, over fifty-five, and one listing a younger age. She was invited to interview on the application that placed her age below fifty-five, but was rejected on the application that placed her above it. The parties reached a settlement requiring iTutorGroup to pay $365,000, to be distributed among applicants who were automatically rejected because of their age. In addition, iTutorGroup agreed to adopt an anti-discrimination policy, conduct anti-discrimination training for its hiring team, and stop requesting applicants' birth dates.

With 79% of employers using AI in their recruiting and hiring processes as of February 2022, litigation alleging discriminatory hiring practices is bound to increase.


Many AI lawsuits have also been filed alleging violations of privacy rights, including recent class actions against well-known generative AI products made by companies such as Google and OpenAI.

In a class action complaint filed on September 5, 2023 against OpenAI and its largest investor, Microsoft Corporation, plaintiffs allege eleven causes of action across 122 pages. The complaint alleges violations of the Electronic Communications Privacy Act (18 U.S.C. §§ 2510, et seq.), the Computer Fraud and Abuse Act (18 U.S.C. § 1030), the California Invasion of Privacy Act (Cal. Penal Code § 631), the California Unfair Competition Law (Business and Professions Code § 17200), and New York General Business Law §§ 349, et seq. In addition, the complaint alleges negligence, invasion of privacy, intrusion upon seclusion, larceny/receipt of stolen property, conversion, and unjust enrichment.

The lawsuit alleges that OpenAI used stolen personal information from hundreds of millions of internet users to create ChatGPT, an AI text generator that provides detailed responses to user prompts. The complaint further alleges that the company has used data collected from ChatGPT users.

Privacy concerns surrounding AI are substantial, so as AI development and usage grow, lawsuits like this one are likely to keep arising. The outcome of this litigation will shape how AI models are developed, and it could leave the door open to further privacy claims. Depending on the result, investors and users may hesitate to do business with companies producing AI.

Intellectual Property

Additionally, numerous AI lawsuits have been filed asserting claims of copyright and trademark infringement. These claims challenge developers’ use of data collected from the internet to train generative AI models, and courts must determine whether collecting and using publicly available data, which may be subject to copyright protection, constitutes infringement.

In Getty Images (US), Inc. v. Stability AI, Inc., Getty Images sued Stability AI, asserting that Stability AI “scraped” Getty’s website for images and data to train its image-generating model, Stable Diffusion. Getty argues that Stability AI’s conduct constitutes unfair competition in violation of Delaware’s Uniform Deceptive Trade Practices Act. Getty’s complaint alleges that: (1) Stability AI reproduced Getty’s copyrighted material in training its Stable Diffusion model; and (2) the model creates infringing derivative works as output. Stable Diffusion has also generated images that include a modified version of Getty’s watermark, which Getty asserts violates 17 U.S.C. § 1202(a). Getty seeks damages and an order requiring Stability AI to destroy Stable Diffusion models trained on Getty’s content. Stability AI has moved to dismiss the complaint on jurisdictional grounds, and the motion remains pending.

With the rise of copyright and trademark infringement lawsuits, courts will have to determine the legality of “data scraping” in generative-AI litigation. Getty’s lawsuit illustrates the changes and challenges ahead for licensing digital assets for AI. If the court determines that “data scraping” constitutes copyright or trademark infringement, generative-AI litigation could increase dramatically. As a result, venture capitalists may be wary of investing in AI because of these potential risks.

How can VCs stay ahead of litigation, discrimination, and privacy concerns?

As litigation and social policies concerning AI continue to unfold, the outlook for venture capitalists in this market will keep shifting. Litigation may increase the risk of once-profitable investments, and social factors such as discrimination and privacy concerns may outweigh an investment’s potential upside.

One possible solution is self-regulation by venture capital firms. Just last week, thirty-five venture firms signed a how-to guide for structuring AI startups, which includes running responsible startup teams, adding safety checks to software, and avoiding pitfalls that can lead to litigation. These guidelines will be self-enforced by investors and startups, but they are a promising start for an industry that still resembles the Wild West.

As of now, AI continues to be a strong investment for venture capitalists, but only time will tell who will emerge as the big winners of this new frontier.

*The views expressed in this article do not represent the views of Santa Clara University.

