
An Analysis of the EU AI Act: Risks v. Rewards


Artificial Intelligence (‘AI’) has already changed the way the world works through its ability to perform tasks that would normally require human intelligence. Amid rapid AI development and advancement, regulating the technology in a way that minimizes harm without stifling innovation is a major challenge for legislators.


The EU AI Act, passed by the European Parliament in March 2024, is the “first comprehensive regulation on AI by a major regulator anywhere,” gaining unanimous support from the EU’s 27 member states. Brazil’s Congress made a similar attempt in September 2021, passing a bill that creates a legal framework for AI but that still awaits approval by Brazil’s Senate. The EU AI Act has yet to go into effect and will likely be rolled out in stages through 2027.


Who Does The EU AI Act Apply To?

The Act applies within the EU and also reaches beyond it, covering non-EU actors that intend to place high-risk AI systems on the EU market and non-EU actors whose high-risk AI systems produce output used in the EU. The Act targets “providers” (developers) and “deployers” of AI systems, placing more responsibility on providers and less on deployers. A provider is any person or institution that develops an AI system under its own name or brand, whereas a deployer is any person or institution that uses an AI system under its own authority in the course of a professional activity.


Categorization of AI Systems

The Act assigns applications of AI to four risk categories: unacceptable, high, limited, and minimal. Unacceptable-risk systems are prohibited outright, and high-risk systems will be the most heavily regulated. Limited-risk AI systems face lighter transparency obligations, under which users must be made aware that they are interacting with AI – as with chatbots like ChatGPT, deepfakes, and the like. Lastly, minimal-risk systems, such as AI-enabled video games and spam filters, will be left unregulated. According to Title II, Art. 5 of the Act, AI systems prohibited as posing unacceptable risk include, among others, those using manipulative or deceptive techniques to distort behavior, biometric categorization systems that infer sensitive attributes, and social scoring.
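
To make the tiering concrete, the toy Python sketch below maps example use cases onto the Act’s four categories. It is illustrative only: the tier names and most examples come from this article’s summary (the high-risk example, recruitment screening, is drawn from Annex III of the Act), and actual classification turns on detailed legal criteria rather than a lookup table.

```python
# Illustrative only: a toy mapping of example AI use cases onto the
# EU AI Act's four risk tiers. Real classification depends on detailed
# legal criteria (e.g., Title II Art. 5 and Annex III), not a lookup.

RISK_TIERS = {
    "unacceptable": {  # prohibited outright (Title II, Art. 5)
        "social scoring",
        "manipulative or deceptive behavioral techniques",
        "biometric categorization inferring sensitive attributes",
    },
    "high": {  # heaviest compliance obligations (Annex III areas)
        "recruitment screening",
    },
    "limited": {  # transparency duties: users must know it's AI
        "chatbots",
        "deepfakes",
    },
    "minimal": {  # left unregulated
        "spam filters",
        "AI-enabled video games",
    },
}

def risk_tier(use_case: str) -> str:
    """Return the tier for a known example, or 'unclassified' otherwise."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified"

print(risk_tier("deepfakes"))       # -> limited
print(risk_tier("social scoring"))  # -> unacceptable
```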


The Act also makes special note of general-purpose AI (GPAI). It defines a GPAI model as one that “displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems.” The Act places specific obligations on providers of GPAI models, with fewer restrictions for open-source models and more for models that present a “systemic risk” – a designation contingent on the amount of computation used to train the model, with systemic risk presumed once cumulative training compute exceeds 10^25 floating-point operations.
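
As a rough illustration of how that compute threshold works in practice, the sketch below estimates training compute using the common ≈6 × parameters × tokens heuristic from the scaling-laws literature and compares it against the Act’s 10^25 FLOP presumption. The heuristic and the example model sizes are assumptions for illustration, not anything defined in the Act.

```python
# Sketch: checking a model's estimated training compute against the
# EU AI Act's systemic-risk presumption (Art. 51: 10^25 FLOPs).
# The 6 * N * D estimate is a scaling-laws heuristic, not part of the Act.

SYSTEMIC_RISK_FLOPS = 1e25  # threshold at which systemic risk is presumed

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training compute: ~6 FLOPs per parameter per training token."""
    return 6 * n_parameters * n_tokens

def presumed_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
    """True if estimated cumulative training compute exceeds the threshold."""
    return estimated_training_flops(n_parameters, n_tokens) > SYSTEMIC_RISK_FLOPS

# Hypothetical models: 7B parameters on 2T tokens stays well under the
# threshold (~8.4e22 FLOPs); 1.8T parameters on 10T tokens crosses it
# (~1.08e26 FLOPs).
print(presumed_systemic_risk(7e9, 2e12))     # -> False
print(presumed_systemic_risk(1.8e12, 1e13))  # -> True
```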


Challenges of the EU AI Act

Although the Act has positive intentions, challenges will inevitably arise in trying to regulate AI. A major one is crafting rules and regulations that keep pace with the speed of AI development – known as the “velocity” challenge. In the race to keep up with corporate AI competitors around the globe, nations must keep themselves from becoming reckless, which will require “legal guardrails.” The problem is that existing regulatory structures were built on “industrial era assumptions that have already been outpaced by the first decades of the digital platform era,” meaning legislation needs more “agility” to adapt to the modern corporate world.

Another challenge lies in determining what to regulate, given AI’s multifaceted capabilities. Because this area of the law is so vast and unsettled, no “one-size-fits-all” regulation will suffice: a rule too vague or broad risks doing little to curb past and ongoing digital abuses. Proposed legislation therefore needs to be flexible, with sections targeted at the different risks presented. There are also trade-offs between restricting certain applications of AI and regulating the models themselves, for example by requiring more transparency. The EU AI Act categorizes AI systems into tiers and places restrictions or prohibitions accordingly, but because many AI terms lack uniform definitions, ambiguity results. For example, whether a model falls under the GPAI definition can be unclear, yet that distinction carries large legal implications, particularly for copyright law and attribution requirements for training data.


It is also unclear who gets to regulate AI. The EU appears to be the front-runner not only because of its establishment of the AI Act, but also because of the “first mover advantage to regulation.” The EU has led digital platform policy for quite some time, having established the Digital Markets Act and the Digital Services Act. It is therefore likely that the EU could take the global lead here, though whether other countries, such as the US, will compete for that role remains a matter of speculation.


The goal is for the Act to become a global standard that shapes AI regulation around the world for the better. Although a final draft was published online on January 21st of this year and the Act has passed the European Parliament, formal implementation has yet to begin, and tools for tracking the work in progress are available online.


*The views expressed in this article do not represent the views of Santa Clara University.
