The EU AI Act: How New Guidelines Redefine Prohibited Artificial Intelligence Practices
Artificial intelligence (AI) is becoming increasingly integrated into a variety of practice and business areas, including law, healthcare, banking, education, information technology, and investments. According to a Pew Research Center survey, 55% of Americans say they use AI on a regular basis. Although AI provides benefits like process efficiency, expedited data analysis, and cost reduction, it also introduces several risks: the data used to train AI can be biased and discriminatory, individual data privacy can be compromised, and the use of AI in biometrics raises ethical and moral concerns. As AI becomes embedded not only in the United States but around the world, interest in regulating AI and developing extensive protections for sensitive data has surged.
The European Union has taken the initiative to regulate AI with the EU AI Act. The AI Act came into effect on August 1, 2024, and is described by the European Commission as the “first-ever comprehensive legal framework on AI worldwide.” The Act is designed around “risk-based” rules, in which a heightened risk of societal harm corresponds to stricter regulation. AI risk classifications range from “minimal risk,” such as AI within video games, to “unacceptable risk,” a category of prohibited practices that includes social scoring and biometric categorization. The “unacceptable risk” practices that classify individuals through characteristics such as facial features, speech patterns, and fingerprints raise serious privacy and security concerns and could be used to exacerbate bias, systemic discrimination, and social inequality. These rules address the development, implementation, and use of AI in the EU. The European Commission outlined several next steps following the AI Act’s introduction, mandating that systems be fully compliant with the Act by August 2, 2026.
To establish legal clarity and to aid in the “consistent, effective, and uniform application of the AI Act across the European Union,” the European Commission recently published Guidelines on the prohibited AI practices in the “unacceptable risk” category. The Guidelines are intended to provide insight into the European Commission’s interpretation of Chapter II, Article 5 of the AI Act. Article 5, which entered into force on February 2, 2025, outlines the AI systems that are prohibited in the European Union and provides the legal basis for these prohibitions. That legal basis is grounded in the foundational treaties of the European Union. According to the Guidelines, the AI Act rests on two provisions of the Treaty on the Functioning of the European Union (TFEU): Article 114, which provides the legal basis for the internal market and its free movement of goods, services, persons, and capital, and Article 16, which provides the legal basis for data protection. Article 114 underpins the other prohibitions in the “unacceptable risk” category, such as harmful manipulation and deception, harmful exploitation of vulnerabilities, individual criminal offence risk assessment and prediction, untargeted scraping to develop facial recognition databases, and real-time remote biometric identification. Article 16 specifically provides the legal basis for the prohibition on the use of biometric data. As AI continues to develop, government agencies will have to adapt and undergo regulatory reform to address its integration into society.
The Guidelines also establish notable exceptions to the prohibitions in Article 5. While the emphasis is on data protection, the Guidelines allow the use of AI for safety and medical purposes. In workplaces and adjacent institutions, emotion-recognition AI systems can be utilized for suicide prevention, targeted neuromarketing, or mental health screenings. The European Commission distinguishes harmful subliminal techniques used by AI systems from lawful persuasion: for such influence to be permissible, the long-term benefits must outweigh any temporary discomfort. It is important to note, however, that the Guidelines are non-binding and still await formal adoption by the European Commission. They illustrate that as AI regulation continues to evolve, so will the need for clarity as the use of AI expands.
On a global scale, new ways to utilize AI are discovered every day, but the risk of data breaches and systemic bias makes it imperative that countries protect their citizens from harmful AI systems. It is equally important to recognize that unclear legal regulation can itself exacerbate harm caused by AI. The Guidelines published by the European Commission mark the beginning of the legal leadership that government authorities must exercise to ensure AI regulation is continuously updated to reflect AI’s rapid advancement, provides legal clarity, and mitigates risks while maximizing benefits. By doing so, governments can better balance the pursuit of innovation against the danger of providing a legal basis for discriminatory practices through AI systems. As AI regulation continues to develop internationally, the European Commission’s Guidelines, and future guidelines like them, will help balance innovation and ethics while promoting AI governance.
Dre Boyd-Weatherly is a junior at Brown University concentrating in International and Public Affairs. She is a staff writer for the Brown Undergraduate Law Review and can be contacted at dre_boyd-weatherly@brown.edu.
Veronica Dickstein is a junior at Brown University concentrating in International and Public Affairs. She is a staff editor for the Brown Undergraduate Law Review and can be contacted at veronica_dickstein@brown.edu.