Emerging Generative AI Regulations: Decoding the EU AI Act
Generative artificial intelligence, a subset of machine learning, is gaining momentum as a revolutionary technology with the ability to generate new content. Commercial industries from healthcare to advertising are using generative AI to produce diverse content, and generative AI models are increasingly built to interact directly with users. As models continue to improve in accuracy, many routine tasks can be delegated to generative AI, increasing efficiency and innovation across various domains. However, generative AI models, like any technology, have limitations and pose certain risks. These models raise fears of plagiarism, inaccuracy and misinformation, infringement of privacy, and inherent bias. In response, governments worldwide are increasingly seeking legislative measures to address these issues and fill the current regulatory void. In April 2021, the European Commission proposed the first EU AI regulatory framework, called the AI Act, which was subsequently approved by Parliament on June 14, 2023. A final version of the law is expected to be adopted by the European Union at the end of 2023, making this one of the first major governmental steps to regulate AI and mitigate the risks surrounding it.
Even before the emergence of generative AI, the regulatory landscape for AI technology was notably limited. Little legislation covered artificial intelligence and its complexities, with an even greater gap in regulation pertaining to generative artificial intelligence. In the United States, the absence of concrete regulations is evident, marked only by an Executive Order issued in October 2023 that outlines goals and principles surrounding AI use without establishing specific regulatory measures. The hope is that the U.S. will follow the EU's lead in developing comprehensive regulations to address the challenges and risks posed by advancing AI technologies.
Generative artificial intelligence is a sophisticated form of machine learning. Machine learning describes computer models that learn from data rather than following explicitly written rules, training themselves to make predictions on their own. A model is taught to detect patterns in training data and then extrapolate from there. Evaluation data is also fed into the model to assess its accuracy and refine its performance: the model runs on a held-out subset of the data and then checks its predictions against the expected values in that dataset. This iterative process helps the model identify errors and improve itself so that it can accurately classify new data. In this way, a machine learning model can learn to act independently, without a human programming every rule explicitly. Generative AI models employ neural networks to learn effectively from vast amounts of training data. These networks are large mathematical functions with adjustable parameters, which allow models to weight different parts of the data and draw conclusions.
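For readers less familiar with this process, the short Python sketch below illustrates the train-and-evaluate loop described above, using the scikit-learn library and synthetic data. It is a minimal illustration of the general technique, not the implementation of any particular generative AI system.

```python
# A minimal sketch of the train/evaluate loop described above,
# using scikit-learn and synthetic data (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic "training data": feature vectors paired with known labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out a subset as evaluation data to check the model's accuracy.
X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.2, random_state=0)

# The model detects patterns in the training data...
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# ...and its predictions are then verified against the expected values
# in the evaluation set, revealing errors to correct in later iterations.
predictions = model.predict(X_eval)
print(f"Evaluation accuracy: {accuracy_score(y_eval, predictions):.2f}")
```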
While generative artificial intelligence models can be extraordinarily beneficial, without proper limits they can be misused by bad actors or suffer from intrinsic flaws. Content that users or creators feed into an AI model can end up being plagiarized. Some machine learning systems, such as ChatGPT, use data taken directly from the Internet. This can include copyrighted or otherwise protected material, and these models use the information without consent and often do not identify their sources. When ChatGPT answers a user's query, it draws on information taken from websites, but it does not tell the user where that information came from or what was used to answer the question. Models like this might therefore be violating intellectual property laws.
An AI algorithm also has no inherent way of judging whether its data is accurate; it trains itself on whatever data it is given. When AI models are trained on Internet data that includes incorrect information, they can spew back the same incorrect statements, misleading their users. These models give inaccurate information an appearance of legitimacy and can make it harder for users to trust what they see in the future.
The data used for training could also be biased, causing generative artificial intelligence models to produce biased output. If there are biases in the data a model is given, the AI will presumably reflect them, perpetuating racial biases in healthcare, law enforcement, and tech. Researchers at Johns Hopkins University and the Georgia Institute of Technology tested this, programming AI robots to scan people's faces and then designate which faces the AI guessed belonged to criminals. Across the board, the robots labeled the Black faces as criminals. Users have also already noticed that ChatGPT seems to exhibit sexist and racist biases, reflecting the biases that exist on the Internet.
In addition to the shortcomings of the technology itself, generative AI can also be misused for unethical or even criminal purposes. These models can easily be used to create "deepfakes," fabricated images or videos of events that never happened, which can then be used for impersonation, phishing, and cybersecurity attacks. Beyond the direct damage this would cause, such inappropriate and illegal use of these models could make people even more distrustful of the news and media as it becomes harder and harder to gauge their veracity.
As concerns intensify and AI applications expand, the European Union is taking regulatory action with the proposed AI Act, a comprehensive framework to address the evolving challenges of AI technology, including the specific considerations surrounding generative artificial intelligence. The Act would require companies to conduct risk assessments of their systems before the technology can be used, disclose what data was used to build their systems, and implement safeguards to prevent AI systems from generating illegal content. Because the current draft of the Act does not explicitly mention generative AI, the European Union is actively refining and negotiating an amendment to include specific regulations around generative artificial intelligence.
The proposed change creates a three-tiered approach to regulating models, classifying them into three categories with varying levels of regulatory scrutiny. The first category encompasses all models and requires developers to document the model and its training process, including the results of "red-teaming," a process in which testers deliberately try to provoke a model into harmful or unintended behavior. Developers are also required to provide a summary of the content used in developing their models, detail how copyright issues were addressed, and provide mechanisms for individuals to opt out of having their content used. The EU would also conduct its own evaluations of all models.
The second category places further restrictions on "very capable" systems, judged by the amount of computing power needed for their training. Models trained using more than a certain number of floating-point operations, with the threshold to be decided at a later date, would be deemed "very capable." Developers of these models must introduce systems to help uncover systemic risks, and the EU would require compliance tests to be conducted by independent auditors.
The third category consists of generative AI systems that are widely used, specifically those with more than 10,000 registered business users or 45 million registered end users. The EU would place the same requirements on these models as on those in the second category.
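To make the tiering concrete, the Python sketch below expresses the classification logic described above as code. The user-count figures come from the proposal itself; the compute threshold is left as a placeholder because, as noted, it has not yet been decided, and the function name and structure are purely illustrative rather than anything specified in the Act.

```python
# Illustrative only: a hypothetical encoding of the proposed three-tier scheme.
# The compute threshold has not been set, so it is left as a placeholder here.
COMPUTE_THRESHOLD_FLOPS = None  # to be filled in once the EU decides the figure

def applicable_tiers(training_flops: float, business_users: int, end_users: int) -> set[int]:
    """Return the set of regulatory tiers whose requirements would apply to a model."""
    tiers = {1}  # first-category documentation duties apply to every model
    # "Very capable" systems, judged by the computing power used in training.
    if COMPUTE_THRESHOLD_FLOPS is not None and training_flops >= COMPUTE_THRESHOLD_FLOPS:
        tiers.add(2)
    # Widely used systems, subject to the same requirements as the second category.
    if business_users > 10_000 or end_users > 45_000_000:
        tiers.add(3)
    return tiers

# Example: a model with 20,000 business users would fall under tiers 1 and 3.
print(applicable_tiers(training_flops=1e24, business_users=20_000, end_users=1_000_000))
```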
The European Union is poised to become a trailblazer in artificial intelligence regulation, with the AI Act expected to be adopted by the end of the year. As the Act confronts critical challenges posed by generative AI, the hope among experts is that the U.S. Congress and other governments, including those of China, Japan, and Brazil, will legislate restrictions that align with the EU AI Act, fostering international collaboration to ensure the responsible and secure application of artificial intelligence.
Sylvie Watts is a sophomore concentrating in political science and computer science. You can reach her at sylvie_watts@brown.edu.
Kourtney Beauvais is a sophomore at Brown University, concentrating in International and Public Affairs. She is an editor for the Brown Undergraduate Law Review and can be contacted at kourtney_beauvais@brown.edu.