Racing Against AI: How Courts Are Ill-Equipped to Handle Technology
In the race between AI and the law, the latter is clearly falling behind. AI is among the fastest-growing technologies in the world, and its influence can be felt in every industry. The courts are no exception. As AI lawsuits continue to appear on the dockets of federal courts, one issue reappears often with no resolution in sight: copyright infringement. Generative AI is built by training algorithms on vast amounts of content such as text, data, images, and audio. Essentially, these algorithms are fed content from across the internet, and the programs retain that material in their training data. This process has sparked controversy, as record companies like Sony, Universal, and Warner have all filed lawsuits over the unlawful use of their copyrighted materials in these training processes. They are joined by media publications like the New York Times and the Chicago Tribune, which claim that their articles and content are being fed to these algorithms and later reproduced in chatbot responses without due credit. As people look to the courts with emerging questions about the future of copyright law, one answer is clear: the courts have not accounted for advances in technology. Analysis of cases like Kadrey v. Meta and The New York Times v. Microsoft and OpenAI shows that, at present, the law is ill-equipped to handle such rapid developments.
Kadrey v. Meta is a lawsuit filed by authors Sarah Silverman, Richard Kadrey, and Christopher Golden over Meta’s artificial intelligence model, LLaMA. The authors alleged that Meta infringed their copyrights by using their books to train the AI model without permission. They brought several claims, including direct and vicarious copyright infringement, violations of the Digital Millennium Copyright Act (DMCA), unfair competition, unjust enrichment, and negligence. The Northern District of California made an important decision by dismissing all of the claims except one: the direct copyright infringement claim based on Meta’s unauthorized use of the authors’ books for training. The court dismissed the negligence claim with prejudice and held that the federal Copyright Act preempted the state-law claims. Finally, and perhaps most importantly, the court found that LLaMA’s output did not qualify as a derivative work of the authors’ books; because the output is not a substantial adaptation or transformation of the original content, it cannot support a violation of the DMCA. The decision narrowed the case to the remaining issue of fair use, expected to be addressed in March of 2025.
On the other side of the country, The New York Times v. Microsoft and OpenAI, in the Southern District of New York, is set to be the East Coast’s biggest AI case. The New York Times Company (NYT) filed suit against Microsoft and OpenAI, alleging similar copyright and intellectual property violations stemming from the use of NYT content to train generative AI models. NYT’s complaint accuses OpenAI of direct and contributory copyright infringement, removal of copyright management information, and misappropriation, arguing that its content was used without authorization in ways that harm its business interests. In response, OpenAI filed a motion to dismiss several of these claims, seeking to streamline the legal issues and defend its use of copyrighted material as fair use in training its models. The case reflects broad concerns within the publishing and technology industries about intellectual property rights and sets the stage for a landmark decision on fair use and copyright in AI model training.
A substantial issue arises when one considers the fundamental lack of predictability in these cases. Because copyright law currently makes no mention of generative AI, it is increasingly difficult to anticipate how judges will substantiate their rulings. The U.S. Patent and Trademark Office has acknowledged that training programs “will almost by definition involve the reproduction of entire works or substantial portions thereof.” This echoes the reasoning in Kadrey v. Meta, which held that generative AI outputs are not derivatives of existing work. Together, these positions suggest that AI does not build upon the work it is trained on, but instead reproduces content verbatim without credit. Neither, however, indicates how courts will rule on fair use, which remains the crux of nearly every remaining decision. If courts follow the principle that fair use requires a creator to build upon the original work, developers must prove that even if AI outputs are not derivatives of the original content, some type of new work is being produced.
As one can see, the application of current copyright and fair use law becomes convoluted when courts apply rules written for human-generated content to technologically generated work. Furthermore, with little legislation or legal precedent addressing generative AI, there is real potential for inconsistent decisions. Already, West Coast courts, as in Kadrey v. Meta, appear more lenient toward AI developers than their counterparts in New York. With cases piling up and no clear timeline for decisions, it is evident that the legal system is struggling to keep pace with the rapid evolution of AI. Until there is comprehensive reform or a consistent judicial standard that addresses the nuances of generative AI, courts will continue to face challenges balancing the rights of content creators with the interests of technological innovation.
Navyaa Jain is a sophomore studying Computer Science - Economics at Brown University. She can be reached at navyaa_jain@brown.edu.
Yani Ince is a senior concentrating in History and Political Science. She is a blog editor for the Brown Undergraduate Law Review. She can be reached at ianthe_ince@brown.edu.