AI as Free Speech? How the Recent Biden Deepfake Robocall Crisis Prompts Us to Consider the Future of AI-Generated Content on Social Media
On January 21st, 2024, a mysterious batch of calls purporting to come from President Joe Biden rang on thousands of New Hampshire residents’ phones. In a voice uncannily like the president’s, the recording urged Democratic voters to stay home, stating, “it is important that you save your vote for the November election…voting this Tuesday only enables the Republicans in their quest to elect Donald Trump.” The robocall even mimicked the cadence and mannerisms of the president, deploying classic Biden quotes such as “what a bunch of malarkey” in an attempt to humanize the recording and persuade recipients of its authenticity.
Following the incident, New Hampshire authorities launched an investigation to uncover the source of the 5,000 to 25,000 AI-generated calls made to potential voters. A subsequent investigation by the Anti-Robocall Multistate Litigation Task Force traced the calls to a Texas-based company called Life Corporation and an individual named Walter Monk, who was identified as responsible for disseminating the fraudulent calls.
After the culprit was identified, the issue took on national urgency, with the Federal Communications Commission (FCC) swiftly issuing a cease-and-desist letter to Life Corporation. According to a news release from the New Hampshire Department of Justice, the letter explicitly declares that any individual who engages in voter suppression by knowingly attempting to prevent or deter another person from voting or registering to vote through fraudulent, deceptive, misleading, or spurious means violates RSA 659:40 III, a statute prohibiting the bribery, intimidation, or suppression of voters in state and federal elections.
Ultimately, this investigation culminated in the FCC’s unanimous adoption of a Declaratory Ruling recognizing robocalls that use artificially generated voices as illegal because of their potential to scam, deceive, and suppress voters. Effective immediately, the ruling enables the FCC to heavily fine companies that use AI voices in their calls and to block service from the providers that carry them.
This Declaratory Ruling, though expedited, stands as a bipartisan victory for policymakers, underscoring the urgency behind a joint AI task force formed by members of the House of Representatives. Given AI technology’s recent role in spreading misinformation and deepfakes, and its implications for artistic integrity, the task force will evaluate how the evolving technology interacts with various sectors, including financial services, housing and business markets, and the integrity of democratic processes. However, this gathering of policymakers represents just one step in the battle for technological accountability and transparency. Legislators, artificial intelligence companies, and social media CEOs have been locked in a protracted struggle between innovation and responsible governance. Just last September, over 60 senators attended a meeting featuring a 22-person open panel that included OpenAI CEO Sam Altman, X owner Elon Musk, Google CEO Sundar Pichai, Microsoft CEO Satya Nadella, and Meta CEO Mark Zuckerberg. The plan was to establish a regulatory proposal for the deployment of AI, but discussions stalled due to differing views on the extent of regulation required.
The outcome of this open panel is crucial to consider. Although the New Hampshire and US governments succeeded in implementing safeguards against intentional deception and misinformation, they were unable to extend these measures to the spread of misinformation on social media platforms. Tech companies have implemented measures to flag deceptive AI content when it is created or distributed on their platforms, but there is currently no binding requirement to police or remove it once it circulates. While the US government can enforce policies against the direct deception of voters, once such misinformation proliferates on social media, it largely falls out of politicians’ hands. Social media companies approach misinformation with an ethic of minimal intervention, for fear of restricting freedom of speech.
In fact, in 2022, Meta and X pledged to take a more hands-off approach to policing deceptive content, with X laying off members of its online safety team. Since acquiring X in 2022, Elon Musk has cut roughly 30% of its global safety staff and significantly relaxed content moderation policies in the name of free speech.
If the US government tries to enforce content moderation, it risks promoting censorship or even inhibiting freedom of speech. This tension is currently on display in a Supreme Court case, Missouri v. Biden (restyled Murthy v. Missouri before the Supreme Court), in which the plaintiffs allege that the Biden administration created a “federal censorship enterprise” after White House officials and federal agencies contacted social media companies with the intent of suppressing conservative-leaning political views. The lawsuit alleges that platforms were pressured to remove disfavored views about COVID-19 policies, the origins of the pandemic, the Hunter Biden laptop story, and other controversial topics. The plaintiffs call on the Supreme Court to reevaluate the state-action question, a legal doctrine concerning the extent of government involvement in private entities’ actions. This reevaluation is critical for maintaining the balance between protecting free speech and preventing government overreach in content moderation on private social media sites.
How does this Supreme Court case relate to the robocall issue and the reluctance of social media CEOs to remove AI-generated content? Two concerning outcomes are important to note ahead of the 2024 election.
First, framing AI as an expression of free speech suggests that the creation, dissemination, and interpretation of generated content should be protected as a fundamental right. This notion necessitates a reinterpretation of the First Amendment. While generative AI lacks human status and thus is not entitled to human rights, it is developed by humans, operated by humans, and utilized by humans to contribute to the marketplace of ideas protected under the First Amendment. At least, this is the case being made by the free speech think tank Foundation for Individual Rights and Expression, which argues that “people, not technologies, have rights. People create and utilize technologies for expressive purposes, and technologies used for expressive purposes, such as to communicate and receive information, implicate First Amendment rights.”
Second, if excessive government intervention in digital policing is ruled unconstitutional, it would set a precedent on the state-action question, with the First Amendment barring the government from censoring speech it deems unfavorable. More specifically, control over the flow of content in digital communication channels could be consolidated in private entities such as generative AI companies and social media organizations. However the Supreme Court rules, its decision may set a precedent that reshapes the landscape of digital communication, influencing not only the regulation of AI-generated content but also the broader dynamics of free speech and government influence in the digital age.
The fraudulent robocalls preceding the 2024 presidential primaries underscore the urgent need for innovative approaches that combat misinformation while safeguarding the integrity of democratic processes. The FCC’s Declaratory Ruling can only extend so far once content circulates on social media platforms. Moreover, the ongoing tension among private social media entities, AI corporations, and civil servants highlights the pressing need for comprehensive solutions to the complex challenges that emerging technologies pose for the digital ecosystem, freedom of expression, and shifting government powers. The robocall incident is merely a microcosm of the larger issues at play in our digital age. As we confront these challenges, it becomes clear that traditional regulatory frameworks may no longer suffice to address the intricacies of AI use and AI-driven misinformation campaigns.
Ava Matos is a staff writer at BULR and a junior concentrating in International and Public Affairs on the Development Track. You can contact her at ava_matos@brown.edu.