The First Amendment doesn’t have an AI exception
By MILES BARRY — mabarry@ucdavis.edu
Two days before the 2023 Slovak parliamentary election, an audio recording of Michal Šimečka, a pro-Western, liberal-leaning candidate, was widely shared online. The recording depicted him bragging about rigging the vote, and, to make matters worse, a separate recording of him discussing raising the price of beer also went viral. The problem? Both recordings were fake.
Whether or not these recordings played a significant role in the Slovakian election results, they represent a growing issue: the use of artificial intelligence (AI) to create and spread misinformation that can interfere in the democratic process.
This has already impacted voters in the United States. In January 2024, Steve Kramer, a political consultant, used AI to reach thousands of New Hampshire voters with a robocall impersonating Joe Biden. The call, which was distributed before the Democratic presidential primary, discouraged people from voting and urged them to “save” their vote for the general election. Kramer was fined $6 million and charged with voter suppression and impersonating a candidate.
Many states have responded to this threat with legislation targeting “deepfakes” — images, audio or videos created or altered by AI that depict an individual performing actions that never occurred. As the technology improves and this false media becomes increasingly convincing, deepfakes may have profound consequences for future elections. They could depict candidates accepting bribes, rigging votes, engaging in adultery or performing lewd acts — information that would fundamentally alter their public perception. The harm becomes more potent when such media is published directly before Election Day, leaving a beleaguered, deepfaked candidate no time to prove it false.
As of January 2026, 46 states have passed legislation targeting deepfakes. While some of these laws seek to curb other horrific uses of the technology — like using AI to generate pornographic images of an individual without their consent — many target election-related misinformation. In the process, many go too far, to the point of potentially infringing on publishers’ First Amendment rights.
For example, New Mexico House Bill 182 (2024) criminalizes the distribution of “materially deceptive media” (deepfakes) without a disclaimer within 90 days of an election. To be convicted, a person must have known the media was false and intended to alter voting behavior. In fairness, the bill contains carve-outs for parody and for deepfakes that are appropriately labeled. But despite these limits, New Mexico’s ethics commission is concerned that the bill is unenforceable, as it will likely violate the First Amendment. The law’s intent requirement — proving that someone both knew the content was false and intended to alter voting behavior — is difficult to establish without mind-reading, and the broad definition of “materially deceptive” could sweep in legitimate political commentary.
California has gone even further. In late 2024, California Governor Gavin Newsom signed a suite of aggressive bills targeting AI-generated political speech. One of these bills, the Defending Democracy from Deepfake Deception Act (Assembly Bill (AB) 2655), required social media platforms to block or label AI-generated content. It was later struck down in federal court for violating Section 230 of the Communications Decency Act, with the court ruling that federal law prohibits states from treating online platforms as publishers of user-provided content.
Another, AB 2839, prohibited the distribution of “materially deceptive” content within 120 days of an election. The law was so broad that it initially required even satirical deepfakes to carry a disclaimer and allowed any citizen to sue for damages. This bill was also struck down because it failed “strict scrutiny” under the First Amendment; the court found that the law was a “blunt tool” that unconstitutionally stifled the free exchange of ideas.
California’s bills were struck down for good reason. The U.S. hasn’t historically restricted speech solely because it is false or deceptive — regardless of whether it’s created by AI. Doing so would require the government to establish which facts are “true” and which are “false.” There is longstanding legal precedent against this, rooted in the Supreme Court case United States v. Alvarez. In that 2012 decision, the Court ruled that “falsity alone may not suffice to bring the speech outside the First Amendment.” Additionally, the U.S. has a long tradition of combating misinformation with more speech, not outlawing it.
Deepfakes represent a shift in efficiency, not a fundamental change in the nature of deceptive media. They’re faster and cheaper to produce than traditional digital manipulation, but their capacity for harm is identical to that of a doctored video or photo. Creators of doctored videos or photos are only held responsible if their creation breaks existing speech laws — against libel, fraud, voter suppression and voter intimidation, for example. Therefore, while deepfakes provide a new tool to produce certain types of speech, we shouldn’t be creating new categories of crime for “materially deceptive” content just because it’s made by AI.
We already have robust laws in place to prevent criminal speech. When Douglass Mackey tricked thousands of voters into “texting” their ballots in 2016, or when Steve Kramer used an AI-generated Joe Biden voice to discourage primary voters in 2024, the legal system was already equipped to respond. By identifying these actions as criminal conduct, specifically voter suppression, we can protect the ballot box without giving the government license to establish “true speech.”
Written by: Miles Barry — mabarry@ucdavis.edu
Disclaimer: The views and opinions expressed by individual columnists belong to the columnists alone and do not necessarily indicate the views and opinions held by The California Aggie.