A proposed Georgia deepfake election interference law could return

This upcoming legislative session, some Georgia legislators are aiming to tackle the use of deepfakes created using artificial intelligence to influence elections. (Matthew Pearson/WABE)

This past election cycle, artificial intelligence tools played a greater role than ever before.

During the 2024 election cycle, some New Hampshire voters received fake robocalls that used AI to replicate President Joe Biden’s voice telling them not to vote in the primary. The Federal Communications Commission fined political consultant Steven Kramer $6 million for paying to send out the calls.

This is an example of a deepfake, which is an artificially generated image, video or audio recording depicting someone doing or saying something that they did not do or say.

Across the U.S., several states have passed or have attempted to pass legislation to regulate the use of deepfakes in elections.

As the legislative session approaches in Georgia, some lawmakers are looking to target disinformation and misinformation, including a revamp of a previously failed bill criminalizing the use of deepfakes to influence elections.

Last legislative session, Georgia lawmakers did not pass H.B. 986, but earlier this month, the Georgia Senate Study Committee on AI pushed for an “updated Deep Fake law to include election interference, transparency and labeling” among its list of recommendations.

One of the early critics of the bill, the American Civil Liberties Union of Georgia, argued that it could criminalize speech protected by the First Amendment of the U.S. Constitution.

Georgia State Rep. Brad Thomas, who penned the bill, told WABE that using deepfakes to impersonate political candidates and deceive voters is fraud.

“The simple fact is fraud is fraud,” he said. “And creating videos of people doing things they didn’t do, I think a lot of people feel like that’s fraud.”

State Sen. John Albers, the chair of the Senate Study Committee on AI, told WABE in an August interview that he doesn’t see AI regulation as a partisan issue.

“AI using a deepfake in order to sway an election ought to be a concern to anyone,” he said. “That should scare anybody, that anybody can get misinformation. We want elections to be safe and secure and honest and legitimate.”

State Sens. Jason Esteves and Sheikh Rahman were the two Democratic legislators on the Senate study committee. Lawmakers who backed the original deepfake legislation and served on the study committee were mostly Republican, but Esteves said that is because Republicans control the legislature.

According to Esteves, both Democratic and Republican lawmakers are interested in the topic of AI and will work together to understand and regulate it, especially ahead of the 2026 midterm elections.

“It’s better to get out ahead of issues like this, versus having to react after the fact, after an election has been influenced by something that is obviously fake,” he said. “The reality is that the technology continues to evolve and get better, and two years from now, deepfake videos will be much, much better than they are today, recorded audio will be much better than they are today, so we have to anticipate those issues and make sure we’re nipping it at the bud.”

Still, people may need to work out disagreements on specific provisions, Esteves added.

First Amendment carve-outs

Though it never became law, the bill underwent several changes that incorporated certain First Amendment provisions and narrowed the pool of individuals who could be in violation of the law.

The original text of H.B. 986 defined a person committing election interference using a deepfake as one who “creates, publishes, broadcasts, streams, or uploads a deepfake within 90 days of an election with the intent to deceive one or more electors for the purpose of” influencing election results or decreasing a candidate’s likelihood of being elected.

The bill was “narrowly tailored” to prevent the use of deepfakes to influence elections in “bad faith.”

Under the original text of the bill, anyone found in violation of the act would have been guilty of a felony and could have faced between one and five years in prison and a fine of up to $50,000. The Georgia State Election Board would also “release to the public the findings of any completed investigation” of violations of the act.

Testifying in front of the Georgia Senate Judiciary Committee on Jan. 29, 2024, ACLU of Georgia First Amendment Policy Advocate Sarah Hunt-Blackwell said H.B. 986 was “introduced in good faith” and “proactive and forward-thinking,” but needed carve-outs that she said would protect First Amendment rights.

Among these recommendations were “clear and obvious disclaimers” on posts made using deepfake technology, elimination of criminal penalties, and general exemptions for news media outlets, social media platforms and satire and parody.

Hunt-Blackwell told WABE that the language of the bill left a lot up to interpretation and could have ended up criminalizing activity by a broader range of users, including speech protected by the First Amendment.

“The way that the bill was worded, it was really vague and it would have allowed for a lot of things to fall through the cracks,” Hunt-Blackwell said.

For example, she said, if a piece of AI-generated political content was created before the 90-day window set by the law, would someone who reposts that content within the window be acting in violation of the law?

Thomas, who originally drafted the bill, said no.

“It specifically says the crime was to upload with the intent to deceive,” he said. “That’s the crime. It’s very specifically written, very tightly written.”

But the person or organization who uploaded the deepfake content, Hunt-Blackwell said, could be hard to identify.

“That can be really difficult to pinpoint,” Hunt-Blackwell said. “Memes and videos and things like that go viral all the time without not necessarily knowing who the originator of that content is.”

A subsequent substitute of the bill narrowed the prohibition on creating deepfake materials to influence elections to those working for a political campaign or political organization.

Under the final text of the bill, the law would not have applied to voters and residents who are not working for or affiliated with a political campaign or organization, nor to broadcasters, online services and other similar platforms and people.

The revisions also added a clause to account for First Amendment rights, saying that the act would not apply to protected activities like “satire, parody, works of artistic expression, or works of journalism by bona fide news organizations.”

Criminal penalties

While the final version of the bill did incorporate elements of the ACLU of Georgia’s other recommendations, such as First Amendment exemptions for media outlets, it also kept the bill’s criminal penalties, which the organization maintains can easily invite abuses of power by the state.

The final substitute of the bill raised the minimum sentence to two years for a violation of the law while allowing the attorney general, the person depicted in a deepfake, or a political candidate harmed by the deepfake to maintain a cause of action for injunctive relief against the person publishing the deepfake content.

In addition, soliciting someone else to publish “materially deceptive media” to influence an election would also have been a crime under the final version of the bill.

“A criminal prosecution for a violation of this Code section shall only be initiated upon the Attorney General receiving a recommendation to prosecute from the State Election Board,” the bill read.

The final substitute of H.B. 986 stated that people can use AI-generated content in political campaign ads but need to follow certain disclosure guidelines. Otherwise, they would face a minimum $10,000 fine. “Minor editing” that doesn’t “substantially change” an audience’s understanding of the content would not have needed disclosure.

The ACLU of Georgia has argued that injunctive relief and civil claims under tort law are preferable to criminal penalties in the case of this bill. Hunt-Blackwell, the First Amendment policy advocate, said the organization understands that the intent of the legislation was to ensure the safety of elections and the accuracy of election-related information online.

“What we are always concerned about from a First Amendment perspective is the restricting of information, not the addition of information,” Hunt-Blackwell said. “So if there is a post that is AI-generated and you add some more information, add some more context to it saying, ‘Hey, just so you know, this was AI-generated,’ that is far less problematic than just either removing it altogether or trying to criminalize the person who has created that content.”

ACLU Senior Policy Counsel Jenna Leventoff said the state should not have the power to define disinformation and prosecute people by that definition.

“Misinformation, disinformation, all of that is protected by the First Amendment because when you think about it more broadly, someone has to be the arbiter of truth, and that is ripe for abuse,” Leventoff said. “So if the government is deciding what is and isn’t true, that is what they could use to stifle political dissent, to stifle legitimate discourse.”

Thomas, on the other hand, said he believes engaging in election interference and disinformation should come with a hefty price.

“That’s a matter of debate. Do people think that it’s OK to go and create a deepfake of an elected official or anybody for that matter, doing something they didn’t do with the intent to deceive other people for any reason?” he said. “To me, if it’s a major election, committing fraud to try and get somebody else not elected or to get somebody else elected for that matter, I think that should carry a pretty heavy penalty.”

“I think that one of the big things that we have to do is make sure that we’re protecting the democratic process,” he added.

Hunt-Blackwell said the ACLU of Georgia is expecting lawmakers to propose a similar bill and is willing to figure out specific language with legislators.

“We very much look forward to the opportunity to work out these kinks with legislators and trying to find a middle ground that actually works and that actually preserves our First Amendment rights as well as maintains the integrity of our elections,” she said.

Thomas said he believed the bill died last session because the General Assembly “wasn’t quite ready for AI yet.”

“As people become more aware or my fellow lawmakers become more aware of some of the issues that we’re having with AI, I think it’ll help move things through the system,” he said. “Part of the job is not just writing good legislation but helping other lawmakers understand the need behind it.”

Thomas applauded Albers’ efforts to further educate lawmakers on the uses and implications of AI by chairing the study committee.

“Our goal is always to make sure that we’re putting up guardrails to make sure that the technology is used in an ethical manner but not stifling innovation,” he said.