Georgia lawmakers are recommending that all state bodies, cities, counties and school systems create “comprehensive” policies to ensure the ethical use of artificial intelligence.
The bipartisan Georgia Senate Committee on Artificial Intelligence, chaired by Republican State Sen. John Albers, released its final report on Tuesday, which included findings and recommended policies and steps to both promote the technology’s advancement and address potential negative consequences.
The policy guidance comes one month before the start of the 2025 legislative session in Georgia on Jan. 13.
“We want to encourage innovation and great use of what will be literally the biggest thing to happen in our generation, while at the same time, we are protecting consumers and businesses and other people from what could be negative effects as well,” Albers told WABE in an August interview.
“I think we’ve got to figure out how to thread the needle and do that properly because our goal in government is to protect people that can serve them. At the same time, we also have in front of us some of the technology that will both cure and solve some of our greatest questions and problems of our day,” he added.
The report’s first and most sweeping policy recommendation proposes that every “state agency, department, team, School System, County, and City” establish a policy that delineates goals for AI use and modes of ethical use, such as ensuring privacy and transparency and avoiding biases.
The recommendation also proposes that AI policies include roles for “AI governance within the organization, including naming an AI Ethics Board or Committee,” and provide employee training programs for AI risks and responsible uses. The report also calls for AI policies to include ways to update protocols and directions for responding to AI breaches or malfunctions.
Tuesday’s report includes recommendations for adopting a data privacy law, AI use disclosure requirements, and an updated law against the use of deepfakes for election interference, which includes “transparency and labeling.” A similar law did not pass the last legislative session.
The report also supports criminalizing disinformation “with severe penalties” and enforcing the same legal liability standards for an AI product as a physical one.
“Advertising, influencing, intimidating, or coercing individuals/entities through deep fake AI has no legitimate purpose and should be identified and banned with developers held accountable,” the report reads.
Last legislative session, the American Civil Liberties Union of Georgia opposed the state bill criminalizing the use of deepfakes for election interference, arguing that the bill’s language did not adequately protect First Amendment rights.
Additionally, the report proposes adopting a statewide definition of AI, monitoring and updating regulations around AI, creating an AI state board, extending the work of the AI Senate Study Committee to next year, recommending AI tools for public entities to use and mandating return-on-investment data and reports on such tools.
In addition to general statewide recommendations, the study committee’s report also included specific guidance related to health care, public safety, education and workforce training, entertainment, agriculture and manufacturing.
Over the past few months, the AI Senate Study Committee has heard testimony from experts in education, entertainment, agriculture, cybersecurity, health care and more during eight meetings.
Overall, the sector-specific recommendations are in favor of developing and implementing AI tools, such as working with law enforcement to adopt “appropriate uses of AI to increase the efficiency of emergency response and management,” incentivizing entertainment projects that use AI and creating grants for smaller farms to use AI.
The bipartisan committee comprises Albers, Republican State Sen. Max Burns, Democratic State Sen. Jason Esteves, Republican State Sen. Ed Setzler, Republican State Sen. Shawn Still, Georgia Institute of Technology professor Dr. Pascal Van Hentenryck, Deloitte Managing Director of Government and Public Services Robyn Crittenden and Frederic Miskawi, who leads the AI Innovation Expert Services at tech consulting firm CGI.