This story was provided by WABE content partner Georgia Recorder.
Like many Georgians, Woodstock Republican state Rep. Brad Thomas has taken the opportunity to spend some time with ChatGPT, the popular AI-powered chatbot that has become the face of an apparent renaissance in machine learning and artificial intelligence.
Thomas, chair of a new House Subcommittee on Artificial Intelligence, said his initial experiments with ChatGPT were impressive, but far from perfect.
“Let’s just put it this way, the last thing that it said to me through my questioning was, ‘Something went wrong. If this issue persists, please contact us through our help center at help@openai.com,’” Thomas said. “After I went through this, I wasn’t quite as concerned about it as I was before.”
Though the technologies are just now coming into the public spotlight, some experts predict that artificial intelligence and machine learning could be like the next printing press, radio or internet, innovations that caused major disruptions in the way people live and work.
Thomas, who also serves as vice chairman of the House Technology and Infrastructure Innovation Committee, compared the new technology to the early days of the steam locomotive.
“That was a new thing, an emerging technology, and several of them blew up,” he said. “And the American Society of Mechanical Engineers came out, they set standards – by the industry, which I think is important – we’re not experts in AI, so we’re going to lean heavily on the industry and the appropriate people to make sure that we’re doing the things that make sense.”
The committee’s goal is to figure out best practices for the state before anything blows up.
“I’m glad we’re getting on top of this because the government tends to be notoriously ten years behind on everything, and we definitely don’t have ten years,” said subcommittee member Dar’shun Kendrick, a Lithonia Democrat. “We don’t have a year to be behind on this, to tell you the truth.”
Thomas said members are now in fact-finding mode, with no meetings expected until late August or September at the earliest.
He said he hopes to spend some time asking questions and figuring out what the state can accomplish on its own and what will require cooperation with Congress.
“I like to cast a wide net, then after that, we can drill down on whatever we feel are the most important issues,” he said.
His initial concerns include balancing the potential of AI to supplement the workforce in high-demand fields against the threat of new AI tools displacing jobs, as well as protecting Georgians from deepfakes, audio or visual media created with advanced technology that can appear indistinguishable from the real thing.
“I’ve been studying where you can tell an AI to generate images, and some of them could be kind of scary,” he said. “It can put people in situations they were never in. I guess I want to make sure we’re also looking at not just the good, but the bad part of it, too, so we’re going to take a holistic approach.”
Bad actors could use the technology to create images or videos of real people committing criminal or sexual acts, damaging their reputations. The technology is also allegedly already being used to make traditional scams far more believable.
Thomas referenced the case of Jennifer DeStefano, an Arizona mother who testified to the U.S. Senate after scammers allegedly created a deepfake of her teenage daughter’s voice and tried to solicit a ransom in a fake kidnapping plot. The girl was safe and sound, but DeStefano heard her daughter’s voice screaming for help in a phone call from an unlisted number.
Kendrick, an attorney and investment advisor, said she is concerned that AI-powered scams could mean more victims losing their money.
“If somebody had the voice of my mother, even though I’m not a senior citizen, I might fall for it,” she said. “So it doesn’t even really have to do with vulnerable populations so much as becoming more believable. These scams will become more believable if you can get AI-generated voices behind them.”
Other, less overtly threatening deepfakes have gone viral, including a photo of Pope Francis wearing an unusually stylish jacket. That image was originally posted in a forum specifically for AI-generated art, but soon spread to people who were not aware it was fake.
AI has also found its way into the 2024 presidential election, notably in a video tweeted out by an account associated with Florida Gov. Ron DeSantis. The video, which is critical of former President Donald Trump for not firing medical advisor Anthony Fauci, intersperses real photos of the two together with apparently AI-generated photos of Trump hugging Fauci and even giving him a gentle smooch.
Kendrick said that in addition to exploring ways to protect against scams and misinformation, she wants the committee to look at ways to make sure businesses in rural Georgia are not left out of the opportunities new AI technologies provide.
“AI is going to be connected to having access to broadband and the internet,” she said. “If you’re using ChatGPT to do something, you need internet for that. Anytime you’re talking about something that’s going to be connected to the internet, you’re going to have similar issues, broadband issues.”
Georgia’s Sen. Jon Ossoff, the first millennial elected to the U.S. Senate, held a hearing on AI and human rights earlier this month in the Senate Judiciary Subcommittee on Human Rights and the Law, which he chairs.
Ossoff spoke of job automation, AI-dominated warfare and even scarier possibilities.
“Some influential technologists and engineers, including prominent figures and prominent leaders of the industry, warn of existential risks ranging from catastrophic political destabilization to the development and deployment of weapons of mass destruction to catastrophic cybersecurity threats and to unforeseeable and unknown forms of risk that may emerge alongside more and more powerful forms of artificial intelligence,” he said.
Other committee members and speakers focused on China, which they said is prioritizing the development of new AI technologies, often for surveillance purposes.
But Ossoff also spoke about the potential for AI to improve Americans’ lives.
“Our study of these technologies and associated risks should not blind us, of course, to this technology’s extraordinary potential,” he said. “For example, cancer diagnoses, the development of new, lifesaving drugs and therapies, productivity growth, and the new forms of technological innovation that AI itself could help us to unlock.”