A study conducted by researchers at institutions including the Georgia Institute of Technology and Johns Hopkins University has revealed racial and gender bias in a popular artificial intelligence system.
The AI, which gathers its data by synthesizing publicly available information, showed clear bias against marginalized groups such as people of color and women.
In the experiment, researchers tracked how frequently the AI assigned labels such as “doctor,” “criminal” and “homemaker” to individuals of varying genders and races. The AI identified women as “homemakers” more often than white men, labeled Black men “criminals” 10% more often than white men and labeled Latino men “janitors” 10% more often than white men, according to a Georgia Tech College of Computing publication. Women of all ethnicities were also less likely than white men to be identified as a “doctor.”
“One of the big sources of change I think we can make is looking at the human process of developing these technologies,” Dr. Andrew Hundt, a co-author of the study and a computing innovation postdoctoral fellow at Georgia Tech, said on Wednesday’s edition of “Closer Look.” “We need to start by assuming there is going to be identity bias of some kind or another — be it race, gender, LGBTQ+ identity [or] national origin … You need to prove you’ve addressed it before it goes out and quantify those issues.”
As artificial intelligence software becomes increasingly embedded in everyday life through automated applications like facial recognition and social media algorithms, the research team hopes to shed light on the role of ethics in technological development.