The use of artificial intelligence to power harmful deepfakes is set to be addressed this year by the EU's comprehensive AI Act, which will harmonise how countries regulate the highest-risk systems.

In light of this, Google has been forced to pause its latest innovation, an artificial intelligence model that generates digital avatars of people, in a move that has ignited a backlash. The deep-learning system, which shares similarities with OpenAI’s ChatGPT and deepfake technology, creates realistic photographic images from users’ descriptions. Despite its sophistication, the system cannot distinguish hateful or dangerous instructions. Its depiction of ethnicities and genders has drawn particular criticism, with detractors arguing that the technology fails to reflect human diversity.

As a result, some people are depicted in stereotypical contexts.

Google put out a statement to mitigate the backlash, saying it was “working to improve these kinds of depictions immediately” and defending the technology’s ability to depict a “wide range of people”.

Google added:

“It’s generally a good thing because people around the world use it. But it’s missing the mark here.” The statement appeared to acknowledge that some offence had been caused.

A new and improved model will be released soon.