Celebrity deepfakes, not cyber attacks, are the most common form of malicious AI use, Google’s DeepMind division has found.
As more celebrities have to deny any affiliation with the activities of their AI doppelgangers, the research suggests deepfakes are currently most problematic for those in public life, who can be made to appear to say or do anything with AI.
Fraudsters exploit celebrities precisely because they are well known, aiming to deceive the public through fake videos, images and messaging, often posted online. Fraud of this kind can cost celebrities their credibility or falsely associate them with activities such as scam advertising. The Financial Times reported that this scenario is “almost twice as common as the next highest misuse of generative AI tools”: using AI to shape or influence public opinion.
Another route to influencing public opinion is AI meddling in election campaigns involving the most recognisable leaders, as happened when fake audio of Joe Biden’s voice appeared to appeal to voters – adding to fears that AI will be used to influence society later this year.
Ranked by the findings, the motivations of bad actors put “opinion manipulation” at the top, linked to techniques such as disinformation and defamation.
Below that, bad actors develop AI attacks for scams and fraud, monetisation and profit, harassment, terrorism and cyber attacks.
UK voters head to polling stations next week to have their say in the early general election – will AI be a danger to democracy?
Researchers analysed around 200 cases of AI misuse between January 2023 and March 2024.