Guest article contributed by Andrew Shikiar, Executive Director at FIDO Alliance

Phishing had become unmanageable for businesses long before the emergence of Large Language Models and generative AI. Now this cyberattack technique, already responsible for over 90% of data breaches, has been supercharged by a technology that makes it near-impossible to detect. The industry can no longer contend with fraudsters the way it has for nearly two decades. The advancement of generative AI calls for something more…advanced.

To provide some sense of the scale of the problem, the average company experiences 700 social engineering attacks per year, of which an average of 57 are aimed at the CEO. In 2022, we saw a 38% increase in the global volume of cyberattacks, reaching an all-time high in Q4. In the past, many phishing attacks could be easily identified through poor grammar, poor localisation or unrealistic schemes. But now that generative AI tools have hit the scene, bad actors have powerful assets to make phishing attacks far more convincing and scalable. In other words, an already monumental problem is getting bigger and, thanks to generative AI, it’s getting smarter too.

Generative AI has changed the security game

When used for good, technologies like ChatGPT have the potential to save businesses valuable time, money and labor, thanks to their content creation and language processing abilities. However, we increasingly see them being misused and weaponised to make phishing scams that much harder to detect.

While cyber criminals can abuse publicly released generative AI tools directly, we have already seen ‘innovations’ result in tools like FraudGPT and WormGPT, which have been created and shared on the dark web explicitly for use in cyber crime. These tools jailbreak the official services so that they can be used for purposes that go far beyond the technology’s intended use, bypassing built-in restrictions. In this case, they are used to develop business email compromise (BEC) attacks by creating highly convincing phishing emails and even phishing websites.

In the past, it was possible to detect a large proportion of phishing emails or text messages with a simple eye test. But now, the poor spelling and grammar that would normally arouse suspicion are effectively eliminated, and even awkward phrasing is ironed out to make phishing messages more convincing. Not only that, but attacks can be carried out in almost any language desired. This means phishing attacks can and will increase exponentially – in volume, sophistication and overall efficacy.

AI experts often talk about the singularity, where AI surpasses human intelligence and control. While this remains a hypothetical scenario, we have arguably reached this point when it comes to identifying phishing and social engineering attacks. Fuelled by advancements in generative AI, it is now inevitable that a person within an organisation will at some stage inadvertently divulge their credentials as a result. 

Some will argue that businesses can fight AI with AI, adopting software that claims to identify content written by generative AI. Even ignoring the mixed results these tools provide, this is a fundamentally flawed approach. Fighting AI with AI creates another round of the same game where success relies on detecting all, or at least a significant number, of phishing attacks. This will lead to an arms race, where phishing attacks and the technology behind them will adapt and become ever more sophisticated and harder to detect in response.

Why we need to rewrite the rulebook

The problem lies in the act of trying to detect phishing emails and social engineering. No amount of training or detection software will ever be a silver bullet. Businesses, and especially leaders responsible for cyber security, need to accept that they are playing the game on fraudsters’ terms, and to begin thinking about the problem differently.

Boiling it down to its basics, the primary reason fraudsters engage in social engineering is to get hold of people’s credentials – in order to then take over accounts, access sensitive resources and/or perpetrate further crimes. Typically, this sort of credential attack involves a victim being sufficiently convinced to click a link to a seemingly legitimate website and enter their user ID and password – an approach that worked on half of surveyed enterprises in 2022.

Now the fraudster is free to use these credentials on a range of accounts and in a range of scenarios to gain access to a business’ systems and ultimately extract money or data, or both. It is only by going back to the root of the problem that businesses can begin to rewrite the rules – by making credentials un-phishable in the first place.

How do we get there?

As a reminder, 74% of all breaches are caused by human error, privilege misuse, use of stolen credentials or social engineering – the vast majority of which take advantage of knowledge-based “secrets” such as passwords. By eliminating this very weak link in the corporate security chain, we can remove the possibility of fraudsters cashing in should they succeed in duping somebody with an email or message. The good news is that technology is now available for users to authenticate themselves through simpler, yet stronger, passwordless verification methods.

Passkeys are one example of this, coupling cryptography with the on-device biometrics or PINs that people already use to unlock their phones and other devices. The result is that with just a touch of a finger or a quick facial scan, users can log into their accounts safely and seamlessly – without fear of unwittingly handing over their credentials to scammers through spoofed websites. Passkeys as a primary authentication method bring far greater security – and usability – than passwords.
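The underlying idea is public-key cryptography: at registration the device creates a key pair, keeps the private key locked behind the local biometric or PIN, and gives the server only the public key; at sign-in the server sends a fresh random challenge and the device returns a signature the server can verify. The shared secret that phishing depends on never exists. A minimal sketch of that challenge–response flow, using a deliberately tiny textbook RSA key purely for illustration – real passkeys use standard algorithms such as ECDSA P-256 with hardware-backed keys:

```python
import hashlib
import secrets

# --- Registration (on the user's device) ---
# Textbook-sized RSA key pair, insecure and for illustration only.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: never leaves the device

# The server stores only the public key (n, e) for this account.

# --- Authentication ---
challenge = secrets.token_bytes(32)  # server sends a fresh random challenge

# Device: after a successful local biometric/PIN check, sign the challenge.
digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
signature = pow(digest, d, n)        # signing uses the private key

# Server: verify the signature with the stored public key.
assert pow(signature, e, n) == digest  # login succeeds, nothing phishable sent
```

Because each challenge is random and only a signature over it crosses the network, a fraudster who intercepts the exchange learns nothing reusable – unlike a password.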

For example, at an organisation that has adopted passkeys, should an employee follow a link to a fraudulent site, they would not be able to enter a password because they simply don’t have one. It is also not possible for fraudsters to instead ask for their biometrics in an attempt to capture and use them, because the associated credentials remain hidden and secure on the employee’s device.
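This phishing resistance comes from origin binding: a passkey is scoped to the exact website it was created for, and the browser will only offer it to that origin. A conceptual sketch of that lookup in Python – the dictionary and function names here are hypothetical stand-ins for browser behaviour, not a real WebAuthn API:

```python
# Illustrative model of origin-bound credentials (names are hypothetical).
passkeys = {
    "https://payroll.example.com": "credential-id-123",  # created at the real site
}

def credential_for(origin: str):
    """The browser only offers a passkey whose origin matches exactly."""
    return passkeys.get(origin)

# On the legitimate site, sign-in proceeds.
assert credential_for("https://payroll.example.com") == "credential-id-123"

# On a look-alike phishing domain, no credential exists – and there is no
# password the employee could be tricked into typing instead.
assert credential_for("https://payroll-example.com") is None
```

The security decision is made by software matching origins, not by a human judging whether an email or page looks legitimate – which is exactly the judgment generative AI has made unreliable.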

Device-bound passkeys, such as those found on hardware security keys from vendors like Yubico and Google, can also function as an unphishable second factor on top of enterprise Single Sign-On platforms such as those from Okta, Duo and Ping Identity. These SSO platforms enable other second-factor options, such as one-time passcodes sent via SMS or an authentication app; these options are stronger than a password alone, but are susceptible to social engineering – as was the case in last year’s 0ktapus attack. FIDO Security Keys, on the other hand, feature device-bound passkeys that are immune to such attacks.

The industry is putting its support behind passkeys, which are built upon open standards from the FIDO Alliance and the W3C WebAuthn community. Google recently announced that passkeys are now available for all its users as a way to move away from passwords and two-step verification, as has Apple. Windows 10 and 11 have long supported device-bound passkeys in Windows Hello – and passkeys from iOS or Android devices can also be used to sign into sites in Chrome or Edge on Windows.

We must not let apathy reign: inaction is indefensible

Many security leaders understand the impact that phishing attacks fuelled by generative AI have on their business, and may already be planning to guard against this. The solution won’t be found in technology alone – in fact, one may argue that this is as much a communication and education challenge as it is a technical one. These security leaders need to convince key people in their organisation that a threat as old as the internet itself has become business critical, and that the game has changed to such a degree that the old tactics are woefully outdated.

Others will continue to prioritise other IT and security imperatives – perhaps assuming that there’s little they can do to outwit well-armed attackers. But such apathy should not be tolerated, as it is entirely within one’s power to block the vast majority of credential attacks.

For companies that have not yet moved to eliminate passwords and other knowledge-based credentials for user authentication, failing to act now borders on negligence: the attacks are most certainly coming, and solutions to harden one’s enterprise are readily available. Continuing to use passwords rather than moving to a more secure technology like passkeys is a choice, after all. And that choice will have major repercussions if it is not made soon.