Source: FT.com
AI-powered vishing scams have arrived, and they are making sophisticated fraud far more convincing: over the past year, AI systems have mastered holding a lifelike conversation.
Writing in the Financial Times, an AI researcher at Bramble Intelligence explained that AI can clone human voices and impersonate victims to create a highly manipulative scam, known as deepfake vishing, that can deceive family members, friends and colleagues.
Starling Bank says one of the most effective ways to protect yourself is to devise a Safe Phrase: an emergency code known only between loved ones. If you receive a call you are suspicious of, the advice is to hang up immediately and stay vigilant.
Modern AI systems now link speech recognition, language processing and speech synthesis to generate a convincing phone call. OpenAI's Realtime API, which arrived on the market last year, does exactly that, connecting the separate components in a single tool. These tools can hold a fluent dialogue, speak in a target's native language and make decisions quickly.
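To make that pipeline concrete, the Python sketch below chains the three stages the article names: speech recognition, language processing and speech synthesis. It is a minimal illustration only; the transcribe, respond and synthesize functions are hypothetical stand-ins, not real Realtime API calls, and an integrated tool like OpenAI's streams audio over a single realtime connection rather than chaining batch steps like this.

    # Illustrative sketch of the three-stage voice-agent pipeline:
    # speech recognition -> language processing -> speech synthesis.
    # All three stage functions are hypothetical placeholders.

    def transcribe(audio_chunk: bytes) -> str:
        """Hypothetical speech-to-text stage (an ASR model would go here)."""
        return "caller said something"  # placeholder transcript

    def respond(transcript: str, history: list[str]) -> str:
        """Hypothetical language-model stage that decides what to say next."""
        history.append(transcript)  # keep conversational context
        return f"Reply to: {transcript}"  # placeholder reply text

    def synthesize(text: str) -> bytes:
        """Hypothetical text-to-speech stage, possibly using a cloned voice."""
        return text.encode()  # placeholder audio bytes

    def handle_turn(audio_chunk: bytes, history: list[str]) -> bytes:
        # One conversational turn: hear, think, speak.
        transcript = transcribe(audio_chunk)
        reply_text = respond(transcript, history)
        return synthesize(reply_text)

    if __name__ == "__main__":
        history: list[str] = []
        audio_out = handle_turn(b"\x00\x01", history)
        print(audio_out)

The point of the sketch is the loop structure: each turn converts incoming audio to text, generates a reply in context, and converts that reply back to audio, which is why such systems can sustain a fluent, fast-moving phone conversation.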
Scam calls that sound exactly like a family member are no longer theoretical; they are now a reality.
Although AI-driven calls could be used for good, in customer service and healthcare for example, companies should be wary of fraudsters exploiting the customer management systems they use.
The rise of AI-driven voice phishing, where fraudsters use speech-based deception to bypass voice-verification systems, underscores the critical identity challenges covered at Identity Week Europe 2026: biometrics resilience, multi-modal biometrics, and fraud-proof digital identity lifecycles. As identity systems increasingly rely on behavioural and biometric cues, the industry must focus on effective governance and layered authentication to stay ahead of evolving threats and protect both organisations and individuals.