With artificial intelligence, today you can do almost anything. From computer automation to industry research, through hobbies, conversations, and even forecasting, AI has radically changed the way millions of people act, think, and work. Among them, of course, are hackers and scammers: how could they pass up such a powerful tool for building ever more devious and seemingly foolproof scams?
If all this can be achieved with common, legal, and easy-to-use tools, the job is practically done for them. Deepfakes, the AI-based technology capable of replicating a real person's face and expressions, extend naturally to voice cloning: criminals can copy your voice and use it to simulate calls, requests, even phone purchases and verbal approval of contracts and policies. In short, a weapon that can serve a wide variety of purposes, but with one common denominator: stealing data and/or money.
This is precisely how the fake CEO scam was born: a telephone trick in which AI perfectly replicates your boss's voice and urges you to complete a task, such as making a payment or releasing sensitive data. A trick that has already cost several companies millions of euros, both in Italy and abroad, and that even the most cunning and attentive can fall for at any time.
How the phone-based fake boss scam works and why it’s so easy to fall for it
The process is as simple as it is disturbing. Scammers collect a few seconds of audio of the executive they want to impersonate—interviews, LinkedIn videos, conference talks, or even short public clips and voice messages—and feed it into AI-based voice-cloning software. In just a few minutes they create a credible clone, capable of reproducing intonation, accent, rhythm, and even natural pauses.
Then the operational phase begins. The victim—often an employee in the administrative or financial department—receives a phone call or voice message. On the other end is “the boss,” who speaks in a recognizable and familiar tone. The content is almost always the same: an urgent, confidential, and non-deferrable request. It could be a bank transfer to close an acquisition, a payment to a new supplier, or the immediate sending of sensitive data.
The key factor is urgency. Scammers often add credible details (the CEO is travelling, a meeting is in progress, the deal is confidential) to make the request even more plausible. Once the transfer is made, the money moves quickly through intermediate accounts and is then dispersed, making recovery nearly impossible.
In other cases, the target is corporate identity theft: logins, credentials, and documents that can be exploited for even more sophisticated subsequent attacks. Needless to say, the results are devastating.
Famous case studies that have gone viral around the world (and what they teach us)
In recent years, this type of fraud has exploded globally, both in frequency and economic impact. The numbers speak for themselves: deepfake attacks have grown by 3,000% and voice cloning fraud alone has seen a 680% increase in one year. Average losses now exceed €500,000 per case, with peaks reaching tens of millions.
One of the most high-profile cases dates back to 2019, when a manager at a British energy company transferred approximately $243,000 (about €205,000 at the time) after receiving a call in the exact voice of his CEO. The accent, the cadence, even the linguistic nuances were identical. No suspicions, no additional checks. Only after the transfer was made was the scam discovered.
Even more striking is the case of the Asian multinational in 2025: a CFO took part in a video call with seemingly real colleagues and executives. In reality, every face and every voice was a deepfake. The result? Nearly half a million dollars transferred to the criminals in just a few minutes.
But what do all these incidents have in common? The same recurring warning signs found in virtually every scam, phishing included:
- extreme urgency (“need it now, we can’t wait”)
- request for confidentiality (“do not involve others”)
- non-standard procedures (new accounts, unusual transactions)
It is precisely these elements that should set alarm bells ringing. The problem is that traditional verification methods—recognizing a voice, making a call, checking someone's appearance—are no longer sufficient. Today, seeing and hearing no longer means believing.
Original article published on Money.it Italy. Original title: La nuova truffa con l’AI che simula la voce del tuo capo e ti svuota il conto in banca