Wednesday, April 23, 2025

Next-Gen Phishing: The Rise of AI Vishing Scams TechTricks365


In cybersecurity, the online threats posed by AI can have very material impacts on individuals and organizations around the world. Traditional phishing scams have evolved through the abuse of AI tools, growing more frequent, sophisticated, and harder to detect with every passing year. AI vishing is perhaps the most concerning of these evolving techniques.

What is AI Vishing?

AI vishing is an evolution of voice phishing (vishing), where attackers impersonate trusted individuals, such as banking representatives or tech support teams, to trick victims into performing actions like transferring funds or handing over access to their accounts.

AI enhances vishing scams with technologies including voice cloning and deepfakes that mimic the voices of trusted individuals. Attackers can use AI to automate phone calls and conversations, allowing them to target large numbers of people in a relatively short time.

AI Vishing in the Real World

Attackers use AI vishing techniques indiscriminately, targeting everyone from vulnerable individuals to businesses. These attacks have proven to be remarkably effective, with the number of Americans losing money to vishing growing 23% from 2023 to 2024. To put this into context, we’ll explore some of the most high-profile AI vishing attacks that have taken place over the past few years.

Italian Business Scam

In early 2025, scammers used AI to mimic the voice of the Italian Defense Minister, Guido Crosetto, in an attempt to scam some of Italy’s most prominent business leaders, including fashion designer Giorgio Armani and Prada co-founder Patrizio Bertelli.

Posing as Crosetto, attackers claimed to need urgent financial assistance for the release of kidnapped Italian journalists in the Middle East. Only one target fell for the scam in this case – Massimo Moratti, former owner of Inter Milan – and police managed to retrieve the stolen funds.

Hotels and Travel Firms Under Siege

According to the Wall Street Journal, the final quarter of 2024 saw a significant increase in AI vishing attacks on the hospitality and travel industry. Attackers used AI to impersonate travel agents and corporate executives to trick hotel front-desk staff into divulging sensitive information or granting unauthorized access to systems.

They did so by directing busy customer service representatives, often during peak operational hours, to open an email or browser with a malicious attachment. Because AI tools can so convincingly mimic the hotel’s business partners, phone scams came to be considered “a constant threat.”

Romance Scams

In 2023, attackers used AI to mimic the voices of family members in distress and scam elderly individuals out of around $200,000. Scam calls are difficult to detect, especially for older people, but when the voice on the other end of the phone sounds exactly like a family member, they’re almost undetectable. It’s worth noting that this incident took place two years ago—AI voice cloning has grown even more sophisticated since then.

AI Vishing-as-a-Service

AI Vishing-as-a-Service (VaaS) has been a major contributor to AI vishing’s growth over the past few years. These subscription models can include spoofing capabilities, custom prompts, and adaptable agents, allowing bad actors to launch AI vishing attacks at scale.

At Fortra, we’ve been tracking PlugValley, one of the key players in the AI Vishing-as-a-Service market. These efforts have given us insight into the threat group and, perhaps more importantly, made clear how advanced and sophisticated vishing attacks have become.

PlugValley: AI VaaS Uncovered

PlugValley’s vishing bot allows threat actors to deploy lifelike, customizable voices to manipulate potential victims. The bot can adapt in real time, mimic human speech patterns, spoof caller IDs, and even add call center background noise to voice calls. It makes AI vishing scams as convincing as possible, helping cybercriminals steal banking credentials and one-time passwords (OTPs).

PlugValley removes technical barriers for cybercriminals, offering scalable fraud technology at the click of a button for nominal monthly subscriptions.

AI VaaS providers like PlugValley aren’t just running scams; they’re industrializing phishing. They represent the latest evolution of social engineering, allowing cybercriminals to weaponize machine learning (ML) tools and take advantage of people on a massive scale.

Protecting Against AI Vishing

AI-driven social engineering techniques, such as AI vishing, are set to become more common, effective, and sophisticated in the coming years. Consequently, it’s important for organizations to implement proactive strategies such as employee awareness training, enhanced fraud detection systems, and real-time threat intelligence.
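To make the fraud-detection idea concrete, here is a minimal, purely illustrative sketch of one common building block: scanning a call transcript for classic vishing red flags such as urgency, credential requests, and payment pressure. The keyword lists, categories, and thresholds below are invented for demonstration; real systems layer in much richer signals (voice liveness, caller reputation, behavioral analytics).

```python
# Toy heuristic: score a call transcript for common vishing red flags.
# All phrase lists and categories are illustrative assumptions, not a
# production detection ruleset.

RED_FLAGS = {
    "urgency": ["immediately", "right now", "urgent", "act fast"],
    "credentials": ["one-time password", "otp", "verification code", "pin"],
    "payment": ["wire transfer", "gift card", "crypto", "bank details"],
    "secrecy": ["don't tell anyone", "keep this confidential"],
}

def vishing_risk_score(transcript: str) -> tuple[int, list[str]]:
    """Return a naive risk score (number of red-flag categories hit)
    and the list of categories that matched."""
    text = transcript.lower()
    hits = [category for category, phrases in RED_FLAGS.items()
            if any(phrase in text for phrase in phrases)]
    return len(hits), hits

call = ("This is your bank's security team. We need your one-time "
        "password immediately to stop a wire transfer.")
score, flags = vishing_risk_score(call)
print(score, flags)  # 3 ['urgency', 'credentials', 'payment']
```

In practice a score above some threshold would route the call for review rather than block it outright, since keyword matching alone produces false positives; the point is only to show how even simple transcript analysis can surface the pressure tactics described above.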

On an individual level, the following guidance can aid in identifying and avoiding AI vishing attempts:

  • Be Skeptical of Unsolicited Calls: Exercise caution with unexpected phone calls, especially those requesting personal or financial details. Legitimate organizations typically do not ask for sensitive information over the phone.
  • Verify Caller Identity: If a caller claims to represent a known organization, independently verify their identity by contacting the organization directly using official contact information. WIRED suggests creating a secret password with your family to detect vishing attacks claiming to be from a family member.
  • Limit Information Sharing: Avoid disclosing personal or financial information during unsolicited calls. Be particularly wary if the caller creates a sense of urgency or threatens negative consequences.
  • Educate Yourself and Others: Stay informed about common vishing tactics and share this knowledge with friends and family. Awareness is a critical defense against social engineering attacks.
  • Report Suspicious Calls: Inform relevant authorities or consumer protection agencies about vishing attempts. Reporting helps track and mitigate fraudulent activities.

By all indications, AI vishing is here to stay. In fact, it is likely to keep growing in volume and improving in execution. With the prevalence of deepfakes and the ease of launching campaigns through as-a-service models, organizations should anticipate that they will, at some point, be targeted with an attack.

Employee education and fraud detection are key to preparing for and preventing AI vishing attacks. The sophistication of AI vishing can lead even well-trained security professionals to believe seemingly authentic requests or narratives. Because of this, a comprehensive, layered security strategy that integrates technological safeguards with a consistently informed and vigilant workforce is essential for mitigating the risks posed by AI vishing.

