AI has empowered fraudsters to sidestep anti-spoofing checks and voice verification and to produce counterfeit identification and financial documents remarkably quickly. Their methods have become increasingly inventive as generative technology evolves. How can consumers protect themselves, and what can financial institutions do to help?
1. Deepfakes Enhance the Impostor Scam
AI enabled the largest successful impostor scam ever recorded. In 2024, the United Kingdom-based engineering consulting firm Arup lost around $25 million after fraudsters tricked a staff member into transferring funds during a live video conference. They had digitally cloned real senior executives, including the chief financial officer.
Deepfakes use paired generator and discriminator algorithms: the generator creates the digital duplicate while the discriminator evaluates its realism, enabling convincing mimicry of someone’s facial features and voice. With AI, criminals can create a convincing clone from as little as one minute of audio and a single photograph. Since these artificial images, audio clips or videos can be prerecorded or generated live, they can appear anywhere.
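For readers curious about the mechanics, the toy sketch below illustrates the generator-versus-discriminator training loop in its most basic form. It assumes PyTorch and uses one-dimensional synthetic data rather than faces or voices; the network sizes, learning rates and data are illustrative assumptions, not a deepfake system.

```python
# Minimal sketch of the generator/discriminator dynamic behind deepfakes,
# trained on toy 1-D Gaussian data rather than images or audio.
# Assumes PyTorch; all hyperparameters are illustrative only.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 3.0   # "real" samples drawn from N(3, 0.5)
noise = lambda n: torch.randn(n, 8)                    # latent input to the generator

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # The discriminator learns to separate real samples from generated ones.
    real, fake = real_data(64), generator(noise(64)).detach()
    d_loss = loss(discriminator(real), torch.ones(64, 1)) + \
             loss(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # The generator learns to produce samples the discriminator accepts as real.
    fake = generator(noise(64))
    g_loss = loss(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(noise(1000)).mean().item())  # drifts toward 3.0 as the generator improves
```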
2. Generative Models Send Fake Fraud Warnings
A generative model can simultaneously send thousands of fake fraud warnings. Picture someone hacking into a consumer electronics website. As big orders come in, their AI calls customers, saying the bank flagged the transaction as fraudulent. It requests their account number and the answers to their security questions, saying it must verify their identity.
The urgent call and implication of fraud can persuade customers to give up their banking and personal information. Since AI can analyze vast amounts of data in seconds, it can quickly reference real facts to make the call more convincing.
3. AI Personalization Facilitates Account Takeover
While a cybercriminal could brute-force their way in by endlessly guessing passwords, they often use stolen login credentials. They immediately change the password, backup email and the phone number registered for multifactor authentication to lock the real account holder out. Cybersecurity professionals can defend against these tactics because they understand the playbook, but AI introduces unknown variables that weaken their defenses.
Personalization is the most dangerous weapon a scammer can have. They often target people during peak traffic periods when many transactions occur — like Black Friday — to make it harder to monitor for fraud. An algorithm could tailor send times based on a person’s routine, shopping habits or message preferences, making them more likely to engage.
Advanced language generation and rapid processing enable mass email generation, domain spoofing and content personalization. Even if bad actors send 10 times as many messages, each one will seem authentic, persuasive and relevant.
4. Generative AI Revamps the Fake Website Scam
Generative technology can do everything from designing wireframes to organizing content. A scammer can pay pennies on the dollar to create and edit a fake investment, lending or banking website with no-code tools within seconds.
Unlike a conventional phishing page, it can update in near-real time and respond to interaction. For example, if someone calls the listed phone number or uses the live chat feature, they could be connected to a model trained to act like a financial advisor or bank employee.
In one such case, scammers cloned the Exante platform. The global fintech company gives users access to over 1 million financial instruments in dozens of markets, so the victims thought they were legitimately investing. However, they were unknowingly depositing funds into a JPMorgan Chase account.
Natalia Taft, Exante’s head of compliance, said the firm found “quite a few” similar scams, suggesting the first wasn’t an isolated case. Taft said the scammers did an excellent job cloning the website interface. She said AI tools likely created it because it is a “speed game,” and they must “hit as many victims as possible before being taken down.”
5. Algorithms Bypass Liveness Detection Tools
Liveness detection uses real-time biometrics to determine whether the person in front of the camera is real and matches the account holder’s ID. In theory, it makes authentication harder to bypass by preventing people from using old photos or videos. However, it isn’t as effective as it used to be, thanks to AI-powered deepfakes.
Cybercriminals could use this technology to mimic real people to accelerate account takeover. Alternatively, they could trick the tool into verifying a fake persona, facilitating money muling.
Scammers don’t need to train a model to do this — they can pay for a pretrained version. One software solution claims it can bypass five of the most prominent liveness detection tools fintech companies use for a one-time purchase of $2,000. Advertisements for tools like this are abundant on platforms like Telegram, demonstrating the ease of modern banking fraud.
6. AI Identities Enable New Account Fraud
Fraudsters can use generative technology to steal a person’s identity. On the dark web, many places offer forged state-issued documents like passports and driver’s licenses. Beyond that, they provide fake selfies and financial records.
A synthetic identity is a fabricated persona created by combining real and fake details. For example, the Social Security number may be real, but the name and address are not. As a result, they are harder to detect with conventional tools. The 2021 Identity and Fraud Trends report shows roughly 33% of false positives Equifax sees are synthetic identities.
Professional scammers with generous budgets and lofty ambitions create new identities with generative tools. They cultivate the persona, establishing a financial and credit history. These legitimate actions trick know-your-customer software, allowing them to remain undetected. Eventually, they max out their credit and disappear with net-positive earnings.
Though this process is more involved, much of it runs on autopilot. Advanced algorithms trained on fraud techniques can react in real time. They know when to make a purchase, pay off credit card debt or take out a loan the way a human would, helping them escape detection.
What Banks Can Do to Defend Against These AI Scams
Consumers can protect themselves by creating complex passwords and exercising caution when sharing personal or account information. Banks should do even more to defend against AI-related fraud because they’re responsible for securing and managing accounts.
1. Employ Multifactor Authentication Tools
Since deepfakes have compromised biometric security, banks should rely on multifactor authentication instead. Even if a scammer steals someone’s login credentials, they can’t gain access without the second factor.
Financial institutions should tell customers to never share their MFA code. AI is a powerful tool for cybercriminals, but it can’t reliably bypass secure one-time passcodes. Phishing is one of the only ways it can attempt to do so.
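As a rough illustration of why one-time passcodes hold up, here is a minimal RFC 6238-style TOTP check written with Python’s standard library. The secret, digit count and drift window are assumptions for the example; a real deployment would use a vetted authentication library.

```python
# Minimal RFC 6238-style TOTP verification using only the standard library.
# The demo secret, digit count and time step are illustrative assumptions.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, for_time: float, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(for_time // step)                    # number of 30-second steps since epoch
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation per the RFC
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted: str, window: int = 1) -> bool:
    # Accept codes within +/- `window` time steps to tolerate clock drift.
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + drift * 30), submitted)
        for drift in range(-window, window + 1)
    )

secret = base64.b32encode(b"bank-demo-secret").decode()
print(verify(secret, totp(secret, time.time())))  # True for a freshly generated code
```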
2. Improve Know-Your-Customer Standards
KYC is a financial service standard requiring banks to verify customers’ identities, risk profiles and financial records. While service providers operating in legal gray areas aren’t technically subject to KYC — new rules impacting DeFi won’t come into effect until 2027 — it is an industry-wide best practice.
Synthetic identities with years-long, legitimate, carefully cultivated transaction histories are convincing but not flawless. For instance, if an applicant’s interactions are actually driven by a chatbot, simple prompt engineering can force the underlying generative model to reveal its true nature. Banks should integrate these techniques into their verification strategies.
3. Use Advanced Behavioral Analytics
A best practice when combating AI is to fight fire with fire. Behavioral analytics powered by a machine learning system can collect a tremendous amount of data on tens of thousands of people simultaneously, tracking everything from mouse movement to timestamped access logs. A sudden change can indicate an account takeover.
While advanced models can mimic a person’s purchasing or credit habits if they have enough historical data, they won’t know how to mimic scroll speed, swiping patterns or mouse movements, giving banks a subtle advantage.
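A minimal sketch of this kind of behavioral scoring, assuming scikit-learn’s IsolationForest and made-up session features such as mouse speed, keystroke interval and login hour, might look like this:

```python
# Sketch of behavioral anomaly scoring for session telemetry with scikit-learn.
# The features and the simulated data are illustrative assumptions,
# not a production fraud model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated history for one customer:
# [mean mouse speed px/s, mean keystroke interval s, login hour]
normal_sessions = np.column_stack([
    rng.normal(420, 40, 500),     # typical mouse speed
    rng.normal(0.18, 0.03, 500),  # typical typing cadence
    rng.normal(20, 1.5, 500),     # usually logs in around 8 p.m.
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A new session with bot-like, unusually regular behavior at 3 a.m.
suspect = np.array([[900.0, 0.02, 3.0]])
print(model.predict(suspect))            # -1 flags an outlier, 1 looks normal
print(model.decision_function(suspect))  # lower scores mean more anomalous
```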
4. Conduct Comprehensive Risk Assessments
Banks should conduct risk assessments during account creation to prevent new account fraud and keep resources out of money mules’ hands. They can start by searching for discrepancies in name, address and SSN.
Though synthetic identities are convincing, they aren’t foolproof. A thorough search of public records and social media would reveal they only popped into existence recently. Given enough time, a fraud professional could root them out, preventing money muling and financial fraud.
A temporary hold or transfer limit pending verification could prevent bad actors from creating and dumping accounts en masse. While making the process less intuitive for real users may cause friction, it could save consumers thousands or even tens of thousands of dollars in the long run.
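Putting those checks together, a simplified onboarding risk assessment could flag identity discrepancies and a thin digital footprint, then apply a temporary hold and transfer limit pending manual review. The record fields, data sources and thresholds below are illustrative assumptions.

```python
# Simplified onboarding risk assessment: flag identity discrepancies and a
# recently created digital footprint, then apply a hold and transfer limit.
# Field names, sources and thresholds are illustrative assumptions.
from datetime import date

def risk_flags(application: dict, bureau_record: dict, earliest_footprint: date) -> list:
    flags = []
    # Core identity fields should agree with the credit bureau's file.
    for field in ("name", "address", "ssn"):
        if application.get(field) != bureau_record.get(field):
            flags.append(f"mismatch:{field}")
    # A persona whose public records and social media appeared only recently is suspect.
    if (date.today() - earliest_footprint).days < 365:
        flags.append("thin_digital_footprint")
    return flags

def account_policy(flags: list) -> dict:
    # Any flag triggers a hold and a low transfer limit until a human verifies the identity.
    if flags:
        return {"status": "hold_pending_verification", "daily_transfer_limit": 500}
    return {"status": "active", "daily_transfer_limit": 10_000}

application = {"name": "Jane Doe", "address": "12 High St", "ssn": "###-##-1234"}
bureau_record = {"name": "Jane Doe", "address": "98 Low Rd", "ssn": "###-##-1234"}
flags = risk_flags(application, bureau_record, earliest_footprint=date(2024, 11, 1))
print(flags, account_policy(flags))
```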
Protecting Customers From AI Scams and Fraud
AI poses a serious problem for banks and fintech companies because bad actors don’t need to be experts — or even very technically literate — to execute sophisticated scams. Moreover, they don’t need to build a specialized model. Instead, they can jailbreak a general-purpose version. Since these tools are so accessible, banks must be proactive and diligent.