Does AI Make Financial Scams Harder to Detect?

A few decades ago, financial scams largely took the form of telemarketing calls or poorly written emails from scammers claiming to be a Nigerian prince who wanted to gift the recipient thousands, if not millions, of dollars in exchange for a favor. Another popular email scam claimed the recipient was the sole beneficiary of an estranged relative’s inheritance. But as awareness of these schemes grew, scammers were forced to reinvent their strategies.

Enter artificial intelligence, or AI, as it is more commonly known. While AI encompasses several different technologies, the general premise is that it enables computers to sense, understand, act, and learn with human-like levels of intelligence.

AI can be used for good, but criminals are finding malicious ways to use the technology to perpetrate sophisticated financial scams affecting consumers of all ages and demographics. And with AI, scammers are far more successful at duping their victims, because the technology lets them mine personal information from social media profiles and other online sources to tailor a scam to each individual.

“We call it crime 3.0,” said Haywood Talcove, CEO of the government group at LexisNexis Risk Solutions, a data analytics company that protects against identity fraud, among other services. “The use of this technology — whether it’s artificial intelligence, generative AI, as well as the deep-fake technology — has the ability to blow through most of the tools that have been set up to protect our financial institutions and our government institutions.”

For instance, with ChatGPT, criminals can now send more persuasive letters requesting money, Adam Brewer, a tax lawyer, told Yahoo Finance.

“The difference now is that it’s just more polished. Essentially, they have the computers writing the scripts or writing the letters,” Brewer said. “It’s just going to be a lot harder for the average person to detect.”

Despite numerous awareness campaigns around this new type of fraud, the scams are overwhelmingly successful. Data from the Federal Trade Commission showed that consumers lost roughly $8.8 billion to fraud in 2022, a 19 percent increase over the previous year.

Kathy Stokes, AARP’s director of fraud prevention programs, who leads its Fraud Watch Network, pointed out that such numbers don’t reflect the true scale of the problem because most online scams go unreported.

“We don’t know how big it is,” Stokes said. “But we know it’s so much bigger than we can even imagine.”

In addition to fraudulent financial letters, scammers are using AI to target unsuspecting victims with persuasive FaceTime calls, phone calls, and emails while posing as prospective lovers, close friends, or even government agents from the IRS, according to experts.

One popular way bad actors use AI to scam consumers is by fabricating people who don’t exist. In one instance, the supposed CEO of a crypto investment platform touted the company to prospective investors in a YouTube video. But that CEO didn’t exist; he was a fictitious avatar, generated with AI and programmed to recite a convincing AI-written script.

Most recently, AI financial scams have made the news because the technology allows bad actors to clone voices that sound just like a friend or family member, a technique known as an audio deepfake.

Another popular AI financial scam is a type of romance scam in which fraudsters pose as prospective lovers and con victims out of their money. Fraudsters can use deepfake technology to alter their image and even the sound of their voice. Lonely, elderly men are particularly susceptible to this form of fraud.

“You don’t look like a 25-year-old fraudster. You look like a 40-year-old attractive female and everything fits the image, fits in the voice,” Talcove said. “That to me is when you’re looking at one of the more devastating impacts of artificial intelligence.”

He also spotlighted ransom fraud. In this particular scam, Americans might receive a middle-of-the-night call from what sounds like a family member or close friend urgently pleading for money.

“You’re lying in bed at night. Your phone rings and it sounds like your kid, and they’re stuck in the Bahamas and they’ve been arrested. And you need to send them $5,000 immediately,” Talcove said. “They use generative AI for replication and voice and it’s really not your kid.”

In February 2023, an Arizona woman received a deepfake call from someone she believed to be her 15-year-old daughter. On the call, the girl explained she had been kidnapped and that the kidnappers were demanding a $1 million ransom; otherwise, she would be harmed. As it turned out, a scammer had cloned the teenager’s voice using soundbites from local interviews she had participated in, while the real teenager was safe and sound on a ski trip.

While older Americans were historically seen as more vulnerable to financial scams, Stokes noted that fraud doesn’t solely affect older Americans. In fact, recent FTC data found that young people now fall victim to fraud more frequently than seniors.

“AI is a problem for everyone, not just older adults, and versions of artificial intelligence have been part of fraud for a long time,” Stokes said. “But now this generative AI just really creates so much more sophistication to the way they target people.”

She added: “It’s just when that older adult is the target, they tend to lose so much more because they have those assets that they’ve saved for retirement — an insurance policy if they’re widowed and housing wealth and the criminals will go after that.”

How Can You Protect Yourself from AI Financial Scams?

Because AI allows scammers to create deepfake audio and video clips, and even to spoof the number that appears on your caller ID, it’s important to learn how to protect yourself from potential voice cloning scams and to recognize the ingredients of a scam.

One way to avoid ransom fraud is by creating a family password that a fraudster wouldn’t know, Talcove said. To deter romance fraud, adult children should advise their parents not to send money to strangers.

To protect yourself:

  • Be skeptical of the voice on the other end of the phone when you’re asked for money in any form.
  • Don’t rely on caller ID. Instead, hang up and call the person back at the number you know to be theirs to verify their identity.

Watch out for messages that put you in a “heightened emotional state,” Stokes suggested, such as news that you have won a lot of money or the beginning of an exciting new romantic relationship.

For example, key signs of a scam include the following, made concrete in the short sketch after this list:

  • Pressure to act immediately
  • Use of scare tactics or enticing offers
  • An offer too good to be true
  • Demand for money, typically in an unusual form, either by wire transfer, gift card, pay app, or crypto
  • Requests for sensitive or personal information 
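
For readers who want to see these red flags made concrete, here is a minimal, hypothetical Python sketch that counts how many of the warning signs above appear in a message. The phrase lists are invented for illustration and would catch only the crudest scams; nothing here is a real detection tool.

    # Hypothetical red-flag scanner: counts how many classic scam signals
    # appear in a message. The phrase lists are illustrative assumptions;
    # real scams will not always use these exact words.
    RED_FLAGS = {
        "pressure to act immediately": ["act now", "immediately", "within 24 hours"],
        "scare tactics or enticing offers": ["arrest", "account suspended", "you have won"],
        "too good to be true": ["guaranteed return", "risk-free", "double your money"],
        "unusual payment demand": ["wire transfer", "gift card", "payment app", "crypto"],
        "requests for sensitive info": ["social security", "password", "bank account"],
    }

    def scan_message(text: str) -> list[str]:
        """Return the red-flag categories whose phrases appear in the text."""
        lowered = text.lower()
        return [flag for flag, phrases in RED_FLAGS.items()
                if any(phrase in lowered for phrase in phrases)]

    message = ("You have won a guaranteed return! Act now and send a "
               "gift card within 24 hours or your account will be suspended.")
    hits = scan_message(message)
    print(f"{len(hits)} red flag(s) found: {hits}")  # several hits -> be suspicious

In practice, of course, the real signal is the emotional pressure itself, which is exactly what Stokes describes next.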

“Those things put us into a place in our brain, the amygdala, where it’s hard to come out of it to access logical thinking. Criminals have known that since forever,” Stokes said. “And unfortunately, the tools have become so much better for them, that they’re able to cause so much harm, but we really need people to focus on that.”

“That red flag — it’s that emotional reaction to an incoming communication. That’s the flag. That’s where to disengage,” Stokes said.

When it comes to government financial scams, Brewer added that people should be extremely suspicious of government requests that demand immediate action. Government agencies like the IRS move slowly and are unlikely to make first contact by phone, email, or text.

“They’re going to send letters and it is going to be a slow process,” Brewer said. “So if someone’s calling you or texting or emailing you, telling you you’ve got to act in the next minutes or days or hours, that’s not the time frame the IRS operates on. So you can be sure that it’s most likely a scam.”

Ultimately, Brewer said, the best defense against fraud is awareness of the threat.

“That’s really the difference between someone falling victim and not. If you’ve heard about it, your mind is going to put that into that fraud category,” he said. “Now you’re instantly skeptical, whereas if you haven’t heard of it, your mind might run and have you send money to someone or do something you’ll later regret.”

How Financial Institutions Are Using AI

Luckily for consumers, bad actors are not the only ones using AI to their advantage. More than half of banks and other financial institutions plan to use AI capabilities to better detect fraud.

After fraud rates hit record levels in 2020, many financial institutions realized human analysts alone were no longer sufficient to combat AI-driven scams; they would need AI capabilities of their own, including machine learning. With machine learning, financial institutions can detect evolving and complex financial crimes, identify new types of fraud, and protect consumers and businesses from fraud losses.
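
As a rough illustration of what machine learning for fraud detection can mean in practice, here is a minimal Python sketch using scikit-learn’s off-the-shelf IsolationForest anomaly detector. The features, transactions, and contamination rate are invented for illustration; this is a sketch of the general technique, not a description of any institution’s actual system.

    # Minimal sketch of unsupervised fraud screening with an isolation forest.
    # Features and amounts are invented; production systems use far richer
    # features, labeled feedback, and human review of every flagged case.
    from sklearn.ensemble import IsolationForest

    # Each row: [amount_usd, hour_of_day, distance_from_home_km]
    transactions = [
        [42.50, 14, 3],
        [18.00, 9, 1],
        [61.25, 19, 5],
        [55.00, 12, 2],
        [9800.00, 3, 4200],  # large overnight transfer far from home
    ]

    model = IsolationForest(contamination=0.2, random_state=0)
    model.fit(transactions)

    # predict() returns 1 for inliers and -1 for anomalies worth review
    for row, label in zip(transactions, model.predict(transactions)):
        print("REVIEW" if label == -1 else "ok    ", row)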

Specifically, AI-powered fraud detection systems are often able to detect complex fraudulent activity that a human may miss, such as bank fraud involving multiple accounts, devices, and locations, or fraud spread across different channels, such as online and in-person transactions. For example, bad actors typically use stolen identities to open accounts at multiple banks and launder money through them in a series of transactions.
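
To make that cross-account pattern concrete, here is a small, hypothetical Python sketch that groups accounts by shared login device; one device tied to many supposedly unrelated accounts is the kind of signal such systems surface for review. The account names, device IDs, and cutoff are all invented.

    # Hypothetical sketch: flag devices shared by unusually many accounts,
    # a classic marker of the multi-account laundering pattern described
    # above. All data and the cutoff are invented for illustration.
    from collections import defaultdict

    # (account, device) pairs observed at login
    observations = [
        ("acct_A", "dev_1"), ("acct_B", "dev_1"), ("acct_C", "dev_1"),
        ("acct_D", "dev_2"), ("acct_E", "dev_3"), ("acct_F", "dev_1"),
    ]

    accounts_by_device = defaultdict(set)
    for account, device in observations:
        accounts_by_device[device].add(account)

    SHARED_ACCOUNT_CUTOFF = 3  # illustrative threshold, not an industry rule
    for device, accounts in sorted(accounts_by_device.items()):
        if len(accounts) >= SHARED_ACCOUNT_CUTOFF:
            print(f"{device}: {len(accounts)} accounts -> review {sorted(accounts)}")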

Another benefit financial institutions are finding in AI technology is better fraud risk management. Thanks to predictive analytics, AI allows financial institutions to identify high-risk customers or transactions and to be alerted to potentially fraudulent activity before it occurs.

For example, AI-powered fraud detection systems can flag suspicious customer activity, such as a customer making large transactions that are not consistent with their past financial behavior.
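
As a deliberately simplified illustration of that kind of behavioral check, the Python sketch below flags a transaction that sits far outside a customer’s own spending history. The three-standard-deviation cutoff and the amounts are illustrative assumptions, not an industry standard.

    # Simplified illustration: flag a transaction far outside a customer's
    # own spending history. The 3-standard-deviation cutoff is an
    # illustrative assumption, not an industry rule.
    from statistics import mean, stdev

    past_amounts = [25.0, 40.0, 32.5, 28.0, 45.0, 38.0, 30.0]  # invented history
    new_amount = 2500.0

    mu, sigma = mean(past_amounts), stdev(past_amounts)
    z_score = (new_amount - mu) / sigma

    if z_score > 3:
        print(f"FLAG: ${new_amount:,.2f} is {z_score:.1f} std devs above typical spending")
    else:
        print("Transaction looks consistent with past behavior")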

Of course, the technology has flaws. Some fraud may slip through the detection systems, and some legitimate transactions may be wrongly flagged as fraudulent. There are also privacy concerns around the collection and use of the customer data these systems need in order to learn what is and isn’t normal in a consumer’s financial habits.

Another concern is that once scammers learn how financial institutions are using AI, they will devise ways around it, which is why these AI-powered fraud detection systems will need ongoing training and refinement.

I Think I’ve Been Scammed

If you think you may have been scammed and paid money, there are certain steps you can take to minimize the financial damage.

If you sent money or paid a ransom using:

  • Credit or Debit Card: Contact the company or bank that issued the card and report the fraudulent charge.
  • Bank Transfer: Inform your bank of an unauthorized debit or withdrawal.
  • Gift Card: Contact the company that issued the gift card and report fraud. Typically these transactions are not traceable or reversible.
  • Payment App: Report the transaction to the company behind the app. These apps usually require you to link a credit or debit card, so report the transaction to that card’s fraud department as well. Payment apps are not regulated, and many do not offer transfer protections, so you may be forced to rely on the goodwill of the recipient to return the funds.
  • Cryptocurrency: Contact the company you used to send the money. These transactions are typically not reversible, and getting your money back depends on the recipient’s willingness to return it.
  • Wire Transfer: Contact the wire transfer company to see about canceling the money transfer.
    • MoneyGram: 1-800-926-9400
    • Western Union: 1-800-448-1492

If you believe you have been the target of a financial scam, you can also file a report with your local law enforcement agency and the Federal Trade Commission. To file a report with the FTC, you can visit their website at FTC.gov or call 1-877-382-4357.
