Dive Brief:
- Artificial intelligence has become sophisticated enough that bad actors who take over a consumer's account can use it to simulate that person's behavior, creating another point of frustration for payments companies battling fraud, fintech professionals said.
- Advanced AI programs can now mimic the behavior of a real person, allowing fraudsters to evade financial institutions' monitoring for unusual activity, panelists said this week at the annual Money20/20 conference in Las Vegas.
- “The ability to just spin up a model and create 10,000 people that look real has gotten much higher,” said Brian Dammeir, head of payments for the fintech Plaid.
Dive Insight:
Fraud is on the rise. Americans lost $10.2 billion to fraud in 2023, a 14% increase in reported losses compared to 2022, according to the Federal Trade Commission. And fraudsters are aided in no small part by advanced technology, according to the international police organization Interpol.
Financial institutions spot account takeovers — when a hacker takes over a consumer’s credit card or checking account and makes purchases or withdraws money — by monitoring consumer behavior, Dammeir said.
If a consumer’s spending habits radically change, that’s a clue that their account was taken over, he said.
“What does each transaction look like?” Dammeir said. “What does this user look like? Does it match the other behavior that I've seen from them before?”
A hacker who can train an AI model to make transactions like a real person can better evade those safeguards, he said.
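For illustration only, here is a minimal sketch of the kind of behavioral check Dammeir describes: comparing a new transaction against a customer's historical spending profile and flagging outliers. The `Transaction` structure, feature choices and thresholds are hypothetical, not drawn from any institution's actual system.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    amount: float          # purchase amount in dollars
    merchant_category: str
    hour_of_day: int       # 0-23, local time

def anomaly_score(history: list[Transaction], new_tx: Transaction) -> float:
    """Score how far a new transaction sits from the customer's past behavior.

    Returns a rough score where higher means more unusual. Purely illustrative:
    real systems weigh many more signals (device, location, velocity) and use
    trained models rather than hand-set rules.
    """
    amounts = [t.amount for t in history]
    avg = mean(amounts)
    spread = stdev(amounts) if len(amounts) > 1 else 1.0

    # 1. Amount deviation: how far from the customer's typical spend?
    score = abs(new_tx.amount - avg) / max(spread, 1e-6)

    # 2. A merchant category the customer has never used adds to the score.
    if new_tx.merchant_category not in {t.merchant_category for t in history}:
        score += 2.0

    # 3. Activity at an hour the customer rarely transacts adds to the score.
    if new_tx.hour_of_day not in {t.hour_of_day for t in history}:
        score += 1.0

    return score

# Example: a customer who normally makes small daytime purchases
history = [Transaction(42.50, "grocery", 18),
           Transaction(38.10, "grocery", 17),
           Transaction(55.00, "fuel", 9)]
suspicious = Transaction(1200.00, "electronics", 3)
print(anomaly_score(history, suspicious))  # large score -> flag for review
```

The sketch also shows why Dammeir is concerned: a model trained on a victim's real transaction history can generate purchases that keep each of these signals within the customer's normal range.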
Partnerships between financial institutions are another way to battle fraudsters using more sophisticated tools, said Nicole Lauredan, partnerships leader for the payments processing software company Stripe.
The company’s enhanced issuer network, launched last year, gives financial institutions access to Stripe’s fraud-fighting tools and lets participants swap data on fraudulent transactions, she said.
Card “issuers like Capital One and Discover are leveraging that data so that they could actually improve their own machine learning,” Lauredan said.
The panelists did not provide any specific instances of artificial intelligence aiding fraudsters, but said financial institutions must be prepared for it.
Bernadette Ksepka, deputy head of product management for the Federal Reserve Bank of Boston, stressed the need to have multiple fraud checks in place.
The Fed is working on tools banks can use to spot anomalous transactions and help consumers and merchants confirm transactions, she said.
“Ultimately, I think it's going to take a multi-level safeguarding approach,” Ksepka said.
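As a rough illustration of what a multi-level approach could look like in code, the sketch below runs several independent checks on the same payment and escalates only as more of them fire. The layer names, thresholds and response tiers are placeholders, not the Fed's or any bank's actual tools.

```python
from typing import Callable

# Each layer sees the same transaction record (a plain dict here) and
# returns True if it considers the payment suspicious. All names and
# thresholds below are illustrative placeholders.
Check = Callable[[dict], bool]

LAYERS: dict[str, Check] = {
    "amount_outlier": lambda tx: tx["amount"] > 5 * tx["customer_avg_amount"],
    "new_device":     lambda tx: tx["device_id"] not in tx["known_devices"],
    "velocity_spike": lambda tx: tx["tx_count_last_hour"] > 10,
}

def assess(tx: dict) -> str:
    """Combine layers so no single check decides alone.

    One tripped layer adds friction (step-up verification); two or more
    hold the payment for manual review.
    """
    tripped = [name for name, check in LAYERS.items() if check(tx)]
    if len(tripped) >= 2:
        return f"hold for review ({', '.join(tripped)})"
    if len(tripped) == 1:
        return f"step-up verification ({tripped[0]})"
    return "approve"

print(assess({
    "amount": 980.0, "customer_avg_amount": 60.0,
    "device_id": "dev-9f2", "known_devices": {"dev-a41"},
    "tx_count_last_hour": 2,
}))  # two layers trip here, so the payment is held for review
```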