The Consumer Financial Protection Bureau warned financial institutions that poorly deployed chatbots can impede customer interaction and create barriers to resolving problems, according to a report published Tuesday.
The CFPB report highlights the problems customers face when advanced technologies such as artificial intelligence shape their experience as they turn to banks for help with financial products and services.
The bureau said it received several complaints from "frustrated customers" who tried to reach their financial institutions to get timely, straightforward answers to their questions or to raise a dispute.
“To reduce costs, many financial institutions are integrating artificial intelligence technologies to steer people toward chatbots,” CFPB Director Rohit Chopra said in a statement. “A poorly deployed chatbot can lead to customer frustration, reduced trust and even violations of the law.”
Chatbots like OpenAI’s ChatGPT, introduced in November, can generate incorrect outputs that some users may not detect, the bureau said.
The CFPB in April announced an interagency initiative, alongside the Justice Department, Federal Trade Commission, and Equal Employment Opportunity Commission, to crack down on “unchecked AI” in lending, housing and employment.
“The CFPB is thinking about the future of banking — whether it be in the metaverse or in some sort of augmented reality context,” Chopra said at the time. “We’re doing some work right now on how [generative AI] might undermine or create risks in customized customer care, to the extent that biases are introduced, or frankly, even the wrong information.”
Roughly 37% of Americans interacted with a chatbot in 2022 to pay bills, look up recent transactions or perform other functions — and that figure is expected to grow, the CFPB said.
While most of the banking industry uses simple rule-based chatbots to route customers to frequently asked questions or limited responses, some institutions have developed their own chatbots by training algorithms with chat logs and real customer conversations. Capital One’s Eno and Bank of America’s Erica are two such examples.
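To make the distinction concrete, the rule-based approach the report describes can be sketched as a keyword matcher that maps customer messages to canned FAQ answers. The example below is purely illustrative — the intents, responses and fallback text are hypothetical and do not reflect Eno, Erica or any bank's actual system:

```python
# Illustrative sketch only: a minimal rule-based chatbot that routes
# customer messages to canned FAQ responses via keyword matching.
# Intent keywords and responses here are hypothetical.

FAQ_RESPONSES = {
    "balance": "You can view your current balance on the Accounts page.",
    "dispute": "To dispute a transaction, visit the Disputes page or call support.",
    "fees": "A schedule of account fees is available under Account Terms.",
}

FALLBACK = "Sorry, I didn't understand that. Type 'agent' to reach a representative."


def route_message(message: str) -> str:
    """Return the canned response whose keyword appears in the message."""
    text = message.lower()
    if "agent" in text or "human" in text:
        return "Connecting you to a human representative..."
    for keyword, response in FAQ_RESPONSES.items():
        if keyword in text:
            return response
    return FALLBACK


if __name__ == "__main__":
    print(route_message("How do I dispute a charge on my card?"))
    print(route_message("What's my balance?"))
    print(route_message("Help me with something unusual"))  # falls through to FALLBACK
```

The brittleness of this kind of matching — anything outside the keyword list falls through to a generic fallback — is the sort of limitation behind the CFPB's complaint about "repetitive loops of unhelpful jargon."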
The products and services can be complex, which makes it harder for customers to find the information they are looking for through FAQ responses, the CFPB asserted.
“Financial institutions should avoid using chatbots as their primary customer service delivery channel when it is reasonably clear that the chatbot is unable to meet customer needs,” the CFPB said.
For its part, Bank of America is working to expand Erica’s capability to offer proactive insights based on its understanding of a client’s typical transaction patterns. But it also wants to improve the flow of those interactions, including a seamless handoff to human agents and back to the digital assistant.
“We realized, at some point, people go, ‘I’m done with that chat, I need to talk to a human,’” Hari Gopalkrishnan, the bank’s chief information officer of retail, preferred, small business and wealth technology, said in December. “What if we could just introduce the human agent right along with a chatbot, pick up from where you left the chat … and then step back and let the chatbot take over again.”
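A rough sketch of the handoff pattern Gopalkrishnan describes might look like the following. This is illustrative only: the state names, escalation triggers and shared-transcript design are assumptions, not Bank of America's implementation:

```python
# Illustrative sketch of a chatbot-to-human handoff that lets the bot
# resume once the agent finishes. All names and triggers are hypothetical.

from enum import Enum, auto


class Handler(Enum):
    BOT = auto()
    HUMAN_AGENT = auto()


class Conversation:
    def __init__(self) -> None:
        self.handler = Handler.BOT
        self.transcript: list[str] = []  # shared context carried across handoffs

    def receive(self, message: str) -> str:
        self.transcript.append(f"customer: {message}")
        if self.handler is Handler.BOT and self._needs_human(message):
            self.handler = Handler.HUMAN_AGENT
            return "Transferring you to an agent who can see our chat so far."
        if self.handler is Handler.HUMAN_AGENT:
            return "agent: (human response, with full chat history available)"
        return "bot: (automated response)"

    def agent_done(self) -> str:
        """Called when the human agent resolves the issue; the bot resumes."""
        self.handler = Handler.BOT
        return "The agent has stepped back. I'm here if you need anything else."

    @staticmethod
    def _needs_human(message: str) -> bool:
        return any(w in message.lower() for w in ("human", "agent", "representative"))
```

The key design point in the quote is that the human agent inherits the chat context rather than starting over, and the bot picks the conversation back up afterward.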
The CFPB on Tuesday highlighted several risks that arise when chatbots are involved. Chatbots may provide inaccurate information, fail to recognize that a customer is invoking their federal rights, or fail to protect the customer's privacy. Inaccurate information can lead customers to select the wrong products or services, or to incur wrongly assessed fees or other penalties.
Customers often seek immediate answers or assistance from their banks but struggle to get them without direct human help, the CFPB said.
“Instead of finding help, consumers can face repetitive loops of unhelpful jargon,” the CFPB said. “Overall, their chatbot interactions can diminish their confidence and trust in their financial institutions.”