The Consumer Financial Protection Bureau is closely monitoring how generative artificial intelligence and ChatGPT-like technologies, when used by banks, could undermine or create risks in customer care, the agency’s director, Rohit Chopra, said Tuesday.
“The CFPB is thinking about the future of banking — whether it be in the metaverse or in some sort of augmented reality context — and we're starting to look at some of the building blocks that are out there,” Chopra said. “We’re doing some work right now on how [generative AI] might undermine or create risks in customized customer care, to the extent that biases are introduced, or frankly, even the wrong information.”
Chopra’s remarks came ahead of an interagency initiative the bureau announced Tuesday alongside the Justice Department, Federal Trade Commission and Equal Employment Opportunity Commission. The move is aimed at cracking down on “unchecked AI” in lending, housing and employment.
In a joint statement, the agencies said they would commit to enforcing their respective laws and regulations to rein in unlawful discriminatory practices perpetrated by companies that deploy AI technologies.
“Unchecked AI poses threats to us and our civil rights in ways that are already being felt,” Chopra said. “Technology companies and financial institutions are amassing massive amounts of data and using it to make more and more decisions about our lives, including whether we get a loan or what advertisements we see.”
The interagency statement comes as the growth of machine learning, generative AI and the attention surrounding text generator ChatGPT have raised questions about security and bias in a wide swath of industries.
The CFPB is monitoring how financial firms are using generative AI, and exploring ways firms might implement the technology in the future, Chopra said.
“I think we know that generative AI is most fundamentally going to affect how individuals can trust certain types of messages,” he said. “That is something we're very actively preparing for as a consumer and civil rights enforcer.”
The CFPB is also working to encourage tech whistleblowers to alert the agency when the technology they build may be violating the law.
Like other sectors, the banking industry is eyeing the potential benefits of deploying generative AI technology in its interactions with customers, said Peter-Jan van de Venn, vice president of global digital banking at digital consultancy Mobiquity.
The technology has the potential to radically change digital banking by creating a more personalized user experience that provides better support, he said.
“Offering personalized experiences to customers has become significantly more important as banks shift their focus to digital-first,” Van de Venn said. “The main use cases will be initially around customer service with chatbots and virtual agents serving customers in a more human way that’s tailored to their situation.”
Bank of America, for instance, is working to expand its AI-powered digital assistant Erica’s capabilities to include personalized banking, investing, credit and retirement-planning advice.
The Charlotte, North Carolina-based bank, which launched Erica in 2018, outlined plans last year to use Erica to connect customers to banking agents regarding new products and services, such as mortgages, credit cards and deposit accounts.
However, when asked about Bank of America’s vision for the tool in light of the growing popularity of ChatGPT, CEO Brian Moynihan said the lender would maintain a cautious approach.
“There is a lot of value [in generative AI], but the key question is, when can we use it without the fear of bias and where this information is coming from?” Moynihan told analysts during the bank’s first-quarter earnings call last week. “We need to understand how AI-driven decisions are made in order to stand up to our customers’ demands for us to be fair, and for us to follow the laws and regulations around things like lending.”
Meanwhile, regulators need to lay out more guidelines on how generative AI can be used in financial services, Van de Venn said.
“The big question is also what regulators will allow. First-line support on simple questions on products would be acceptable, but providing financial advice is typically bound by duty of care and other forms of regulation to protect consumers from unsuitable advice,” he said. “It’s up to the regulators to act fast and provide clarity.”
Banks curious about leveraging generative AI like ChatGPT should exercise caution and deploy it only in non-sensitive use cases that do not reference client information or a bank’s data, said Tom Cramer, a managing director with Protiviti Digital.
“However, it can be used for educational purposes, creating high-level, general inputs to investment theses, and in supporting development of marketing language,” he said. “Banks will need monitoring practices and good governance to manage the risk of the activities.”