As regulators zero in on how lenders are using artificial intelligence and machine learning in their operations, banks need to make sure they can adequately explain and monitor the models they use — particularly if they are partnering with a vendor, said Joe Sergienko, managing director at Berkeley Research Group.
“Does the bank understand what's going on inside the model, and can they articulate that in a reasonable enough way? That's a tough hill to climb in AI and machine learning models,” Sergienko said.
The Consumer Financial Protection Bureau (CFPB) issued a warning in May to lenders that use AI or machine learning to underwrite loans or issue credit, telling companies they must be prepared to explain to customers the specific reasons for denying a credit application.
“Companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions,” CFPB Director Rohit Chopra said in a statement. “The law gives every applicant the right to a specific explanation if their application for credit was denied, and that right is not diminished simply because a company uses a complex algorithm that it doesn’t understand.”
Chopra’s warning follows an interagency request for information released by the CFPB, the Office of the Comptroller of the Currency (OCC), the Federal Reserve, the Federal Deposit Insurance Corp. (FDIC) and the National Credit Union Administration (NCUA) in March 2021, aimed at gathering insight into financial institutions' use of AI.
As AI becomes more prevalent in financial services, explaining a particular algorithm’s results is not only something banks owe their customers, but also something regulators will want to see, Sergienko said.
“Banks need to understand what these things are doing and be able to articulate it, because they're being asked to make decisions based upon the output of the models,” Sergienko said. “If they can't understand what it's doing, how are you making an informed decision? Because, ‘This is the answer the model gave us,’ is not an appropriate answer to a regulator.”
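To make the idea concrete, here is a minimal sketch of how a lender could derive specific denial reasons from an interpretable scoring model. Everything in it (feature names, weights, baselines and the approval cutoff) is hypothetical; it illustrates the kind of feature-level accounting that makes a denial explainable, not any particular bank's or vendor's method.

```python
# Minimal sketch: deriving specific denial reasons from an interpretable
# credit-scoring model. All names, weights, baselines and the cutoff are
# hypothetical.

APPROVAL_CUTOFF = 0.0  # hypothetical: scores at or above this are approved

# A transparent linear model: each feature contributes
# weight * (applicant value - baseline).
WEIGHTS = {"credit_utilization": -2.0, "late_payments_24m": -1.5, "income_to_debt": 1.0}
BASELINES = {"credit_utilization": 0.30, "late_payments_24m": 0.0, "income_to_debt": 2.5}

REASON_TEXT = {
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "late_payments_24m": "Delinquency on accounts in the past 24 months",
    "income_to_debt": "Income insufficient relative to obligations",
}

def score_with_reasons(applicant: dict, top_n: int = 2):
    """Score an applicant; on denial, return the top adverse-action reasons."""
    contributions = {
        name: WEIGHTS[name] * (applicant[name] - BASELINES[name]) for name in WEIGHTS
    }
    score = sum(contributions.values())
    if score >= APPROVAL_CUTOFF:
        return score, []  # approved: no denial reasons to report
    # The most negative contributions are the specific reasons for denial.
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return score, [REASON_TEXT[name] for name in worst]

applicant = {"credit_utilization": 0.85, "late_payments_24m": 3, "income_to_debt": 2.0}
score, reasons = score_with_reasons(applicant)
print(f"score={score:.2f}")
for reason in reasons:
    print("Reason:", reason)
```

With an opaque model, producing a defensible list like this becomes much harder, which is exactly the gap Chopra's statement points at.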
As an increasing number of banks use third-party vendors to deploy AI in their lending decisions, these fintech partnerships can add complexity to the issue, Sergienko said.
Partnering with fintechs that specialize in AI may be the industry’s “shiny new object,” Sergienko said, but banks need to be wary of how nascent these startup-stage firms can be.
“Is the fintech going to be around for a while? What if it suddenly went bust?” said Sergienko, adding that banks need to make sure they have a business continuity plan in place in the event a fintech shuts down.
And if a bank chooses to use a fintech's AI model, it's critical that the model be validated by a third party, Sergienko said.
“Since fintechs are startups and they're not technically regulated by the Fed or FDIC or the OCC, they haven’t traditionally had these validations done, so banks need to require it of their fintech partners or get it done themselves,” he said.
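Model validation is a broad exercise, but one basic component is checking how the model performs on data it has never seen. The sketch below shows that step alone, with hypothetical scores and outcomes; a full independent validation would also cover conceptual soundness, data quality and fair-lending testing.

```python
# Minimal sketch of one piece of an independent model validation: measuring
# a model's discrimination on out-of-sample data. All figures are hypothetical.

def auc(scores: list, outcomes: list) -> float:
    """Probability a randomly chosen defaulter scores higher than a randomly
    chosen non-defaulter (rank-based AUC; ties count half)."""
    pairs = hits = 0.0
    for s_bad, y_bad in zip(scores, outcomes):
        if y_bad != 1:
            continue
        for s_good, y_good in zip(scores, outcomes):
            if y_good != 0:
                continue
            pairs += 1
            hits += 1.0 if s_bad > s_good else 0.5 if s_bad == s_good else 0.0
    return hits / pairs

# Hypothetical vendor-model risk scores and observed defaults on a holdout set.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
defaults = [1, 1, 0, 1, 0, 0]
print(f"holdout AUC: {auc(scores, defaults):.2f}")  # flag if below a policy floor
```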
Upfront due diligence and ongoing monitoring of a vendor are critically important — particularly when that vendor’s algorithm is being used as part of an institution’s Bank Secrecy Act and anti-money laundering program, Sergienko said.
“That's a big compliance risk, if all of a sudden you're not able to get the output of the model or you can't trust the veracity of the output of the model,” he said.
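One simple monitoring control is to track the model's output rate for unexplained drift. The sketch below assumes a hypothetical transaction-screening model and raises an alert when a day's flag rate falls far outside its historical range; the data and threshold are illustrative only.

```python
# Minimal sketch of ongoing vendor-model monitoring: alert when a
# transaction-screening model's daily flag rate drifts far from its
# historical norm. Data and threshold are hypothetical.

import statistics

def flag_rate_alert(history: list, today: float, z_limit: float = 3.0) -> bool:
    """Return True if today's flag rate is more than z_limit standard
    deviations from the historical mean (a simple drift signal)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_limit

# Hypothetical daily flag rates (fraction of transactions flagged).
history = [0.011, 0.012, 0.010, 0.013, 0.011, 0.012, 0.010]
print(flag_rate_alert(history, today=0.031))  # True: investigate before relying on output
```

A sudden jump or collapse in the flag rate doesn't prove the model is wrong, but it is the kind of veracity signal a compliance team should investigate before trusting the output.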
An example of how a bank might add controls or monitoring would be to run a more traditional model in parallel with the AI/machine learning model and compare the results, Sergienko said.
“Hopefully, the AI/machine learning model is better,” he added.
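Here is a minimal sketch of that parallel-run control, with both models as hypothetical stand-ins: every application is scored by a traditional rule-based model and by the ML model, and any disagreement is set aside for human review.

```python
# Minimal sketch of a parallel-run (champion/challenger) control: score the
# same applications with a traditional model and an ML model, and log
# disagreements for review. Both models below are hypothetical stand-ins.

def traditional_score(applicant: dict) -> bool:
    """Simple rule-based benchmark: approve on a hypothetical FICO cutoff."""
    return applicant["fico"] >= 660

def ml_score(applicant: dict) -> bool:
    """Stand-in for a vendor ML model's approve/deny output."""
    return applicant["fico"] + 40 * applicant["years_employed"] >= 700

def parallel_run(applicants: list) -> list:
    """Run both models on every application; return the cases where they disagree."""
    disagreements = []
    for a in applicants:
        trad, ml = traditional_score(a), ml_score(a)
        if trad != ml:
            disagreements.append({**a, "traditional": trad, "ml": ml})
    return disagreements

applicants = [
    {"fico": 700, "years_employed": 5},  # both models approve
    {"fico": 640, "years_employed": 3},  # models disagree -> route to review
]
for case in parallel_run(applicants):
    print("Review:", case)
```

Persistent disagreement on a particular segment of applicants is exactly the kind of pattern a validator, or an examiner, would expect the bank to be able to explain.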
In reviewing contracts between banks and fintechs, Sergienko said he often sees simplistic language that makes no provision for controls on or validation of the AI model.
“We've actually had to work with a couple of clients to go back to the fintech and request that the validation language be added into the contract, because it's a requirement for the banks, and therefore, the banks have to require it of the fintech,” Sergienko said.
As more banks deploy AI in lending and compliance, many institutions will need to step up their efforts to effectively monitor the algorithms they use, said Marcia Tal, a former Citi executive vice president whose company, PositivityTech, uses an AI predictive model to identify prejudice at financial institutions.
“With more and more data available, and the magnitude of the data and the expectations of customers, especially in a digital environment, these tools are not going away,” Tal said. “In the same way that the resources have continued to expand in this area, so will the resources around algorithmic governance.”