Maintaining a balance between explainability and power may be a key to controlling bias in artificial intelligence (AI) and machine learning, a panel of experts said Tuesday at Fintech Week DC.
“Machine learning is a spectrum: The deeper you drill, the less explainable” the algorithm is, said Aneesh Varma, the founder and CEO of Aire Labs, a fintech startup that built an alternative credit scoring model.
Algorithms can be as simple as a traffic signal, which panelists described as successful because everyone knows the output. But a “black box” quandary arises when an algorithm becomes so complex that humans can no longer trace how it reaches its conclusions.
“It becomes an onion we have to unpeel,” said Melissa Koide, the CEO of FinRegLab and another panelist Tuesday.
Varma said he started Aire when he found himself outside the credit system. But he faced a fundamental question: How does a consumer build credit if no one knows the output of the algorithm?
The governing question Varma asked himself hinges on trust: “Will we be comfortable putting our own family through the process?”
Machine learning can present a "trolley problem," the dilemma central to ethics: You see a trolley hurtling toward five incapacitated people on the tracks, and you can pull a lever to divert it to a second track, where one incapacitated person lies. Do you pull the lever, saving the five at the cost of the one, or do nothing and let the five die? What if one of the people is someone you love?
In pioneering automated driving, Tesla faces a somewhat on-the-nose variation of the trolley problem. "Do we trust the output when it comes from subjective [AI]?" Varma asked.
Perhaps unwittingly, Microsoft showed the potential for harmful bias in 2016 when it unveiled Tay, a chatbot that interacted with humans via Twitter in a crash course in machine learning. The bot was taken offline within 16 hours after it spouted racist and misogynist tweets in response to trolls.
Amazon also faces a bias challenge with its automated delivery — namely, that ZIP codes with a high African-American population get short shrift, said another panelist, Kenneth Edwards, associate general counsel for regulatory affairs at the lender Upstart.
“You cannot supplant that human judgment,” Edwards said.
The most black-and-white banking issue regarding AI bias may come in fair lending.
Edwards and Koide each expressed doubt that any AI models are set up with conscious bias. However, “there are 100 descriptions as to what is fairness,” Koide said. “There’s what’s allowed by law and what’s aspirational.”
"Is fairness a situation where everyone has the same chance [at a loan], or is it that everyone who qualifies has the same chance?" added Brian Clark, senior manager at the accounting firm EY and another of Tuesday's panelists.
A key question, Koide said, is “Do these algorithms allow lenders to better credit-risk-assess? … What does math do to allow us to control risk?”
This is not to say the problem begins and ends with the algorithm, the panelists said.
"AI may be relatively new, but these types of bias existed in traditional lending models," Edwards said.
If the companies creating the algorithms were more diverse, that could head off problems downstream, the panelists said.
Panelist Albert Chang, counsel in the Consumer Financial Protection Bureau’s Office of Innovation, said he expected there would be more exploration and sandbox programs to root out bias in AI.
“Fixing” an algorithm can be nearly impossible if there are enough variables interacting, Douglas Merrill, CEO of the online lender ZestFinance, told members of the House Financial Services Committee in late June. "If you have 100 signals in a model … you'd have to compare all hundred to all other hundred, which sounds easy, except that turns out to be more computations than there are atoms in the universe — which is a bad outcome if you want an answer."
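Merrill's point about scale is easy to check with back-of-the-envelope arithmetic. The sketch below is purely illustrative and assumes nothing about ZestFinance's actual models; it simply shows how the numbers behave for 100 signals. The pairwise comparisons themselves are trivial, but exhaustively searching over combinations and orderings of interacting signals quickly produces counts exceeding the roughly 10^80 atoms usually estimated for the observable universe.

```python
from math import comb, factorial

N_SIGNALS = 100  # the figure Merrill cited in his testimony

# Comparing every signal to every other signal is trivial: just 4,950 pairs.
pairwise = comb(N_SIGNALS, 2)

# Exhaustively testing which combinations of signals interact is another story.
# Even the number of possible subsets of 100 signals is astronomical...
signal_subsets = 2 ** N_SIGNALS  # ~1.3 x 10^30

# ...and the number of orderings of 100 signals, or of on/off choices across
# all 4,950 pairwise interactions, dwarfs the ~10^80 atoms commonly estimated
# for the observable universe.
orderings = factorial(N_SIGNALS)                      # ~9.3 x 10^157
interaction_subset_digits = len(str(2 ** pairwise))   # a roughly 1,490-digit number

print(f"pairwise comparisons:      {pairwise:,}")
print(f"subsets of signals:        {signal_subsets:.2e}")
print(f"orderings of signals:      {float(orderings):.2e}")
print(f"subsets of pairwise links: a {interaction_subset_digits}-digit number")
```

Running the sketch prints 4,950 for the pairwise count and a figure roughly 1,490 digits long for the subsets of pairwise interactions, which is the kind of blowup Merrill was gesturing at.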
But Varma said maintaining some level of human decision-making is necessary.
“If you do all things with data, the market will kick your teeth in.”