Dive Brief:
- Treasury Secretary Janet Yellen on Thursday backed legislation that would require federal regulators to reduce the risk of disruption to financial markets from the use of artificial intelligence.
- “I think the [Biden] administration would welcome a congressional initiative in this area,” Yellen said in testimony to the Senate Banking Committee in response to a question from Sen. Mark Warner, D-Va.
- The Financial Stability Oversight Council of regulators “this year identified AI as a vulnerability that could create systemic risk, so we are working very hard to deepen our understanding of the ways in which that could happen and to monitor very rapidly changing developments to be able to define best practices for financial institutions,” Yellen said.
Dive Insight:
Warner, chair of the Senate Select Committee on Intelligence, and Sen. John Kennedy, R-La., introduced legislation in December that would require FSOC to coordinate regulatory efforts to shield markets from the use of deepfakes, trading algorithms and other AI tools that could roil the financial system.
The bipartisan bill would require FSOC to identify and correct weaknesses in current regulations, guidance and exam standards that may hobble government efforts to eliminate AI threats to financial stability.
The Financial Artificial Intelligence Risk Reduction Act would also triple penalties for the use of AI in fraud, market manipulation and other violations of Securities and Exchange Commission rules.
“Boy oh boy, if there was ever a case to look across all the regulatory entities within the financial sector, a problem like AI I think would be perfectly suited for FSOC,” Warner said to Yellen during the hearing.
“Congress should move quickly to make sure that we’ve got a comprehensive approach for both the upside and downside of AI in the financial sector,” Warner said. The legislation “is at least a good starting point for giving you the tools and, frankly, us having the guardrails,” he added.
FSOC, which Yellen chairs, noted in an annual report released in December that financial institutions use AI for tasks such as customer service, document review, retail credit underwriting and fraud prevention and detection.
AI poses several risks, including to safety and soundness, FSOC said. The inability to explain how the technology produces its output raises consumer compliance challenges, including the threat of bias, the council said.
“Without proper design, testing and controls, AI can lead to disparate outcomes, which may cause direct consumer harm and/or raise consumer compliance risks,” FSOC said.
The council suggested that “oversight structures” stay ahead of emerging AI risks while promoting efficiency and innovation.
FSOC also recommended that “financial institutions, market participants and regulatory and supervisory authorities further build expertise and capacity to monitor AI innovation and usage and identify emerging risks.”
The Biden administration aims to limit a full range of AI threats across the economy, society and government. President Joe Biden in October issued an executive order creating standards for the “safe, secure and trustworthy” development of AI.
Under the order, AI companies must notify the government when developing a system that “poses a serious risk to national security, national economic security or national public health and safety.”
The Biden administration also secured commitments from Amazon, Google, Meta, Microsoft, OpenAI, Anthropic and Inflection “to help move toward safe, secure and transparent development of AI technology,” according to a White House fact sheet.