Dive Brief:
- The Fintech Open Source Foundation, known as FINOS, published the first draft of an AI adoption framework for financial institutions Tuesday. The document details common large language model risks and governance measures to guide successful deployments.
- “The main obstacle to adoption of GenAI in financial services is the lack of a compliance framework,” FINOS Executive Director Gabriele Columbro told CIO Dive. “We absolutely need a shared standard interpretation of what AI readiness means.”
- FINOS, a nonprofit banking and technology industry group allied with the Linux Foundation, welcomed chipmaker Nvidia, credit rating agency Moody’s and AI security company Protect AI to its roster of members Tuesday. Capital One, Citi, Goldman Sachs, JPMorgan Chase and Morgan Stanley are FINOS members, as are Amazon Web Services, Microsoft, Google Cloud and several other vendors.
Dive Insight:
While the data-driven financial industry is eager to implement LLM technologies, the path forward is beset with potential risks stemming from AI inaccuracies, data vulnerabilities and an unsettled regulatory environment.
The National Institute of Standards and Technology released a draft of its generative AI risk management framework in July, and the European Union’s AI Act took effect in August. But as states like California wrestle publicly with regulating the technology, executives contend with troubling uncertainty around best practices and legal liabilities.
FINOS is aiming to bring the same level of scrutiny to generative AI adoption that it brought to the cloud. The organization launched a Citi-led initiative last year to fashion public cloud compliance, resilience and security control standards across the industry, and stood up the AI readiness working group in April.
“This is a great starting point for us, as an industry, to collaborate on a structured approach to the adoption and governance of AI,” Madhu Coimbatore, head of AI development platforms at Morgan Stanley, said in the Tuesday announcement.
The working group limited the scope of the draft framework to internal use of pre-built generative AI tools that are fed additional data through retrieval-augmented generation and linked to external software-as-a-service endpoints. The document delineates 16 control procedures to limit 14 specific threats, including data leakage, ineffective data encryption and model hallucinations.
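To make the shape of such a control concrete, here is a minimal sketch of one of the threat categories the draft names, data leakage, applied at the point where a retrieval-augmented prompt is assembled before it leaves for an external endpoint. The patterns, function names and placeholder format are illustrative assumptions, not taken from the FINOS framework itself.

```python
import re

# Hypothetical data-leakage control: screen retrieved passages for sensitive
# values before they are packed into a prompt bound for an external
# software-as-a-service endpoint. Patterns here are illustrative only.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace detected sensitive values with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

def build_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Assemble a retrieval-augmented prompt, redacting every passage first."""
    context = "\n".join(redact(doc) for doc in retrieved_docs)
    return f"Context:\n{context}\n\nQuestion: {redact(question)}"
```

A real control in a bank would pair screening like this with the framework's other measures, such as encryption of data in transit and logging of what each endpoint receives.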
Columbro said the group still has to work through confusion surrounding how open-source methodology applies to LLM technologies.
“Nobody knows what open-source AI means,” Columbro said, noting ambiguity has led to “some degree of open-source washing, where people use the term too loosely.”
As work on the AI project moves forward, FINOS expects to have open-source control code ready for testing and posted to GitHub in the coming months.