The Federal Reserve, OCC, and FDIC have amended their joint model risk management guidance to explicitly exempt generative and agentic AI, giving banks a compliance green light to accelerate AI adoption—but leaving a governance gap critics say remains unfilled.
The Federal Reserve, the Office of the Comptroller of the Currency (OCC), and the Federal Deposit Insurance Corporation (FDIC) have amended their joint model risk management guidance to explicitly exempt generative and agentic AI, limiting the existing framework to traditional quantitative models like credit scoring and risk assessment systems. The move is a deliberate regulatory signal: the compliance architecture that governs statistical models in banking does not, for now, govern large language models.
What Model Risk Management Guidance Is
Model risk management (MRM) guidance—anchored by the Federal Reserve's SR 11-7, issued in 2011—sets minimum expectations for how banks develop, validate, and govern quantitative models used in consequential decision-making. Banks subject to this framework must document model assumptions, conduct independent validation, track ongoing performance, and report material model failures to senior management and boards of directors.
For over a decade, this framework has governed everything from loan origination algorithms to trading risk systems and anti-money laundering models. When generative AI began entering banking operations in 2023–2024, the industry faced a compliance question with major implications: does a large language model used to draft customer communications, summarize documents, or assist with regulatory filings count as a "model" under SR 11-7? If yes, the validation and documentation burden would be substantial—potentially prohibitive for rapid deployment.
What the Regulators Decided
According to the OCC bulletin, the Federal Reserve, OCC, and FDIC have now answered that question: no. The existing model risk management framework applies to traditional quantitative models—systems that use statistical or machine learning techniques to produce quantitative outputs that directly inform financial decisions. Generative AI and agentic AI systems fall outside that definition and are explicitly excluded.
The carveout is not ambiguous language subject to interpretation. It is a deliberate policy choice, designed to reduce compliance friction and provide the banking sector with clear authorization to accelerate AI adoption without applying a quantitative-model governance framework to fundamentally different technology.
The Governance Gap the Carveout Creates
The carveout answers one question and leaves a larger one open: if generative and agentic AI are not governed by model risk management rules, what governance framework applies?
The answer, as of this bulletin, is: none specified. Banks are expected to apply "appropriate governance and risk management" to these systems, but no binding framework equivalent to SR 11-7 currently exists for generative AI in banking. The carveout provides relief from compliance burden without providing a replacement framework.
Critics—including consumer advocacy organizations and some banking trade associations—argue this creates a regulatory vacuum at precisely the moment when banks are deploying AI agents in customer-facing roles, loan underwriting support, fraud detection, and internal compliance operations.
The concern is concrete. AI systems that generate incorrect information, produce outputs that discriminate against protected classes, or make consequential errors in financial contexts can cause significant consumer harm. The question of who is responsible for catching those errors—and how—remains formally unanswered.
Why Regulators Made This Choice
The move reflects a deliberate policy calculation: the economic benefits of AI adoption in banking (cost reduction, fraud prevention, customer service improvement, compliance automation) are substantial, and compliance friction that delays adoption has real economic costs. The agencies appear to have concluded that forcing an ill-fitting quantitative-model framework written in 2011 onto large language models would impose burdens without producing meaningful risk reduction.
There is also a practical argument. SR 11-7's validation requirements—designed for statistical models with stable, testable distributions and quantitative outputs—are genuinely difficult to apply to large language models that exhibit context-dependent behavior and do not produce simple numerical outputs amenable to standard backtesting.
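To make the contrast concrete, here is a minimal sketch of the kind of backtest that SR 11-7-style validation presumes. The data and function below are purely illustrative assumptions, not drawn from any real bank model: a credit model's predicted default probabilities can be scored directly against observed outcomes with a single quantitative metric, while a drafted customer email has no analogous "observed outcome" array to score against.

```python
import numpy as np

# Hypothetical example of the backtest SR 11-7-style validation assumes:
# predicted default probabilities scored against observed 0/1 outcomes.

def brier_score(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return float(np.mean((predicted - observed) ** 2))

# Illustrative data only -- not from any real model.
predicted_pd = np.array([0.02, 0.10, 0.30, 0.05, 0.60])
observed_default = np.array([0, 0, 1, 0, 1])

score = brier_score(predicted_pd, observed_default)
print(f"Brier score: {score:.4f}")  # lower is better; a stable, repeatable test

# A large language model's free-text output has no analogous single number:
# there is no ground-truth outcome array for a drafted customer email.
```

The point of the sketch is that this validation loop assumes numerical outputs and observable ground truth, neither of which exists for free-text generation.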
What Banks Should Do
The carveout does not mean governance is optional. It means banks must now build internal governance frameworks for generative and agentic AI without a regulatory template to copy. In the absence of binding rules, prudent practice means:
- Documenting intended use cases and limitations for each AI system in deployment
- Testing for bias and discrimination before deployment in any customer-facing or credit-related context
- Establishing human oversight workflows for consequential AI-assisted decisions
- Monitoring output quality on an ongoing basis, not just at point-in-time validation
- Establishing escalation paths when AI systems produce outputs that fall outside acceptable parameters
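The escalation pattern in the last two bullets can be sketched in a few lines. Everything here is a hypothetical illustration, not a regulatory standard: the check functions, the banned-term list, and the thresholds are assumptions a bank would replace with its own policy, and real deployments would layer on bias testing and audit logging.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewDecision:
    """Result of pre-release screening for one AI-generated output."""
    output: str
    flags: list = field(default_factory=list)

    @property
    def needs_human_review(self) -> bool:
        return bool(self.flags)

def screen_ai_output(text: str, banned_terms: set, max_len: int) -> ReviewDecision:
    """Run deterministic pre-release checks; flag anything outside bounds."""
    decision = ReviewDecision(output=text)
    if len(text) > max_len:
        decision.flags.append("length_exceeded")
    lowered = text.lower()
    for term in banned_terms:
        if term in lowered:
            decision.flags.append(f"banned_term:{term}")
    return decision

# Usage: route flagged outputs to a human reviewer instead of auto-sending.
decision = screen_ai_output(
    "We guarantee your loan will be approved.",
    banned_terms={"guarantee"},
    max_len=500,
)
print(decision.needs_human_review)  # True -> escalate to a human workflow
```

The design choice worth noting is that the checks are deterministic and logged per output, so the "who catches the error, and how" question the article raises has a documented answer for each deployment.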
Banks that build robust internal governance now will be better positioned when—not if—specific regulatory requirements arrive.
What to Watch
The interagency bulletin notes that the agencies are actively monitoring AI use in banking and will issue further guidance as appropriate. The next developments to watch: enforcement actions against banks whose generative AI deployments cause material consumer harm, and congressional scrutiny of the governance gap if AI errors in banking reach sufficient scale to generate political attention. Those cases—not rulemaking—will likely define the next round of binding requirements.