Federal bank regulators plan to supervise AI outcomes rather than prescribe methods — a posture designed to let banks innovate with AI while keeping regulators relevant in a market moving faster than traditional examination cycles. FDIC division chief Ryan Billingsley delivered that message in a speech to Congress on innovation, speed, and regulatory pace, positioning the agency as an enabler of AI adoption in banking rather than a gatekeeper.
What the FDIC Actually Said
Billingsley's core message: the FDIC will focus oversight on whether AI systems produce safe and sound outcomes — accuracy, fairness, reliability — rather than on how those systems are built or what methods they use.
That's a departure from traditional model risk management (MRM) frameworks, which typically require banks to document model development methodology, validation procedures, and governance structures in significant detail. Those frameworks were designed for statistical risk models with interpretable inputs and outputs — credit scorecards, loan pricing models, stress test simulations.
Generative and agentic AI doesn't fit neatly into that architecture. A large language model used for customer service or fraud detection doesn't have a fixed equation to document. Requiring banks to apply legacy model risk frameworks to foundation model deployments would slow adoption without meaningfully reducing risk.
The Regulatory Clarification That Preceded This
Billingsley's remarks came weeks after the OCC, Federal Reserve, and FDIC jointly clarified that the interagency guidance on model risk management — originally issued in 2011 — does not directly apply to generative or agentic AI. That clarification removed a significant source of compliance ambiguity that had been slowing AI deployment decisions at regulated banks.
The joint clarification doesn't mean AI is unregulated in banking. It means regulators will assess AI through existing safety and soundness frameworks — consumer protection, fair lending, operational risk — rather than through a documentation-heavy model validation process that wasn't designed for this class of technology.
Priority Use Cases
Billingsley identified anti-money laundering (AML) detection and fraud screening as the areas where regulators most want to see banks move quickly with AI. Both involve pattern recognition across large transaction datasets at speeds and scales that rule-based systems can't match.
The practical implication: banks that deploy AI in AML and fraud contexts will face examiners focused on detection rates, false positive rates, and operational resilience — not on the internal mechanics of how the model was trained or what architecture it uses.
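To make that examination focus concrete, here is a minimal Python sketch of the two outcome metrics named above, detection rate and false positive rate, computed from confirmed case labels and model flags. The data, the function name, and the binary flagging scheme are hypothetical illustrations, not an actual FDIC measurement standard.

```python
# Hypothetical sketch: the outcome metrics an examiner might review for a
# transaction-screening model. Labels and flags are invented toy data.

def screening_metrics(labels, flags):
    """labels: 1 = confirmed suspicious/fraudulent, 0 = legitimate.
    flags:  1 = model flagged the transaction, 0 = model passed it."""
    tp = sum(1 for y, f in zip(labels, flags) if y == 1 and f == 1)
    fn = sum(1 for y, f in zip(labels, flags) if y == 1 and f == 0)
    fp = sum(1 for y, f in zip(labels, flags) if y == 0 and f == 1)
    tn = sum(1 for y, f in zip(labels, flags) if y == 0 and f == 0)
    # Detection rate: share of genuinely suspicious activity the model caught.
    detection_rate = tp / (tp + fn) if (tp + fn) else 0.0
    # False positive rate: share of legitimate activity wrongly flagged.
    false_positive_rate = fp / (fp + tn) if (fp + tn) else 0.0
    return detection_rate, false_positive_rate

# Toy example: 10 transactions, 3 of which are actually suspicious.
labels = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
flags  = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]
dr, fpr = screening_metrics(labels, flags)
print(f"detection rate: {dr:.0%}, false positive rate: {fpr:.0%}")
```

Note that these two numbers trade off against each other: flagging more aggressively raises the detection rate but also the false positive rate, which is precisely the kind of outcome tradeoff an outcome-focused examiner would probe.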
What to Watch
The FDIC's posture sets up an interesting test case as AI deployments expand. Outcome-based supervision works when outcomes are measurable and the supervisor has the data and expertise to assess them. As AI moves from clearly measurable applications (fraud detection) into more complex ones (credit underwriting, loan servicing, customer advice), the gap between what regulators can assess and what banks are actually deploying will widen.
Watch for the first significant regulatory action involving an AI system at a bank. How the FDIC and OCC characterize the underlying failure (as a technology problem, a risk management problem, or a governance problem) will reveal how outcome-based supervision actually functions in practice.
By Hector Herrera