Banks Are Spending Billions on AI. Almost None of It Is in Production.
By Hector Herrera | April 27, 2026
Global banks are pouring record sums into AI — and deploying almost none of it at scale. The gap between AI investment and AI in production isn't a technology problem. It's a structural one, driven by regulatory constraints, legacy infrastructure that resists change, and explainability requirements that sit in direct conflict with how most modern AI systems work.
Background
Banking was supposed to be an early AI success story. The sector has enormous data assets, clear quantitative use cases, and the financial resources to invest at scale. And by investment metrics, it has. The largest U.S. and European banks have collectively committed billions to AI initiatives over the past three years. New AI teams, partnerships with OpenAI and Google, in-house model development — all of it well-documented in earnings calls and annual reports.
International Banker's April 2026 analysis examines the gap between what banks are spending and what's actually running in production — and finds that deployment remains cautious across almost every function outside of fraud detection and back-office automation, where AI has been embedded for years.
The Three Blockers
1. Regulatory Explainability Requirements
The most technically demanding problem is also the most fundamental. When an AI model denies a loan application or flags a transaction as suspicious, regulators in the U.S. and EU require that the institution be able to explain the specific factors that drove the decision — in plain language, to the individual affected.
Modern large language models and deep learning systems are poor at this almost by construction. They produce outputs through billions of weighted calculations that don't map cleanly to human-readable reasons. The model might correctly identify a high-risk borrower, but explaining why, in a way that satisfies fair lending requirements under the Equal Credit Opportunity Act (ECOA) or the EU's GDPR Article 22, is technically difficult and legally consequential if done wrong.
The result: banks are running AI in advisory and analytical roles where explainability requirements are softer, while keeping legacy statistical models (logistic regression, decision trees) in production for credit decisioning — even when they know the AI would be more accurate.
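To make the contrast concrete, here is a minimal sketch, assuming a scikit-learn logistic regression trained on synthetic data with invented feature names: each prediction decomposes into additive per-feature terms that map directly onto the "principal reasons" an ECOA adverse-action notice must list. Nothing comparable falls out of a billion-parameter network without additional, and contested, explanation tooling.

```python
# Illustrative only: why a logistic regression supports per-decision
# "reason codes" in a way a deep network does not. Features, data, and
# the model itself are hypothetical, not drawn from any bank's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # standardized applicant features
true_w = np.array([-1.5, 1.0, -0.8])
y = (X @ true_w + rng.normal(size=500) > 0).astype(int)  # 1 = approve

features = ["debt_to_income", "years_of_credit_history", "recent_inquiries"]
model = LogisticRegression().fit(X, y)

def principal_reasons(x, top_n=2):
    # Each coefficient-times-value term is an additive contribution to
    # the log-odds of approval, so the most negative terms identify the
    # dominant factors behind a denial: exactly the mapping an
    # adverse-action notice needs.
    contributions = model.coef_[0] * x
    return [features[i] for i in np.argsort(contributions)[:top_n]]

applicant = np.array([2.1, -0.5, 1.3])  # hypothetical applicant
prob = model.predict_proba(applicant.reshape(1, -1))[0, 1]
print(f"approval probability: {prob:.2f}")
print("principal reasons:", principal_reasons(applicant))
```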
2. Legacy Infrastructure
Most large banks run core banking systems that are decades old. Mainframe infrastructure from the 1980s and 1990s remains the backbone of deposit accounting, payment processing, and core ledger operations at institutions like JPMorgan, Bank of America, and HSBC. Integrating modern AI with these systems isn't impossible, but it requires middleware, API layers, and data pipelines that take years to build and carry their own operational risk.
Every integration point is a potential failure mode in a system where downtime is measured in reputational damage and regulatory fines, not just lost revenue. The conservatism is rational — and it means AI deployment in banking looks far more like augmentation at the edges (customer service chatbots, document processing, fraud alerts) than the core transformation the investment suggests.
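What that middleware looks like in practice is often mundane: translation code sitting between legacy record formats and modern services. The sketch below is illustrative only; the fixed-width layout is invented, since real core-system record layouts are proprietary and bank-specific.

```python
# A sketch of the translation middleware described above: decoding a
# fixed-width record exported from a legacy core system into a structure
# a modern scoring service can consume. The 30-byte field layout is
# invented for illustration; real copybook layouts are bank-specific.
from dataclasses import dataclass, asdict
import json

@dataclass
class Transaction:
    account_id: str
    amount_cents: int
    currency: str
    merchant_code: str

def parse_legacy_record(record: str) -> Transaction:
    # Positions are fixed by the (hypothetical) export format.
    return Transaction(
        account_id=record[0:12].strip(),
        amount_cents=int(record[12:23]),  # zero-padded, in cents
        currency=record[23:26],
        merchant_code=record[26:30].strip(),
    )

raw = "000012345678" "00000152500" "USD" "5411"
txn = parse_legacy_record(raw)
print(json.dumps(asdict(txn)))  # ready for a downstream model API
```

Each such parser is one of the integration points the paragraph above describes: cheap to write, expensive to get wrong at scale.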
3. Talent and Governance Gaps
Building AI systems is one skill set. Running them safely in a regulated environment is another. Banks that have assembled strong AI research teams often find that the bottleneck isn't model capability — it's the model risk management (MRM) function, the compliance and validation layer that regulators require before any algorithmic system touches a consequential customer decision.
MRM teams at large banks are understaffed relative to the volume of AI systems now seeking production approval. A model can be technically ready and sit in a queue for 12–18 months while the risk team works through validation backlogs. That's not mismanagement — it's the result of regulatory requirements designed before the current pace of AI development was imaginable.
What Is Actually Deployed
The functions where AI is in genuine production deployment across most large banks:
- Fraud detection — real-time transaction scoring, anomaly detection, synthetic identity spotting. This has been running for years and is well established; a minimal scoring sketch follows this list.
- Document processing — loan origination paperwork, KYC (know your customer) document review, contract analysis. These reduce processing time and full-time-equivalent (FTE) staffing requirements in operations functions.
- Customer service chatbots — handling routine balance inquiries, payment questions, and dispute initiation. Quality varies significantly; most major banks have deployed some version.
- Internal productivity tools — AI-assisted coding for technology teams, document summarization for analysts and compliance staff.
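For the fraud bullet above, a hedged sketch of the underlying idea: an unsupervised anomaly detector scoring each transaction against a learned baseline of normal behavior. The features, thresholds, and data here are invented; production fraud stacks layer many models, rules, and shared industry data on top of anything this simple.

```python
# Hedged illustration of real-time transaction scoring: an unsupervised
# IsolationForest flags transactions that deviate from a learned
# baseline. Features, data, and thresholds are invented; production
# fraud systems combine many models, rules, and consortium data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# columns: log(amount USD), hour of day, log(km from home)
baseline = rng.normal(loc=[3.5, 14.0, 1.0], scale=[0.8, 4.0, 0.7],
                      size=(5000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def score(txn):
    # decision_function: lower is more anomalous; with contamination
    # set, scores below zero fall in the flagged tail
    s = detector.decision_function(np.asarray(txn).reshape(1, -1))[0]
    return f"score={s:+.3f} -> {'ALERT' if s < 0 else 'pass'}"

print(score([3.4, 15.0, 0.8]))  # routine afternoon purchase
print(score([8.2, 3.0, 9.5]))   # large amount, 3 a.m., far from home
```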
What is not at scale: credit decisioning, investment recommendations, regulatory reporting, and trading strategy — the high-value functions where the investment thesis is most compelling and the regulatory and explainability barriers are highest.
What to Watch
The resolution of the explainability problem will define the next phase of banking AI deployment. Watch for whether the EU's AI Act — which has specific provisions on explainability for high-risk AI systems including credit decisioning — produces regulatory guidance that is workable for modern model architectures, or doubles down on requirements that effectively lock complex AI out of core banking functions. That guidance, expected to develop through 2026 and 2027, will have as much influence on bank AI deployment as any technology advance.