AI Is Running Credit Decisions at Scale — The Governance Framework Hasn't Caught Up
By Hector Herrera | May 14, 2026
Machine learning now drives credit scoring, loan approvals, and access-to-finance decisions at financial institutions worldwide. The legal and governance frameworks meant to hold those decisions accountable were written for humans. They weren't designed for algorithms, and the gap between the two is widening precisely as AI adoption in lending accelerates.
Legal experts writing in The Business & Financial Times this week make the case plainly: the governance imperative in AI lending is not abstract. It is a live liability question that no major institution has fully resolved, and courts have not yet issued the definitive rulings that would tell lenders exactly where the legal lines are.
How AI Got Into Lending
The shift happened over several years and is now deeply embedded. Banks and fintechs began with AI-assisted credit scoring — models that recommended decisions a human loan officer would approve or override. That human review layer has progressively thinned. Today, many consumer lending decisions — particularly at fintechs and digital banks — are made entirely by AI models with no human review at all.
The AI models themselves are diverse. Some use traditional machine learning on historical loan performance data. Others incorporate alternative data: payment history on utilities and rent, smartphone usage patterns, social signals. The broader the data, proponents argue, the more accurately models can assess creditworthiness for applicants with thin or no credit files. Critics argue the same expanded data can encode historical discrimination at scale, faster and more consistently than any human underwriter.
Where Governance Has Broken Down
Existing regulatory frameworks — the Equal Credit Opportunity Act (ECOA) in the United States, consumer credit directives in the EU, and equivalent frameworks elsewhere — were built to govern human decision-makers. They focus on prohibiting discrimination based on protected characteristics and requiring adverse action notices that explain why credit was denied.
Those requirements translate awkwardly to AI systems. When a model denies credit based on a combination of hundreds of weighted variables, the "reason" it produces may be technically accurate but practically uninformative to a consumer — or to a regulator trying to assess whether the model discriminates. The explainability problem is fundamental: many high-performing models are not designed to be interpretable.
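To see why, consider a deliberately simple sketch. Even a fully transparent linear model illustrates the gap: a standard approach is to rank each feature's contribution to the score and cite the most negative ones as the "reasons" for denial. Everything below — the feature names, weights, and threshold — is a hypothetical assumption for illustration, not any real lender's model.

```python
# Hypothetical sketch: deriving adverse action "reasons" from a linear
# credit model by ranking each feature's contribution to the score.
# Feature names, weights, and the approval threshold are illustrative
# assumptions, not any real lender's model.
import numpy as np

FEATURES = ["utilization", "inquiries_6mo", "rent_payment_gaps",
            "account_age_years", "income_to_debt"]
WEIGHTS = np.array([-1.8, -0.9, -1.2, 0.7, 1.5])  # learned coefficients
THRESHOLD = 0.0  # score >= THRESHOLD -> approve

def adverse_action_reasons(applicant: np.ndarray, top_k: int = 3):
    """Return the score and the top_k features that lowered it most."""
    contributions = WEIGHTS * applicant       # per-feature contribution
    score = contributions.sum()
    if score >= THRESHOLD:
        return score, []                      # approved: no notice needed
    order = np.argsort(contributions)         # most negative first
    reasons = [FEATURES[i] for i in order[:top_k] if contributions[i] < 0]
    return score, reasons

# A denied applicant (standardized feature values).
score, reasons = adverse_action_reasons(np.array([1.4, 0.8, 1.1, -0.3, -0.5]))
print(f"score={score:.2f}, cited reasons={reasons}")
```

Even in this transparent case, the cited reasons tell the applicant which variables hurt them, not what to change. With a gradient-boosted ensemble or neural network scoring hundreds of interacting variables, compressing the decision into a handful of consumer-readable reasons is far less direct, which is exactly the translation problem regulators now face.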
Liability assignment is equally unresolved. When an AI credit decision causes demonstrable harm — a pattern of denials that disproportionately affects a protected class, or a risk score that systematically overestimates creditworthiness and leads to defaults — who is responsible? The model developer who built the algorithm? The institution that deployed it and accepted its outputs? The data vendor whose inputs shaped the model's behavior? Courts have not yet issued rulings that clearly assign liability across that chain.
The Governance Gap in Practice
For lenders, the governance gap creates real exposure:
- Model audit requirements vary by jurisdiction and are inconsistently enforced. Many institutions cannot fully explain their credit models to regulators on demand.
- Bias testing (checking whether a model produces disparate outcomes across demographic groups) is not universally required, and the methodology is contested where testing does occur; one common screening heuristic is sketched after this list.
- Adverse action explanations generated by AI systems often do not satisfy consumer-facing disclosure requirements designed for human decision-makers.
- Third-party model risk is growing as more lenders rely on vendor-built models rather than internally developed ones, creating accountability gaps between the institution and the algorithm.
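For a sense of what bias testing involves in its simplest form, here is a hedged sketch of one widely used screening heuristic: comparing approval rates across groups against the four-fifths (80%) rule borrowed from U.S. employment law. The decision log and group labels are hypothetical; this illustrates the concept, not a compliance test.

```python
# Hypothetical sketch of a disparate-impact screen: compare approval
# rates across demographic groups and flag any group whose rate falls
# below four-fifths (0.8) of the highest-rate group. The decision log
# and group labels are illustrative, not real lending outcomes.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose approval-rate ratio to the baseline is < threshold."""
    rates = approval_rates(decisions)
    baseline = max(rates.values())        # highest-rate group as reference
    return {g: round(r / baseline, 2) for g, r in rates.items()
            if r / baseline < threshold}

# Toy decision log: group A approved 80/100, group B approved 55/100.
log = ([("A", True)] * 80 + [("A", False)] * 20
       + [("B", True)] * 55 + [("B", False)] * 45)
print(disparate_impact_flags(log))  # {'B': 0.69} -> below the 0.8 heuristic
```

The methodological contest is visible even in this toy version: a rate-ratio screen says nothing about error rates, so a model can pass it while misjudging creditworthy applicants in one group far more often than in another, and practitioners disagree about which metric should govern.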
What Needs to Change
Legal experts cited in the BFT analysis point toward three requirements that don't yet exist at scale: mandatory explainability standards for AI credit models, clear liability frameworks that specify institutional responsibility for model outputs, and regular third-party audits that test models for discriminatory outcomes under realistic conditions.
None of these exist consistently in current regulation. All of them are more technically complex to implement than existing consumer credit rules. And all of them are being proposed at a moment when AI adoption in lending is accelerating, not slowing.
What to Watch
The first significant U.S. or EU court ruling that assigns specific liability to a lender for AI-driven lending discrimination will be the inflection point. Whenever that decision comes, it will reshape how institutions document, audit, and govern their models. Institutions building governance frameworks now, ahead of that ruling, are better positioned than those treating it as a distant risk.