Treasury Secretary Bessent Warns AI Could Be Weaponized to Hack U.S. Bank Accounts
By Hector Herrera | May 5, 2026 | Finance
Treasury Secretary Scott Bessent warned last weekend that AI is enabling a new generation of cyberattacks sophisticated enough to breach U.S. bank accounts at scale — and that current financial institution defenses trail the threat. The warning marks the first time a sitting Treasury Secretary has characterized AI-enabled financial cyberattacks as a systemic risk, not an edge-case concern.
The statement lands in a specific moment: AI tools have dramatically lowered the technical barrier to sophisticated social engineering, credential stuffing, and fraud campaigns. What previously required nation-state-level resources can now be assembled by well-funded criminal organizations using off-the-shelf AI capabilities.
What Bessent Said
Speaking at a financial services conference, Bessent said U.S. banks are actively building AI-specific resilience frameworks in response to demonstrated attack escalation. The Treasury is coordinating with regulators including the Federal Reserve, FDIC, and OCC to assess systemic vulnerability, and has been in direct communication with major financial institutions about threat posture.
He did not cite specific incidents or disclose classified threat intelligence, but described the threat landscape in terms that went beyond routine risk-management messaging — framing AI-enabled financial attacks as a potential systemic destabilizer, not a fraud-management problem.
How AI Changes the Attack Surface
The shift Bessent is describing is real and measurable. Traditional bank cyberattacks fell into two broad categories: technical exploits (finding vulnerabilities in software systems) and social engineering (tricking humans into providing credentials). AI accelerates both.
On the technical side, AI-assisted code analysis can identify exploitable patterns in financial software faster than human security teams can patch them. On the social engineering side, the change is more dramatic:
- Voice cloning allows attackers to impersonate bank customers, executives, or customer service representatives with high fidelity
- Deepfake video is being used in business email compromise schemes targeting wire transfer approvals
- Large-scale personalization — AI can generate thousands of individually tailored phishing messages that reference actual account details, transaction histories, or personal information purchased from data brokers, replacing the generic phishing emails that trained users to ignore
Credential stuffing at AI speed is the near-term concern regulators are most focused on. Attackers have long used automated tools to try stolen username-password combinations across multiple financial services. AI makes the targeting smarter — selecting which credentials to try where, at what times, in what volume — while evading behavior-based fraud detection systems trained on prior attack patterns.
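The velocity-based detection logic that stuffing campaigns must evade can be illustrated with a minimal sketch. This is a toy sliding-window check, not any bank's actual system; the window size and threshold are invented for illustration.

```python
from collections import defaultdict, deque

# Hypothetical sliding-window velocity check: flag a source that attempts
# logins against many distinct accounts in a short window -- the classic
# credential-stuffing signature. Thresholds are illustrative only.
WINDOW_SECONDS = 60
MAX_DISTINCT_ACCOUNTS = 5

class StuffingDetector:
    def __init__(self):
        # source_ip -> deque of (timestamp, account) attempts
        self.attempts = defaultdict(deque)

    def record(self, source_ip: str, account: str, now: float) -> bool:
        """Record a login attempt; return True if the source now looks like a campaign."""
        q = self.attempts[source_ip]
        q.append((now, account))
        # Drop attempts that have aged out of the window.
        while q and now - q[0][0] > WINDOW_SECONDS:
            q.popleft()
        distinct_accounts = {acct for _, acct in q}
        return len(distinct_accounts) > MAX_DISTINCT_ACCOUNTS
```

The article's point is that AI-assisted attackers defeat exactly this kind of static rule: by pacing attempts under the threshold and distributing them across many source addresses, a campaign stays beneath any fixed window-and-count signature.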
The Defense Gap
Bessent's acknowledgment that defenses "trail the threat curve" is consistent with what security professionals have been reporting for over a year. The core problem is asymmetric velocity: financial institutions operate on compliance and validation cycles measured in months; AI-enabled attack capabilities evolve in weeks.
Several specific defense gaps stand out:
Authentication infrastructure — Multi-factor authentication (MFA) remains inconsistent across financial services. Many banks still rely on SMS-based MFA that is vulnerable to SIM-swapping attacks, which are increasingly AI-assisted.
Fraud model lag — Financial fraud detection models are trained on historical attack patterns. AI-enabled attackers deliberately probe these systems, identify the behavioral signatures that avoid detection, and adapt in near-real-time. A fraud detection model trained on 2024 attack data may be substantially blind to 2026 attack techniques.
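The probing dynamic described above can be sketched with a toy static rule: an attacker who only observes flagged-or-not outcomes can still binary-search the detection boundary and then operate just beneath it. The threshold and amounts here are invented for illustration.

```python
# Toy static rule of the kind a model trained on historical data might encode.
FLAG_THRESHOLD = 10_000.0  # hypothetical transfer amount that triggers review

def flagged(amount: float) -> bool:
    return amount >= FLAG_THRESHOLD

def probe_threshold(lo: float = 0.0, hi: float = 100_000.0, tol: float = 1.0) -> float:
    """Attacker's view: recover the flag boundary to within `tol` by
    binary search, using only flagged/not-flagged feedback."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if flagged(mid):
            hi = mid
        else:
            lo = mid
    return lo  # largest probed amount known not to trigger the rule
```

A real fraud model is far richer than one threshold, but the asymmetry is the same: each probe costs the attacker almost nothing, while retraining and revalidating the defender's model takes months.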
Third-party and vendor exposure — Major banks have hardened their direct attack surfaces substantially. The vulnerability now often lies in fintech partners, payment processors, and data aggregators with weaker security postures that have privileged access to bank systems.
What Banks Are Actually Doing
The financial sector's response is concentrated in three areas:
- AI-vs-AI defenses — Major banks are deploying AI-powered fraud detection systems that identify anomalous behavior in real time rather than through after-the-fact batch analysis. JPMorgan, Bank of America, and several large regional banks have publicly discussed AI security investments running into hundreds of millions of dollars.
- Behavioral biometrics — Rather than relying solely on passwords or MFA codes, banks are investing in continuous authentication that monitors how users interact with their accounts — typing patterns, navigation behavior, device handling — as a secondary verification layer.
- Cross-institutional threat intelligence sharing — The Financial Services Information Sharing and Analysis Center (FS-ISAC) is expanding real-time threat intelligence sharing so that an attack pattern identified at one institution can be broadcast to the sector within minutes rather than days.
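The behavioral-biometrics idea above can be illustrated with a toy z-score check on typing cadence. Production systems use far richer features (key dwell times, navigation paths, device motion) and learned models; every number below is hypothetical.

```python
import statistics

def typing_anomaly_score(baseline_ms: list[float], session_ms: list[float]) -> float:
    """How many baseline standard deviations the session's mean
    inter-keystroke interval sits from the user's enrolled mean."""
    mu = statistics.mean(baseline_ms)
    sigma = statistics.stdev(baseline_ms) or 1.0  # guard against zero spread
    return abs(statistics.mean(session_ms) - mu) / sigma

baseline = [110, 120, 115, 125, 118, 122]  # enrolled inter-key intervals (ms)
legit    = [117, 121, 114, 123]            # similar cadence -> low score
scripted = [40, 42, 41, 39]                # uniform bot-like speed -> high score
```

A session scoring far from the baseline would not block the user outright; it would typically trigger a step-up challenge, which is why this works as a secondary layer rather than a replacement for MFA.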
What to Watch
The immediate regulatory signal to track: whether the OCC, FDIC, or Federal Reserve issues formal AI-specific cybersecurity guidance before year-end. Public warnings like Bessent's typically precede formal rulemaking — Treasury doesn't sound systemic risk alarms without a policy response in preparation. Expect proposed guidance requiring financial institutions to document their AI threat assessment frameworks, likely borrowing from the NIST AI Risk Management Framework structure. The open question is whether that guidance arrives as voluntary best practices or as a mandatory rule.
Source: Bloomberg — Banks in U.S. Are Working to Gird Against AI Attacks, Bessent Says
Financial institutions referenced above are based on publicly available information. This article does not constitute financial or investment advice.