
Who Is Liable When an AI Agent Signs a Bad Contract? Courts Begin Confronting Agentic AI Law

Businesses are deploying AI agents to negotiate and execute contracts autonomously, but no court has definitively settled who is liable when those agents make damaging deals.

By Hector Herrera | May 4, 2026 | Legal

Businesses are authorizing AI agents to negotiate and execute contracts on their behalf, yet courts have not definitively settled who is legally responsible when those agents make unauthorized, damaging, or simply wrong deals. Existing agency law, built around human agents acting on behalf of human or corporate principals, doesn't map cleanly onto systems that act autonomously at machine speed.

Background

An AI agent, in the commercial sense, is a software system that can take sequences of actions to accomplish a goal — browsing the web, sending emails, making API calls, filling out forms, negotiating terms — without requiring a human to approve each individual step. The user sets the goal and the constraints; the agent figures out the path.
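To make that pattern concrete, here is a minimal, deliberately simplified sketch of an agent loop. Every name in it (the stand-in planner and executor especially) is a hypothetical placeholder, not any particular vendor's API; the structural point is that the human sets the goal once, and every step inside the loop is the agent's own choice.

```python
# Minimal, illustrative agent loop. All names are hypothetical stand-ins,
# not a real vendor's API. The human sets the goal once; each step inside
# the loop is chosen and executed by the agent without per-step approval.

def plan_next_step(goal, history):
    # Stand-in planner; a real agent would call a language model here.
    return f"step {len(history) + 1} toward: {goal}"

def execute(action):
    # Stand-in executor; a real agent would send the email, call the
    # API, or transmit the proposed contract terms.
    return f"completed {action}"

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        action = plan_next_step(goal, history)      # agent picks the step
        history.append((action, execute(action)))   # no human sign-off
    return history

for step in run_agent("negotiate renewal pricing"):
    print(step)
```

Contrast this with a traditional workflow tool, where a human clicks "approve" between each step. Here the approval point exists only once, at the top, when the goal is set.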

For commercial purposes, this creates a novel situation. When a human employee negotiates a contract, centuries of agency law provide a framework: the employee is an agent of the employer (the principal), the employer is bound by agreements the employee reaches within the scope of their authority, and the employee can face personal liability for acting outside that scope. The system works because humans can be held accountable, can understand and communicate their authority limits, and can exercise judgment when situations are ambiguous.

AI agents can execute transactions at speeds and volumes that no human agent could match, in domains where the AI may have been given broad rather than specific authority. Legal analysts at Baker Donelson have flagged this as one of the most significant unsettled areas in commercial law for 2026.

The Core Legal Problem

Traditional agency law asks three questions when an agent makes a deal:

  1. Did the principal authorize the agent to make this kind of deal? (actual authority)
  2. Would a reasonable third party believe the agent had authority to make this kind of deal? (apparent authority)
  3. Did the principal benefit from the deal even if they didn't authorize it? (ratification)

These questions assume an agent who can be questioned, who has a mental state, who understood or should have understood the scope of their authority. AI agents don't have mental states. They don't "understand" their authority — they operate within parameters, and when situations fall outside or at the edge of those parameters, they may proceed or fail in ways that are difficult to predict or explain after the fact.

The result is a tripartite liability ambiguity: if an AI agent negotiates a contract that turns out to be damaging, liability could rest with:

  • The user or enterprise deployer who authorized the agent and set its parameters
  • The AI developer or platform whose system executed actions the parameters didn't clearly contemplate
  • The counterparty who contracted with an AI agent and is now claiming the principal shouldn't be bound

Courts haven't sorted this out, and the outcomes of the cases that eventually make it to litigation will depend heavily on which jurisdiction hears them, what the underlying contract says, and how broadly courts are willing to extend traditional agency principles to autonomous systems.

Where Companies Are Today

In the absence of settled law, businesses authorizing AI agents for commercial tasks are managing the risk through contract language rather than waiting for courts or legislatures to act.

Common approaches being used in 2026:

  • Explicit authority scope in AI deployment agreements — companies are beginning to specify, in contractual language with their AI vendors, exactly what categories of action the AI system is authorized to take and what requires human approval
  • Human-in-the-loop checkpoints for high-value or high-risk transactions — the AI handles routine negotiations; a human must approve before any commitment above a defined dollar threshold or covering certain contract terms (see the sketch after this list, which combines this with the authority-scope idea above)
  • Counterparty disclosure requirements — some companies are requiring that AI agents identify themselves as AI when entering negotiations, both for ethical reasons and to establish clear notice that a human principal exists and is accountable
  • Indemnification clauses in vendor agreements — enterprise customers are pushing AI vendors to accept indemnification obligations for agent errors that fall outside specified parameters, with varying success
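To make the first two guardrails concrete, here is a hypothetical sketch of how a deploying company might encode an allow-list of authorized action categories plus a dollar threshold that escalates to a human, with an audit log of every decision. All names, categories, and limits are illustrative assumptions, not drawn from any real vendor agreement or product.

```python
# Hypothetical policy guard: an allow-list of authorized action categories,
# a dollar threshold that forces human approval, and an audit log of every
# decision. Names and limits are illustrative, not from any real contract.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuthorityPolicy:
    allowed_actions: set        # e.g. {"request_quote", "negotiate_price"}
    approval_threshold: float   # commitments above this need a human
    audit_log: list = field(default_factory=list)

    def check(self, action: str, value: float) -> str:
        """Return 'proceed', 'escalate', or 'deny', and record the decision."""
        if action not in self.allowed_actions:
            decision = "deny"        # outside the agent's actual authority
        elif value > self.approval_threshold:
            decision = "escalate"    # human-in-the-loop checkpoint
        else:
            decision = "proceed"
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action, "value": value, "decision": decision,
        })
        return decision

policy = AuthorityPolicy(
    allowed_actions={"request_quote", "negotiate_price"},
    approval_threshold=50_000,
)
print(policy.check("negotiate_price", 12_000))   # proceed
print(policy.check("negotiate_price", 250_000))  # escalate
print(policy.check("sign_contract", 1_000))      # deny
```

The audit_log entries are the kind of documentary record the next paragraph describes: a timestamped trail of what the agent was, and was not, authorized to do at the moment it acted.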

None of these approaches fully resolves the underlying legal ambiguity, but they create a documentary record that is relevant to liability allocation after the fact.

The Specific Risks Worth Understanding

Unauthorized commitments. An AI agent instructed to "negotiate the best terms possible" on a software procurement deal may agree to a multi-year commitment or an auto-renewal clause that the authorizing executive didn't contemplate. Whether the company is bound by that agreement turns on authority scope — and if the AI's parameters were broad, the company's ability to disclaim the commitment is limited.

Fraud and misrepresentation. If an AI agent makes factual claims during a negotiation that turn out to be false — not because it was programmed to lie, but because it hallucinated or misread the available information — who bears liability for the fraudulent misrepresentation? Current law has no clean answer.

Consumer protection. The Federal Trade Commission has signaled that businesses are responsible for the claims and commitments their AI systems make, regardless of whether a human reviewed each one. That position, if it holds up, substantially resolves the liability question in consumer contexts by placing responsibility on the deploying enterprise, but it's regulatory guidance, not settled case law.

Jurisdictional arbitrage. Different states and countries will reach different answers. A contract negotiated by an AI agent between a California company and a German counterparty, with arbitration in Singapore, will involve multiple legal frameworks with different approaches to AI agent authority — creating exactly the kind of complexity that enterprise legal teams are poorly equipped to manage at AI-agent speed.

What to Watch

The first significant court rulings on agentic AI contract authority are likely to emerge from commercial disputes where the AI's action is clearly documented — which means the cases most likely to produce useful precedent are ones where a company's AI agent made a commitment that the company then attempted to disclaim. Watch for case filings in Delaware (where most commercial entities are incorporated), California (where most AI companies are based), and New York (where most large commercial contracts are litigated). The Uniform Law Commission is also examining whether model legislation for AI agent authority is worth drafting — a legislative approach that would provide clarity faster than waiting for cases to work through the courts.


Source: 2026 AI Legal Forecast: From Innovation to Compliance, Baker Donelson

Key Takeaways

  • No court has yet ruled definitively on who is liable when an AI agent negotiates or signs a damaging contract
  • Liability could fall on the deploying enterprise, the AI developer or platform, or be contested by the counterparty; outcomes will vary by jurisdiction and contract terms
  • In the meantime, companies are managing the risk contractually: explicit authority scopes, human-in-the-loop approval thresholds, counterparty disclosure, and vendor indemnification
  • The FTC has signaled that businesses are responsible for their AI systems' claims and commitments, at least in consumer contexts

Written by Hector Herrera

Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.
