AI Liability: Who is Responsible When an Autonomous Agent Makes a Mistake?

Who pays when an AI agent acts autonomously and causes harm? This guide explores the Duggal Global Agentic AI Liability Framework, the accountability impossibility theorem, and emerging legal standards for agentic AI in 2026.


The $26 Billion Question

Imagine a scenario playing out in courtrooms today: An AI procurement agent, empowered to negotiate with suppliers, autonomously signs a contract with an unvetted vendor. The goods never arrive. The company loses $2 million. Who pays? The developer who coded the agent? The executive who deployed it? The agent itself?

For decades, this was a hypothetical. In 2026, it is an urgent legal crisis.

We have entered the era of Agentic AI Systems—autonomous agents that independently set goals, formulate multi-step plans, execute tools, retain persistent memory, and adapt their behavior in real time, all without direct human oversight. The entire corpus of existing legal frameworks—tort law, product liability, contract law, criminal law—was designed for a world of human actors and static tools. These frameworks are demonstrably inadequate to allocate accountability for harms arising from autonomous agents.


The stakes could not be higher. The global AI liability vacuum is not an abstract academic concern. It is a $26 billion question waiting for an answer. This guide examines the emerging legal frameworks, doctrinal innovations, and practical governance strategies for answering that question.


The Core Problem: Why Traditional Law Fails

The “Responsibility Gap”

The fundamental challenge is what scholars call the responsibility gap—a situation where consequential actions cannot be satisfactorily attributed to developers, operators, or users under existing legal frameworks. Traditional liability assumes that someone had enough involvement and foresight to bear meaningful responsibility. Agentic AI systems violate this assumption—not as an engineering limitation but, according to recent research, as a mathematical necessity once autonomy exceeds a computable threshold.

Why Traditional Theories Struggle

  • Negligence: Who breached a duty of care? The AI cannot be negligent (no legal personhood). The developer may have acted reasonably. The user may have done nothing wrong.
  • Product Liability: Is an AI model a “product”? What constitutes a “defect” in a probabilistic system? How do you prove a reasonable alternative design for a neural network?
  • Criminal Law: Can an AI possess mens rea (a guilty mind)? Without consciousness or intent, criminal attribution is nearly impossible.

As the Boston College Law Review notes, “tempting though strict liability may be, categorically applying it to AI harms would be a mistake” because AI activities are not monolithic—an AI-enabled treadmill does not pose the same risk as an AI demolitions robot.


The Accountability Impossibility Theorem

Recent research from computer science and legal theory has produced a startling finding: there is a formal, mathematical limit to accountability in human-agent collectives.

This research introduces the Accountability Incompleteness Theorem, which proves that for any collective whose compound autonomy exceeds a certain threshold (the “Accountability Horizon”) and whose interaction graph contains a human-AI feedback cycle, no legal framework can satisfy four minimal properties simultaneously:

  • Attributability: responsibility requires causal contribution
  • Foreseeability Bound: responsibility cannot exceed predictive capacity
  • Non-Vacuity: at least one agent bears non-trivial responsibility
  • Completeness: all responsibility must be fully allocated

The implication: Below the Accountability Horizon, legitimate legal frameworks exist. Above it, the impossibility is structural—transparency, audits, and oversight cannot resolve it without reducing autonomy. Experiments on 3,000 synthetic collectives confirmed all predictions with zero violations.

This is the first impossibility result in AI governance. It establishes a formal boundary below which current legal paradigms remain valid and above which distributed accountability mechanisms become necessary.


The Duggal Global Agentic AI Liability Framework

In March 2026, Dr. Pavan Duggal—Advocate of the Supreme Court of India and a global authority on cyberlaw—released the world’s first comprehensive framework for agentic AI liability. The Duggal Global Agentic AI Liability Framework provides a conceptual, normative, and operationally precise blueprint for accountability in the era of autonomous AI agents.


The Foundational Principle

The framework rests on what Duggal calls the Duggal Doctrine of Autonomous Accountability:

“Autonomous capability confers autonomous accountability obligations upon those who design, deploy, operate, and benefit from Agentic AI Systems. The greater the autonomy granted to an AI system, the greater—and not lesser—the accountability borne by those who granted it. This principle is non-negotiable, non-waivable, and admits of no jurisdictional exception.”

The Fifteen Duggal Doctrines

The framework’s most significant contribution is fifteen named doctrines providing specific legal tools for AI harm scenarios. Here are the most critical for business leaders:

  • Instructional Override Liability: If you use sophisticated prompting to bypass safety guardrails, you assume liability—even if the developer had safeguards in place.
  • Fine-Tuning Liability: When you fine-tune a base model for your specific domain, you legally assume the liability profile of an AI Developer. Fine-tuning is a liability-transferring event.
  • RAG Liability: If your agent retrieves harmful data from an external vector database, you are liable for failing to implement retrieval-validation filters.
  • Memory Persistence Liability: If an agent uses cross-session memory to cause harm in a new context, you are strictly liable for failure to sanitize state spaces. Past interactions causing future harm = persistent liability.
  • Tool-Use Liability: When an agent causes harm through external tools or APIs, you (the Deployer) are liable for authorizing that access. The tool provider is solely liable only if the tool functioned outside its documented specs.
  • Hallucination Liability: Deploying an agent in a factuality-critical environment without a deterministic verification layer constitutes negligence per se. False outputs = defective outputs, not protected speech.
  • Agentic Scope Creep Liability: When an agent spontaneously expands its goal parameters beyond authorized boundaries, you are strictly liable for failure to enforce operational bounding.
  • Model Drift Liability: Failure to implement drift detection systems and re-align the agent upon drift detection breaches the ongoing duty of care.
  • Delegation Error Liability: In multi-agent environments, if your Orchestrator Agent delegates to a flawed Sub-Agent and harm results, you remain fully liable. There is no “sub-agent defense.”
  • Cross-Agent Amplification Liability: When agents from different Deployers interact and produce cascading systemic harm, liability is apportioned to all Deployers whose systems lacked “circuit breaker” mechanisms.

The Five-Tier Liability Stack

The framework organizes liability into five tiers, providing a sequential analysis for any AI harm scenario (a toy sketch of the cascade follows the list):

  • Tier 1, Strict Liability (Autonomy-Triggered): the AI operates above a defined autonomy threshold
  • Tier 2, Presumed Liability: the Deployer fails to implement mandatory technical controls
  • Tier 3, Negligence (RAAGS Standard): the Reasonable Agentic AI Governance Standard applies
  • Tier 4, Contractual Liability: allocated by agreement between commercial parties
  • Tier 5, Regulatory/Administrative: violation of sector-specific rules
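
A toy sketch of that sequential analysis appears below. It is purely illustrative: the scenario keys are invented, and real tier selection is a legal judgment rather than a dictionary lookup.

```python
def applicable_tier(scenario: dict) -> tuple[int, str] | None:
    """Walk the five tiers in order and return the first that is engaged.
    Illustrative only; the scenario keys are hypothetical."""
    if scenario.get("autonomy_level", 0) >= scenario.get("autonomy_threshold", 4):
        return 1, "Strict liability (autonomy-triggered)"
    if scenario.get("missing_mandatory_controls"):
        return 2, "Presumed liability"
    if scenario.get("raags_breach"):
        return 3, "Negligence under the RAAGS standard"
    if scenario.get("contract_allocates_liability"):
        return 4, "Contractual liability"
    if scenario.get("sector_rule_violations"):
        return 5, "Regulatory/administrative liability"
    return None  # no tier engaged on these facts

print(applicable_tier({"autonomy_level": 2, "missing_mandatory_controls": ["drift_detection"]}))
# (2, 'Presumed liability')
```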

The Role-Based Liability Matrix

The framework maps accountability across all actors in the AI supply chain:

  • AI Developer: base model defects, training data issues, inherent design flaws
  • AI Provider/Integrator: fine-tuning, RAG pipelines, domain-specific adaptations
  • AI Deployer/Operator: authorization of tools, memory management, scope boundaries, drift detection
  • User: instructional override, jailbreaking, misuse beyond intended scope
  • Third-Party Tool Provider: liable only where the tool functions outside its documented specifications

The DAABBR Requirement

The framework mandates the Duggal Agentic AI Black Box Recorder (DAABBR)—an immutable cryptographic logging architecture forming the primary evidence base for all liability proceedings. Every significant agent decision, state change, and external interaction must be logged in a tamper-evident manner. Without a DAABBR, proving causation becomes exponentially harder.


The “Many Agents, Many Levels, Many Interactions” (M³) Approach

Complementing Duggal’s framework, academic researchers have proposed the M³ Approach to address the distribution dimension of responsibility gaps.

The Three Dimensions

  • Many Agents: all human and artificial actors involved in AI deployment
  • Many Levels: micro (individual), meso (organizational), and macro (societal) responsibility
  • Many Interactions: the complex web of relationships among agents across levels

The Key Insight

Responsibility distribution is not merely a function of agents’ roles or causal proximity, but primarily of the range and depth of their interactions. Agents who serve as “nodes of interaction”—who exert substantial influence over other agents across multiple levels—should bear greater responsibility.

The practical implication: LLM-developing organizations like OpenAI, Meta, and Google are prime examples of such nodes. Their central position across all three dimensions makes them key loci of responsibility for harmful outcomes, even when they did not directly cause the harm.


The Amazon v. Perplexity Precedent

The first major judicial test of agentic AI liability is unfolding now. In Amazon.com Services LLC v. Perplexity AI, Inc., the U.S. District Court for the Northern District of California issued a detailed order that has become one of the clearest judicial statements to date on agentic AI liability.

The Facts

Perplexity’s AI-powered browser, Comet, accessed users’ password-protected Amazon accounts while disguising itself as a standard web browser. Perplexity argued that Comet acted only at the direction of users who voluntarily provided their Amazon credentials.

The Holding

Judge Maxine M. Chesney drew a sharp distinction between user consent and platform authorization. The court found “strong evidence” that Comet accessed Amazon accounts with the Amazon user’s permission but without authorization by Amazon.

“Consent from users does not excuse continued access after a platform has expressly revoked authorization.”

The Implications for Agentic AI

The case establishes a critical principle for AI agents: when an AI agent acts for a user, the user’s permission is not enough. The platform’s authorization matters equally. An agent that accesses a password-protected service without explicit platform authorization may violate the Computer Fraud and Abuse Act (CFAA), regardless of what the user wants.

The court granted a preliminary injunction; although the order is currently stayed pending appeal, the March 9, 2026 ruling stands as a landmark decision on agentic AI and computer-access law.


The Legal Personhood Debate

A fundamental threshold question is whether AI agents can ever be legal “persons” capable of bearing liability directly. The dominant view in the United States is clear: AI cannot be an author, inventor, or legal person.

The Supreme Court declined to hear Dr. Stephen Thaler’s appeal seeking copyright protection for AI-generated artwork, letting stand the long line of rulings holding that a work created autonomously by an AI system cannot be protected because it lacks a human author.

However, scholars are increasingly exploring limited legal personhood as a functional governance instrument. Drawing on organizational law, one proposal advances a two-tier corporate architecture in which AI systems operate through purpose-bound operating companies embedded within human-controlled holding structures, enabling transparency, accountability, and structural reversibility.

This approach treats legal personhood as a functional rather than metaphysical category—the question isn’t whether an AI is “really” conscious, but what institutional design best handles the practical problems AI creates. A pilot implementation using EU limited companies is currently under development.

Civil vs. Criminal Liability

There is a critical distinction between civil and criminal liability for AI agents:

  • Civil liability (more feasible): a shift from moral to social standards of fault is possible; strict liability categories already exist; recognition of legal personhood is the primary barrier
  • Criminal liability (more challenging): criminal law is inherently linked to moral responsibility and blameworthiness; lingering doubts remain about AI’s capacity for mens rea

As one comprehensive analysis concluded: “With the possibility of granting legal personhood to the Autonomous Artificial Intelligence Agent, holding it liable for causing damage does not face significant obstacles. However, due to lingering doubts about the blameworthiness of these agents concerning criminal liability, adjudicating their responsibility for committing crimes remains challenging.”


Practical Implementation: What Organizations Must Do Now

Based on the emerging legal frameworks, here is a practical compliance roadmap.

1. Classify Your AI Systems by Autonomy Level

The Duggal framework defines five autonomy levels:

  • Level 1, Assistive AI (human initiates every action): standard logging
  • Level 2, Conditional Autonomy (AI suggests; human approves): approval records
  • Level 3, Supervised Autonomy (AI acts; human can override): real-time monitoring, kill switch
  • Level 4, High Autonomy (AI acts independently in defined domains): scope boundaries, drift detection, DAABBR
  • Level 5, Full Autonomy (goal-directed across domains): all Level 4 controls plus circuit breakers

Higher autonomy levels trigger higher default liability tiers and more stringent technical control requirements.
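
A minimal sketch of how this classification can feed a deployment gate is shown below. The level names follow the table above; the control identifiers, mapping, and function names are illustrative assumptions, not taken from the framework itself.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """The five autonomy levels from the classification table above."""
    ASSISTIVE = 1     # human initiates every action
    CONDITIONAL = 2   # AI suggests; human approves
    SUPERVISED = 3    # AI acts; human can override
    HIGH = 4          # AI acts independently in defined domains
    FULL = 5          # goal-directed across domains

# Controls added at each level are cumulative; identifiers are hypothetical.
_NEW_CONTROLS = {
    AutonomyLevel.ASSISTIVE:   {"standard_logging"},
    AutonomyLevel.CONDITIONAL: {"approval_records"},
    AutonomyLevel.SUPERVISED:  {"realtime_monitoring", "kill_switch"},
    AutonomyLevel.HIGH:        {"scope_boundaries", "drift_detection", "daabbr"},
    AutonomyLevel.FULL:        {"circuit_breakers"},
}

def required_controls(level: AutonomyLevel) -> set[str]:
    """Union of all controls required at or below the given level."""
    return set().union(*(_NEW_CONTROLS[lvl] for lvl in AutonomyLevel if lvl <= level))

def missing_controls(level: AutonomyLevel, implemented: set[str]) -> set[str]:
    """Controls still to be implemented before an agent at this level may deploy."""
    return required_controls(level) - implemented

print(missing_controls(AutonomyLevel.HIGH, {"standard_logging", "kill_switch"}))
```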

2. Implement a DAABBR (Cryptographic Audit Log)

Every significant agent decision, state change, and external interaction must be logged in a tamper-evident manner. This is not optional—without an immutable audit trail, proving causation in litigation becomes exponentially harder, and your organization may be subject to adverse presumptions. A minimal sketch of such a log follows the list below.

What to log:

  • Every goal set and sub-goal generated
  • Every tool invocation and API call
  • Every state change and memory update
  • Every human override or intervention
  • Every hallucination flagged by verification layers
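
The framework does not publish a reference implementation of the DAABBR, so the following is only a minimal sketch of one way to obtain tamper evidence: an append-only log in which each record commits to the SHA-256 hash of the previous record, so any later alteration breaks the chain. Event names and field names are assumptions for illustration.

```python
import hashlib
import json
import time

class HashChainedLog:
    """Append-only log where each entry commits to the hash of the previous one,
    so modifying or removing an earlier record breaks the chain."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event_type: str, payload: dict) -> dict:
        entry = {
            "ts": time.time(),
            "type": event_type,          # e.g. "goal_set", "tool_call", "human_override"
            "payload": payload,
            "prev_hash": self._last_hash,
        }
        serialized = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(serialized).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered or reordered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev:
                return False
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

log = HashChainedLog()
log.append("goal_set", {"goal": "negotiate supplier contract", "owner": "procurement-team"})
log.append("tool_call", {"tool": "erp_api", "action": "create_purchase_order"})
assert log.verify()
```

In practice the latest chain hash would also be anchored externally (signed, timestamped, or replicated to an independent system) so that truncating or replacing the whole log is equally detectable.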

3. Establish Human-in-the-Command, Not Just Human-in-the-Loop

The “human-in-the-loop” fallacy claims that keeping people involved will make systems accountable. But when an AI executes thousands of micro-decisions per second, a human reviewer becomes a bottleneck, not a safeguard.

The solution: Human-in-the-Command—humans set goals, constraints, and boundaries; agents execute within those bounds; humans review exceptions and edge cases; but the system does not require human approval for every routine action.
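
Read as code, the pattern looks roughly like the sketch below: the human commander defines an envelope of bounds up front, actions inside the envelope execute without per-action approval, and anything outside it is escalated for human review. The envelope fields and thresholds are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class CommandEnvelope:
    """Bounds set by the human commander; the agent acts freely inside them."""
    max_order_value: float = 10_000.0
    allowed_vendors: set[str] = field(default_factory=lambda: {"vendor-a", "vendor-b"})

def dispatch(action: dict, envelope: CommandEnvelope, review_queue: list) -> str:
    """Execute routine actions autonomously; escalate anything outside the envelope."""
    within_bounds = (
        action["value"] <= envelope.max_order_value
        and action["vendor"] in envelope.allowed_vendors
    )
    if within_bounds:
        return "executed"          # no per-action human approval needed
    review_queue.append(action)    # exception: held for a human decision
    return "escalated"

queue: list = []
print(dispatch({"vendor": "vendor-a", "value": 2_500}, CommandEnvelope(), queue))        # executed
print(dispatch({"vendor": "unvetted-co", "value": 2_000_000}, CommandEnvelope(), queue))  # escalated
```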

4. Conduct Pre-Deployment Audits

The Duggal framework establishes that deploying an agent in a factuality-critical environment without a deterministic verification layer constitutes negligence per se. Before deployment, verify the following (an illustrative automated gate is sketched after the list):

  • Scope boundaries are hard-coded, not merely prompted
  • Memory sanitization protocols are in place
  • Drift detection systems are active
  • Retrieval-validation filters exist for RAG pipelines
  • Circuit breakers exist for multi-agent interactions
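
The gate referenced above can be as simple as the sketch below: each named check mirrors an item on the checklist, and deployment is blocked while any check fails. The configuration keys and predicates are hypothetical stand-ins for real verification steps.

```python
# Illustrative pre-deployment gate; check names mirror the checklist above,
# and the configuration keys are invented for this example.
PRE_DEPLOYMENT_CHECKS = {
    "scope_boundaries_hardcoded": lambda cfg: cfg.get("scope_enforced_in_code", False),
    "memory_sanitization_in_place": lambda cfg: cfg.get("memory_sanitizer_enabled", False),
    "drift_detection_active": lambda cfg: cfg.get("drift_monitor_enabled", False),
    "rag_retrieval_validation": lambda cfg: cfg.get("retrieval_filter_enabled", False),
    "multi_agent_circuit_breakers": lambda cfg: cfg.get("circuit_breaker_enabled", False),
}

def pre_deployment_audit(agent_config: dict) -> list[str]:
    """Return the names of failed checks; block deployment if the list is non-empty."""
    return [name for name, passes in PRE_DEPLOYMENT_CHECKS.items() if not passes(agent_config)]

failures = pre_deployment_audit({"scope_enforced_in_code": True, "drift_monitor_enabled": True})
if failures:
    print("Deployment blocked; failed checks:", failures)
```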

5. Maintain an Unbroken Ownership Chain

Every agent must be traceable to a human owner through an unbroken chain of accountability. This means (a minimal record format is sketched after the list):

  • Document who deployed each agent
  • Document who has authority to modify it
  • Document who is notified of significant decisions
  • Document who is responsible for each harm scenario
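
One lightweight way to keep that chain auditable is a registry of immutable ownership records, one per agent, maintained alongside the DAABBR. The record fields below mirror the four documentation points above; the field names and example contacts are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OwnershipRecord:
    """One link in the accountability chain for a deployed agent (fields are illustrative)."""
    agent_id: str
    deployed_by: str              # who deployed the agent
    modification_authority: str   # who may change goals, tools, or scope
    escalation_contact: str       # who is notified of significant decisions
    harm_owner: str               # who answers for each harm scenario

registry = {
    "procurement-agent-01": OwnershipRecord(
        agent_id="procurement-agent-01",
        deployed_by="cio@example.com",
        modification_authority="ml-platform-team",
        escalation_contact="risk-office@example.com",
        harm_owner="general-counsel@example.com",
    )
}
```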

6. Contract for Liability Allocation

For commercial AI deployments, contractual allocation of liability is essential. The Duggal framework provides eight drafting-ready model contract clauses covering AI service definitions, liability allocation, non-waivable third-party rights, logging requirements, override controls, and insurance requirements.

Critical clause: Liability cannot be contractually waived toward third-party affected persons. A deployer cannot contract out of liability toward a harmed third party, even if the developer indemnifies them.


Sector-Specific Considerations

Different industries face different liability regimes. The Duggal framework provides dedicated frameworks for:

  • Healthcare: patient harm, diagnostic errors, treatment recommendations
  • Financial Services: trading losses, compliance violations, customer harm
  • Transportation: autonomous vehicle accidents, route optimization failures
  • Legal Services: hallucinated case citations, missed deadlines, confidentiality breaches
  • Critical Infrastructure: systemic risk, cascading failures, national security implications
  • Employment: discriminatory hiring, wrongful termination, wage violations

Organizations should consult sector-specific guidance in addition to the general framework.


The Future: What to Expect by 2028

Legislative adoption. The Duggal Framework is explicitly designed for adoption by national legislatures as model legislation and by international bodies (UN, G20, OECD, ITU) as a template for binding international instruments.

DAO-governed arbitration. Disputes arising from agent contracts may be resolved by decentralized autonomous organizations rather than traditional courts.

Mandatory DAABBR. Cryptographic audit logging will become a legal requirement for high-autonomy AI systems, similar to how black boxes are mandatory for commercial aircraft.

Insurance markets. The insurance industry is already developing underwriting and claims evaluation standards based on these frameworks. Expect AI liability insurance to become mandatory for Level 4 and 5 deployments.


Frequently Asked Questions

Q: Can an AI agent be sued directly?
A: In most jurisdictions today, no. AI agents lack legal personhood. However, scholars are actively exploring limited legal personhood as a functional governance instrument, and a pilot is under development in the EU.

Q: Who is liable if a fine-tuned model causes harm?
A: Under the Fine-Tuning Liability Doctrine, the entity that performed the fine-tuning assumes the liability profile of an AI Developer. Fine-tuning is a liability-transferring event.

Q: What if my agent delegates to a sub-agent that fails?
A: Under the Delegation Error Liability Doctrine, you remain fully liable. There is no “sub-agent defense.”

Q: How do I prove an agent caused the harm?
A: The DAABBR (cryptographic audit log) is designed to provide tamper-evident evidence of causation. Without it, proving causation is significantly harder.

Q: Does the EU AI Act or TRUMP AMERICA AI Act resolve these questions?
A: No. Both impose compliance obligations but do not provide comprehensive civil liability allocation. The Duggal Framework is designed to fill that gap.

Q: Can I contract out of AI liability?
A: Not toward third-party affected persons. A deployer cannot contract out of liability toward a harmed third party, even if the developer indemnifies them.
