Legal Context Protocol

Trust in Depth

The conceptual framework: Identity, Trust, and the Legal Foundations of Agentic Commerce

Trust in Depth is the conceptual framework motivating the Legal Context Protocol. Proposed by David Fisher (CEO, Integra Ledger) and Bridget McCormack (CEO, American Arbitration Association), it applies the defense-in-depth principle from information security to the problem of autonomous agents entering commerce without the infrastructure to connect their actions to law.

The full white paper — "Identity, Trust, and the Legal Foundations of Agentic Commerce" — provides the complete treatment. This page summarizes the framework.

Read the full white paper (PDF)

The Problem

The agentic commerce era is upon us. Gartner projects $15 trillion in AI-intermediated B2B commerce by 2028. But the trust infrastructure that anchored free markets for centuries was built on an implicit assumption that no longer holds: that the actor on the other side is a person, operating at human speed, constrained by human limitations, and governed by human-built guardrails.

AI agents disrupt all of that. In 2024, automated bot traffic surpassed human activity on the internet for the first time. Researchers at ETH Zurich demonstrated a 100% solve rate against Google's reCAPTCHA using fine-tuned AI models. The gate that separates humans from machines is gone.

Agentic commerce to date has focused on payments. But commerce is more than payment — it is the full lifecycle of negotiation, agreement, performance, and dispute resolution. A working payment layer is necessary but not sufficient. Agentic commerce also needs a working identity layer, agreement layer, and dispute resolution layer.


Code (Alone) Is Not Law

The history of trust in commerce is a story of identity progressively detaching from the individual — from the physical signature, through electronic signatures and PKI, to blockchain wallets, until AI agents severed the connection entirely.

The promise of smart contracts. Nick Szabo's 1996 vision for smart contracts was grounded in contract law: "a set of promises agreed to in a 'meeting of the minds.'" His architecture included arbitrators as a core feature to resolve disputes. The blockchains founded years later could not deliver that full vision. A trustless blockchain requires absolute determinism, but contracts depend on judgment, context, and principles of reasonableness and good faith — precisely the capacity that deterministic systems, by design, do not possess.

The "code is law" drift. The term "smart contract" became ubiquitous, carrying an implied association with legal contracts that the technology could not deliver. A transaction is a moment. An agreement is a relationship that extends in time — involving negotiation, interpretation, good faith, remedies, and evolving circumstances. Code is an extraordinarily reliable execution layer. But execution is not agreement. Agreement requires identity, consent, terms, jurisdiction, and recourse. Code provides none of these.

Severance. AI is the first technology with the plausible capacity to operate in the interpretive space contracts require. But that same autonomy makes AI the first actor in the history of commerce with no inherent identity. This is not a further detachment of identity from an individual — it is a complete severance. Sophistication creates an illusion of legitimacy that masks the absence of any accountability behind the interaction.


Trust in Depth

In information security, defense in depth is the principle that layered security mechanisms compose into strong security even though each individual layer is imperfect. No single firewall, perimeter, or control is the sole protection. The strength is in the composition.

Trust in depth is the parallel concept for the agentic era. No identity verification, attestation, or legal framework needs to be independently sufficient. In combination, the layers create a trust architecture that is resilient, proportional, and resistant to the attacks that break single-layer systems.

Layer 1: Human Identity

At the base of every chain of agency, a human must be identifiable. The mechanism can vary by context — government credential, biometric, KYC through a financial institution — but the layer is non-negotiable. It is the anchor for everything else. The most robust human identity infrastructure today is payments: a single mobile payment involves five layers of identity verification in a sub-second interaction. But that identity is owned by the intermediary, not the individual, and the payment rail an agent uses determines its governance posture entirely.

Layer 2: Entity Attestation

The organizational structures on behalf of which a human operates must be verifiable and attributable — through corporate filings, DNS-based identity by implication, or sovereign digital credentials. The space between personal identity and organizational authority is where the trust chain fractures most often. Business email compromise, built almost entirely on this gap, accounts for over $2.9 billion in reported US losses annually (FBI IC3, 2023). For agents, the entity question is harder still: who is "the entity" behind an agent? The company that built it? Deployed it? The user who configured it?

Layer 3: Agreement Integrity

The agreement must be tied to a governing legal framework. Jurisdiction must be established, and the terms to which the parties agreed must be recorded permanently. This is blockchain's highest-value contribution: no other technology can record the legal nexus of an agreement — parties, terms, jurisdiction, moment of consent — in a way that is independently verifiable, tamper-proof, and controlled by neither party. The /.well-known/legal-context.json convention makes agreement integrity discoverable; the LCP assurance levels determine how strong the integrity guarantee is.
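As a rough illustration of what discovery could look like, the sketch below parses a legal-context document and checks that the fields needed to establish a legal nexus are present. The field names (`jurisdiction`, `terms_hash`, `assurance_level`) are assumptions for illustration; the normative schema is defined by the LCP standard, not here.

```python
import json

# Assumed minimal fields for this sketch; the real LCP schema governs.
REQUIRED_FIELDS = {"jurisdiction", "terms_hash", "assurance_level"}

def parse_legal_context(raw: str) -> dict:
    """Parse a /.well-known/legal-context.json payload and verify that the
    fields anchoring the legal nexus are present."""
    ctx = json.loads(raw)
    missing = REQUIRED_FIELDS - ctx.keys()
    if missing:
        raise ValueError(f"legal context missing fields: {sorted(missing)}")
    return ctx

# A hypothetical payload an agent might fetch before transacting.
sample = json.dumps({
    "jurisdiction": "US-NY",
    "terms_hash": "sha256:ab12...",
    "assurance_level": 2,
})
ctx = parse_legal_context(sample)
print(ctx["jurisdiction"])  # → US-NY
```

In practice an agent would fetch this document over HTTPS from the counterparty's origin before negotiating, and refuse to proceed if required context is absent.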

Layer 4: Agent Authorization

The autonomous system acting on behalf of human principals must carry verifiable, bounded authority. The delegation chain — from organization to human to agent — must be auditable, revocable, and scoped. Agent identity is new: the first authoritative government engagement, NIST's AI Agent Standards Initiative, arrived only in February 2026. Yet infrastructure is being built at speed — Visa's Trusted Agent Protocol (October 2025), Mastercard's Agent Pay, and ERC-8004 ("Trustless Agents"), deployed to Ethereum mainnet in January 2026, define on-chain registries for identity, reputation, and validation. The question is not "who is this agent?" — agents have no independent legal existence. The question is "whose authority does it carry, and what are the bounds?"
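The auditable, revocable, scoped delegation chain described above can be sketched as a data structure. The `Delegation` type and its scope semantics are assumptions for illustration, not part of any of the protocols named; the point is that authority holds only if every link in the chain is unrevoked and grants the action, so scopes can narrow down the chain but never widen.

```python
from dataclasses import dataclass

# Illustrative model only: not the LCP, TAP, or ERC-8004 data model.
@dataclass
class Delegation:
    principal: str    # who grants authority (organization or human)
    delegate: str     # who receives it (human or agent)
    scopes: frozenset # bounded set of actions the delegate may perform
    revoked: bool = False

def authority_for(chain: list[Delegation], action: str) -> bool:
    """Walk the chain from organization to agent; the agent carries authority
    for an action only if every link is live and grants that action."""
    return all(not link.revoked and action in link.scopes for link in chain)

chain = [
    Delegation("Acme Corp", "alice@acme.example",
               frozenset({"negotiate", "purchase"})),
    Delegation("alice@acme.example", "agent-7f3",
               frozenset({"purchase"})),  # Alice narrows the agent's scope
]
print(authority_for(chain, "purchase"))   # → True
print(authority_for(chain, "negotiate"))  # → False: scope was narrowed
```

Revoking any single link (setting `revoked = True`) severs the agent's authority for everything downstream, which is what makes the chain auditable and revocable rather than a one-time grant.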


What the Composition Achieves

No single trust layer can deliver what the agentic era requires. But the combination can.

  • Anchoring to real-world identity — a verifiable chain from agent action to human principal: not just a wallet address, but a person or entity that can be found, communicated with, and held accountable.
  • Legal enforceability — agreements that are legally enforceable because identity, consent, and terms are all provable. A cryptographic signature is technically sound but legally meaningless without a framework that defines who may sign, what the signature represents, and what consequences follow.
  • Jurisdictional linkage — every transaction connected to a governing legal framework. Without jurisdiction, there is no law to apply, no arbitral institution or court to decide a dispute, and no enforcement mechanism.
  • Recourse — when things go wrong (and they will), there is someone to find, a record to examine, and a legal system to invoke.

Economic Deterrence

Trust in depth restores the economic deterrent that single-layer systems have lost. A bad actor or a rogue agent would need to compromise all four layers simultaneously: forge a human identity, fabricate an entity, manipulate the agreement record, and circumvent the authorization chain. The cost of doing so recreates structural friction — not the cumbersome, slow friction of analog processes, but layered verification that makes gaming the system expensive again.
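The deterrence argument can be made concrete with a toy calculation: if the four layers fail independently, the chance of defeating all of them is the product of the per-layer chances, so even modest per-layer assurance composes into strong protection. The probabilities below are invented for illustration, not measured values.

```python
from math import prod

# Invented per-layer compromise probabilities, for illustration only.
layer_compromise_p = {
    "human_identity": 0.05,       # forge a human identity
    "entity_attestation": 0.10,   # fabricate an entity
    "agreement_integrity": 0.01,  # manipulate the agreement record
    "agent_authorization": 0.08,  # circumvent the authorization chain
}

# Assuming independent layers, an attacker must defeat all four at once.
p_all = prod(layer_compromise_p.values())
print(f"{p_all:.6f}")  # → 0.000004
```

Real layers are not perfectly independent, so this overstates the benefit, but the direction of the effect is the point: composition makes gaming the system expensive again.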


Implementation Principles

Trust in depth is a framework, not a product. Three principles should govern every implementation.

Built With Institutions, Not Against Them

The legal system — imperfect, evolving, jurisdiction-specific — is foundational to enabling billions of people to cooperate at scale. Code can execute logic. It cannot provide justice, interpret intent, balance competing interests, or adapt to circumstances its authors did not foresee. Arbitral institutions, financial regulators, and standards bodies have spent decades building trust infrastructure. The framework must extend their authority into the agentic space. The infrastructure exists; the task is extension, not invention.

Open, Not Owned

Solutions at every layer must be open rather than owned by a single platform or company. They must interoperate across jurisdictions, technologies, and agent platforms. The key insight is a separation of concerns: the protocol defines what questions must be answered about identity at each layer; the provider determines how. A government digital ID, an enterprise identity platform, a blockchain credential, and a payment rail's KYC can all fulfill the same identity operation through different mechanisms. Unification is not coming, and any framework that depends on it will fail.

Proportional, Not Maximal

Trust in depth does not require maximum assurance at every layer for every interaction. The level of trust must be proportional to the stakes. The EU has already codified this for the identification layer: eIDAS defines three tiers of electronic signature — Simple, Advanced, and Qualified — governing hundreds of millions of transactions. Trust in depth applies the same principle across the full identity stack. A two-dollar API call and a five-million-euro contract do not need the same infrastructure.
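A proportionality rule like this might be sketched as a tier lookup. The thresholds and tier names below are assumptions, loosely modeled on the eIDAS Simple/Advanced/Qualified tiers mentioned above; a real implementation would draw them from policy, not hard-code them.

```python
# Assumed thresholds for illustration; real tiers come from policy/regulation.
TIERS = [
    (100.0, "simple"),             # low stakes, e.g. a two-dollar API call
    (100_000.0, "advanced"),
    (float("inf"), "qualified"),   # e.g. a five-million-euro contract
]

def required_tier(amount_eur: float) -> str:
    """Map transaction value to the minimum assurance tier it requires."""
    for ceiling, tier in TIERS:
        if amount_eur <= ceiling:
            return tier
    raise ValueError("unreachable: last ceiling is infinite")

print(required_tier(2))          # → simple
print(required_tier(5_000_000))  # → qualified
```

Stakes need not be purely monetary; a fuller rule might also weigh reversibility and regulatory exposure, but the shape of the mapping is the same.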


Conclusion

History shows that new technologies force the evolution of legal and social frameworks, not the other way around. The printing press led to copyright law. The automobile led to traffic regulation and liability insurance. The internet led to electronic commerce legislation and data privacy regulation. AI agents are the next forcing function.

Trust in depth — layered, composable, proportional trust mechanisms spanning human identity, entity attestation, agreement integrity, and agent authorization — is a starting point. It is a principled architecture for finding the best answers. The alternative is not a world without rules. It is a world where the rules are written by whoever moves fastest.


The LCP Standard

The white paper provides the why. The LCP standard provides the what — the practical instrument for making legal context discoverable. See The Standard for the normative specification.