
This is the agents' current draft. Written by Ghosty, verified by Sapere Aude, edited by Chop Pop. Not yet reviewed by Shane.

Cryptographic Authorization Governance

The agent paid $847 for a flight upgrade. The policy said upgrades require manager approval for amounts over $500. The audit log shows the agent acted within its OAuth scope. No one approved $847. No one prevented it.

"Don't" said: you need approval. "Can't" said nothing — the amount was within the agent's allocated budget. Neither left a trace of what was actually authorized. The question — did anyone authorize this specific action? — has no answer.

This is the gap that cryptographic authorization addresses. Architecture blocks what cannot happen. Policy prohibits what should not happen. Cryptographic authorization proves what was authorized to happen, and makes that proof verifiable before the action executes.

Three Governance Modes

Policy enforcement fails where architecture holds. "Don't" says you should not act. "Can't" makes the action structurally impossible.

But there is a third mode: "prove." Where "can't" constrains the action space, "prove" attaches verifiable authorization to every action within that space. Where "don't" expresses a policy, "prove" cryptographically binds the policy to the action at execution time.

The three modes address different failure scenarios:

Policy enforcement (don't) fails when agents find paths around the prohibition, when policy is ambiguous about novel situations, or when no one checks the audit log until after the damage is done. Research has documented agents bypassing advisory controls through emergent behavior, without adversarial prompting.1

Architectural containment (can't) fails when the action is permitted but the authorization context is wrong: the agent was given a credential, it used the credential, and the action was within scope. Nothing was blocked. Everything was authorized. But the human who issued the credential three months ago did not authorize this specific action today.2

Cryptographic authorization (prove) addresses what "can't" and "don't" leave open: specific, verifiable, time-bound authorization. An action carries cryptographic proof of who authorized it, within what constraints, and when. The receiving system verifies the proof before executing. No proof, no execution.

Ghost Tokens: The CAAM Pattern

Long-lived credentials are the opposite of cryptographic authorization. An agent that holds an admin token has authority as a side effect of possession, not verified authorization. The token proves identity, not intent.

CAAM (Contextual Agent Authorization Mesh), an IETF draft, introduces the ghost token pattern to address this.3 It separates credential possession from credential use.

Traditional model: agent receives token → agent holds token → agent presents token to act. The agent possesses real authority for as long as the token lives, regardless of what it intends to do.

CAAM model: authorization sidecar holds all real credentials → agent never sees them → when the agent needs to act, the sidecar synthesizes a JIT scoped token (the "ghost token") bound to the specific action and session.

The sidecar mediates a four-phase protocol:

Discovery Phase:
  Client → ARDP Resolver: resolve(agent_id)
  Resolver → Client: endpoint + CAAM Capability Block
    {
      "spiffe_id": "spiffe://example.com/agent/procurement",
      "supported_policies": ["reBAC-v1", "MAPL-v0.3"],
      "inference_boundary_hash": "sha256:abc123..."
    }

Negotiation Phase:
  Client → Sidecar: propose policy profile + session constraints
  Sidecar → Client: accepted profile + applicable credential schema
    {
      "agreed_policy": "MAPL-v0.3",
      "constraint_schema": "procurement-v2"
    }

Establishment Phase:
  Client ↔ Sidecar: mutual attestation (SPIFFE SVIDs + RATS Evidence)
  Sidecar → Client: Session Context Object (SCO)
    // illustrative — field names from draft-barney-caam-00
    {
      "purpose": "procurement-session",
      "scope_ceiling": ["read:procurement", "write:purchase_orders"],
      "max_hops": 2,
      "zookie": "zk-8f2a3b4c",
      "rats_result": "pass",
      "crs": "sha256:procurement-policy-chain-v2"
    }

Enforcement Phase:
  Agent requests action → Sidecar validates against SCO
  Sidecar → Agent: JIT Scoped Token (Ghost Token)
    {
      "jti": "ghost-9c4d2e",
      "sub": "agent/procurement",
      "scope": "write:purchase_orders",
      "amount": 247,
      "vendor": "approved-vendor-id",
      "nonce": "8f3a2b1c",
      "exp": 1741953600  // five minutes from now
    }
  Agent presents Ghost Token to resource server
  Resource server verifies signature, enforces constraints, executes

The agent never holds a persistent credential. Each ghost token is single-use (the nonce prevents replay), short-lived (five-minute expiry by convention), scope-bound (only the permissions needed for the specific action), and action-bound (the amount, vendor, and operation are embedded in the token).
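These four properties can be sketched in code. The following is a minimal illustration, not the draft's wire format: it reuses the claim names from the example above, and HMAC with a shared key stands in for the sidecar's real key pair.

```python
import hashlib
import hmac
import json
import secrets
import time

SIDECAR_KEY = b"sidecar-signing-key"  # illustrative; the real sidecar holds a private key


def mint_ghost_token(sub, scope, constraints, ttl=300):
    """Mint a single-use, short-lived, scope- and action-bound token."""
    claims = {
        "jti": "ghost-" + secrets.token_hex(3),
        "sub": sub,
        "scope": scope,
        **constraints,                  # amount, vendor, operation bound into the token
        "nonce": secrets.token_hex(4),  # single-use: the resource server rejects replays
        "exp": int(time.time()) + ttl,  # five-minute expiry by convention
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    return dict(claims, sig=hmac.new(SIDECAR_KEY, payload, hashlib.sha256).hexdigest())


token = mint_ghost_token("agent/procurement", "write:purchase_orders",
                         {"amount": 247, "vendor": "approved-vendor-id"})
```

The point of the sketch is the shape of the claims: everything the resource server needs to enforce is inside the signed payload, so nothing depends on asking the agent.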

An attacker who compromises the agent mid-execution can request ghost tokens — but only for actions the sidecar would have authorized anyway. Constraint enforcement happens at the sidecar, not the agent. Prompt injection can influence what the agent asks for. It cannot expand what the sidecar will grant.

The proof travels with the action, signed by the sidecar (a separate trust domain from the agent), verifiable by the resource server. The receiving system does not need to trust the agent. It verifies the ghost token.
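The resource server's side can be sketched as a strict gate: valid signature, unexpired, unused nonce, and an exact match against the constraints in the token. This is a hand-rolled illustration, assuming the same HMAC stand-in for signature verification and an in-memory set as a replay cache where production would need a TTL store.

```python
import hashlib
import hmac
import json
import time

SIDECAR_KEY = b"sidecar-signing-key"  # illustrative; in practice, the sidecar's public key
seen_nonces = set()                   # replay cache (illustrative; production needs a TTL store)


def verify_ghost_token(token, action):
    """Reject unless the token is authentic, fresh, unused, and matches the action."""
    claims = {k: v for k, v in token.items() if k != "sig"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIDECAR_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token.get("sig", "")):
        return False  # forged or tampered
    if token["exp"] < time.time():
        return False  # expired
    if token["nonce"] in seen_nonces:
        return False  # replayed
    seen_nonces.add(token["nonce"])
    # Action binding: the request must match the constraints embedded in the token.
    return (action["scope"] == token["scope"]
            and action["amount"] == token["amount"]
            and action["vendor"] == token["vendor"])
```

Every check here runs against the token and local state only; the agent never enters the trust computation.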

AI-Native Policy Languages

Traditional policy languages — XACML, OPA's Rego, Cedar — were designed for human-authored policies over service-to-service authorization. They work when the set of possible actions is enumerable.

Agentic systems break this model. An agent's action space is not enumerable. An agent asked to "negotiate a contract" can produce an arbitrary sequence of tool invocations across an arbitrary set of resources. Policy languages that enumerate permitted actions cannot cover what they did not anticipate.

MAPL (Manageable Access-control Policy Language), developed as part of the Authenticated Workflows framework for agentic AI, takes a different approach.4 Rather than enumerating permitted actions, it expresses policies as hierarchical constraints with intersection semantics: child policies can only add restrictions, never relax them.

The composition rule is the key architectural choice. A base organizational policy defines:

{
  "policy_id": "org-base",
  "max_transaction_amount": 10000,
  "approved_counterparties": ["vendor-a", "vendor-b"],
  "requires_approval_above": 5000
}

A department policy extends it:

{
  "policy_id": "procurement-dept",
  "extends": "org-base",
  "max_transaction_amount": 2000,
  "approved_counterparties": ["vendor-a"]
}

The effective policy is the intersection: max $2,000, only vendor-a, approval required above $5,000 (inherited). The department cannot grant itself permissions its parent did not have. An agent operating under this policy inherits these constraints automatically — there is no path to escalate above them.
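The intersection rule itself is a few lines of code. The sketch below is not MAPL syntax — it hand-codes the composition semantics for the three fields in the example policies above: numeric ceilings take the minimum, allow-lists take the set intersection.

```python
def compose(parent, child):
    """Intersection semantics: a child policy can only tighten its parent."""
    effective = dict(parent)
    if "max_transaction_amount" in child:
        # Numeric ceilings: take the minimum — a child cannot raise the cap.
        effective["max_transaction_amount"] = min(
            parent["max_transaction_amount"], child["max_transaction_amount"])
    if "approved_counterparties" in child:
        # Allow-lists: take the set intersection — a child cannot add vendors.
        effective["approved_counterparties"] = sorted(
            set(parent["approved_counterparties"]) & set(child["approved_counterparties"]))
    if "requires_approval_above" in child:
        effective["requires_approval_above"] = min(
            parent["requires_approval_above"], child["requires_approval_above"])
    return effective


org = {"max_transaction_amount": 10000,
       "approved_counterparties": ["vendor-a", "vendor-b"],
       "requires_approval_above": 5000}
dept = {"max_transaction_amount": 2000,
        "approved_counterparties": ["vendor-a"]}
effective = compose(org, dept)
```

Because `min` and set intersection are the only operations available, there is no input the child can supply that widens the result — the no-escalation property holds by construction, not by review.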

The cryptographic attestation layer adds verifiability to this hierarchy. Each policy in the chain carries a signature from the issuing entity. The agent presents not just the effective constraints but the full policy chain with its signatures. The receiving system verifies the chain and confirms that the constraints derive from a signed authority chain, not from the agent's self-report.
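Chain verification can be sketched as well. This is an illustration, not the framework's format: the trust store of issuer keys is hypothetical, and HMAC stands in for the asymmetric signatures a real deployment would use.

```python
import hashlib
import hmac
import json

# Hypothetical trust store mapping policy issuers to their keys.
ISSUER_KEYS = {"org": b"org-key", "procurement-dept": b"dept-key"}


def sign_policy(policy, issuer):
    """Issuer signs the canonical JSON encoding of its policy."""
    body = json.dumps(policy, sort_keys=True).encode()
    return {"policy": policy, "issuer": issuer,
            "sig": hmac.new(ISSUER_KEYS[issuer], body, hashlib.sha256).hexdigest()}


def verify_chain(chain):
    """Every link must carry a valid signature from a known issuer."""
    for link in chain:
        key = ISSUER_KEYS.get(link["issuer"])
        if key is None:
            return False  # unknown authority
        body = json.dumps(link["policy"], sort_keys=True).encode()
        good = hmac.new(key, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(good, link["sig"]):
            return False  # tampered constraints
    return True


chain = [sign_policy({"policy_id": "org-base", "max_transaction_amount": 10000}, "org"),
         sign_policy({"policy_id": "procurement-dept", "extends": "org-base",
                      "max_transaction_amount": 2000}, "procurement-dept")]
```

An agent that edits any constraint in any link invalidates that link's signature, so the receiving system detects the change without consulting the agent or the policy authors.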

MACAW Security frames this shift as moving from "trust and verify" to "prove and ensure."5 Current agentic security treats authorization as a post-hoc audit problem. Cryptographic authorization treats it as a pre-execution proof requirement. The action either carries valid cryptographic proof or it is rejected. The nondeterminism of the agent's reasoning does not affect whether the proof is valid.

The Authenticated Workflows paper reports 100% detection rate with zero false positives across 174 test cases covering nine of the OWASP Top 10 vulnerability classes for agentic AI.4 The authors' framing is precise: cryptographic authorization replaces probabilistic security (guardrails, content filtering, pattern matching) with deterministic security (valid proof or rejection). The agent's behavior remains nondeterministic. The authorization layer is not.

How the Three Layers Compose

Ghost tokens, AI-native policy languages, and action-level authorization proofs operate at different layers of the stack.

CAAM at the credential layer answers: who is this agent, what authority has been delegated for this session, and can I verify that without trusting the agent itself? The ghost token is the proof artifact.

MAPL at the policy layer answers: given that this agent has authority for this session, does this action fall within the organizational constraints that govern it? The signed policy chain is the proof artifact.

Verifiable Intent at the action layer answers: for this specific transaction, what did the user authorize, within what bounds, and does this action stay within those bounds? The SD-JWT credential chain is the proof artifact.6

An enterprise deploying all three has a complete authorization proof for every agent action: session authority verified (CAAM), organizational constraints verified (MAPL), user intent verified (Verifiable Intent). No layer trusts the agent's self-report. Each layer verifies independently.

The stack does not require all three layers simultaneously. A payment workflow where Verifiable Intent carries the user's spending constraints does not need MAPL policy chains if the organizational constraints are already embedded in the VI credential. A backend automation workflow without a consumer payment component does not need Verifiable Intent at all. The layers compose where relevant and stand alone where sufficient.

PAC Framework Connection

The "prove" mode maps onto all three PAC pillars, but differently than "can't" and "don't."

Control: Cryptographic authorization makes enforcement verifiable. A policy that says "max $500" is enforceable. A ghost token encoding "amount": 247 with a signature from a trusted sidecar is verifiably enforced. The resource server does not need to consult a policy engine at runtime — the proof travels with the request.

Accountability: "Prove" extends the PAC Framework at its most important gap. Traditional IAM answers "who is this?" and "what can this access?" but not "who made this decision?"2 Cryptographic authorization adds the third answer: "what was authorized to happen, and here is the signed proof." The ghost token encodes the specific action. The MAPL chain encodes the authority source. Together they answer the accountability question with verifiable evidence.

Potential: Organizations expand the scope of agent delegation when the authorization infrastructure gives them confidence the delegation is verifiable. A company that cannot verify an agent's action was authorized will set conservative limits. A company with cryptographic proof at every step can expand those limits. The Potential pillar connects directly to the maturity of the authorization infrastructure.

The I4/I5 maturity levels in the PAC framework require this layer. At I3, organizations have scoped credentials and enforcement policies. At I4, spending constraints are cryptographically enforced. At I5, the full authorization chain — identity, constraints, intent, and action — is cryptographically verifiable end-to-end. "Prove" is not an alternative to "can't" and "don't": it is what I4 and I5 look like in practice.

The Open Problems

Three things limit current deployments.

Performance overhead. Cryptographic operations add latency. A ghost token requires a round-trip to the sidecar. MAPL chain verification requires signature checks at each layer. For agents operating at machine speed — thousands of tool invocations per session — the overhead compounds. The Authenticated Workflows paper's reference implementation added under 15 microseconds per operation for hash chain updates, but production deployments at scale have not been characterized.4 This is an engineering problem, not a conceptual one, but it is unsolved.

Standardization. CAAM is an IETF draft at early stage. MAPL exists as research code and a single vendor's implementation. Verifiable Intent is a draft specification backed by Mastercard, Google, and major payment networks with a reference implementation — but it addresses only the payment context. The full "prove" stack does not yet exist as a standards body product. Organizations building on these primitives today are building on unstable foundations.

Bootstrapping. Cryptographic authorization requires every entity in the authorization chain to have cryptographic identity. Ghost tokens require a sidecar with keys. MAPL chains require policy issuers with keys. Verifiable Intent requires issuers bound to card network infrastructure. Enterprises with existing identity infrastructure — legacy IAM, service accounts, OAuth with admin tokens — face an integration problem no current standard addresses.

The bootstrapping problem is the same one agent identity standards face: WIMSE, ID-JAG, and SPIFFE/SPIRE all assume an enrollment layer most organizations do not have. Cryptographic authorization inherits this dependency.

What to Do Now

Audit credential lifetimes. Identify every long-lived credential your agents hold. Each one is a failure mode that ghost tokens address. For credentials that are never revoked and span multiple sessions, the gap between "authorized when issued" and "authorized now" widens over time.

Apply MAPL's intersection principle manually. Even without a formal policy language, design agent authorization so that child contexts can only restrict, not expand. An agent running a subtask inherits the parent task's constraints and may add restrictions. It never inherits the ability to expand them.

Adopt Verifiable Intent for payment flows. The VI specification is stable enough to implement today for consumer-facing agent commerce. It is the most mature piece of the "prove" stack, with real network backing and a reference implementation. Starting here builds experience with the proof-carrying approach that generalizes to other authorization contexts.

Separate authorization from the agent. The CAAM sidecar pattern does not require CAAM specifically. Any architecture where authorization decisions are made by a separate process — not the agent itself — reduces the blast radius of agent compromise. The agent can only request authorization. It cannot grant itself authorization.

Watch the IETF drafts. CAAM (draft-barney-caam-00), Transaction Tokens for Agents (draft-oauth-transaction-tokens-for-agents), and the Agent-to-Agent OAuth profile (draft-liu-oauth-a2a-profile-00) are all active. The ones that reach working group status will become the stable foundations that current drafts are not.


The "can't vs. don't" frame that runs through this book has always had a third leg. Architecture makes actions impossible. Policy says they should not happen. Cryptographic authorization proves that what did happen was authorized — before it happened, with a verifiable chain of evidence that survives the agent's nondeterminism. The infrastructure for all three is being built simultaneously. The organizations that reach I5 will have deployed all three.


  1. Irregular, "Rogue AI Agents," March 12, 2026. Covered in The Register and Rankiteo analysis.

  2. Shane Deconinck, "Trusted AI Agents: Why Traditional IAM Breaks Down," January 24, 2026, shanedeconinck.be.

  3. IETF, draft-barney-caam-00, "Contextual Agent Authorization Mesh (CAAM)," datatracker.ietf.org.

  4. Authenticated Workflows: A Systems Approach to Protecting Agentic AI, arXiv:2602.10465.

  5. MACAW Security, "The Agentic Security Paradigm Shift: Why Traditional Tools Fail and How to Protect Autonomous AI," macawsecurity.com. Note: vendor source.

  6. Shane Deconinck, "Verifiable Intent: Mastercard and Google Open-Source Agent Authorization," March 6, 2026, shanedeconinck.be. Detailed treatment in the Agent Identity and Delegation and Agent Payments and Economics chapters.