On March 3, I organized an LFDT Belgium meetup, Trusted AI Agents by Design: From Trust Ecosystems to Authority Continuity, as a follow-up to our first meetup last September. This post summarizes the key ideas, and you can watch the full recording here.
Coding agents have become significantly more capable since last September, and models keep improving. But the tension remains. Most agent capabilities stay stuck at the proof-of-concept stage because the moment you want to unleash them in your infrastructure, or across organizations, you hit trust and security questions that don’t have good answers yet. Unlike traditional software that executes predefined logic, agents create their own intent. That’s where all the potential comes from, but it’s also something our IT systems were never designed for.
Our IAM infrastructure was built for humans, within organizational boundaries. Agents aren’t human, and their potential extends well beyond those boundaries. So for this meetup I brought in two speakers working on complementary pieces of this puzzle.
The Three-Legged Stool
The first speaker, Wenjing Chu from Futurewei Technologies, chairs the AI and Human Trust working group at Trust over IP and co-authored the Trust Spanning Protocol (TSP), a foundational protocol for trusted communication. Wenjing opened with three moments that set the stage: the OpenClaw episode showing what happens when personal AI agents run unsupervised, the growing conviction that “software is dead” as agents replace human-facing interfaces, and supply chain risks around who controls your data when everything runs through remote APIs. All three point to the same gap: agents are moving fast, but the trust infrastructure underneath isn’t keeping up.
Wenjing then described three interdependent pillars for trustworthy AI systems.

- Guardrails: what we can enforce outside the model
- Alignment: whether the model tends to follow the right path
- Governance: the rules and regulations we impose
Governance is ineffective without the other two. Alignment is probabilistic by nature. So the talk focused on guardrails: the deterministic systems we can build today.
Whether agents use human credentials directly or are given their own service accounts, they operate in environments built for humans. Patterns like On-Behalf-Of help within a domain, but they still rely on the existing token model and don’t carry authority across trust boundaries. And when agents get their own accounts, those accounts typically get broad, static permissions that don’t match the fine-grained, task-specific authority they actually need. Everything happens in infrastructure that assumes a human is in the loop.
Trust Spanning Protocol: Identity and Delegation for Agents
The solution that TSP proposes starts with two additions to the agent architecture: a wallet and verifiable identifiers. Instead of reusing the human’s credentials, the agent gets its own identity. It presents itself as “I am agent of so-and-so, here’s the authorization I got from them.” That authorization can be verified, scoped, and made accountable.
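To make the “I am agent of so-and-so, here’s my authorization” idea concrete, here is a minimal Python sketch of a verifiable, scoped delegation. All names are illustrative (not the actual TSP data model), and HMAC stands in for a real digital signature over a verifiable identifier:

```python
# Hypothetical sketch of agent delegation: the agent has its own identifier
# and presents a scoped, time-bounded authorization signed by its principal.
# HMAC is a stand-in for a real signature scheme; field names are invented.
import hmac, hashlib, json, time
from dataclasses import dataclass, asdict

@dataclass
class Delegation:
    principal: str      # verifiable identifier of the delegating human/org
    agent: str          # the agent's own verifiable identifier
    scope: tuple        # task-specific authority, e.g. ("calendar:read",)
    expires_at: float   # authority is bounded in time
    signature: str = ""

def _payload(d: Delegation) -> bytes:
    body = asdict(d)
    body.pop("signature")
    return json.dumps(body, sort_keys=True).encode()

def issue(d: Delegation, principal_key: bytes) -> Delegation:
    d.signature = hmac.new(principal_key, _payload(d), hashlib.sha256).hexdigest()
    return d

def verify(d: Delegation, principal_key: bytes) -> bool:
    expected = hmac.new(principal_key, _payload(d), hashlib.sha256).hexdigest()
    return hmac.compare_digest(d.signature, expected) and time.time() < d.expires_at

key = b"alice-signing-key"
grant = issue(Delegation("did:ex:alice", "did:ex:agent1",
                         ("calendar:read",), time.time() + 3600), key)
assert verify(grant, key)            # the authorization checks out
grant.scope = ("calendar:write",)    # tampering with the scope breaks it
assert not verify(grant, key)
```

The point is the shape, not the crypto: the agent never touches the human’s credentials, and anyone holding the principal’s public material can check exactly what was delegated, to whom, and until when.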
The verifiable identifiers are central to the design. They’re long-term and durable, supporting key rotation with pre-commits so agents can build verifiable history over time (and eventually, something like reputation). All communication between identified agents is both authenticated and private: who said what to whom is preserved for accountability, while content and metadata stay protected. Credentials can be layered on top: you bootstrap with basic verifiable identifiers and then build up through message interactions at higher layers.
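The key-rotation-with-pre-commits idea can be sketched as follows. This is only an illustration of the pre-rotation concept (the published log commits to the next key before it is used), not the actual TSP identifier format:

```python
# Illustrative sketch of key rotation with pre-commitment: each rotation is
# valid only if the new key matches a hash published earlier, so a verifier
# can follow the identifier's whole history. Not the real TSP wire format.
import hashlib, os

def commit(key: bytes) -> str:
    return hashlib.sha256(key).hexdigest()

class Identifier:
    def __init__(self):
        self.current = os.urandom(32)      # active signing key (stand-in)
        self._next = os.urandom(32)        # pre-generated successor key
        self.log = [commit(self._next)]    # published commitment to it

    def rotate(self) -> bool:
        # Only the holder of the pre-committed key can rotate; anyone else
        # would fail this check, which is what makes the history verifiable.
        if commit(self._next) != self.log[-1]:
            return False
        self.current = self._next
        self._next = os.urandom(32)
        self.log.append(commit(self._next))
        return True

ident = Identifier()
old = ident.current
assert ident.rotate()           # honest rotation succeeds
assert ident.current != old     # the active key actually changed
```

Because every rotation extends a committed log, the identifier stays durable across key changes, which is what lets an agent accumulate verifiable history.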
Beyond that, the protocol is engineered for practical adoption: binary encoding with text mapping, agnostic to identifier types and payload formats, and designed for easy cryptographic agility. The implementation is compact (currently in Rust) and hosted at the Open Wallet Foundation.

TSP is deliberately thin. It’s the bedrock layer, not the building itself. Agent protocols like MCP and A2A can run on top of it. Replace MCP’s transport layer with TSP, introduce a wallet and identifiers, and you get what Wenjing calls “TMCP”: the same JSON-RPC calls, but now every interaction is authenticated, signed, and traceable. The higher layers become simpler because the foundation handles identity and trust.
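A toy rendering of that “same JSON-RPC, new foundation” idea: an ordinary MCP-style call wrapped in an authenticated envelope. Envelope fields and HMAC signing are my own simplifications, not the TSP wire format:

```python
# Toy sketch of the "TMCP" idea: the JSON-RPC payload is unchanged, but it
# travels inside a signed envelope tying it to sender and receiver
# identifiers. HMAC stands in for a real signature; fields are invented.
import hmac, hashlib, json

def seal(sender: str, receiver: str, rpc: dict, sender_key: bytes) -> dict:
    body = json.dumps({"from": sender, "to": receiver, "payload": rpc},
                      sort_keys=True)
    sig = hmac.new(sender_key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def open_envelope(env: dict, sender_key: bytes) -> dict:
    expected = hmac.new(sender_key, env["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(env["sig"], expected):
        raise ValueError("unauthenticated message")
    return json.loads(env["body"])["payload"]

rpc = {"jsonrpc": "2.0", "method": "tools/list", "id": 1}  # ordinary MCP call
env = seal("did:ex:agent1", "did:ex:server", rpc, b"agent1-key")
assert open_envelope(env, b"agent1-key") == rpc            # same call, now attributable
```

Nothing about the RPC semantics changes; the layer below simply makes every exchange authenticated and traceable.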
If the foundation is solid, credential exchange becomes simple. If not, complexity multiplies at every layer above.
Authority Continuity: The Paradigm Shift
Nicola Gallo from Nitro Agility, who co-chairs the Trusted AI Agents working group at the Decentralized Identity Foundation, came at the problem from a different angle: distributed systems engineering.
Nicola’s starting point: agents aren’t a new kind of security subject. They’re workloads. Smart workloads, but workloads nonetheless. They get replicated and rescheduled, they’re short-lived, and their identifiers are unstable. Yet the authority they carry can’t be.
From there, Nicola’s key observation: AI agents didn’t create new security problems. They just exposed assumptions we were hiding behind. No perimeters, no stable topology, no single owner, no shared configuration trust. The confused deputy, ambient authority, privilege escalation: these aren’t new bugs. They’re consequences of the model we’ve been using.
And the root of that model? We treat authority as an object. We create tokens, store them, transfer them, consume them. Whoever holds the token can exercise the authority. A stolen token works. A replayed token works. A token used in an unintended context works. Possession equals authority.
This works within a perimeter. But agents removed the perimeter. And in distributed systems with asynchronous operations and messaging, the token model collapses: how do you pass tokens when you don’t know the next worker, when it might not come alive before the token expires, when encryption requires shared infrastructure?
The industry workaround: service accounts and access keys that create authority under their own identity. And that’s exactly where the confused deputy problem is guaranteed to appear.
From Possession to Continuity
Nicola’s paradigm shift reframes the structural elements:
- Identity: represents a subject
- Intent: the desired action of that subject
- Authority: identity + intent (created when an identity expresses a will)
- Workload: the executor that continues or creates authority
- Governance: can stop, restrict, or leave authority unchanged, but never expand it
Authority exists only when execution preserves the origin.
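The structural elements above can be sketched as a small data model. The names and types are mine, not PIC’s formal model; the invariant to notice is that governance intersects scopes, so it can restrict or keep authority but never widen it:

```python
# Minimal sketch of the PIC structural elements (illustrative names):
# authority pairs an identity with an expressed intent over a scope, and
# governance may only shrink that scope, never grow it.
from dataclasses import dataclass

@dataclass(frozen=True)
class Authority:
    identity: str        # the subject that expressed the will
    intent: str          # the desired action
    scope: frozenset     # resources the intent may touch

def govern(auth: Authority, allowed: frozenset) -> Authority:
    # Intersection guarantees the result is never broader than the origin.
    return Authority(auth.identity, auth.intent, auth.scope & allowed)

origin = Authority("did:ex:alice", "summarize",
                   frozenset({"report.pdf", "notes.md"}))
narrowed = govern(origin, frozenset({"report.pdf"}))
assert narrowed.scope <= origin.scope      # restricted by construction
assert narrowed.scope == {"report.pdf"}
```

Because restriction is built into the only operation governance has, “never expand” is a structural property rather than a policy you hope gets enforced.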

This is PIC: Provenance, Identity, Continuity. The new primitive is proof of continuity instead of proof of possession. Each execution step forms a virtual chain: the workload proves it can continue under the received authority, satisfying the guardrails (department membership, company affiliation, etc.). The trust plane validates this at each step and creates the next link. Authority can only be restricted or maintained, never expanded.
An important nuance: to continue authority, a workload doesn’t need its own identity. It just needs to prove it can operate within the received authority’s constraints. But to create authority, you need an identity and an expressed intent. That distinction is what makes the model work.
Under this model, the confused deputy isn’t detected or mitigated. It’s eliminated. And that’s mathematically proven. If Alice asks an agent to summarize a file she doesn’t have access to, the agent can’t execute under its own authority: the continuity chain carries Alice’s original permissions. The only way to access that file is to create new authority, which is a deliberate act with its own accountability, not an accidental confused deputy.
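The Alice scenario can be sketched as a continuity chain. This is an illustration of the concept, not PIC’s actual proof system: each hop may continue the received authority but can only keep or narrow its scope, so an agent can never reach a file its originator couldn’t:

```python
# Sketch of a continuity chain (illustrative, not PIC's formal model): each
# link records who continued the authority and under what scope; any attempt
# to widen the scope is rejected, eliminating the confused deputy.
def continue_authority(chain: list, workload: str, scope: set) -> list:
    inherited = chain[-1]["scope"]
    if not scope <= inherited:                    # expansion attempt
        raise PermissionError(f"{workload} cannot widen authority")
    return chain + [{"workload": workload, "scope": scope}]

# Alice's original authority does not include secret.txt.
chain = [{"workload": "did:ex:alice", "scope": {"inbox", "calendar"}}]
chain = continue_authority(chain, "planner-agent", {"calendar"})   # narrowing: ok
try:
    continue_authority(chain, "summarizer-agent", {"secret.txt"})  # blocked
    blocked = False
except PermissionError:
    blocked = True
assert blocked
```

The agent holding the chain has no token of its own to fall back on; reaching `secret.txt` would require creating fresh authority under a new identity and intent, which is a deliberate, accountable act.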
Nicola also showed that performance isn’t a blocker: executing a continuity chain takes microseconds, comparable to a token exchange call. The overhead is a deployment concern, not an architectural one.
Where They Meet

The Q&A revealed how these approaches complement each other.
TSP solves the cross-domain trust problem. How do you verify who you’re dealing with across organizational boundaries? Verifiable identifiers, authenticated channels, delegation that travels with the request.
PIC solves the authority propagation problem. Once you’re inside a system, how do you ensure that the permission scope doesn’t expand as work passes between agents, APIs, and workloads?
Both approaches share a conviction: existing web protocols (HTTP, OAuth, TLS) are mature and valuable, but insufficient for a world where agents replace many human-software interactions. The workflows are fundamentally different. These are machine protocols operating at scale, without a human clicking through an interface. And unlike human employees whose roles change occasionally, agents perform diverse, one-off tasks that can’t be pre-categorized into static permission sets. Authorization needs to be dynamic, fine-grained, and task-specific.
And both are designed to work with existing infrastructure, not replace it. PIC can use OAuth as a federated backbone, embedding its causal authority in custom claims. TSP is agnostic to identifier types, making it compatible with systems like EUDI wallets and verifiable credentials.
The Cross-Domain Challenge
One exchange stuck with me. Nicola described a scenario where an agent is authorized to “close a deal” at one company, meaning: sign, reject, or renegotiate. When that authority crosses to a counterparty where “close a deal” means only sign or reject, the semantics diverge. The agent might negotiate when it was only expected to accept or reject.
This isn’t unique to agents. The same problem exists when federating OAuth scopes across identity providers. But agents amplify it because they operate dynamically across domains we can’t anticipate.
Solving this requires not just identity and authority, but shared understanding of what actions mean across boundaries. That’s a layer that neither TSP nor PIC claims to fully solve today. But by getting identity, communication, and authority propagation right at the foundation, the semantic layer above becomes tractable.
What’s Clear
The most compelling insight from the meetup: we need to stop solving new problems with old ontologies. Configuration can’t fix a broken model. Token-based authority was designed for a world with perimeters and human operators. That world is fading.
The path forward requires:
- Separating agent identity from human identity
- Authority that is continued, never recreated
- Guardrails that are structural, not advisory
- Protocols designed for machine-to-machine interaction at scale
Both TSP and PIC are open, composable, and early enough for the community to shape. If you want to get involved: the AI and Human Trust working group at Trust over IP is developing the TSP-based agent architecture, and PIC is being developed with a formal model and growing community around it.
The fabric for trusted agents won’t come from any single effort. But the pieces are starting to fit together.