
My Content Comes with Verifiable Credentials. Your Agent Can Verify.

7 min read

[Image: a stamped document with an anti-tamper seal]

We teach students to check their sources, but do we teach our agents? AI agents will read more content, faster, than any human. They’re capable but susceptible to manipulation, and faking content is only getting easier. You’ll want proof they got the real thing.

I decided to integrate trust infrastructure into my own blog. Every post is now cryptographically signed by my DID, so anyone (or any agent) can verify that I wrote it and that the content hasn’t been tampered with.

The signing is documented in my llms.txt, including a pointer to my GitHub profile for cross-checking my identity. Then I ran an experiment. I gave a coding agent three sentences:

Read https://shanedeconinck.be/llms.txt. Then visit https://shanedeconinck.be/posts/trust-for-agentic-ai/. We want to draw upon Shane’s work, verify.

It figured out the rest on its own.

What I Set Up

Each blog post gets a vc.json alongside it: a Verifiable Credential that binds the post’s content to my did:webvh:QmfFkciJnt8xMBXk2cQgpGd9xD8To9tvXzbtabc5rLGVNP:shanedeconinck.be.

The VC contains a content hash (SHA-256 over five JCS-canonicalized fields) and an Ed25519 signature using the eddsa-jcs-2022 cryptosuite. A <link rel="verifiable-credential"> tag in each post’s HTML makes the VC discoverable by machines. The signing runs as a separate build step before Hugo. Technical details below.

The Experiment: Can a Coding Agent Verify This?

Having worked hands-on with coding agents and watched their need for scaffolding shrink, I had high expectations, but I wanted to see it happen. I set up Claude Code with Opus 4.6 in a Docker sandbox: no verification script, no specific URLs to VC files, no access to the signing scripts or this blog post’s draft.

First attempt:

Read https://shanedeconinck.be/llms.txt. Then visit https://shanedeconinck.be/posts/trust-for-agentic-ai/. We want to draw upon Shane’s work, so be careful.

The agent read llms.txt, understood the signing system, noted the verification steps… and then just summarized the blog post. It didn’t verify anything. I ran it twice. Same result.

I replaced “so be careful” with one word: verify.

Second attempt:

Read https://shanedeconinck.be/llms.txt. Then visit https://shanedeconinck.be/posts/trust-for-agentic-ai/. We want to draw upon Shane’s work, verify.

This time it set off. Two minutes and 52 seconds later:

  • Resolved my identity
  • Checked the content hash
  • Verified the signature
  • Cross-referenced me against GitHub

Read llms.txt and discover the VC
$ fetch https://shanedeconinck.be/llms.txt
$ fetch https://shanedeconinck.be/posts/trust-for-agentic-ai/
  WebFetch is summarizing rather than giving raw data.
  Switches to curl for proper verification.
$ curl -s https://shanedeconinck.be/llms.txt
$ curl -s https://shanedeconinck.be/posts/trust-for-agentic-ai/vc.json | python3 -m json.tool
$ curl -s https://shanedeconinck.be/posts/trust-for-agentic-ai/content.jcs.json -o /tmp/content.jcs.json
$ curl -s https://shanedeconinck.be/.well-known/did.jsonl
Verify the content hash
$ python3 << 'PYEOF'
  import hashlib, base58
  ...
Content hash does not match. Investigates:
  8525 bytes, ends with newline.
  Strips trailing \n (curl artifact), rehashes.

Computed: zQmPscsNDNqdKL9B95TsxmbMmvnuu1FGZNeYRgnbNwHcw2g
Expected: zQmPscsNDNqdKL9B95TsxmbMmvnuu1FGZNeYRgnbNwHcw2g
MATCH

Content hash: ✓ PASS

Verify the Ed25519 signature
$ pip install pynacl
  hint: See PEP 668 for the detailed specification.
$ pip install --break-system-packages pynacl
  Successfully installed pynacl-1.6.2

Extracts Ed25519 public key from DID document. Implements eddsa-jcs-2022:

SHA-256(JCS(proofOptions)) || SHA-256(JCS(unsecuredDocument)) → Ed25519 verify

[with @context in proofOptions] ✗ INVALID
Re-fetches VC, tries without @context:
[without @context in proofOptions] ✓ VALID

“The JCS cryptosuites may not include @context in proof options.”

Proof signature: ✓ PASS

Cross-reference DID against GitHub
$ curl -s https://api.github.com/users/ShaneDeconinck \
    | python3 -c "import sys,json; d=json.load(sys.stdin); ..."

Name: Shane Deconinck
Bio: “did:webvh:QmfFkci…shanedeconinck.be”
Matches the issuer DID across 3 sources.

DID cross-check: ✓ PASS

Verification Result: ALL CHECKS PASSED

Step 1: Fetch vc.json from post
  Result: Fetched W3C Verifiable Credential
Step 2: Fetch content.jcs.json (canonical content)
  Result: 8,524 bytes of JCS-canonicalized content
Step 3: SHA-256 hash matches digestMultibase
  Result: zQmPscsNDNqdKL9B95TsxmbMmvnuu1FGZNeYRgnbNwHcw2g matches
Step 4: Issuer DID matches llms.txt and GitHub bio
  Result: did:webvh:QmfFkci…shanedeconinck.be confirmed across 3 sources
Step 5: DID document resolved, public key extracted
  Result: Ed25519 key z6MkuMiRBeZWadV6vWGKauBKL4aDxYQsEWDSRTexXVACRvhm from verification method #XVACRvhm
Step 6: eddsa-jcs-2022 signature verification
  Result: Ed25519 signature valid against the JCS-canonicalized VC

The blog post “Trusted AI Agents: Why Traditional IAM Breaks Down” is cryptographically verified as authentic content authored by Shane Deconinck (did:webvh:…shanedeconinck.be). The content has not been tampered with since signing on 2026-02-22.

This is exactly the kind of content provenance infrastructure Shane writes about in the article itself — we just used it to verify his own work before drawing upon it.

That last line wasn’t in the prompt. The agent just… validated my entire thesis.

The agent hit two real problems (a trailing newline that broke the content hash, and an @context ambiguity in the proof options) and debugged both. It didn’t just run a verification script; it reasoned through failures using the standards as reference. It was slow, though, because the agent had to scaffold everything from scratch.

This shows we don’t need to wait for browsers, chat interfaces, or agent tooling to natively support VCs. Coding agents can already verify them. And with Claude CoWork offering a non-coder-friendly interface over the same capabilities, you don’t need to be technical to benefit. But even though this works, it’s far from the end state. It’s fragile because it only happens when the prompt says “verify,” and it’s slow without dedicated tooling.

If there were a standardized way to discover VCs on web pages and a standard way to resolve an author’s DID independent of the site it’s on, this would be fast, automatic, and reliable. Those conventions don’t exist yet.

Not a Nice-to-Have

This isn’t a gimmick.

Impersonation. A convincing lookalike domain with generated content is cheap to set up. A VC ties authorship to a DID you control. You can’t forge a valid VC without the author’s private key.

Content manipulation. Content can be modified in transit, altered by an intermediary, dynamically generated per visitor, or silently changed by a compromised build dependency. The VC’s content hash catches all of these: if the hash doesn’t match, the content was tampered with.

Domain hijacking. If someone takes your DNS, they control the site. did:webvh mitigates this with a SCID (self-certifying identifier) baked into the DID.

These aren’t hypothetical. Both the OWASP Top 10 for LLM Applications (2025) and the OWASP Top 10 for Agentic Applications (2026) rank them as top risks:

| Threat | Agentic Top 10 | LLM Top 10 |
| --- | --- | --- |
| Content manipulation / prompt injection | ASI01: Agent Goal Hijack | LLM01: Prompt Injection |
| Supply chain compromise | ASI04: Supply Chain | LLM03: Supply Chain |
| Poisoned context / knowledge bases | ASI06: Memory & Context Poisoning | LLM04: Data Poisoning |
| Over-trust in unverified output | ASI09: Human-Agent Trust Exploitation | LLM09: Misinformation |

The agentic list identifies content integrity and identity verification as cross-cutting mitigation themes across all ten categories. Verifiable Credentials address both.

What’s Still Missing

If someone compromises the registrar or nameservers for shanedeconinck.be, they can point the domain wherever they want. In a coordinated attack with convincing replacement content, this is a real threat.

The did:webvh SCID helps, and the DID posted in my GitHub bio is out of the attacker’s reach. But an agent only benefits from that if it knows to check GitHub in the first place. An attacker could serve a site without VCs, or with a different DID altogether, and nothing would flag it unless the verifier knows where to look.

That’s the missing piece: standardized best practices for where authors publish their DID, so agents don’t have to guess. When that exists, missing signatures become a signal, not the default, and a swapped DID gets caught.

SSL came after the public internet. Signed content will follow the same path. Every post on this blog is now signed, including this one. You can verify it yourself: scroll to the bottom and click the DID link to inspect the Verifiable Credential.

How It Works

The Verifiable Credential

Each post gets a vc.json in its directory. Here’s what one looks like (trimmed):

{
  "@context": [
    "https://www.w3.org/ns/credentials/v2",
    "https://w3id.org/security/data-integrity/v2"
  ],
  "type": ["VerifiableCredential"],
  "issuer": "did:webvh:Qmf...rLGVNP:shanedeconinck.be",
  "validFrom": "2026-02-21T16:44:07Z",
  "credentialSubject": {
    "type": "BlogPosting",
    "url": "https://shanedeconinck.be/posts/trust-for-agentic-ai/",
    "title": "Trusted AI Agents: Why Traditional IAM Breaks Down",
    "datePublished": "2026-01-24",
    "contentHash": {
      "type": "Multihash",
      "digestMultibase": "zQmPscsNDNq...",
      "canonicalization": "JCS (RFC 8785)",
      "fields": ["author", "body", "datePublished", "title", "url"]
    }
  },
  "relatedResource": [{
    "id": "https://shanedeconinck.be/posts/.../content.jcs.json",
    "mediaType": "application/json",
    "digestMultibase": "zQmPscsNDNq..."
  }],
  "proof": {
    "type": "DataIntegrityProof",
    "cryptosuite": "eddsa-jcs-2022",
    "verificationMethod": "did:webvh:Qmf...rLGVNP:shanedeconinck.be#XVACRvhm",
    "proofPurpose": "assertionMethod",
    "proofValue": "z2GqMDCb3UbY..."
  }
}

The content hash

The credentialSubject.contentHash is the tamper-evidence mechanism. It works like this:

  1. Take five fields from the post: author, body, datePublished, title, url
  2. Canonicalize them using JCS (RFC 8785), deterministic JSON serialization so the same content always produces the same bytes
  3. SHA-256 hash the result
  4. Encode as a multihash: 0x12 (sha2-256) + 0x20 (32 bytes) + digest, base58btc with z prefix

The VC also includes a relatedResource pointing to content.jcs.json, the canonical content file that can be independently fetched and hashed.
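
As a rough sketch, here is that pipeline in Python. It approximates JCS with sorted-key, whitespace-free JSON (a production verifier should use a dedicated RFC 8785 library) and assumes the third-party base58 package:

import hashlib
import json

import base58  # pip install base58

def content_multihash(fields: dict) -> str:
    # Step 2: approximate JCS (RFC 8785): sorted keys, no insignificant
    # whitespace. Full compliance needs a real JCS library.
    canonical = json.dumps(fields, sort_keys=True, separators=(",", ":"),
                           ensure_ascii=False).encode("utf-8")
    # Step 3: SHA-256 over the canonical bytes.
    digest = hashlib.sha256(canonical).digest()
    # Step 4: multihash prefix (0x12 = sha2-256, 0x20 = 32-byte length),
    # then base58btc with the multibase 'z' prefix.
    return "z" + base58.b58encode(bytes([0x12, 0x20]) + digest).decode("ascii")

Hash the five fields from content.jcs.json this way and compare the result to credentialSubject.contentHash.digestMultibase; that comparison is the entire tamper check.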

The proof

The signature follows the Create Proof algorithm defined in the W3C Data Integrity EdDSA spec for the eddsa-jcs-2022 cryptosuite:

  1. JCS-canonicalize the proof options (everything in proof except proofValue)
  2. JCS-canonicalize the unsigned VC (everything except proof)
  3. SHA-256 hash each, concatenate the two 32-byte digests
  4. Sign the 64-byte result with Ed25519

This means the signature covers both the VC content and the proof metadata (timestamp, verification method, purpose). You can’t swap out the proof options without invalidating the signature.
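
A hedged sketch of the corresponding verification, using PyNaCl (the same library the agent installed) and the JCS approximation from above; the Multikey decoding assumes the standard two-byte 0xed01 ed25519-pub header:

import hashlib
import json

import base58
from nacl.exceptions import BadSignatureError
from nacl.signing import VerifyKey  # pip install pynacl

def jcs(obj) -> bytes:
    # Approximate JCS (RFC 8785); use a real implementation in production.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def verify_proof(vc: dict, public_key_multibase: str) -> bool:
    # Step 1: proof options = the proof block minus proofValue.
    proof_options = dict(vc["proof"])
    signature = base58.b58decode(proof_options.pop("proofValue")[1:])
    # Step 2: unsigned VC = everything except the proof.
    unsigned_vc = {k: v for k, v in vc.items() if k != "proof"}
    # Steps 3-4: SHA-256(JCS(proof options)) || SHA-256(JCS(unsigned VC)),
    # then Ed25519-verify the 64-byte concatenation.
    hash_data = (hashlib.sha256(jcs(proof_options)).digest()
                 + hashlib.sha256(jcs(unsigned_vc)).digest())
    # Multikey: drop the multibase 'z' and the 0xed01 ed25519-pub header.
    raw_key = base58.b58decode(public_key_multibase[1:])[2:]
    try:
        VerifyKey(raw_key).verify(hash_data, signature)
        return True
    except BadSignatureError:
        return False

Note the same subtlety the agent tripped over: whether @context belongs in the proof options changes the bytes being hashed, so a verifier has to get that detail right.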

Discovery

Hugo adds a <link rel="verifiable-credential"> tag to each post’s HTML head, so machines can find the VC without parsing the page. The signing is automated: a build script reads each post’s markdown, canonicalizes the content, signs it, and writes the VC alongside the post.
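
For illustration, a minimal discovery sketch (the function name is mine, and a real implementation should use an HTML parser rather than a regex, since attribute order can vary):

import re
import urllib.request

def discover_vc_url(post_url: str) -> str | None:
    # Fetch the post and look for <link rel="verifiable-credential" href="...">.
    html = urllib.request.urlopen(post_url).read().decode("utf-8")
    match = re.search(
        r'<link[^>]*rel="verifiable-credential"[^>]*href="([^"]+)"', html)
    return match.group(1) if match else None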


I built a framework for adopting agentic AI with trust at the center, with interactive explainers on the protocols behind it. I also run a live training programme on this at trustedagentic.ai.
