Shane Deconinck · Trusted AI Agents · Decentralized Trust

AI Agents: Why Context Infrastructure May Be Your Best Long-Term Investment

6 min read

Invest in context

Picture two organizations adopting agentic AI. Same model, same tools, same budget. One has invested in structuring its information: discoverable, well-managed, with access controls that work for both humans and agents. The other has valuable knowledge scattered across tools, unversioned, or still in people’s heads.

With every model upgrade, the first organization captures more value. The second accumulates more risk. What separates them has nothing to do with the model.

Everything Around the Model Depreciates

The first generation of AI that organizations could actually deploy required training or fine-tuning to be useful for anything specific. Organizations invested in curating datasets, training custom models, building specialized pipelines. Once a team had the data, training and tuning were a technical exercise.

Then emerged general-purpose models strong enough to cover most tasks out of the box. The custom model you spent months training on a narrow task? The next general-purpose release could make it redundant. And it’s not just language models: image, audio, video, code. The same pattern is playing out across model types. Fine-tuning became the first investment to depreciate.

The response was RAG (Retrieval-Augmented Generation): vector databases, embeddings, retrieval pipelines. Instead of baking knowledge into the model, feed it at inference time. That worked, but it added a layer of its own: preprocessing, pipeline complexity, and drift.

As models got better at reasoning over raw sources, that layer started thinning too. Today’s coding agents outperform many RAG setups with nothing more than raw files and basic commands that search and read them.

The same pattern plays out in code and low-code alike. Most of the energy in agentic AI goes to framework selection and orchestration: how to work around the model’s limitations. Then the model improves, and the workarounds become obsolete. The scaffolding you built, whether in code or in a visual workflow builder, is now fighting the model’s new capabilities. So you throw it away.

Training depreciates. Code depreciates. That’s why every hot new model release sends a wave of excitement across social media: it’s another round of scaffolding that can be deleted and capabilities that get unlocked.

Meanwhile, access to the most capable AI on the planet went from requiring a research lab to requiring a credit card. Your competitor has the same model you do, tomorrow.

When access to the most capable AI takes a credit card, what becomes the differentiator?

But there’s a layer that doesn’t depreciate. It appreciates. With every model upgrade, it becomes more valuable.

That layer is context. Your context is the lasting competitive advantage.

What Context Means Here

Useful context requires two things: well-curated information and well-managed access to it. Supplying the right information at the right time, aligned with policy.

For decades, organizations have been fighting information silos, duplicate systems, inconsistent data. Expensive, but manageable when software was rigid and humans were the consumers. Now software becomes fluid. Agents can traverse, query, and act on anything they can reach. The mess gets amplified. An agent loose in poorly managed information doesn’t just find the wrong answer. It acts on it. At machine speed.

But the inverse is also true. Well-structured, discoverable, properly governed information becomes exponentially more valuable when agents can consume it. The same cleanup you should have done for humans now pays compound interest through agents.

The principle: structure organizational knowledge after the domain, not after today’s tool or framework. A customer relationship represented as an entity with attributes, not as rows in a CRM export. A policy captured as structured rules, not buried in a PDF. When information is modeled after what it actually represents, any tool, any agent, any future system can consume it.
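As a minimal sketch of what domain-first modeling can look like (all names and fields below are invented for illustration, not a prescribed schema):

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative domain entity: the customer relationship modeled after what
# it represents, independent of whichever CRM currently stores it.
@dataclass
class CustomerRelationship:
    customer_id: str
    account_owner: str      # the responsible person, not a CRM user ID
    segment: str            # e.g. "enterprise", "smb"
    renewal_date: date
    open_commitments: list[str] = field(default_factory=list)

# Illustrative policy captured as a structured, machine-checkable rule
# instead of prose buried in a PDF.
@dataclass
class PolicyRule:
    rule_id: str
    applies_to: str         # e.g. "discounting"
    condition: str          # e.g. "discount_pct > 20"
    effect: str             # "allow" | "deny" | "require_approval"

cap = PolicyRule("discount-cap", "discounting", "discount_pct > 20", "require_approval")
```

Because the shape follows the domain rather than a vendor’s export format, the same entities survive a CRM migration and can be handed to any agent or tool that comes later.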

Keep it current, not stale snapshots. Keep it simple: minimal exotic dependencies, standard formats. The simpler the format, the more models can consume it now and in the future. The specific storage and delivery choices (knowledge graphs, databases, files, API layers) matter, and teams need to work through those tradeoffs. But what ultimately counts is: can an agent find what it needs and understand what it means?

What This Looks Like in Practice

Look at the most capable agents running today. Their architecture is surprisingly thin, but their context is rich.

Claude Code: a single loop, a handful of tools, a CLAUDE.md file checked into git. When the team sees the model make a mistake, they don’t write code. They write a sentence in the context file. No vector databases. No embeddings. Just files and search. Every team at Anthropic maintains its own CLAUDE.md, building up organizational knowledge that any future model can leverage. That’s context infrastructure in action: simple files, continuously curated, immediately valuable.
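A hypothetical fragment of such a file (the project details here are invented) shows how cheap the mechanism is; correcting an agent’s behavior is one sentence, not a pipeline:

```
# CLAUDE.md

## Conventions
- Run `make test` before proposing any commit; tests live in /tests.
- Dates in this repo are ISO 8601. Never reformat them.

## Known pitfalls
- The billing module caches aggressively. After changing a price rule,
  invalidate the cache or the change silently won't apply.
```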

Clawdbot, the autonomous agent that made headlines: its entire personality, goals, and operational rules live in a text file. The “soul” is a SOUL.md file. The architecture is: files, a powerful LLM, and an execution environment. That’s it. And it worked so well that people started anthropomorphizing it. Nobody anthropomorphizes code. They anthropomorphize what emerges when rich context meets a capable model. That’s how powerful the context layer is.

But what went wrong with Clawdbot wasn’t the soul file or the model. It was the missing constraints. Context without proper access management is a liability. Rich context made Clawdbot compelling. Missing access controls made it dangerous.

The lesson: you need both sides. Curated information and governed access. Either one without the other falls short.

Data might be the real differentiator. Not just what you have, but how discoverable and well-governed it is.

Five Areas to Invest In

Structure. Whether it lives in files, databases, or graphs: make it coherent. Consistent naming, clear relationships, machine-consumable. Information that makes sense to a human should make sense to an agent.

Permissions. Fine-grained access on the information itself. Not “can the agent access the database” but “can this agent, acting for this user, see this specific piece of information for this task.”

Discovery. Agents need to find what they need. Standards are emerging for this: MCP for tool integration, A2A for agent communication. Prefer them over custom pipelines that break when you switch frameworks.

Authority. Access scoped to the delegating user’s authority. Use patterns like On-Behalf-Of and PIC, where authority travels with the request, decreasing through the chain and never escalating. The agent sees what the user is allowed to see, for this task. A sketch follows this list.

Freshness. Up to date, or at least versioned. Stale information fed to an agent is worse than no information: it acts on it with full confidence. Wrong context produces wrong decisions at machine speed.
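To make the permission and authority points concrete, here is a minimal sketch in Python (the names and scope strings are invented): an On-Behalf-Of style delegation where the agent’s effective scope is the intersection of what the user may see and what the task requests, so authority can only narrow as it moves down the chain.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Delegation:
    """Authority carried with the request: who delegated, and which scopes."""
    principal: str          # the user the agent acts on behalf of
    scopes: frozenset[str]  # what that principal may access

    def attenuate(self, requested: set[str]) -> "Delegation":
        # Each hop intersects its delegated scopes with what the next step
        # requests: scopes can be dropped, never added. No escalation.
        return Delegation(self.principal, self.scopes & frozenset(requested))

def can_read(delegation: Delegation, resource_scope: str) -> bool:
    """Fine-grained check: this agent, for this user, for this resource."""
    return resource_scope in delegation.scopes

# A user delegates to an agent for one task...
user = Delegation("alice", frozenset({"crm:read", "policies:read", "billing:read"}))
task = user.attenuate({"crm:read"})        # the task only needs CRM access

assert can_read(task, "crm:read")          # within the delegated authority
assert not can_read(task, "billing:read")  # attenuated away; cannot escalate
```

The important property lives in attenuate: every hop in the chain can drop scopes but never add them, which is what keeps a long delegation chain from silently escalating.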

All of this is use-case agnostic. It’s an investment, but the value compounds: every model, every agent, every future use case benefits from the same infrastructure.

The Compounding Effect

Reliability still matters, and you still need to test and measure it. But as models improve, less custom workflow code is required to reach the same bar. That’s agility: when the next model drops, you’re not rewriting orchestration. You’re plugging the new model into infrastructure that’s already there, where it leverages the existing structure instead of fighting it.

When a better model arrives, an organization with mature context infrastructure captures more value instantly. Less code needed, more capability unlocked. Permission boundaries are already enforced. The upgrade is frictionless.

An organization without that infrastructure gets a more capable model running on the same mess. Same silos, same ungoverned data, same unclear authority chains. Faster, more autonomous, and with the wrong context or goals: more dangerous.

A better model doesn’t know your context. Feed it the wrong one, and it delivers little or even negative value.

This is the early mover advantage. You’re not investing in today’s model or today’s code. Both are temporary. You’re investing in the infrastructure that makes every future model more valuable.

The general trajectory is clear: models keep getting more capable. The context infrastructure you build today is positioned to benefit from every improvement that follows.

Start Now

Structuring information, cleaning up silos, building permission architecture: none of this is new. Organizations have known they should do it for years. The problem was always justifying the cost.

That calculus just changed. What used to be a cost is becoming a source of value. Every piece of structured information, every well-defined permission boundary, every authority chain becomes more valuable with each model upgrade. The ROI on this work is no longer theoretical.

The organizations that start now won’t just be ready for today’s agents. They’ll be ready for every generation that follows.


If you want to assess where your organization stands, start with the 18 questions I’d bring to the boardroom.