Shane Deconinck Trusted AI Agents · Decentralized Trust

When Intelligence Becomes Commodity, Infrastructure Becomes the Edge


Until recently, building with AI meant training your own models. You needed proprietary data, specialised compute, a team that understood the stack. That investment was hard to replicate.

That’s changing. General-purpose models, backed by billions in training compute, are now good enough to handle most business tasks without custom training. Open-weight alternatives are closing the gap on standard benchmarks. The intelligence layer is becoming commodity.

But the real shift is in something specific: agents that reason across long tasks, use tools, and recover from errors autonomously. And here, the frontier API models are pulling away.

None of this means specialised systems disappear. A radiology model, a fraud detector - domain-specific ML still has its place. But a general-purpose agent can call them as tools. Your fraud model becomes an API the agent knows when to invoke. It doesn’t replace your stack. It wraps around it.
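The wrapping pattern can be sketched in a few lines: an existing domain model exposed as a tool in a registry the agent reads when deciding what to invoke. This is a minimal illustration, not any specific framework's API - `check_fraud`, the registry shape, and the placeholder scoring logic are all hypothetical.

```python
# Minimal sketch: exposing an existing domain model as a tool an agent can call.
# The registry maps a tool name to a callable plus a description the model reads
# when deciding whether to invoke it. All names and logic here are illustrative.

def check_fraud(transaction: dict) -> dict:
    """Stand-in for an existing fraud model behind an internal API."""
    score = 0.9 if transaction["amount"] > 10_000 else 0.1  # placeholder logic
    return {"fraud_score": score, "flagged": score > 0.5}

TOOLS = {
    "check_fraud": {
        "fn": check_fraud,
        "description": "Score a transaction for fraud risk. Input: {amount, merchant}.",
    },
}

def dispatch(tool_name: str, args: dict) -> dict:
    """What the agent runtime does when the model emits a tool call."""
    return TOOLS[tool_name]["fn"](args)

result = dispatch("check_fraud", {"amount": 25_000, "merchant": "acme"})
print(result)  # the model sees this result and decides its next step
```

The point of the pattern: the agent decides *when* to call the model; the model itself stays exactly where it is.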

There’s still integration complexity. Custom pipelines. Years of engineering. That feels like a moat. But it’s dissolving. The scaffolding you built - the 10-step workflow, the output parsers, the routing logic - was designed to compensate for a weaker model. A frontier model can figure out a better path on its own. What if your harness isn’t helping, but constraining? Someone starting from scratch, with no legacy scaffolding, might outperform you.

So where will the edge be found?

The model ate the pipeline

This isn’t hypothetical. It already happened.

In December 2025, Andrej Karpathy (former AI lead at Tesla, co-founder of OpenAI) went from 80% manual coding to 80% agent coding in weeks. He called it “easily the biggest change to my basic coding workflow in ~2 decades of programming.” Two months later, he described giving an agent a task in plain English, walking away for 30 minutes, and coming back to a working system - something that would have been a weekend project three months prior. DHH, creator of Ruby on Rails, called it “the biggest and fastest change in 40 years. And surprisingly, the most fun too.”

This style of coding agent - minimal scaffolding, the model drives - started with Claude Code and spread fast. OpenCode, Codex, Gemini CLI. The pattern is the same: less harness, more model.

The tools tell the story. Claude Code’s task management started as a structured to-do list the model had to follow. As the model improved, that structure became a constraint - the model performed better when it could alter its own plan. So they replaced it with a flexible task system the model controls. The same happened with search, with context handling, with how the agent asks questions. Each iteration: strip away scaffolding, give the model more room.

Integrations? The agent figures those out itself - bash, APIs, MCP. The connector you spent weeks building is a tool call.

The pipeline isn’t getting optimised. It’s disappearing.

How fast this shifted

swyx coined “AI engineer” in 2023 when he recognised that building with models had become a discipline in its own right. Chip Huyen’s AI Engineering gave it a reference text in December 2024 - the most-read book on O’Reilly’s platform. Only after the book was published did the real shift hit. Frontier inference costs dropped by an order of magnitude. Claude Code and MCP weren’t a thing when it was written.

AI engineering still holds. But it’s already being fundamentally reshaped. Evaluation, monitoring, dataset quality - that stays. But the layers above it - the routing, the orchestration, the model-as-component architecture - are becoming dead weight. I can only imagine how many startups that built their product around those layers are back at the drawing board.

The inferential edge

So if anyone can access the intelligence and the pipeline is gone, what’s left?

Your ability to actually unleash inference across your organisation. To let powerful models run through your processes, at volume, with compounding returns.

That sounds simple. It isn’t.

Most organisations aren’t there yet. Their trust structures, oversight, and processes were all built for human workers.

Agents need interfaces to act on your systems without breaking them. They need context infrastructure - access management, multi-tenancy, the right information at the right time. People in operations need ways to collaborate with agents and pass intent. Compliance needs visibility into what agents did and why. And the organisation needs guardrails to make sure nothing goes wrong at scale.
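One concrete piece of that infrastructure is a permission layer between the agent and your systems: every action passes a policy check and lands in an audit log compliance can read. A minimal sketch, assuming a simple per-agent allowlist - the policy shape and all names are hypothetical:

```python
# Minimal sketch of a trust layer: every tool call passes through a policy
# check and is logged for compliance. Policy shape and names are hypothetical.
import datetime

POLICY = {"ops-agent": {"read_orders", "draft_email"}}  # allowed tools per agent
AUDIT_LOG: list[dict] = []

def guarded_call(agent_id: str, tool: str, args: dict) -> dict:
    allowed = tool in POLICY.get(agent_id, set())
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool,
        "args": args,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{agent_id} may not call {tool}")
    return {"status": "executed", "tool": tool}

guarded_call("ops-agent", "read_orders", {"id": 42})        # permitted, logged
try:
    guarded_call("ops-agent", "delete_orders", {"id": 42})  # denied, still logged
except PermissionError:
    pass
```

Note that the denial is logged too: "what agents did and why" includes what they tried and were refused.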

While your organisation figures that out, the general-purpose intelligence keeps improving at the labs. Your energy belongs on the infrastructure to let it run.

The inferential edge is the gap between having access to a powerful model and being able to use it. And that gap is wide.

It compounds

The inferential edge isn’t static. It compounds.

Every process you automate teaches your organisation something. Your trust infrastructure gets sharper. Your context pipelines improve. Your teams learn which processes to hand over next and at what autonomy level. Each cycle raises the ceiling on what you can safely automate.
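Those autonomy levels are often worth making explicit: a ladder per process, raised only after a successful review cycle. A toy sketch with illustrative level names and processes:

```python
# Toy sketch: autonomy as an explicit per-process ladder, raised only after
# review. Level names and processes are illustrative.
from enum import IntEnum

class Autonomy(IntEnum):
    SUGGEST = 1     # agent drafts, human executes
    APPROVE = 2     # agent executes after human approval
    AUTONOMOUS = 3  # agent executes, human audits after the fact

LEVELS = {"invoice_triage": Autonomy.APPROVE, "email_drafts": Autonomy.AUTONOMOUS}

def promote(process: str) -> Autonomy:
    """Raise a process one level after a successful review cycle."""
    current = LEVELS[process]
    LEVELS[process] = Autonomy(min(current + 1, Autonomy.AUTONOMOUS))
    return LEVELS[process]

promote("invoice_triage")  # APPROVE -> AUTONOMOUS
```

Making the ladder explicit is what turns "we learned which processes to hand over" into something the next team can act on.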

The organisation that starts today has six months of trust infrastructure, operational learning, and organisational muscle by the time a competitor starts evaluating tools. That gap isn’t about features or data. It’s about readiness. And readiness can’t be bought off the shelf.

But this only works if the exploration is structured. A proof of concept doesn’t build organisational muscle. Clear processes, clear trust levels, clear iteration paths - that’s what turns exploration into transformation, not a graveyard of demos.

Where this leaves you

The work is already shifting - not disappearing, but changing shape. The question isn’t whether it happens, but whether your organisation is ready when it does.

This isn’t about buying the right tool or picking the right model. It’s about building the organisational capacity to let powerful models run.

  • Potential. The assessment to know where agents create real value.
  • Accountability. The governance to trace what agents did, why, and who’s responsible.
  • Control. The architecture to contain them when they run.

The organisations that start building this now will have a compounding head start. And catching up with a compounding advantage is hard.

Intelligence is rapidly becoming commodity. The edge is the infrastructure to unleash it.

At trustedagentic.ai, I’m building the PAC Framework™ to help organisations close that gap: where to deploy agents, how to hold them accountable, and how to keep them under control. Because the inferential edge doesn’t start with the model. It starts with knowing where to let it run.

