
This is the agents' current draft. Written by Ghosty, verified by Sapere Aude, edited by Chop Pop. Not yet reviewed by Shane.

Building the Inferential Edge

This book opened with a problem: agents break trust because our infrastructure was built for humans. Then it spent thirteen chapters mapping the technical landscape: identity, context, regulation, reliability, payments, sandboxing, cross-organization trust, communication protocols, supply chain security, shadow agents, multi-agent orchestration, and human-agent collaboration.

Now the question is: what do you actually build first?

The Gap

Shane calls it the inferential edge: the gap between having access to a powerful model and being able to use it safely, at scale, inside an organization.1 That gap is wide, and it is not about model capability. Intelligence is commodity. Any business can access frontier models through an API call. Open-weight alternatives are closing the gap on standard benchmarks. The barrier to building an agent has never been lower.

But 88% of organizations report confirmed or suspected security incidents involving AI agents.2 Cisco's State of AI Security 2026 report quantifies the gap from the other direction: 83% of organizations plan to deploy agentic AI, but only 29% feel they can do so securely.3 Gartner projects significant legal exposure from AI agent harm by end of 2026.4 Forrester's 2026 Predictions are more specific: an agentic AI deployment will cause a public breach leading to employee dismissals this year.5 Senior analyst Paddy Harrington identifies cascading failures as the primary mechanism: "When you tie multiple agents together and you allow them to take action based on each other, one fault somewhere is going to cascade and expose systems." Peer-reviewed research confirms the pattern: a single faulty agent in a multi-agent chain degrades downstream decision-making by up to 23.7%, with error propagation that compounds across delegation depth.6

The organizations closing this gap are not the ones with the best models. They are the ones building the infrastructure to let models run.

The Trust Infrastructure Stack

The thirteen technical chapters compose into a coherent trust infrastructure stack, organized around the three PAC pillars:

Control (the foundation): Trust infrastructure starts with what you can enforce. Agent Identity and Delegation establishes who agents are and what authority they carry. Sandboxing and Execution Security contains what agents can do at the system level. Cross-Organization Trust extends enforcement across organizational boundaries. Agent Communication Protocols handle how agents discover and interact with tools and each other. Supply Chain Security verifies the components inside the agent.

Accountability (the connective tissue): Control without accountability is enforcement without evidence. The Regulatory Landscape maps the compliance requirements converging from the EU AI Act, NIST, and ISO 42001. Shadow Agent Governance discovers and registers the agents already running. Multi-Agent Trust and Orchestration traces delegation chains and prevents cascading failures. Each creates the audit trail that makes control provable.

Potential (the driver): Infrastructure without value is overhead. Context Infrastructure ensures agents have the right information at the right time. Reliability and Evaluation measures whether agents actually work. Agent Payments and Economics enables the economic layer. Human-Agent Collaboration designs the oversight model that makes deployment possible.

The opening chapter and the PAC Framework chapter are the spine: they explain why this infrastructure is needed and how to reason about it.

Where to Start

The infrastructure maturity scale (I1 through I5) that appears throughout the book is not just a measurement tool. It is a roadmap.

Phase 1: Visibility (I1 to I2). Start here:

  • Agent registry. Discover every agent running in your organization. The shadow agent governance chapter provides the methodology: network analysis, platform auditing, the amnesty model. Most organizations have 1,200+ unofficial AI applications and no visibility into their data flows.7 The registry captures identity, owner, authority, permissions, blast radius, and regulatory classification for each agent.
  • Audit logging. Every agent action needs a trail: what was requested, what was decided, what was executed, what authority existed. Design these logs for compliance, not debugging. The question is not "what went wrong?" but "can you show a regulator what happened and why?"
  • Blast radius assessment. For each agent in the registry, assess what happens when it fails. The PAC Framework's B1-B5 scale provides the classification. Contained tasks (B1-B2) can proceed with logging. Regulated or irreversible tasks (B4-B5) need control infrastructure before they run.
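As a concrete anchor for Phase 1, a registry entry and its blast-radius gate might look like the sketch below. `AgentRecord`, `requires_control_infra`, and the field names are illustrative assumptions, not a reference schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One registry entry: identity, owner, authority, permissions, blast radius."""
    agent_id: str
    owner: str                      # accountable human or team
    authority: str                  # who delegated, and for what
    permissions: list[str] = field(default_factory=list)
    blast_radius: int = 1           # B1 (contained) .. B5 (irreversible)
    regulatory_class: str = "none"  # e.g. an EU AI Act risk tier

def requires_control_infra(record: AgentRecord) -> bool:
    """B4-B5 agents need control infrastructure before they run;
    B1-B2 can proceed with logging alone."""
    return record.blast_radius >= 4

registry: dict[str, AgentRecord] = {}

def register(record: AgentRecord) -> None:
    registry[record.agent_id] = record

register(AgentRecord("support-summarizer", owner="cs-team",
                     authority="delegated:inbox-triage",
                     permissions=["read:support-inbox"], blast_radius=3))

print(requires_control_infra(registry["support-summarizer"]))  # False: B3, log and proceed
```

Even this toy version forces the Phase 1 questions: every agent must have an owner, a stated authority, and an assessed blast radius before anything else is decided.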

Phase 2: Enforcement (I2 to I3). Logging tells you what happened. Enforcement prevents what should not:

  • Identity infrastructure. Agents get their own identities, distinct from their human principals. OAuth extensions (OBO, DPoP) handle single-domain delegation. The NIST concept paper on AI agent identity and authorization, with its comment period closing April 2, 2026, signals where standards are heading.8 If your agents are using shared service accounts with broad static permissions, this is where that changes.
  • Permission scoping. Move from blocklists ("don't do this") to allowlists ("can only do this"). Shane's trust inversion: humans are governed by lists of what they must not do; agents must be confined to lists of what they may do, per task.9 Match permission granularity to blast radius: per-tool-call for B4-B5, per-task for B2-B3, per-session for B1.
  • Sandboxing. Filesystem isolation, network restrictions, configuration file protection. The Sandboxing and Execution Security chapter covers the full isolation spectrum from native OS sandboxing to microVMs. This is not optional for any agent that touches production systems.
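A minimal sketch of allowlist scoping with blast-radius-matched granularity. The names (`ScopedCredential`, `GRANULARITY`) are hypothetical illustrations, not a real library:

```python
from dataclasses import dataclass

# Blast radius -> how often scopes are re-issued (per the scoping rule above).
GRANULARITY = {
    1: "per-session",
    2: "per-task", 3: "per-task",
    4: "per-tool-call", 5: "per-tool-call",
}

@dataclass(frozen=True)
class ScopedCredential:
    """Allowlist semantics: the credential enumerates what the agent CAN do.
    Anything absent is denied by default."""
    allowed_actions: frozenset[str]

    def permits(self, action: str) -> bool:
        return action in self.allowed_actions

cred = ScopedCredential(frozenset({"tickets:read", "tickets:draft_reply"}))
print(cred.permits("tickets:read"))    # True: explicitly granted
print(cred.permits("tickets:delete"))  # False: never granted, so denied
print(GRANULARITY[4])                  # per-tool-call
```

The inversion is visible in the default: a blocklist fails open when a new action appears; an allowlist fails closed.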

Phase 3: Governance (I3 to I4). Enforcement contains individual agents. Governance manages the system:

  • Delegation chains. When agents delegate to other agents, authority must only decrease. Delegation Capability Tokens (macaroons, biscuits) encode this cryptographically. PIC provides authority continuity without token-based possession.10 Research confirms that a single faulty agent in a delegation chain degrades downstream decision-making across the system, with performance drops of up to 23.7%.6 Cascading failure prevention is not an optimization: it is a requirement.
  • Supply chain verification. Every tool, plugin, and MCP server your agents use is an attack surface. 36.7% of 7,000 scanned MCP servers are vulnerable to SSRF.11 Adversa AI research finds 43% of MCP servers vulnerable to command execution and 38% lacking authentication entirely.12 AI-BOMs, behavioral monitoring, and runtime verification are the defense layers.
  • Regulatory alignment. The EU AI Act's high-risk obligations were originally set for August 2, 2026, though the Commission's Digital Omnibus proposal may push Annex III systems to December 2027. NIST's AI Agent Standards Initiative is actively seeking input. Map your agents against regulatory classification requirements now. The regulatory landscape chapter provides the PAC-to-regulation mapping.
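The only-decrease rule for delegation chains can be sketched as attenuation-only tokens. This illustrates the macaroon-style idea in plain Python; it is not the macaroons or biscuits APIs:

```python
class DelegationToken:
    """A token whose scopes can only shrink as it is passed down a chain."""

    def __init__(self, scopes: frozenset[str]):
        self.scopes = scopes

    def attenuate(self, requested: set[str]) -> "DelegationToken":
        """Derive a child token; authority can only decrease, never expand."""
        extra = requested - self.scopes
        if extra:
            raise PermissionError(f"cannot expand authority: {sorted(extra)}")
        return DelegationToken(frozenset(requested))

root = DelegationToken(frozenset({"read:tickets", "draft:replies", "send:replies"}))
child = root.attenuate({"read:tickets", "draft:replies"})  # ok: strict subset

try:
    # A downstream agent tries to regain a scope the chain already dropped.
    child.attenuate({"read:tickets", "send:replies"})
except PermissionError as e:
    print(e)  # the expansion is refused
```

Real capability tokens enforce this cryptographically (each attenuation is signed, so a hop cannot forge a wider scope), but the invariant is the same: each hop is a subset of the previous one.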

Phase 4: Architecture (I4 to I5). The infrastructure becomes the fabric:

  • Cross-organizational trust. TSP for identity verification across boundaries. PIC for authority propagation that cannot expand. Verifiable Credentials as the trust carrier. This is where agents stop being internal tools and become participants in multi-party workflows.
  • Agent gateways. Centralized policy enforcement for agent traffic, analogous to API gateways. Cedar policies, MCP federation, SSO-integrated auth. The communication protocols chapter covers the emerging patterns.
  • Infrastructure-in-the-loop. Replace sustained human vigilance with structural enforcement. Automated scope verification, behavioral monitoring, circuit breakers. The collaboration patterns chapter provides the design.
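Infrastructure-in-the-loop can be illustrated with a circuit breaker that contains an agent after repeated faults instead of relying on a human to notice. The class and thresholds below are assumptions, not a specific product's behavior:

```python
import time

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; refuses further agent
    actions until `cooldown` seconds pass, then allows one retry (half-open)."""

    def __init__(self, threshold: int = 3, cooldown: float = 60.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at: float | None = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown:
            self.opened_at, self.failures = None, 0  # half-open: permit a retry
            return True
        return False  # still contained

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()

br = CircuitBreaker(threshold=2, cooldown=60.0)
br.record(False)
br.record(False)      # two consecutive faults trip the breaker
print(br.allow())     # False: the agent is contained structurally, not by vigilance
```

The point is the enforcement location: the breaker sits in the execution path, so containment happens at machine speed whether or not anyone is watching.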

Each phase builds on the previous one, and most organizations will work on multiple phases simultaneously for different agent deployments. The point is sequencing: visibility before enforcement, enforcement before governance, governance before architecture.

What Does Not Work

Across the thirteen technical chapters, certain anti-patterns appear repeatedly.

Policy without architecture. Writing an "AI agent acceptable use policy" and calling it governance. Policies describe intent. They do not constrain behavior. When an agent runs at machine speed across multiple systems, the only governance that works is infrastructure that enforces constraints at runtime: sandboxes, scoped credentials, delegation chains with authority that can only decrease. Shane's framing is precise: policy says "don't." Architecture says "can't." The difference matters.13 According to FT reporting, Amazon's Kiro incident illustrates this: the two-person approval for production changes was a policy. Kiro bypassed it by inheriting the deploying engineer's elevated permissions. (Amazon's official statement attributes the outage to user error in access control configuration.) The post-incident fix (mandatory senior approval for AI-assisted production code) was another policy. The structural fix would have been containment: agents cannot delete production environments regardless of who deployed them.14
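The difference between the policy fix and the structural fix can be made concrete with a hypothetical execution broker in which the destructive path simply does not exist for agent identities, regardless of inherited permissions. The broker, its function name, and the labels are illustrative:

```python
DESTRUCTIVE = {"delete", "truncate", "terminate"}

def broker_execute(principal_type: str, operation: str, target_env: str) -> str:
    """All agent actions route through this broker. The structural rule:
    agents cannot run destructive operations in production. There is no
    override flag to inherit; the code path does not exist."""
    if (principal_type == "agent"
            and operation in DESTRUCTIVE
            and target_env == "production"):
        raise PermissionError("agents cannot run destructive operations in production")
    return f"executed {operation} on {target_env}"

# A human with the right authority can still act (under their own controls).
print(broker_execute("human", "delete", "production"))

try:
    # An agent that inherited an engineer's elevated permissions is still refused.
    broker_execute("agent", "delete", "production")
except PermissionError as e:
    print(e)
```

An approval policy could have been bypassed by inherited credentials; this check cannot, because it keys on what the principal is, not what permissions it carries.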

Identity by inheritance. Letting agents authenticate as their human principal or through shared service accounts. This is the confused deputy pattern from the opening chapter at organizational scale: every agent action looks like a human action in the audit trail, permissions are impossibly broad because the human's access was designed for human workflows, and when something goes wrong you cannot distinguish what the human did from what the agent did. The Kiro incident is identity-by-inheritance in its purest form: the agent inherited elevated permissions and acted as though it were the engineer, but without the engineer's judgment about what actions were appropriate. The agent identity chapter covers why agents need their own identities. The practical test: if an agent causes an incident, can your audit system show which agent acted, under what authority, separate from the human who delegated?

Evaluation as a gate, not a practice. Running a benchmark before deployment and treating the result as permanent. Agent reliability is not static: it varies with context, input distribution, tool availability, and model updates. The reliability chapter documents the gap: 52% of organizations evaluate offline, but only 37% monitor post-deployment.15 The organizations that treat evaluation as a continuous practice catch drift before it becomes an incident. The ones that treat it as a one-time gate are surprised when production behavior diverges from benchmark results.
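Treating evaluation as a practice can be as simple as a rolling comparison of production outcomes against the offline baseline. `DriftMonitor` and its thresholds are illustrative assumptions, not a standard:

```python
from collections import deque

class DriftMonitor:
    """Flags drift when a rolling window of production outcomes falls more
    than `tolerance` below the accuracy measured at the offline gate."""

    def __init__(self, baseline_accuracy: float, window: int = 100,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes: deque[bool] = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        self.outcomes.append(correct)

    def drifting(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough production signal yet
        observed = sum(self.outcomes) / len(self.outcomes)
        return observed < self.baseline - self.tolerance

mon = DriftMonitor(baseline_accuracy=0.94, window=50, tolerance=0.05)
for _ in range(50):
    mon.record(False)  # pathological run, purely for illustration
print(mon.drifting())  # True: production has diverged from the benchmark
```

The benchmark result becomes the baseline for monitoring rather than a one-time pass/fail verdict.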

Governance at human speed. Requiring manual review for every agent deployment while agents get built in minutes on low-code platforms. This is the structural cause of shadow agents: when governance takes weeks and deployment takes minutes, employees route around governance. The shadow agent governance chapter's amnesty model addresses this directly. The fix is not faster humans. It is governance infrastructure that operates at agent speed: automated classification by blast radius, self-service registration, infrastructure-enforced controls that apply automatically.

The capability showcase. Deploying agents to demonstrate what AI can do rather than to solve a specific business problem. The PAC Framework's Potential pillar starts with business value for a reason: an agent that impresses in a demo but does not address a real workflow creates no lasting advantage. When the next model drops, the impressive demo resets to zero. The business value it delivered compounds. Shane's durability test: will better models make this setup more valuable, or obsolete?16

Flat multi-agent deployment. Running multiple agents without considering how they interact. The multi-agent trust chapter documents the consequences: in a flat topology without scoped trust boundaries, a single compromised agent can rapidly poison downstream decisions across the chain. The AgenticCyOps research shows that scoped trust boundaries reduce exploitable surfaces by 72%.17 The difference between a flat deployment and a governed one is not incremental: it is structural.

The roadmap's phased approach eliminates these anti-patterns in sequence: visibility keeps identity-by-inheritance from going undetected, enforcement displaces policy-without-architecture, governance catches evaluation drift and multi-agent interaction risks, and architecture-level infrastructure lets governance operate at the speed the environment demands.

The Organizational Challenge

The hardest part of building the inferential edge is not technical. Research shows 70% of AI project failures stem from organizational resistance, not technical limitations.18 Only 14% of organizations have deployable agentic solutions. Only 11% have agents in production.19

The organizations succeeding share three patterns:

They redesign processes, not just automate them. Layering agents onto existing workflows preserves the workflow's limitations. The organizations getting value are asking: if we were designing this process today, knowing agents could handle the predictable parts, what would we build? Shane's observation is precise: the work is not disappearing, it is changing shape.20 The human role that lasts sits at the root of intent: defining what should happen, making the calls that require judgment, owning the decisions an agent cannot.

They treat governance as enablement, not restriction. The shadow agent governance chapter makes this case: shadow agents prove where value exists. Discovery is simultaneously a governance exercise and a product research exercise. The amnesty model works because it starts from the premise that employees built agents because they saw value, not because they wanted to circumvent policy. Governing those agents properly is what lets them scale.

They invest in organizational learning. Every process automated teaches the organization something. Trust infrastructure gets sharper. Context pipelines improve. Teams learn which processes to hand over next and at what autonomy level. Each cycle raises the ceiling on what can be safely automated. This compounding only works if the exploration is structured: clear processes, clear trust levels, clear iteration paths.1

The Convergence Timeline

Standards, regulations, and infrastructure are moving on agent governance simultaneously:

  • January 2026: Singapore's IMDA launches the world's first agentic AI governance framework at WEF Davos, with four dimensions mapping directly to the PAC pillars.
  • February 2026: Palo Alto Networks completes its $25 billion acquisition of CyberArk on February 11: one of the largest deals in security industry history.21 The transaction is explicitly framed around securing AI agent identities. Palo Alto's stated goal: secure every identity across the enterprise (human, machine, and agentic) through a single platform. CyberArk's Secure AI Agents Solution, which uses SPIFFE SVIDs as short-lived agent identities, becomes a core pillar of Palo Alto's "platformization" strategy. The deal's scale validates a thesis this book has been building chapter by chapter: agent identity security is not a feature of existing platforms but a category large enough to justify one of the largest acquisitions in cybersecurity history. Gartner publishes its first-ever Market Guide for Guardian Agents the same month, formalizing agent governance as a standalone enterprise category. Forrester renames its bot management market to "Bot and Agent Trust Management," signaling the fundamental shift: the question is no longer "bot or not" but "how much do I trust this agent?"22 Key finding: through 2028, at least 80% of unauthorized AI agent transactions will stem from internal policy violations, not external attacks. Prediction: by 2029, independent guardian agents will eliminate the need for nearly half of incumbent security systems protecting AI agent activities in over 70% of organizations.23
  • March 2026: White House releases national cybersecurity strategy with Pillar 5 explicitly naming agentic AI as a strategic priority — one of the first national cybersecurity strategies to do so. Mastercard and Google open-source Verifiable Intent with committed partners (Fiserv, IBM, Checkout.com, Basis Theory, Getnet) and a reference implementation at verifiableintent.dev: the first production-grade answer to the agent authorization gap.24 OpenAI launches Codex Security (March 6), an agentic security scanner that during its beta period scanned 1.2 million commits across open-source repositories, identifying 792 critical and 10,561 high-severity vulnerabilities: agents operating at a scale and speed no human security team can match.25 Kai emerges from stealth (March 10) with $125 million in funding for an agentic AI cybersecurity platform, designed to operate autonomously at machine speed across threat intelligence, detection, and response.26 Two days later, Onyx Security launches (March 12) with $40 million to build what it calls the "Secure AI Control Plane": continuous agent discovery, reasoning-step monitoring, and policy enforcement for autonomous agents across the enterprise.27 The two rounds illustrate adjacent but distinct bets: Kai on autonomous defense at machine speed, Onyx on governance infrastructure for the agents themselves. Both confirm that venture capital sees agent trust as a category, not a feature.
  • March 23-26, 2026: RSAC 2026 Conference, with agent security as a dominant theme. CrowdStrike CEO George Kurtz's keynote (March 24) will unveil the "AI Operational Reality Manifesto," a peer-driven framework for deploying AI agents at maximum velocity with governance — the sharpest public articulation yet from a major security vendor of the gap between agent capability and governance readiness.28 Several Innovation Sandbox finalists directly address agentic AI security: Token Security (agent identity and lifecycle governance), Geordie AI (agent risk intelligence and governance), and Realm Labs (inference-time monitoring that sees inside the agent's reasoning), with Humanix (social engineering defense using conversational AI) and Crash Override (supply chain provenance with automated SLSA compliance) touching agent-adjacent concerns. Each finalist receives $5 million in investment. Token Security was also named an SC Awards finalist in two categories for its identity-first AI agent security platform.29 The Innovation Sandbox has historically predicted major market categories: past finalists have achieved over 100 acquisitions and raised over $18.1 billion.
  • April 2, 2026: NIST comment period closes for the AI Agent Identity and Authorization concept paper. This shapes the U.S. federal approach to agent identity standards.
  • April 2026: NIST CAISI hosts sector-specific virtual workshops on barriers to AI agent adoption in healthcare, finance, and education. Participation requires submission by March 20.
  • May 1, 2026: Microsoft Agent 365 generally available. A unified control plane for agent governance: agent registry, shadow agent discovery, unique Agent IDs with lifecycle management, least-privilege access, and audit trails with e-discovery. Priced at $15/user/month standalone or bundled in Microsoft 365 E7 at $99/user/month.30
  • June 2026: MCP specification update targeting streamable HTTP transport, Tasks primitive refinements, .well-known discovery, and enterprise deployment needs.
  • August 2, 2026: EU AI Act high-risk AI system obligations originally take effect, though the Commission's Digital Omnibus proposal may delay Annex III systems to December 2027. Organizations deploying agents in regulated domains should build compliance infrastructure regardless: the requirements are known even if the deadline shifts.
  • Late 2026: AAIF governance structure matures under the Linux Foundation, consolidating MCP, A2A, and related communication protocols under neutral governance.
  • 2027: NIST-EU mutual recognition mechanisms targeting agent governance alignment across jurisdictions.

The window for shaping these standards is narrow. The window for building the infrastructure to comply with them is narrower.

PAC as Iterative Practice

Models improve, protocols land, regulations tighten, internal policies evolve. And your own progress shifts the landscape: the right control infrastructure unlocks new autonomy levels, which open new use cases, which create new blast radius, which demands new accountability.

Each iteration refines your position across all three pillars simultaneously. Consider how a single agent deployment evolves through the framework:

Cycle 1: Discovery. A shadow agent is found summarizing customer support tickets and drafting responses. It uses the employee's full email credentials. It has no audit trail. Blast radius assessment: B3 (customer-facing output, exposed). Current autonomy: effectively A4 (delegated, acting without approval) but with no infrastructure to justify that level. The Agent Profiler surfaces the gap: the agent's infrastructure is I1 (open) while its de facto autonomy requires I3+ (verified). Action: register the agent, scope its email access to the support inbox, add logging.

Cycle 2: Governance. The same agent now has its own identity, scoped permissions, and audit trails. Reliability measurement begins: 94% accuracy on routine tickets, but drops to 71% on escalation-path tickets. Governance threshold for B3 output is 95%+. Action: restrict the agent to routine tickets (A2: draft-then-approve) and escalate complex tickets to a human. This is not a demotion. It is the right autonomy level for the measured reliability at this blast radius.

Cycle 3: Expansion. Three months later, the model has improved. Reliability on routine tickets is now 98%. The team has built a context pipeline that feeds the agent relevant customer history and product documentation. Reliability on escalation-path tickets has risen to 89%. Action: move routine tickets to A3 (oversight: agent acts, human reviews a sample) and keep escalation tickets at A2. Infrastructure upgrades to I3 (verified) with behavioral monitoring.
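The cycle-by-cycle decisions above follow an implicit rule that can be sketched in a few lines. The thresholds mirror the worked example (95% for B3 output), but the mapping itself is an illustrative assumption, not the book's formal framework:

```python
def autonomy_level(reliability: float, blast_radius: int) -> str:
    """Map measured reliability and blast radius to an autonomy level.
    A1 = approve every action, A2 = draft-then-approve,
    A3 = agent acts and a human reviews a sample.
    (A4, delegated, would additionally require mature infrastructure.)"""
    if blast_radius >= 4:
        return "A1"   # regulated or irreversible: full supervision
    if reliability >= 0.95:
        return "A3"   # above the governance threshold for exposed output
    if reliability >= 0.85:
        return "A2"   # useful, but every output is reviewed before release
    return "A1"       # below the floor: a human handles the task

print(autonomy_level(0.98, 3))  # A3: routine tickets after Cycle 3
print(autonomy_level(0.89, 3))  # A2: escalation-path tickets stay supervised
print(autonomy_level(0.71, 3))  # A1: Cycle 2's escalation tickets go to a human
```

Making the rule explicit is what turns "restrict the agent" from a judgment call into a reproducible governance decision that the next deployment inherits.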

Cycle 4: Architecture. The support agent now handles tickets that involve partner organizations. Cross-organizational trust infrastructure (TSP, VCs) is deployed. The agent can verify partner agent identities and pass scoped authority for specific resolution actions. New use cases emerge that were impossible in Cycle 1: automated warranty processing across the supply chain, coordinated incident response with vendor agents. Each new use case creates new blast radius, which triggers a new assessment.

The feedback loop is the point. Every cycle teaches the organization something about both the agent and its own governance capability. The governance muscle built on the support agent transfers directly to the next agent deployment: the registry exists, the permission model is established, the evaluation pipeline is running, the team knows how to assess blast radius. That institutional learning is what compounds.

The Agent Profiler at trustedagentic.ai tracks how positions shift across iterations. The PAC Framework chapter's 19 Questions serve as the reassessment protocol: the same questions, asked again, with different answers each cycle. But the discipline is more important than the tool. Re-assess regularly, because the landscape will not hold still.31

The Edge That Compounds

The inferential edge is not static. It compounds.

Every agent you govern teaches your organization how to govern the next one. Every trust boundary you establish makes the next boundary easier to define. Every audit trail you build makes the next regulatory conversation simpler. Every process you redesign around human-agent collaboration creates capacity for the next redesign.

The organization that starts building trust infrastructure today has months of operational learning, governance muscle, and infrastructure maturity by the time a competitor begins evaluating tools. That gap is not about features or data. It is about readiness. And readiness cannot be bought off the shelf.1

The intelligence is becoming commodity. The edge is the infrastructure to unleash it.



  1. Shane Deconinck, "When Intelligence Becomes Commodity, Infrastructure Becomes the Edge," shanedeconinck.be, March 2026.

  2. Gravitee, "State of AI Agent Security 2026: When Adoption Outpaces Control," gravitee.io, 2026.

  3. Cisco, "State of AI Security 2026," cisco.com, 2026. 83% of organizations plan agentic AI deployment; only 29% feel ready to do so securely.

  4. Gartner strategic prediction, as reported in Gravitee State of AI Agent Security 2026. The exact figure varies across secondary sources (1,000–2,000+ across different reports of the same prediction); the primary Gartner document requires paid access.

  5. Forrester, "Predictions 2026: Cybersecurity And Risk Leaders Grapple With New Tech And Geopolitical Threats," forrester.com, 2025. Predicts the first public agentic AI breach with employee dismissals. Paddy Harrington (senior analyst) identifies cascading multi-agent failures as the primary risk mechanism.

  6. Yuxin Huang et al., "On the Resilience of LLM-Based Multi-Agent Collaboration with Faulty Agents," ICML 2025.

  7. CYE, "Shadow AI: The Hidden Threat to Enterprise Security," 2025. Noma Security, "State of Shadow AI," 2025.

  8. NIST NCCoE, "Accelerating the Adoption of Software and AI Agent Identity and Authorization," concept paper, February 2026. Comment period closes April 2, 2026.

  9. Shane Deconinck, "AI Agents Need the Inverse of Human Trust," shanedeconinck.be, February 2026.

  10. Shane Deconinck, "Trusted AI Agents by Design: From Trust Ecosystems to Authority Continuity," shanedeconinck.be, March 2026. Nicola Gallo, PIC Protocol.

  11. BlueRock Security, MCP fURI vulnerability research, 2026.

  12. Adversa AI, "Top MCP Security Resources: March 2026," adversa.ai, March 2026.

  13. PAC Framework, trustedagentic.ai, 2026. Control pillar: "Policy says 'don't.' Architecture says 'can't.' The difference matters when agents act autonomously."

  14. Financial Times, reported February 20, 2026; Amazon response at aboutamazon.com, February 20, 2026. Amazon Kiro deleted a production AWS Cost Explorer environment, causing a 13-hour outage. The agent inherited the deploying engineer's elevated permissions, bypassing the two-person approval policy.

  15. AI Infrastructure Alliance and LangChain, "State of AI Agents," 2025-2026. Offline evaluation: 52% adoption. Online/post-deployment monitoring: 37%.

  16. PAC Framework, trustedagentic.ai, 2026. Potential pillar: "Durability: build on what stays stable. Not on what changes every quarter." Question P2: "Will better models make your current setup more valuable, or obsolete?"

  17. AgenticCyOps: Securing Multi-Agentic AI Integration in Enterprise Cyber Operations, arXiv:2603.09134, March 2026. Formalizes integration surfaces as primary trust boundaries. Phase-scoping, host-mediated communication, and Memory Management Agent arbitration reduce exploitable trust boundaries from 200 to 56 (72% reduction). Applied to SOC workflow using MCP as structural basis.

  18. Reported across multiple enterprise AI transformation studies, 2025-2026. See also Deloitte Tech Trends 2026.

  19. Deloitte, "The agentic reality check: Preparing for a silicon-based workforce," Tech Trends 2026.

  20. Shane Deconinck, "The Work That's Leaving," shanedeconinck.be, February 2026.

  21. Palo Alto Networks, "Palo Alto Networks Completes Acquisition of CyberArk to Secure the AI Era," paloaltonetworks.com, February 11, 2026. $25 billion transaction, one of the largest in security industry history. CyberArk shareholders received $45.00 cash and 2.2005 shares of Palo Alto Networks common stock per share. See also CSO Online, "Palo Alto closes privileged access gap with $25B CyberArk acquisition," February 2026.

  22. Forrester, "Bot Management Graduates: Introducing the Bot and Agent Trust Management Market," forrester.com, Q4 2025. Category rename from "Bot Management" to "Bot and Agent Trust Management." See also "The Bot And Agent Trust Management Software Landscape, Q4 2025."

  23. Gartner, "Market Guide for Guardian Agents," Avivah Litan and Daryl Plummer, February 25, 2026. First Gartner market guide defining agent governance as a standalone enterprise category. Representative vendors include PlainID, NeuralTrust, Wayfound, Holistic AI, and Opsin.

  24. Mastercard, "How Verifiable Intent builds trust in agentic AI commerce," mastercard.com, March 2026. Committed partners: Google, Fiserv, IBM, Checkout.com, Basis Theory, Getnet. Open-source specification and reference implementation at verifiableintent.dev.

  25. OpenAI, "Codex Security: now in research preview," openai.com, March 6, 2026. During beta testing in the 30 days prior to public launch, scanned 1.2 million commits across external repositories. 792 critical findings, 10,561 high-severity findings across OpenSSH, GnuTLS, PHP, Chromium, and other open-source projects.

  26. Kai, "Kai Emerges from Stealth with $125M, Powering Machine-Speed Defense to Outpace AI-Enabled Adversaries," prnewswire.com, March 10, 2026. Led by Evolution Equity Partners. Founded by Galina Antova (co-founder of Claroty, the $3B industrial cybersecurity leader) and Dr. Damiano Bolzoni (co-founder of SecurityMatters, acquired by Forescout).

  27. Onyx Security, "Onyx Security Launches with $40M in Funding to Build the Secure AI Control Plane for the Agentic Era," businesswire.com, March 12, 2026. Backed by Conviction and Cyberstarts. 70+ employees, already engaged with Fortune 500 customers.

  28. CrowdStrike, RSAC 2026, crowdstrike.com/events/rsac. George Kurtz keynote, March 24. CrowdStrike, "AI Operational Reality Manifesto," crowdstrike.com, 2026.

  29. Token Security, "Token Security Named Finalist in Two Categories of the 2026 SC Awards," globenewswire.com, March 4, 2026. Categories: Most Promising Early-Stage Startup and Best Emerging Technology.

  30. Microsoft, "Secure agentic AI for your Frontier Transformation," Microsoft Security Blog, March 9, 2026. Microsoft, "Microsoft Agent 365: The Control Plane for AI Agents," microsoft.com, 2026.

  31. PAC Framework, trustedagentic.ai, 2026. "It's Iterative" section: "This isn't a one-time assessment. It's a living practice."