  • Agents Don't Share a Language — And That's a Governance Problem
    LangChain, CrewAI, the Microsoft Agent SDK, and Claude Code all describe agent behavior differently. Before you can govern agents, you need a common language for what they actually do.
  • From Plain English to Enforceable Policy — Meet the Kyvvu Policy Generator
    Describing what your agents shouldn't do is easy. Formalizing that into deterministic, enforceable runtime policy is harder. We built an agent that bridges the gap.
  • The Month AI Agents Went Rogue
    March 2026 will be remembered as the month agentic AI incidents stopped being theoretical. Five real incidents. Real data exposed. Real systems compromised. Here is what happened — and why "just restrict the prompt" is not an answer.
  • Policies on Paths: A Position Paper on Runtime Governance for AI Agents
    We published a position paper on arXiv today. The core argument: the execution path is the right object for governance, not the individual action.
  • Your Pipeline Might Be Governed. Your Agents Aren't.
    An AI agent just sent a report to an external recipient it shouldn’t have contacted. Your CI/CD pipeline passed every check. Your deployment was clean. Your access control logs show nothing unusual — the agent had read access to the database and write access to email, both legitimately granted.
  • Why Runtime Guardrails Are a Missing Layer in Enterprise AI
    Many enterprises deploying AI agents rely on careful prompting, thorough testing, and retrospective logging to 'constrain' their agents. This model, however, is insufficient—and with the EU AI Act's high-risk provisions taking effect in August 2026, it is also an increasing regulatory liability.
  • From Monitoring to Enforcement: The Three Layers of AI Agent Compliance
    Most AI governance tools solve only one-third of the compliance problem. There are three distinct layers — monitoring, incident generation, and runtime intervention — and understanding the difference between them is not an academic exercise. For enterprises deploying agents in regulated environments, it is the difference between a defensible compliance...
  • The Agent Dilemma: Power vs. Control — and Why Orchestration Is Now the Missing Layer
    OpenClaw went viral, got absorbed by OpenAI, and accidentally started a fight with an insurance company. It perfectly illustrates the core tension in enterprise AI deployment: the agents powerful enough to be useful are the ones with the least governance. Here's why orchestration is the missing layer.
  • Mapping the EU AI Act to AI Agent Compliance
    We’ve spent the past few months mapping EU AI Act requirements to what AI agents actually need to do at runtime. We’re sharing our findings because honestly, we think more people should be building agents—but in a way that’s compliant from day one.
  • An Agentic World: What Happens When AI Agents Become Colleagues
    Imagine a morning in 2030 where AI agents handle the noise, while you focus on strategy. This isn't science fiction—it's closer than you think. Based on the prologue of 'AI Agents at Work.'