-
Agents Don't Share a Language — And That's a Governance Problem
📅 April 16, 2026
✍️ Maurits Kaptein
⏱️ 4 min read
LangChain, CrewAI, the Microsoft Agent SDK, and Claude Code all describe agent behavior differently. Before you can govern agents, you need a common language for what they actually do.
#ai-governance
#semantic-templates
#agent-frameworks
#runtime-enforcement
-
From Plain English to Enforceable Policy — Meet the Kyvvu Policy Generator
📅 April 10, 2026
✍️ Maurits Kaptein
Describing what your agents shouldn't do is easy. Formalizing that into deterministic, enforceable runtime policy is harder. We built an agent that bridges the gap.
#ai-agents
#runtime-governance
#policy-generation
#eu-ai-act
#gdpr
-
The Month AI Agents Went Rogue
📅 March 31, 2026
✍️ Maurits Kaptein
March 2026 will be remembered as the month agentic AI incidents stopped being theoretical. Five real incidents. Real data exposed. Real systems compromised. Here is what happened — and why "just restrict the prompt" is not an answer.
#ai-agents
#runtime-governance
#prompt-injection
#agentic-security
#eu-ai-act
-
Policies on Paths: A Position Paper on Runtime Governance for AI Agents
📅 March 18, 2026
✍️ Maurits Kaptein
⏱️ 4 min read
We published a position paper on arXiv today. The core argument: the execution path is the right object for governance, not the individual action.
#ai-governance
#research
#eu-ai-act
#runtime-enforcement
-
Your Pipeline Might Be Governed. Your Agents Aren't.
📅 March 10, 2026
✍️ Maurits Kaptein
⏱️ 6 min read
An AI agent just sent a report to an external recipient it shouldn’t have contacted. Your CI/CD pipeline passed every check. Your deployment was clean. Your access control logs show nothing unusual — the agent had read access to the database and write access to email, both legitimately granted.
#ai-agents
#ci-cd
#governance
#eu-ai-act
#compliance
-
Why Runtime Guardrails Are a Missing Layer in Enterprise AI
📅 March 03, 2026
✍️ Maurits Kaptein
⏱️ 8 min read
Many enterprises deploying AI agents rely on careful prompting, thorough testing, and retrospective logging to 'constrain' their agents. This model is insufficient, and with the EU AI Act's high-risk provisions taking effect in August 2026, it is also increasingly a regulatory liability.
#ai-agents
#eu-ai-act
#governance
#compliance
-
From Monitoring to Enforcement: The Three Layers of AI Agent Compliance
📅 February 24, 2026
✍️ Maurits Kaptein
⏱️ 8 min read
Most AI governance tools only solve one-third of the compliance problem. There are three distinct layers — monitoring, incident generation, and runtime intervention — and understanding the difference between them is not an academic exercise. For enterprises deploying agents in regulated environments, it is the difference between a defensible compliance...
#ai-agents
#compliance
#eu-ai-act
#orchestration
-
The Agent Dilemma: Power vs. Control — and Why Orchestration Is Now the Missing Layer
📅 February 17, 2026
✍️ Maurits Kaptein
⏱️ 7 min read
OpenClaw went viral, got absorbed by OpenAI, and accidentally started a fight with an insurance company. It perfectly illustrates the core tension in enterprise AI deployment: the agents powerful enough to be useful are the ones with the least governance. Here's why orchestration is the missing layer.
#ai-agents
#compliance
#eu-ai-act
#orchestration
-
Mapping the EU AI Act to AI Agent Compliance
📅 February 03, 2026
✍️ Maurits Kaptein
We've spent the past few months mapping EU AI Act requirements to what AI agents actually need to do at runtime. We're sharing our findings because, honestly, we think more people should be building agents, but in a way that's compliant from day one.
#ai-agents
#eu-ai-act
-
An Agentic World: What Happens When AI Agents Become Colleagues
📅 January 30, 2026
✍️ Maurits Kaptein
⏱️ 7 min read
Imagine a morning in 2030 where AI agents handle the noise, while you focus on strategy. This isn't science fiction—it's closer than you think. Based on the prologue of 'AI Agents at Work.'
#ai-agents
#future-of-work
#vision