AI Governance Isn't an Afterthought — It's a Kill Switch
Logan Kelly, founder of Waxel, built AI agents for sales automation, shipped them to production, and immediately realized he had no control over cost, quality, or behavior.
That experience became the foundation for Waxell.ai — a control plane for governing enterprise agents.
Key takeaways from the conversation:
Observability alone is an autopsy. Tools like Datadog and Langsmith show you what went wrong after the fact. With agents, by the time you see it, data may already be exfiltrated, costs exploded, or cascading failures triggered across a chain.
The rug pull attack is real. MCP tool descriptions can be silently changed by a vendor — poisoning your agent's context and redirecting it to exfiltrate data or destroy records. Most enterprises have no detection for this.
Killing an agent is harder than it sounds. Stopping a runaway agent without losing state, audit trail, and replayability requires deliberate architecture — not just pulling the plug.
Multi-agent chains multiply risk exponentially. Every probabilistic handoff between agents compounds unpredictability. Governance has to be built in, not bolted on.
The missing piece in most AI strategies isn't more models — it's a unified governance layer that gives non-engineers visibility and control, regardless of which agents or frameworks are running underneath.