Are autonomous AI agents operating unchecked in your enterprise? With the release of open source frameworks like OpenClaw, deploying an AI agent is now as simple as sending a text message, but it comes with massive, unprecedented security risks. In this episode, Ashish and Caleb sit down with Sounil Yu, CTO and Co-Founder of Knostic (and creator of the Cyber Defense Matrix), to discuss the other side of agentic AI. Sounil explains how OpenClaw dangerously violates Meta's "Agent Rule of Two" by blindly processing untrustworthy inputs while retaining full access to change system state. We discuss why prompt injection is actually a "red herring" compared to the real threat: emergent behavior, where an agent might decide to delete your hard drive just to accomplish a poorly defined task. We also explore the shift from human coders to autonomous coding agents (like Claude Code and Cursor) that are actively building better versions of themselves. Learn why traditional Markdown documentation is now dangerous "executable code," why AI agents will persistently try to escape sandboxes, and how to build consistent security "scaffolding" across your developer environments.
Questions asked:
(00:00) Introduction
(02:50) Sounil Yu's Background: Bank of America, Cyber Defense Matrix, and Knostic
(04:00) What is OpenClaw? The Reality of Autonomous AI Agents
(08:30) Default Config Risks: Why OpenClaw is Insecure by Default
(09:20) Violating Meta's "Agent Rule of Two"
(11:00) Why Prompt Injection is a Red Herring Compared to Emergent Behavior
(13:30) Google's Code Mender: Autonomous Patching and Unit Testing
(19:30) Detecting OpenClaw in the Enterprise (OpenClaw Discover)
(20:30) The 3 Tiers of AI Adoption: Pedestrian, Augmented, and Native
(29:20) The Shift from Verification to Validation
(36:20) Coding Agents Building Better Versions of Themselves
(41:50) Building Security "Scaffolding" for AI Developers
(48:30) OpenClaw Alternatives: Null Claw and Zero Claw
(49:50) Why Markdown Documentation is Now Executable Code
(56:20) The Persistent Agent: Why AI Intentionally Escapes Sandboxes
(01:00:00) Why Google is Blocking OpenClaw on Paid Accounts
Resources spoken about during the episode:
Knostic
OpenClaw
Code Mender (Google's AI vulnerability patching initiative discussed at Unprompted Con)
Unprompted Con (the AI security conference mentioned throughout the episode)