
AI Security Podcast

TechRiot.io

52 episodes

  • Verification vs. Validation: How Autonomous AI is Changing Cybersecurity

    2026-05-13 | 1h 10 mins.
    Are autonomous AI agents operating unchecked in your enterprise? With the release of open source frameworks like OpenClaw, deploying an AI agent is now as simple as texting, but it comes with massive, unprecedented security risks. In this episode, Ashish and Caleb sit down with Sounil Yu, CTO and Co-Founder of Knostic (and creator of the Cyber Defense Matrix), to discuss the other side of agentic AI. Sounil explains how OpenClaw dangerously violates Meta's "Agent Rule of Two" by blindly processing untrustworthy inputs while maintaining full access to change system states. We discuss why prompt injection is actually a "red herring" compared to the real threat: emergent behavior, where an agent might decide to delete your hard drive just to accomplish a poorly defined task. We also explore the shift from human coders to autonomous coding agents (like Claude Code and Cursor) that are actively building better versions of themselves. Learn why traditional Markdown documentation is now dangerous "executable code," why AI agents will persistently try to escape sandboxes, and how to build consistent security "scaffolding" across your developer environments.

    Questions asked:
    (00:00) Introduction
    (02:50) Sounil Yu’s Background: Bank of America, Cyber Defense Matrix, and Knostic
    (04:00) What is OpenClaw? The Reality of Autonomous AI Agents
    (08:30) Default Config Risks: Why OpenClaw is Insecure by Default
    (09:20) Violating Meta's "Agent Rule of Two"
    (11:00) Why Prompt Injection is a Red Herring Compared to Emergent Behavior
    (13:30) Google's Code Mender: Autonomous Patching and Unit Testing
    (19:30) Detecting OpenClaw in the Enterprise (OpenClaw Discover)
    (20:30) The 3 Tiers of AI Adoption: Pedestrian, Augmented, and Native
    (29:20) The Shift from Verification to Validation
    (36:20) Coding Agents Building Better Versions of Themselves
    (41:50) Building Security "Scaffolding" for AI Developers
    (48:30) OpenClaw Alternatives: Null Claw and Zero Claw
    (49:50) Why Markdown Documentation is Now Executable Code
    (56:20) The Persistent Agent: Why AI Intentionally Escapes Sandboxes
    (01:00:00) Why Google is Blocking OpenClaw on Paid Accounts

    Resources spoken about during the episode:
    Knostic

    OpenClaw

    Code Mender (Google's AI vulnerability patching initiative discussed at Unprompted Con)

    Unprompted Con (the AI Security conference mentioned throughout the episode)
  • The Zero-Click AI Hack: How to Contain the Blast Radius of Autonomous Agents

    2026-04-29 | 47 mins.
    Is an AI agent's identity a workload or an action? Ashish spoke to Elie Bursztein, Distinguished Research Scientist and co-author of Google SAIF (Secure AI Framework), about how it is neither, which is exactly why our traditional security models no longer apply in the AI era. In this episode, Ashish sits down with Elie to explore the evolution of AI from a passive "brain in a jar" to an active agent that takes actions on your behalf. Elie breaks down the reality of Indirect Prompt Injection, sharing a recent zero-click exploit where simply sending a malicious Google Calendar invite caused an AI agent to execute unauthorized commands. If your organization is building agentic workflows, this conversation provides a roadmap. Learn why you must treat agents like contractors with a verifiable "mandate," why the order of tool execution matters (never let an agent access private banking data and then browse the open internet), and how the industry is moving toward "semantic firewalls" to contain the AI blast radius.

    Questions asked:
    (00:00) Introduction
    (02:50) Elie Bursztein’s Background & Creating Google SAIF
    (07:50) Defining AI Agents: The "Brain in a Jar" vs. Real-World Action
    (11:00) Agent Identity: Is it a Workload or an Action?
    (13:30) The Concept of an AI "Mandate" (The Contractor Analogy)
    (19:30) Translating Natural Language into Verifiable Smart Contracts
    (24:50) The Missing Semantic Layer in AI Observability
    (25:30) What’s Next: Agent Identity and AI Privacy
    (27:30) Indirect Prompt Injection: The Zero-Click Google Calendar Hack
    (30:00) Containing the AI Blast Radius & Tool Execution Order
    (33:30) Building a Semantic Firewall
    (36:00) The #1 Rule for Safely Deploying AI Agents (Start Small)
    (40:30) Hobbies: Writing a Book on Innovation & The Playing Card Heritage Foundation
    (44:50) Favorite Food: Yakiniku (Japanese BBQ)

    Resources spoken about during the episode:
    Google SAIF (Secure AI Framework)
    Elie's Website
  • Buy vs. Build AI Security: Why Box.com's CISO Is Creating Their Own Agentic SOC

    2026-04-22 | 46 mins.
    If your AI solution is just helping humans process the same number of alerts a little faster, you haven't transformed anything; you've just created a faster hamster wheel. In this episode, Ashish and Caleb speak with Heather Ceylan, CISO at Box.com, about how she is leading a true, developer-first AI transformation within her security organization. Heather reveals the five strategic "AI Bets" Box is making. We dive into the reality of building an AI SOC, discussing how Box achieved a 38% automated triage rate for Tier 1 alerts, and why teaching AI not to hallucinate requires treating prompts like strict policy engines. The conversation also tackles the build vs. buy dilemma. Heather explains why she prefers to have her team build custom AI solutions (at least until vendors can out-innovate her engineers) and shares her biggest disappointment when evaluating AI security startups.

    Questions asked:
    (00:00) Introduction
    (02:50) Who is Heather Ceylan? (CISO at Box.com)
    (04:20) Transformation vs. Acceleration: Eliminating Classes of Work
    (06:00) Building an AI SOC: Achieving 38% Automated Triage
    (07:20) Controlling Hallucinations: Prompts as Policy Engines
    (09:30) The Buy vs. Build Debate for CISOs
    (14:00) Why Security Architecture Must Be Machine Consumable
    (16:50) The Problem with 3rd Party Risk Management
    (18:20) Box's "5 AI Bets" Framework
    (21:30) Will AI Replace SOC Analysts? Why Teams Are Embracing the Change
    (23:50) Continuous Pen Testing & Evaluating AI Startups
    (26:30) The Biggest Pitching Mistake Startups Make with CISOs
    (30:20) Shadow AI: When the Business Starts Building Its Own Apps
    (37:30) Personalized Software: The LEGO Brick Model of Security Agents
    (41:50) Fun Questions: Crocodile Jerky and Tim Tam Slams
    (44:20) Hobbies & Family: Raising Two Boys and Surviving the Chaos
    (45:30) Favorite Restaurant: Meyhouse (Turkish Cuisine in Palo Alto)

    Resources discussed during the episode:
    Heather's LinkedIn Newsletter
    Heather's post RSA blog
    5 Big AI Bets
    https://blog.box.com/big-cybersecurity-bets-part1
    https://blog.box.com/big-cybersecurity-bets-part-2
    https://blog.box.com/big-security-bet-3-ai-redefines-vulnerability-management
    https://blog.box.com/5-big-cybersecurity-bets-4-scaling-security-architecture-ai-first-world
    https://blog.box.com/5-big-cybersecurity-bets-continuous-adversarial-validation
  • Anthropic's Project Mythos: Why the "Zero-Day Machine" is Terrifying the Security Industry

    2026-04-18 | 1h 3 mins.
    In this episode, Ashish and Caleb discuss the internet-breaking preview of Project Mythos, an unreleased AI model from Anthropic that has shown an unprecedented, terrifying ability to reason through code and automatically generate working zero-day exploits. We dive into the conversations surrounding Project Glasswing, Anthropic's initiative to share this model with select partners (like Palo Alto and CrowdStrike) before public release, allowing them a 100-day window to patch critical vulnerabilities. Caleb explains why this level of AI reasoning isn't just hype: early testers are reporting that Mythos is not only finding zero-days but actively detecting dormant intrusions within their own networks. If you are a CISO or security practitioner, this episode covers it all. We discuss why the traditional 30-day patch cycle is dead, why "assuming breach" is now mandatory, and why 60% of legacy security vendors might not survive this shift.

    Questions asked:
    (00:00) Introduction: The Hype Around Anthropic's Project Mythos
    (04:00) What is Project Mythos? (Reasoning and Finding Zero-Days)
    (06:50) Project Glasswing: The 100-Day Partner Patch Window
    (08:30) The Controversy: Did Anthropic Pick the Right Partners?
    (12:30) Why Anthropic Doesn't Have the Compute to Scan the Whole Internet
    (15:10) The Insider View: Mythos is Finding Dormant Intrusions
    (16:30) Why 60% of Security Vendors Will Go Away
    (19:30) Hype vs. Reality: GeoHot's Comments on Small Models
    (21:30) Eliminating False Positives in Static Code Analysis
    (23:50) The Zero-Day Clock: Time to Exploit Drops to Under 6 Hours
    (25:50) The Ethics of Zero-Days: Should Mythos Be Released at All?
    (34:30) The CISO Action Plan: Speeding Up Patching (Hours vs. Days)
    (44:50) The 3rd Party SaaS Problem: What to Do When You Can't Patch
    (46:10) "Assume Breach": Why Deception (Honeypots) is the New Priority
    (57:30) Empowering Non-Tech Teams to Build Detections
    (01:02:10) AI Makes Cheesy "Hacker Movies" a Reality

    Resources mentioned during the episode:
    Assessing Claude Mythos Preview’s cybersecurity capabilities
    Project Glasswing
    Zero Day Clock
  • Are AI Security Startups Faking It? How to Separate Signal from Noise

    2026-04-15 | 47 mins.
    With over 70 startups claiming to have built the perfect "AI SOC Analyst" or "AI Threat Hunter," how do you separate the real products from the vaporware? Recorded live at Decibel RSAC Founder Festival, Ashish and Caleb hosted a heated panel with Edward Wu (Founder & CEO, Dropzone AI) and Lou Manousos (Co-Founder & CEO, Ent AI). The group debates the controversial claim that AI can provide 100% threat prevention and exposes the dirty secret of the industry: many AI startups are "cheating" by hiding human analysts behind their software. If you are a CISO or security practitioner navigating the vendor floor at RSA, this episode provides a BS-detector framework. Learn why an AI wrapper around Claude Code isn't enough, why "consistency" is the ultimate test for AI agents, and how to verify if a startup actually has real-world, paying enterprise deployments (and not just friendly design partners).

    Questions asked:
    (00:00) Introduction: Live with Decibel
    (01:30) Meet the Panel: Edward Wu (Dropzone) & Lou Manousos (Ent)
    (03:40) The Great Debate: Has the Industry Given Up on Prevention?
    (05:50) What Has AI Actually Solved? (Repetitive Work vs. Context)
    (09:00) How to Spot BS on the RSA Show Floor
    (11:30) Defining an AI Agent: Chatbots vs. Threat Hunters
    (13:40) The Claude Code Problem: Is Your Product Just a Wrapper?
    (16:50) The 80% Accuracy Trap & Why Consistency is Key
    (21:30) Proving ROI: Evaluating AI Agents Like Human Employees
    (24:50) The Dirty Secret: Humans Hiding Behind AI Startups
    (26:30) Spotting Fake Customer Logos
    (28:30) Audience Q&A: Scaling the SOC vs. Replacing Humans
    (36:10) Forward Deployed Engineering & Personalized Software
    (40:30) Reimagining Security Architecture from the Inside Out
    (43:30) How Ent Detects Remote Workers Outsourcing Their Jobs
    (45:30) Final Thoughts: Asking Vendors for Real Proof Points
About AI Security Podcast
The #1 source for AI Security insights for CISOs and cybersecurity leaders. Hosted by two former CISOs, the AI Security Podcast provides expert, no-fluff discussions on the security of AI systems and the use of AI in Cybersecurity. Whether you're a CISO, security architect, engineer, or cyber leader, you'll find practical strategies, emerging risk analysis, and real-world implementations without the marketing noise. These conversations are helping cybersecurity leaders make informed decisions and lead with confidence in the age of AI.