
Cloud Security Podcast by Google

Anton Chuvakin

268 episodes

  • EP266 Resetting the SOC for Code War: Allie Mellen on Detecting State Actors vs. Doing the Basics

    2026-03-09 | 33 mins.
    Guest:
    Allie Mellen, Principal Analyst @ Forrester, author of "Code War: How Nations Hack, Spy, and Shape the Digital Battlefield"
    Topics:
    Your book focuses on the US, China, and Russia. When you were planning the book did you also want to cover players like Israel, Iran, and North Korea?
    Most of our listeners are migrating to or operating heavily in the cloud. As nations refine their "digital battlefield" strategies, does the "shared responsibility model" actually hold up against a nation-state actor?
    How does a company's detection strategy need to change when the adversary isn't a teenager looking for a ransom, but a state-funded group whose goal might be long-term persistence or subtle data manipulation? How should people allocate their resources to defending against both of these threats? 
    How afraid are you of a "bad guy with AI" scenario? Mild anxiety or apocalyptic fears?
    Do you see AI primarily helping "Tier 2" nations close the capability gap with the "Big Three," or does it just further cement the dominance of the nations that own the underlying compute and models?
    You've spent a lot of time as an analyst looking at how enterprises buy and run security tech. For a CISO at (say) a mid-tier logistics company, should "nation-state cyberattacks" even be on their threat model? Or is worrying about the spies just a form of security theater when they haven't even solved basic credential theft yet?
    Resources:
    Video version
    "Code War: How Nations Hack, Spy, and Shape the Digital Battlefield" by Allie Mellen
    Allie Mellen's Substack
    The source for the original "air defense on the roof" argument (2008)
    EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking
    EP256 Rewiring Democracy & Hacking Trust: Bruce Schneier on the AI Offense-Defense Balance
    EP156 Living Off the Land and Attacking Critical Infrastructure: Mandiant Incident Deep Dive
    "Disrupting the first reported AI-orchestrated cyber espionage campaign" report
  • EP265 Beyond Shadow IT: Unsanctioned AI Agents Don't Just Talk, They Act!

    2026-03-02 | 28 mins.
    Guest:
    Alastair Paterson, CEO and co-founder @ Harmonic Security
    Topics:
    Harmonic Security focuses on securing generative AI in use. Can you walk us through a real, anonymized example of a data leak caused by employee AI usage that your platform has identified?
    AI governance gets thrown around a lot. What does this mean in the context of Shadow AI? How should organizations be thinking about governing AI in light of upcoming AI regulations in the US and in the EU?
    If we generally agree that employees are using AI tools before they are sanctioned, how can organizations control this? Network, API, endpoint?
    Many organizations struggle with the "ban vs. embrace" debate for generative AI. Based on your experience, what's a compelling argument for moving from a blanket ban to a managed, secure adoption model? Can you share a success story where this approach demonstrably reduced risk?
    The term "shadow AI" is often used interchangeably with "shadow IT" (but for AI-powered applications), yet you've highlighted that AI is a different beast. What is the single biggest distinction between managing the risk of unsanctioned AI tools versus unsanctioned IT applications?
    Looking forward, where do you see the biggest risks in the evolution of shadow AI? For instance, will the next threat be from highly specialized AI agents trained on proprietary data, or from the rapid proliferation of new, unmonitored open-source models?
    Given the speed of change in this space, what's one piece of advice you'd give to a CISO today who is just beginning to get a handle on their organization's shadow AI problem?
    Resources:
    Video version
    Harmonic Security research
    Shadow AI Strikes Back: Enterprise AI Absent Oversight in the Age of Gen AI blog
    Shadow Agents: A New Era of Shadow AI Risk in the Enterprise blog (RSA 2026 presentation coming!)
    Spotlighting 'shadow AI': How to protect against risky AI practices blog
    EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side (aka "dirty bomb episode")
    A Conversation with Alastair Paterson from Harmonic Security video
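As a rough illustration of the "network" control surface mentioned in the questions above, here is a minimal, hypothetical sketch that counts per-user requests to known generative-AI domains in a web proxy log. The domain list, log format, and function name are all invented for illustration; real shadow-AI discovery would need a maintained catalog of AI services and far richer telemetry.

```python
# Hypothetical sketch: surfacing "shadow AI" usage from web proxy logs.
# The domain list and (user, domain) log format are assumptions for
# illustration, not any real product's schema.
from collections import Counter

# Illustrative, not exhaustive, list of generative-AI domains.
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_report(proxy_log):
    """Count requests per user to known gen-AI domains.

    proxy_log: iterable of (user, domain) tuples.
    """
    hits = Counter()
    for user, domain in proxy_log:
        if domain in GENAI_DOMAINS:
            hits[user] += 1
    return dict(hits)

log = [
    ("alice", "chat.openai.com"),
    ("alice", "intranet.example.com"),
    ("bob", "claude.ai"),
    ("alice", "claude.ai"),
]
print(shadow_ai_report(log))  # {'alice': 2, 'bob': 1}
```

A report like this only answers "who is talking to which AI service"; the episode's point is that agentic tools also *act*, so endpoint and API-level controls would still be needed on top of network visibility.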
  • EP264 Measuring Your (Agentic) SOC: Two Security Leaders Walk into a Podcast

    2026-02-23 | 34 mins.
    Guests:
    Alexander Pabst, Global Deputy CISO, Allianz SE
    Michael Sinno, Director of D&R, Google
    Topics:
    We've spent decades obsessed with MTTD (Mean Time to Detect) and MTTR (Mean Time to Respond). As AI agents begin to handle the bulk of triage at machine speed, do these metrics become "vanity metrics"? If an AI resolves an alert in seconds, does measuring the "mean" still tell us anything about the health of our security program, or should we be looking at "Time to Context" instead?
    You mentioned the Maturity Triangle. Can you walk us through that framework? Specifically, how does AI change the balance between the three points of that triangle—is it shifting us from a "People-heavy" model to something more "Engineering-led," and where does the "Measurement" piece sit?
    Google is famous for its "Engineering-led" approach to D&R. How is Google currently measuring the success of its own internal D&R program? Specifically, how are you quantifying "Toil Reduction"? Are we measuring how many hours we saved, or are we measuring the complexity of the threats our humans are now free to hunt?
    Toil reduction is a laudable goal for the team members, but what are the metrics we track and report upward to document the overall improvement in D&R for Google's board?
    When you talk to your board about the success of AI in your security program, what are the 2 or 3 "Golden Metrics" that actually move the needle for them? How do you prove that an AI-driven SOC is actually better, not just faster?
    We often talk about AI as an "assistant," but we're moving toward Agentic SOCs. How should organizations measure the "unit economics" of their SOC? Should we be tracking the ratio of AI-handled vs. Human-handled incidents, and at what point does a high AI-handle rate become a risk rather than a success?
    Resources:
    Video version
    EP252 The Agentic SOC Reality: Governing AI Agents, Data Fidelity, and Measuring Success
    EP238 Google Lessons for Using AI Agents for Securing Our Enterprise
    EP91 "Hacking Google", Op Aurora and Insider Threat at Google
    EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
    EP189 How Google Does Security Programs at Scale: CISO Insights
    EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil
    The SOC Metrics that Matter…or Do They? blog
    An Actual Complete List Of SOC Metrics (And Your Path To DIY) blog
    Achieving Autonomic Security Operations: Why metrics matter (but not how you think) blog
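For concreteness, the metrics debated in this episode (MTTD, MTTR, and the ratio of AI-handled to human-handled incidents) reduce to simple arithmetic over incident records. The sketch below uses assumed field names and invented sample data; it is not any vendor's or Google's schema.

```python
# Hedged sketch: MTTD, MTTR, and AI-handle rate computed from a
# hypothetical incident record format. Field names and timestamps
# (minutes, for simplicity) are assumptions for illustration.
from statistics import mean

incidents = [
    # occurred / detected / resolved are minutes on a shared clock.
    {"occurred": 0,  "detected": 30, "resolved": 90,  "handled_by": "ai"},
    {"occurred": 10, "detected": 20, "resolved": 200, "handled_by": "human"},
    {"occurred": 5,  "detected": 65, "resolved": 125, "handled_by": "ai"},
]

# Mean Time to Detect: average gap between occurrence and detection.
mttd = mean(i["detected"] - i["occurred"] for i in incidents)
# Mean Time to Respond: average gap between detection and resolution.
mttr = mean(i["resolved"] - i["detected"] for i in incidents)
# Share of incidents closed by AI agents rather than humans.
ai_rate = sum(i["handled_by"] == "ai" for i in incidents) / len(incidents)

print(f"MTTD={mttd:.1f}m MTTR={mttr:.1f}m AI-handled={ai_rate:.0%}")
```

The episode's caution applies directly: once AI closes most alerts in seconds, the mean collapses and stops discriminating, which is why the guests argue for complementary measures (such as "time to context" or the complexity of what humans still handle) rather than averages alone.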
  • EP263 SOC Refurbishing: Why New Tools Won't Fix Broken Processes (Even With AI)

    2026-02-16 | 32 mins.
    Guest:
    Daniel Lyman, VP of Threat Detection and Response, Fiserv
    Topics:
    What is the right way for people to bridge the gap and translate executive dreams and board goals into the reality of life on the ground?
    How do we talk to people who think they have "transformed" their SOC simply by buying a better, shinier product (like a modern SIEM) while leaving their old processes intact?
    What are the specific challenges and advantages you've seen with a federated SOC versus a centralized one? What does a "federated" or "sub-SOC" model actually mean in practice?
    Why is the message that "EDR doesn't cover everything" so hard for some people to hear? Is this obsession with EDR a business decision or technology debt?
    How do you expect AI to change the calculus around data centralization versus data federation?
    What is your favorite example of telemetry that is useful, but usually excluded from a SIEM?
    What are the Detection and Response organizational metrics that you think are most valuable?
    Is the continued use of Excel an issue of tooling, laziness, or just because it is a fundamentally good way to interact with a small database?
    Resources:
    Video version
    "In My Time of Dying" book
    EP258 Why Your Security Strategy Needs an Immune System, Not a Fortress with Royal Hansen
    EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective
    The Gravity of Process: Why New Tech Never Fixes Broken Process and Can AI Change It? blog
  • EP262 Freedom, Responsibility, and the Federated Guardrails: A New Model for Modern Security

    2026-02-09 | 28 mins.
    Guest:
    Alex Shulman-Peleg, Global CISO at Kraken
    Topics:
    You mentioned that centralized security can't work anymore. Can you elaborate on the key changes—driven by cloud, SaaS, and AI—that have made this traditional model unsustainable for a modern organization?
    Why do some organizations persist with a centralized, top-down approach to security, despite that?
    What do you mean by "Freedom, Responsibility and distributed security"? 
    Can you explain the difference between "centralized security" and what you define as "security with distributed ownership"? Is this the same as "federated"?
    In our conversation you mentioned "cloud- and AI-native." What do you mean by this (especially "AI-native"), and how is it changing your approach to security?
    You introduce the concept of "security as quality," suggesting that a security-unaware developer is essentially a bad software developer. How do you shift the culture and internal metrics to make security an inherent quality standard, rather than a separate, compliance-driven checklist?
    You likened the central security team's new role to a "911 emergency service." Beyond incident response, what stays central no matter what, and how does the central team successfully influence the security posture of the entire organization without being directly responsible for the day-to-day work?
    Resources:
    Video version
    EP129 How CISO Cloud Dreams and Realities Collide
    EP258 Why Your Security Strategy Needs an Immune System, Not a Fortress with Royal Hansen
    EP212 Securing the Cloud at Scale: Modern Bank CISO on Metrics, Challenges, and SecOps

About Cloud Security Podcast by Google

Cloud Security Podcast by Google focuses on security in the cloud, delivering security from the cloud, and all things at the intersection of security and cloud. Of course, we will also cover what we are doing in Google Cloud to help keep our users' data safe and workloads secure. We're going to do our best to avoid security theater, and cut to the heart of real security questions and issues. Expect us to question threat models and ask if something is done for the data subject's benefit or just for organizational benefit. We hope you'll join us if you're interested in where technology overlaps with process and bumps up against organizational design. We're hoping to attract listeners who are happy to hear conventional wisdom questioned, and who are curious about what lessons we can and can't keep as the world moves from on-premises computing to cloud computing.