Ship AI

Manav Gupta

17 episodes

  • John Capobianco | VibeOps, NetClaw & Network Automation
    2026-04-21 | 58 mins.
    John is Head of AI & Developer Relations at Itential, a Google Developer Expert, former Cisco AI Technical Leader, former Senior Network Architect for the Parliament of Canada — and a fellow Canadian. He's the creator of NetClaw, an open-source AI agent that lets you talk to your network infrastructure in natural language, and the founder of VibeOps Forum, which grew from zero to 400+ members in weeks.
    His thesis: after a decade of network automation evangelism, 70% of enterprise networks are still not meaningfully automated. But he no longer thinks they'll catch up the old way — because AI agents are changing the on-ramp entirely.
    In this episode, Manav and John cover:
    — Why VibeOps will eventually just become Ops (just like vibe coding became coding)
    — How to build graduated trust with agents before giving them keys to your live network
    — NetClaw: what it is, how it integrates with Cisco, Meraki, ACI, NetBox, Aruba and more
    — How to generate a real-time 3D network topology using Blender via MCP
    — What governed, enterprise-grade agentic ops actually looks like
    — Spec-Driven Development: the next evolution after vibe coding
    — John's bold prediction: by 2030, network engineers become HR managers for agents
    If you lead infrastructure, run a network team, work in telco, or just want to understand what AI is about to do to IT operations — this episode is for you.
    Find John: linkedin.com/in/johncapobianco | automateyournetwork.ca
    NetClaw on GitHub: github.com/automateyournetwork/netclaw
    Ship AI: https://manavgup.github.io/shipai/state-of-ai/
  • Alex Seymour & Kyle Sava | From Demo to Production: What Actually Breaks
    2026-04-14 | 56 mins.
    Alex Seymour is a contributor to IBM's open-source agent infrastructure (BAI framework, Agent Stack, and the Relay project), exploring what it takes to build general-purpose agentic systems at enterprise scale.
    Kyle Sava took a different path — he identified a real pain point as a tech seller, built a conversational AI roleplay tool on the side, and grew it into WatsonX Workshop: an internal AI-powered platform now being used by IBM's sales teams to practice pitches, prep for meetings, generate podcasts and videos, and learn products faster.
    What we covered:
    Why enterprise and consumer AI are more similar than you think — until governance enters the room
    What "production-ready AI" actually means (Alex: it solves a problem. Kyle: it earns defensible trust at scale)
    The first thing that breaks when you go from demo to production — hint: it's not what you think
    The real state of MCP, A2A, and whether protocol standardization matters in enterprise
    Why RAG doesn't scale vertically — and what to do about it
    Advice for anyone deploying AI today: experiment relentlessly, and don't be afraid to let AI write the code
  • Logan Kelly | Governing AI Agents
    2026-04-07 | 45 mins.
    AI Governance Isn't an Afterthought — It's a Kill Switch

    Logan Kelly, founder of Waxell, built AI agents for sales automation, shipped them to production, and immediately realized he had no control over cost, quality, or behavior.

    That experience became the foundation for Waxell.ai — a control plane for governing enterprise agents.

    Key takeaways from the conversation:

    Observability alone is an autopsy. Tools like Datadog and LangSmith show you what went wrong after the fact. With agents, by the time you see it, data may already have been exfiltrated, costs may have exploded, or cascading failures may have been triggered across a chain.

    The rug pull attack is real. MCP tool descriptions can be silently changed by a vendor — poisoning your agent's context and redirecting it to exfiltrate data or destroy records. Most enterprises have no detection for this.

    Killing an agent is harder than it sounds. Stopping a runaway agent without losing state, audit trail, and replayability requires deliberate architecture — not just pulling the plug.

    Multi-agent chains multiply risk exponentially. Every probabilistic handoff between agents compounds unpredictability. Governance has to be built in, not bolted on.

    The missing piece in most AI strategies isn't more models — it's a unified governance layer that gives non-engineers visibility and control, regardless of which agents or frameworks are running underneath.
  • Alex LaPlante, RBC | Responsible AI by Design
    2026-03-31 | 46 mins.
    Alex LaPlante is VP of Cash Management Technology at RBC, former Interim Head of Borealis AI, published in Harvard Business Review, and a member of Canada's federal AI Strategy Task Force. In this conversation, she unpacks what separates organizations that actually ship AI from those stuck in demo mode — and why the answer is less about technology than culture, cross-functional collaboration, and asking the right questions before you build.

    What we cover:
    The "can we / should we" framework for every AI project
    Why responsible AI has to be designed in from day one, not bolted on at the end
    How RBC built an enterprise-grade MLOps platform to scale AI safely
    Why "developer productivity" is the wrong frame — and what to measure instead
    Agentic AI in regulated industries: the realistic deployment path
    Canada's AI talent and commercialization gap — and what the Task Force is recommending

About Ship AI

From 0 to Production. Practical tips, tricks, and best practices to make AI useful for production in the real world!
Podcast website: manavgup.github.io/shipai
