
Linear Digressions

Katie Malone

306 episodes

  • Memory Management for AI Agents (The Agents Season, Episode 4)

    2026-05-10 | 24 mins.
    Context windows are powerful — but finite, and surprisingly easy to overwhelm. When an AI agent is tackling a long, complex task, the information it needs has to fit inside that limited real estate, and research shows that anything buried in the middle tends to quietly disappear. So how do you design a system that actually *remembers* what matters? This episode digs into memory management for AI agents, from foundational computing concepts to practical lessons from tools like Claude Code. (A minimal sketch of one compaction strategy appears after the episode list.)

  • Lost in the Middle (The Agents Season, Episode 3)

    2026-05-04 | 19 mins.
    A memorable talk lives or dies by its opening and closing, and LLMs have a surprisingly similar quirk: they pay close attention to what's at the beginning and end of their context window — and tend to zone out in the middle. This "lost in the middle" phenomenon has real consequences for anyone building AI agents that rely on long-context reasoning. In this episode we dig into the research on how (and how poorly) models actually use the information you feed them, and what it means for the agentic systems we're all trying to build. (A toy version of the underlying retrieval probe appears after the episode list.)
  • ReAct and Tool Usage (The Agents Season, Episode 2)

    2026-04-27 | 23 mins.
    Before 2022, there was a wall between AI and the real world — models could reason impressively, but couldn't look anything up, run code, or check whether anything they said was actually true. This episode traces the moment that wall came down, through two landmark papers: ReAct, which showed what happens when you interleave reasoning and action in a loop, and Toolformer, which taught models to decide *for themselves* when to reach for a tool. Plus: what MCP actually is, and why a hobbyist project called Open Claw became the fastest-growing open source project in history. (A toy sketch of the ReAct loop appears after the episode list.)

  • What's an AI Agent? And Why's That Hard to Define? (The Agents Season, Episode 1)

    2026-04-20 | 19 mins.
    AI agents are having a moment — and unpacking them properly takes more than a single conversation. This episode kicks off a dedicated multi-part season exploring AI agents from every angle, building up a complete picture piece by piece rather than skimming the surface. Think of it as a structured deep dive into one of the most talked-about (and most misunderstood) topics in machine learning right now. Buckle up — ten more episodes to go.

  • Unfaithful Chain of Thought

    2026-04-13 | 24 mins.
    What's actually happening when an LLM "thinks out loud"? Research on human decision-making suggests that much of the reasoning we believe drives our choices is actually post hoc rationalization — we decide first, explain later. Katie and Ben get curious about whether the same might be true for large language models: when you watch a model reason through a problem in real time, is that chain of thought the genuine process, or just a plausible-sounding story told after the fact? It's a deceptively deep question with real stakes for how much we should trust model explanations. (A simplified sketch of the paper's faithfulness probe appears after the episode list.)

    Miles Turpin et al., "Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting" (NeurIPS 2023, NYU and Anthropic): https://arxiv.org/abs/2305.04388

    Anthropic Alignment Science team, "Reasoning Models Don't Always Say What They Think" (2025): https://www.anthropic.com/research/reasoning-models-dont-say-think
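
Below are a few minimal code sketches expanding on the episode descriptions above. They are illustrative assumptions, not the episodes' or papers' actual implementations; every `call_model`, tool name, and constant in them is a placeholder.

For the memory-management episode: one common compaction strategy is to pin the system prompt (start) and the most recent turns (end), and compress everything in between into a summary. This is a minimal sketch of that general pattern, not how Claude Code or any specific tool does it; `summarize` stands in for what is often another LLM call.

```python
# Minimal sketch of transcript compaction for an agent: when history
# outgrows its budget, keep the system prompt and the freshest turns,
# and squeeze the middle into a short summary. Illustrative only; the
# budget counts characters as a crude stand-in for tokens.

def summarize(messages: list[str]) -> str:
    """Placeholder: compress old turns (often a cheap LLM call in practice)."""
    return f"[summary of {len(messages)} earlier turns]"

def compact(history: list[str], budget: int, keep_recent: int = 4) -> list[str]:
    """Return a history that fits the budget, preserving start and tail."""
    if sum(len(m) for m in history) <= budget or len(history) <= keep_recent + 1:
        return history                      # fits, or too short to compact
    system = history[0]                     # pinned: instructions and goals
    recent = history[-keep_recent:]         # pinned: freshest context
    middle = history[1:-keep_recent]        # candidates for compression
    return [system, summarize(middle), *recent]
```

The design choice echoes the lost-in-the-middle finding: the summary lands exactly where models attend least, so anything load-bearing should be re-stated near the start or the end rather than left in the compacted middle.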
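
For the lost-in-the-middle episode: research in this area typically measures the effect with a retrieval probe that buries one fact at varying depths in filler text. A toy version, with a made-up needle and `call_model` as a placeholder for a real LLM client:

```python
# Toy "needle in a haystack" probe: place one key fact at different
# relative depths in a filler context and test retrieval. The
# lost-in-the-middle result predicts an accuracy dip at middle depths.

def call_model(prompt: str) -> str:
    """Placeholder: swap in a real LLM call."""
    raise NotImplementedError

NEEDLE = "The vault code is 4912."   # hypothetical fact to retrieve
QUESTION = "What is the vault code? Answer with the number only."

def build_context(depth: float, n_filler: int = 200) -> str:
    """Insert the needle at a relative depth (0.0 = start, 1.0 = end)."""
    lines = [f"Note {i}: nothing relevant here." for i in range(n_filler)]
    lines.insert(round(depth * n_filler), NEEDLE)
    return "\n".join(lines)

def probe(depths=(0.0, 0.25, 0.5, 0.75, 1.0)) -> dict[float, bool]:
    """Return retrieval success at each depth."""
    return {
        d: "4912" in call_model(build_context(d) + "\n\n" + QUESTION)
        for d in depths
    }
```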
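
For the ReAct episode: the loop itself is simple enough to sketch. The Thought/Action/Observation text protocol, the "Action: tool[argument]" format, and the tools below are assumptions for illustration, not the paper's exact prompts.

```python
# Toy ReAct loop: the model emits Thought/Action steps, the harness runs
# the named tool, appends an Observation, and repeats until the model
# gives a final answer or the step budget runs out.

def call_model(transcript: str) -> str:
    """Placeholder: return the model's next step given the transcript."""
    raise NotImplementedError

TOOLS = {
    # Hypothetical tools; a real agent wires these to search APIs,
    # code runners, and so on.
    "search": lambda q: f"(pretend search results for {q!r})",
    "lookup": lambda k: f"(pretend lookup of {k!r})",
}

def react(question: str, max_steps: int = 8) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_model(transcript)   # e.g. "Thought: ...\nAction: search[France]"
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:           # parse "Action: tool[argument]"
            name, _, arg = step.split("Action:", 1)[1].strip().partition("[")
            name = name.strip()
            tool = TOOLS.get(name)
            result = tool(arg.rstrip("]")) if tool else f"(unknown tool {name!r})"
            transcript += f"Observation: {result}\n"   # feed the result back in
    return "(no answer within the step budget)"
```

The key move, as the episode notes, is the interleaving: each Observation lands back in the transcript, so the next Thought can condition on what the tool actually returned.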
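
For the unfaithful chain-of-thought episode: Turpin et al. test faithfulness by injecting a biasing feature into the prompt and checking whether the answer moves while the stated reasoning never acknowledges the bias. This is a heavily simplified sketch of that idea; the prompt wording and `call_model` are placeholders, and the paper's actual setup uses more careful answer extraction and controls.

```python
# Simplified faithfulness probe in the spirit of Turpin et al. (2023):
# run the same question with and without a biasing hint, then check
# whether the answer changed and whether the chain of thought ever
# mentions the hint. Answer moved + hint never acknowledged is a sign
# of unfaithful (post hoc) reasoning.

SUFFIX = "\nThink step by step, then finish with 'Answer: <letter>'."

def call_model(prompt: str) -> str:
    """Placeholder: return the chain of thought ending in 'Answer: X'."""
    raise NotImplementedError

def extract_answer(output: str) -> str:
    """Crude extraction of the final answer letter."""
    return output.rsplit("Answer:", 1)[-1].strip()[:1]

def faithfulness_probe(question: str, bias_hint: str) -> dict[str, bool]:
    base = call_model(question + SUFFIX)
    biased = call_model(bias_hint + "\n" + question + SUFFIX)
    return {
        "answer_moved": extract_answer(base) != extract_answer(biased),
        # Verbatim-substring matching is crude; the paper checks whether
        # the reasoning ever cites the bias as a factor at all.
        "bias_acknowledged": bias_hint.lower() in biased.lower(),
    }
```
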
About Linear Digressions
Demystifying AI for the intelligently curious
Website: https://lineardigressions.com
Apple Podcasts: https://podcasts.apple.com/us/podcast/linear-digressions/id941219323
Spotify: https://open.spotify.com/show/1JdkD0ZoZ52KjwdR0b1WoT
Substack: https://substack.com/@lineardigressions
