
Unsupervised Learning with Jacob Effron

by Redpoint Ventures

92 episodes

  • Ep 85: Has AI Infra Stabilized, FM Vibe Shift, & What's Next for Coding Agents

    2026-04-23 | 54 mins.
    This episode is a wide-ranging conversation between Jacob and Swyx (Shawn Wang), an AI engineer, podcaster, and now operator at Cognition, who sits at a uniquely informed intersection of builder, investor, and community organizer in the AI world. The two cover the current state of the AI engineering zeitgeist: from the stabilization of agent infrastructure and the surprising stickiness of Claude Code, to the competitive dynamics of the AI coding wars, the rise of open models, the threat to traditional SaaS, and the frontier questions around world models, memory, and what it actually means for AI to "understand" something. The episode is grounded in practitioner-level candor, with Swyx offering real takes from running AIE conferences, working inside Cognition, and thinking deeply about what the next wave of AI-native software development looks like.

     

    (0:00) Intro

    (1:17) What the Top AI Engineers Are Thinking About

    (2:13) Has AI Infra Finally Stabilized?

    (6:39) When Does Doing RL In-House Make Sense?

    (11:26) Why Selling Dev Tools to Agents is Different

    (17:18) AI Coding Wars

    (29:04) Consumer AI Plateau

    (30:22) Codex vs Claude Code

    (44:52) Future of Open Models

     

    With your co-hosts: 

    @jacobeffron 

    - Partner at Redpoint, Former PM Flatiron Health 

    @patrickachase 

    - Partner at Redpoint, Former ML Engineer LinkedIn 

    @ericabrescia 

    - Former COO GitHub, Founder Bitnami (acq'd by VMware)

    @jordan_segall 

    - Partner at Redpoint
  • Ep 84: OpenAI's Chief Scientist on Continual Learning Hype, RL Beyond Code, & Future Alignment Directions

    2026-04-09 | 58 mins.
    Jakub Pachocki, OpenAI's Chief Scientist, sits down with Jacob to cover the full arc of where AI research stands today and where it's headed. The conversation spans the explosive growth of coding agents and what it signals about near-term AI capability, the use of math and physics benchmarks as proxies for general intelligence, how reinforcement learning is being extended beyond easily-verified domains toward longer-horizon tasks, and what it means to run a research organization at the precise moment the models themselves are starting to accelerate the research. Jakub shares a candid take on the competitive landscape, why chain-of-thought monitoring is one of the most promising tools in the alignment toolkit, and — with unusual directness — why the concentration of power enabled by highly automated AI organizations is a societal problem that doesn't yet have an obvious solution.

     

    (0:00) Intro

    (1:53) Research Intern Capability Timelines

    (4:59) Math Breakthroughs

    (7:59) RL Beyond Verifiable Tasks

    (12:32) RL vs In-Context

    (19:01) Allocating Compute Internally

    (28:18) AI for Science

    (31:40) Pattern Matching

    (33:23) Solving the Hardest Math Problems

    (37:40) Chain of Thought Monitoring

    (44:33) Generalization and Value Alignment in Models

    (47:57) Inside OpenAI

    (51:55) Quickfire

     

    With your co-hosts: 

    @jacobeffron 

    - Partner at Redpoint, Former PM Flatiron Health 

    @patrickachase 

    - Partner at Redpoint, Former ML Engineer LinkedIn 

    @ericabrescia 

    - Former COO GitHub, Founder Bitnami (acq'd by VMware)

    @jordan_segall 

    - Partner at Redpoint
  • Ep 83: Owning the System of Record, AI-Native Org Charts, & Why ITSM Is the Most Vulnerable Legacy Category

    2026-04-02 | 54 mins.
    Serval is one of the fastest-growing AI-native enterprise software companies right now, and this episode is a rare inside look at the deliberate architectural, go-to-market, and talent decisions behind that growth. Jake Stauch breaks down why he made the contrarian bet to build a full system of record rather than layer on top of existing tools, why ITSM is more vulnerable to AI disruption than CRM, ERP, or HRIS, and how Serval is winning Fortune 500 deals against a $14B incumbent with a fraction of the resources. Beyond the product, Jake gets into the organizational decisions that underpin Serval's velocity — why recruiting is the #1 job of every employee, how to prevent talent bar decay as you scale from 8 to 200 people, and how the role of the manager is shifting as ICs own more scope than ever. Threading it all together is a founder's honest account of what it means to build a horizontal software company when the models are improving, the infrastructure is shifting, and the window to displace a legacy incumbent is open but won't stay open forever.

     

    With your co-hosts: 

    @jacobeffron 

    - Partner at Redpoint, Former PM Flatiron Health 

    @patrickachase 

    - Partner at Redpoint, Former ML Engineer LinkedIn 

    @ericabrescia 

    - Former COO GitHub, Founder Bitnami (acq'd by VMware)

    @jordan_segall 

    - Partner at Redpoint
  • Ep 82: Behind Legora's $550M Raise, Model Competition, Doubling Revenue Every Quarter, & US Expansion

    2026-03-11 | 54 mins.
    Max Jungestål, CEO of Legora, joins Jacob Effron and Logan Bartlett to discuss the company's $550M Series D and share a candid account of what building an AI-native company at speed actually looks like from the inside.

    Max argues that the AI application layer requires a fundamentally different operating model than traditional SaaS, one built on low ego, constant reinvention, and a willingness to watch nine months of work get washed away by a model update. He walks through how step-function improvements in the underlying models, particularly Opus 4.5 and 4.6, have repeatedly forced Legora to rebuild core product features from scratch, and why he sees that as a feature, not a bug.

    On the legal industry, Max offers a ground-level view of how AI is actually diffusing through law firms, less through top-down mandates and more through competitive pressure between firms and, increasingly, from enterprise clients demanding efficiency from their outside counsel. He pushes back on the viability of AI-native law firms, dismisses outcome-based pricing as harder than it looks, and makes the case for why foundation model competition creates tailwinds rather than threats for a company with Legora's depth.

    The episode closes with a detailed look at the US expansion strategy, including the deliberate cultural decisions, like flying all New York hires to Stockholm for onboarding, that Max believes are the real source of Legora's compounding advantage.

     

    (0:00) Intro

    (1:16) Legora's Series D Story

    (3:24) Why You Need Low Ego to Build in AI

    (5:58) From 60% to 100% Accuracy in One Summer

    (7:04) Law Firm Economics Shift

    (14:09) Pricing Seats vs. Outcomes

    (18:31) Why Foundation Models Entering Legal Helps Legora

    (30:10) Convincing a 75-Year-Old Partner to Go All In

    (33:02) Hiring Legal Engineers

    (34:32) Running an AI-Native Company

    (35:57) The Opus 4.5 Christmas Breakthrough

    (40:02) Building With Customers

    (44:01) All In On US Expansion

    (51:22) Stockholm Startup DNA

     

    With your co-hosts: 

    @jacobeffron 

    - Partner at Redpoint, Former PM Flatiron Health 

    @patrickachase 

    - Partner at Redpoint, Former ML Engineer LinkedIn 

    @ericabrescia 

    - Former COO GitHub, Founder Bitnami (acq'd by VMware)

    @jordan_segall 

    - Partner at Redpoint
  • Ep 81: Ex-OpenAI Researcher on Why He Left, His Honest AGI Timeline, & The Limits of Scaling RL

    2026-01-29 | 1h 2 mins.
    This episode features Jerry Tworek, a key architect behind OpenAI's breakthrough reasoning models (o1, o3) and Codex, discussing the current state and future of AI. Jerry explores the real limits and promise of scaling pre-training and reinforcement learning, arguing that while these paradigms deliver predictable improvements, they're fundamentally constrained by data availability and struggle with generalization beyond their training objectives. He reveals his updated belief that continual learning—the ability for models to update themselves based on failure and work through problems autonomously—is necessary for AGI, as current models hit walls and become "hopeless" when stuck. Jerry discusses the convergence of major labs toward similar approaches driven by economic forces, the tension between exploration and exploitation in research, and why he left OpenAI to pursue new research directions. He offers candid insights on the competitive dynamics between labs, the focus required to win in specific domains like coding, what makes great AI researchers, and his surprisingly near-term predictions for robotics (2-3 years) while warning about the societal implications of widespread work automation that we're not adequately preparing for.
     
    (0:00) Intro
    (1:26) Scaling Paradigms in AI
    (3:36) Challenges in Reinforcement Learning
    (11:48) AGI Timelines
    (18:36) Converging Labs
    (25:05) Jerry’s Departure from OpenAI
    (31:18) Pivotal Decisions in OpenAI’s Journey
    (35:06) Balancing Research and Product Development
    (38:42) The Future of AI Coding
    (41:33) Specialization vs. Generalization in AI
    (48:47) Hiring and Building Research Teams
    (55:21) Quickfire
     
    With your co-hosts: 
    @jacobeffron 
    - Partner at Redpoint, Former PM Flatiron Health 
    @patrickachase 
    - Partner at Redpoint, Former ML Engineer LinkedIn 
    @ericabrescia 
    - Former COO GitHub, Founder Bitnami (acq'd by VMware)
    @jordan_segall 
    - Partner at Redpoint


About Unsupervised Learning with Jacob Effron

We probe the sharpest minds in AI in search of the truth about what's real today, what will be real in the future, and what it all means for businesses and the world. If you're a builder, researcher, or investor navigating the AI world, this podcast will help you deconstruct and understand the most important breakthroughs and see a clearer picture of reality. Follow this show and consider enabling notifications to stay up to date on our latest episodes. Unsupervised Learning is a podcast by Redpoint Ventures, an early-stage venture capital fund that has invested in companies like Snowflake, Stripe, and Mistral. Hosted by Redpoint investor Jacob Effron alongside Patrick Chase, Jordan Segall and Erica Brescia.
