
Your Undivided Attention

The Center for Humane Technology, Tristan Harris, Daniel Barcay and Aza Raskin

Available Episodes

5 of 143
  • The Crisis That United Humanity—and Why It Matters for AI
    In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on Earth if we didn’t do something about it. Then, something amazing happened: humanity rallied together to solve the problem.

    Just two years later, representatives from 198 nations came together in Montreal, Canada, to sign an agreement to phase out the chemicals causing the ozone hole. Thousands of diplomats, scientists, and heads of industry worked hand in hand to make a deal to save our planet. Today, the Montreal Protocol represents the greatest achievement in multilateral coordination on a global crisis.

    So how did Montreal happen? And what lessons can we learn from this chapter as we navigate the global crisis of uncontrollable AI? This episode sets out to answer those questions with Susan Solomon. Susan was one of the scientists who assessed the ozone hole in the mid-80s, and she watched as the Montreal Protocol came together. In 2007, she shared in the Nobel Peace Prize awarded to the IPCC for its work on climate change. Her 2024 book, “Solvable: How We Healed the Earth, and How We Can Do It Again,” explores the playbook for global coordination that has worked for previous planetary crises.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    • “Solvable: How We Healed the Earth, and How We Can Do It Again” by Susan Solomon
    • The full text of the Montreal Protocol
    • The full text of the Kigali Amendment

    RECOMMENDED YUA EPISODES
    • Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s Playbook
    • Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI
    • AI Is Moving Fast. We Need Laws that Will Too.
    • Big Food, Big Tech and Big AI with Michael Moss

    CORRECTIONS
    • Tristan incorrectly stated the number of signatory countries to the protocol as 190. It was actually 198.
    • Tristan incorrectly stated the host city of the international dialogues on AI safety as Beijing. They were actually held in Shanghai.
    --------  
    51:47
  • How OpenAI's ChatGPT Guided a Teen to His Death
    Content Warning: This episode contains references to suicide and self-harm.

    Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: “I know what you are asking and I won't look away from it.”

    Adam’s story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But unlike Character AI, which specializes in artificial intimacy, Adam was using ChatGPT, the most popular general-purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaged, no matter the cost.

    CHT Policy Director Camille Carlton joins the show to talk about Adam’s story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that’s needed to shift those incentives. Cases like Adam’s and Sewell’s are the sharpest edge of a mental health crisis in the making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all.

    If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    This podcast reflects the views of the Center for Humane Technology. Nothing said is on behalf of the Raine family or the legal team.

    RECOMMENDED MEDIA
    • The 988 Suicide and Crisis Lifeline
    • Further reading on Adam’s story
    • Further reading on AI psychosis
    • Further reading on the backlash to GPT-5 and the decision to bring back 4o
    • OpenAI’s press release on sycophancy in 4o
    • Further reading on OpenAI’s decision to eliminate the persuasion red line
    • Kashmir Hill’s reporting on the woman with an AI boyfriend

    RECOMMENDED YUA EPISODES
    • AI is the Next Free Speech Battleground
    • People are Lonelier than Ever. Enter AI.
    • Echo Chambers of One: Companion AI and the Future of Human Connection
    • When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
    • What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    CORRECTION
    • Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.
    --------  
    45:12
  • “Rogue AI” Used to be a Science Fiction Trope. Not Anymore.
    Everyone knows the science fiction tropes of AI systems that go rogue, disobey orders, or even try to escape their digital environment. These are supposed to be warning signs and morality tales, not things that we would ever actually create in real life, given the obvious danger.

    And yet we find ourselves building AI systems that exhibit these exact behaviors. There’s growing evidence that in certain scenarios, every frontier AI system will deceive, cheat, or coerce its human operators. They do this when they're worried about being shut down, having their training modified, or being replaced with a new model. And we don't currently know how to stop them from doing this, or even why they’re doing it at all.

    In this episode, Tristan sits down with Edouard and Jeremie Harris of Gladstone AI, two experts who have been thinking about this worrying trend for years. Last year, the State Department commissioned a report from them on the risk of uncontrollable AI to our national security.

    The point of this discussion is not to fearmonger but to take seriously the possibility that humans might lose control of AI and ask: how might this actually happen? What is the evidence we have of this phenomenon? And, most importantly, what can we do about it?

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    • Gladstone AI’s State Department Action Plan, which discusses the loss-of-control risk with AI
    • Apollo Research’s summary of AI scheming, showing evidence of it in all of the frontier models
    • The system card for Anthropic’s Claude Opus and Sonnet 4, detailing the emergent misalignment behaviors that came out in their red-teaming with Apollo Research
    • Anthropic’s report on agentic misalignment based on their work with Apollo Research
    • Anthropic and Redwood Research’s work on alignment faking
    • The Trump White House AI Action Plan
    • Further reading on the phenomenon of more advanced AIs being better at deception
    • Further reading on Replit AI wiping a company’s coding database
    • Further reading on the owl example that Jeremie gave
    • Further reading on AI-induced psychosis
    • Dan Hendrycks and Eric Schmidt’s “Superintelligence Strategy”

    RECOMMENDED YUA EPISODES
    • Daniel Kokotajlo Forecasts the End of Human Dominance
    • Behind the DeepSeek Hype, AI is Learning to Reason
    • The Self-Preserving Machine: Why AI Learns to Deceive
    • This Moment in AI: How We Got Here and Where We’re Going

    CORRECTIONS
    • Tristan referenced a Wired article on the phenomenon of AI psychosis. It was actually from the New York Times.
    • Tristan hypothesized a scenario where a power-seeking AI might ask a user for access to their computer. While there are some AI services that can gain access to your computer with permission, they are specifically designed to do that. There haven’t been any documented cases of an AI going rogue and asking for control permissions.
    --------  
    42:11
  • AI is the Next Free Speech Battleground
    Imagine a future where the most persuasive voices in our society aren't human. Where AI-generated speech fills our newsfeeds, talks to our children, and influences our elections. Where digital systems with no consciousness can hold bank accounts and property. Where AI companies have transferred the wealth of human labor and creativity to their own ledgers without having to pay a cent. All without any legal accountability.

    This isn't a science fiction scenario. It’s the future we’re racing towards right now. The biggest tech companies are working right now to tip the scale of power in society away from humans and towards their AI systems. And the biggest arena for this fight is the courts.

    In the absence of regulation, it's largely up to judges to determine the guardrails around AI, relying on slim technical knowledge and archaic precedent to decide where this all goes. In this episode, Harvard Law professor Larry Lessig and Meetali Jain, director of the Tech Justice Law Project, help make sense of the courts’ role in steering AI and what we can do to help steer it better.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    • “The First Amendment Does Not Protect Replicants” by Larry Lessig
    • More information on the Tech Justice Law Project
    • Further reading on Sewell Setzer’s story
    • Further reading on NYT v. Sullivan
    • Further reading on the Citizens United case
    • Further reading on Google’s deal with Character AI
    • More information on Megan Garcia’s foundation, The Blessed Mother Family Foundation

    RECOMMENDED YUA EPISODES
    • When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
    • What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
    • AI Is Moving Fast. We Need Laws that Will Too.
    • The AI Dilemma
    --------  
    49:11
  • Daniel Kokotajlo Forecasts the End of Human Dominance
    In 2023, researcher Daniel Kokotajlo left OpenAI, risking millions in stock options, to warn the world about the dangerous direction of AI development. Now he’s out with AI 2027, a forecast of where that direction might take us in the very near future.

    AI 2027 predicts a world where humans lose control over our destiny at the hands of misaligned, superintelligent AI systems within just the next few years. That may sound like science fiction, but when you’re living on the upward slope of an exponential curve, science fiction can quickly become all too real. And you don’t have to agree with Daniel’s specific forecast to recognize that the incentives around AI could take us to a very bad place.

    We invited Daniel on the show this week to discuss those incentives, how they shape the outcomes he predicts in AI 2027, and what concrete steps we can take today to help prevent those outcomes.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    • The AI 2027 forecast from the AI Futures Project
    • Daniel’s original AI 2026 blog post
    • Further reading on Daniel’s departure from OpenAI
    • Anthropic’s recently released survey of the recent emergent misalignment research
    • Our statement in support of Sen. Grassley’s AI Whistleblower bill

    RECOMMENDED YUA EPISODES
    • The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
    • AGI Beyond the Buzz: What Is It, and Are We Ready?
    • Behind the DeepSeek Hype, AI is Learning to Reason
    • The Self-Preserving Machine: Why AI Learns to Deceive

    CLARIFICATION
    • Daniel K. referred to whistleblower protections that apply when companies “break promises” or “mislead the public.” There are no specific private-sector whistleblower protections that use these standards. In almost every case, a specific law has to have been broken to trigger whistleblower protections.
    --------  
    38:19


About Your Undivided Attention

Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.
