
80,000 Hours Podcast

Rob, Luisa, and the 80,000 Hours team

Available Episodes (5 of 287)

  • The case for and against AGI by 2030 (article by Benjamin Todd)
    More and more people have been saying that we might have AGI (artificial general intelligence) before 2030. Is that really plausible? This article by Benjamin Todd looks into the cases for and against, and summarises the key things you need to know to understand the debate. You can see all the images and many footnotes in the original article on the 80,000 Hours website.
    In a nutshell:
    • Four key factors are driving AI progress: larger base models, teaching models to reason, increasing models’ thinking time, and building agent scaffolding for multi-step tasks. These are underpinned by increasing computational power to run and train AI systems, as well as increasing human capital going into algorithmic research.
    • All of these drivers are set to continue until 2028 and perhaps until 2032.
    • This means we should expect major further gains in AI performance. We don’t know how large they’ll be, but extrapolating recent trends on benchmarks suggests we’ll reach systems with beyond-human performance in coding and scientific reasoning, and that can autonomously complete multi-week projects.
    • Whether we call these systems ‘AGI’ or not, they could be sufficient to enable AI research itself, robotics, the technology industry, and scientific research to accelerate — leading to transformative impacts.
    • Alternatively, AI might fail to overcome issues with ill-defined, high-context work over long time horizons and remain a tool (even if much improved compared to today).
    • Increasing AI performance requires exponential growth in investment and the research workforce. At current rates, we will likely start to reach bottlenecks around 2030. Simplifying a bit, that means we’ll likely either reach AGI by around 2030 or see progress slow significantly. Hybrid scenarios are also possible, but the next five years seem especially crucial.
    Chapters:
    Introduction (00:00:00)
    The case for AGI by 2030 (00:00:33)
    The article in a nutshell (00:04:04)
    Section 1: What's driven recent AI progress? (00:05:46)
    How we got here: the deep learning era (00:05:52)
    Where are we now: the four key drivers (00:07:45)
    Driver 1: Scaling pretraining (00:08:57)
    Algorithmic efficiency (00:12:14)
    How much further can pretraining scale? (00:14:22)
    Driver 2: Training the models to reason (00:16:15)
    How far can scaling reasoning continue? (00:22:06)
    Driver 3: Increasing how long models think (00:25:01)
    Driver 4: Building better agents (00:28:00)
    How far can agent improvements continue? (00:33:40)
    Section 2: How good will AI become by 2030? (00:35:59)
    Trend extrapolation of AI capabilities (00:37:42)
    What jobs would these systems help with? (00:39:59)
    Software engineering (00:40:50)
    Scientific research (00:42:13)
    AI research (00:43:21)
    What's the case against this? (00:44:30)
    Additional resources on the sceptical view (00:49:18)
    When do the 'experts' expect AGI? (00:49:50)
    Section 3: Why the next 5 years are crucial (00:51:06)
    Bottlenecks around 2030 (00:52:10)
    Two potential futures for AI (00:56:02)
    Conclusion (00:58:05)
    Thanks for listening (00:59:27)
    Audio engineering: Dominic Armstrong
    Music: Ben Cordell
    Duration: 1:00:06
  • Emergency pod: Did OpenAI give up, or is this just a new trap? (with Rose Chan Loui)
    When attorneys general intervene in corporate affairs, it usually means something has gone seriously wrong. In OpenAI’s case, it appears to have forced a dramatic reversal of the company’s plans to sideline its nonprofit foundation, announced in a blog post that made headlines worldwide.
    The company’s sudden announcement that its nonprofit will “retain control” credits “constructive dialogue” with the attorneys general of California and Delaware — corporate-speak for what was likely a far more consequential confrontation behind closed doors. A confrontation perhaps driven by public pressure from Nobel Prize winners, past OpenAI staff, and community organisations.
    But whether this change will help depends entirely on the details of implementation — details that remain worryingly vague in the company’s announcement.
    Return guest Rose Chan Loui, nonprofit law expert at UCLA, sees potential in OpenAI’s new proposal, but emphasises that “control” must be carefully defined and enforced: “The words are great, but what’s going to back that up?” Without explicitly defining the nonprofit’s authority over safety decisions, the shift could be largely cosmetic.
    Links to learn more, video, and full transcript: https://80k.info/rcl4
    Why have state officials taken such an interest so far? Host Rob Wiblin notes, “OpenAI was proposing that the AGs would no longer have any say over what this super momentous company might end up doing. … It was just crazy how they were suggesting that they would take all of the existing money and then pursue a completely different purpose.”
    Now that they’re in the picture, the AGs have leverage to ensure the nonprofit maintains genuine control over issues of public safety as OpenAI develops increasingly powerful AI.
    Rob and Rose explain three key areas where the AGs can make a huge difference to whether this plays out in the public’s best interest:
    • Ensuring that the contractual agreements giving the nonprofit control over the new Delaware public benefit corporation are watertight, and don’t accidentally shut the AGs out of the picture.
    • Insisting that a majority of board members are truly independent, by prohibiting indirect as well as direct financial stakes in the business.
    • Insisting that the board is empowered with the money, independent staffing, and access to information that they need to do their jobs.
    This episode was originally recorded on May 6, 2025.
    Chapters:
    Cold open (00:00:00)
    Rose is back! (00:01:06)
    The nonprofit will stay 'in control' (00:01:28)
    Backlash to OpenAI’s original plans (00:08:22)
    The new proposal (00:16:33)
    Giving up the super-profits (00:20:52)
    Can the nonprofit maintain control of the company? (00:24:49)
    Could for-profit investors sue if profits aren't prioritised? (00:33:01)
    The 6 governance safeguards at risk with the restructure (00:34:33)
    Will the nonprofit’s giving just be corporate PR for the for-profit? (00:49:12)
    Is this good, or not? (00:51:06)
    Ways this could still go wrong – but reasons for optimism (00:54:19)
    Video editing: Simon Monsour and Luke Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Music: Ben Cordell
    Transcriptions and web: Katy Moore
    Duration: 1:02:50
  • #216 – Ian Dunt on why governments in Britain and elsewhere can't get anything done – and how to fix it
    When you have a system where ministers almost never understand their portfolios, civil servants change jobs every few months, and MPs don't grasp parliamentary procedure even after decades in office — is the problem the people, or the structure they work in?
    Today's guest, political journalist Ian Dunt, studies the systemic reasons governments succeed and fail.
    In his book How Westminster Works ...and Why It Doesn't, he argues that Britain's government dysfunction and multi-decade failure to solve its key problems stem primarily from bad incentives and bad processes. Even brilliant, well-intentioned people are set up to fail by a long list of institutional absurdities that Ian runs through — from the constant churn of ministers and civil servants, which means no one understands what they’re working on, to the “pathological national sentimentality” that keeps 10 Downing Street (a 17th-century townhouse) as the beating heart of British government.
    While some of these are uniquely British failings, we see similar dynamics in other governments and large corporations around the world.
    But Ian also lays out how some countries have found structural solutions that help ensure decisions are made by the right people, with the information they need, and that success is rewarded.
    Links to learn more, video, highlights, and full transcript.
    Chapters:
    Cold open (00:00:00)
    How Ian got obsessed with Britain's endless failings (00:01:05)
    Should we blame individuals or incentives? (00:03:24)
    The UK left its allies to be murdered in Afghanistan (to save cats and dogs) (00:09:02)
    The UK is governed from a tiny cramped house (00:17:54)
    “It's the stupidest conceivable system for how to run a country” (00:23:30)
    The problems that never get solved in the UK (00:28:14)
    Why UK ministers have no expertise in the areas they govern (00:31:32)
    Why MPs are chosen to have no idea about legislation (00:44:08)
    Is any country doing things better? (00:46:14)
    Is rushing inevitable or artificial? (00:57:20)
    How unelected septuagenarians are the heroes of UK governance (01:01:02)
    How Thatcher unintentionally made one part of parliament work (01:10:48)
    Maybe secrecy is the best disinfectant for incompetence (01:14:17)
    The House of Commons may as well be in a coma (01:22:34)
    Why it's in the PM's interest to ban electronic voting (01:33:13)
    MPs are deliberately kept ignorant of parliamentary procedure (01:35:53)
    “Whole areas of law have fallen almost completely into the vortex” (01:40:37)
    What's the seed of all this going wrong? (01:44:00)
    Why won't the Commons challenge the executive when it can? (01:53:10)
    Better ways to choose MPs (01:58:33)
    Citizens’ juries (02:07:16)
    Do more independent-minded legislatures actually lead to better outcomes? (02:10:42)
    "There’s no time for this bourgeois constitutional reform bulls***" (02:16:50)
    How to keep expert civil servants (02:22:35)
    Improving legislation like you’d improve Netflix dramas (02:34:34)
    MPs waste much of their time helping constituents with random complaints (02:39:59)
    Party culture prevents independent thinking (02:43:52)
    Would a written constitution help or hurt? (02:48:37)
    Can we give the PM room to appoint ministers based on expertise and competence? (02:51:51)
    Would proportional representation help? (02:56:20)
    Proportional representation encourages collaboration but does have weaknesses (02:58:51)
    Alternative electoral systems (03:07:44)
    This episode was originally recorded on January 30, 2025.
    Video editing: Simon Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Music: Ben Cordell
    Camera operator: Jeremy Chevillotte
    Transcriptions and web: Katy Moore
    Duration: 3:14:52
  • Serendipity, weird bets, & cold emails that actually work: Career advice from 16 former guests
    How do you navigate a career path when the future of work is uncertain? How important is mentorship versus immediate impact? Is it better to focus on your strengths or on the world’s most pressing problems? Should you specialise deeply or develop a unique combination of skills?
    From embracing failure to finding unlikely allies, we bring you 16 diverse perspectives from past guests who’ve found unconventional paths to impact and helped others do the same.
    Links to learn more and full transcript.
    Chapters:
    Cold open (00:00:00)
    Luisa's intro (00:01:04)
    Holden Karnofsky on just kicking ass at whatever (00:02:53)
    Jeff Sebo on what improv comedy can teach us about doing good in the world (00:12:23)
    Dean Spears on being open to randomness and serendipity (00:19:26)
    Michael Webb on how to think about career planning given the rapid developments in AI (00:21:17)
    Michelle Hutchinson on finding what motivates you and reaching out to people for help (00:41:10)
    Benjamin Todd on figuring out if a career path is a good fit for you (00:46:03)
    Chris Olah on the value of unusual combinations of skills (00:50:23)
    Holden Karnofsky on deciding which weird ideas are worth betting on (00:58:03)
    Karen Levy on travelling to learn about yourself (01:03:10)
    Leah Garcés on finding common ground with unlikely allies (01:06:53)
    Spencer Greenberg on recognising toxic people who could derail your career and life (01:13:34)
    Holden Karnofsky on the many jobs that can help with AI (01:23:13)
    Danny Hernandez on using world events to trigger you to work on something else (01:30:46)
    Sarah Eustis-Guthrie on exploring and pivoting in careers (01:33:07)
    Benjamin Todd on making tough career decisions (01:38:36)
    Hannah Ritchie on being selective when following others’ advice (01:44:22)
    Alex Lawsen on getting good mentorship (01:47:25)
    Chris Olah on cold emailing that actually works (01:54:49)
    Pardis Sabeti on prioritising physical health to do your best work (01:58:34)
    Chris Olah on developing good taste and technique as a researcher (02:04:39)
    Benjamin Todd on why it’s so important to apply to loads of jobs (02:09:52)
    Varsha Venugopal on embracing uncomfortable situations and celebrating failures (02:14:25)
    Luisa's outro (02:17:43)
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Content editing: Katy Moore and Milo McGuire
    Transcriptions and web: Katy Moore
    Duration: 2:18:41
  • #215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power
    Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could flourish for the first time — as nations needed educated, informed, and empowered citizens to deploy advanced technologies and remain competitive.
    Unfortunately there’s every reason to think artificial general intelligence (AGI) will reverse that trend. Today’s guest — Tom Davidson of the Forethought Centre for AI Strategy — claims in a new paper published today that advanced AI enables power grabs by small groups, by removing the need for widespread human participation.
    Links to learn more, video, highlights, and full transcript: https://80k.info/td
    Also: come work with us on the 80,000 Hours podcast team! https://80k.info/work
    There are a few routes by which small groups might seize power:
    • Military coups: Though rare in established democracies due to citizen/soldier resistance, future AI-controlled militaries may lack such constraints.
    • Self-built hard power: History suggests maybe only 10,000 obedient military drones could seize power.
    • Autocratisation: Leaders using millions of loyal AI workers, while denying others access, could remove democratic checks and balances.
    Tom explains several reasons why AI systems might follow a tyrant’s orders:
    • They might be programmed to obey the top of the chain of command, with no checks on that power.
    • Systems could contain "secret loyalties" inserted during development.
    • Superior cyber capabilities could allow small groups to control AI-operated military infrastructure.
    Host Rob Wiblin and Tom discuss all this plus potential countermeasures.
    Chapters:
    Cold open (00:00:00)
    A major update on the show (00:00:55)
    How AI enables tiny groups to seize power (00:06:24)
    The 3 different threats (00:07:42)
    Is this common sense or far-fetched? (00:08:51)
    “No person rules alone.” Except now they might. (00:11:48)
    Underpinning all 3 threats: Secret AI loyalties (00:17:46)
    Key risk factors (00:25:38)
    Preventing secret loyalties in a nutshell (00:27:12)
    Are human power grabs more plausible than 'rogue AI'? (00:29:32)
    If you took over the US, could you take over the whole world? (00:38:11)
    Will this make it impossible to escape autocracy? (00:42:20)
    Threat 1: AI-enabled military coups (00:46:19)
    Will we sleepwalk into an AI military coup? (00:56:23)
    Could AIs be more coup-resistant than humans? (01:02:28)
    Threat 2: Autocratisation (01:05:22)
    Will AGI be super-persuasive? (01:15:32)
    Threat 3: Self-built hard power (01:17:56)
    Can you stage a coup with 10,000 drones? (01:25:42)
    That sounds a lot like sci-fi... is it credible? (01:27:49)
    Will we foresee and prevent all this? (01:32:08)
    Are people psychologically willing to do coups? (01:33:34)
    Will a balance of power between AIs prevent this? (01:37:39)
    Will whistleblowers or internal mistrust prevent coups? (01:39:55)
    Would other countries step in? (01:46:03)
    Will rogue AI preempt a human power grab? (01:48:30)
    The best reasons not to worry (01:51:05)
    How likely is this in the US? (01:53:23)
    Is a small group seizing power really so bad? (02:00:47)
    Countermeasure 1: Block internal misuse (02:04:19)
    Countermeasure 2: Cybersecurity (02:14:02)
    Countermeasure 3: Model spec transparency (02:16:11)
    Countermeasure 4: Sharing AI access broadly (02:25:23)
    Is it more dangerous to concentrate or share AGI? (02:30:13)
    Is it important to have more than one powerful AI country? (02:32:56)
    In defence of open sourcing AI models (02:35:59)
    2 ways to stop secret AI loyalties (02:43:34)
    Preventing AI-enabled military coups in particular (02:56:20)
    How listeners can help (03:01:59)
    How to help if you work at an AI company (03:05:49)
    The power ML researchers still have, for now (03:09:53)
    How to help if you're an elected leader (03:13:14)
    Rob’s outro (03:19:05)
    This episode was originally recorded on January 20, 2025.
    Video editing: Simon Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Camera operator: Jeremy Chevillotte
    Transcriptions and web: Katy Moore
    Duration: 3:22:44


About 80,000 Hours Podcast

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.
