The Gradient: Perspectives on AI

The Gradient
Interviews with various people who research, build, or use AI, including academics, engineers, artists, entrepreneurs, and more. thegradientpub.substack.com

Available Episodes

5 of 92
  • Arjun Ramani & Zhengdong Wang: Why Transformative AI is Really, Really Hard to Achieve
    In episode 91 of The Gradient Podcast, Daniel Bashir speaks to Arjun Ramani and Zhengdong Wang. Arjun is the global business and economics correspondent at The Economist. Zhengdong is a research engineer at Google DeepMind.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (03:53) Arjun intro
    * (06:04) Zhengdong intro
    * (09:50) How Arjun and Zhengdong met in the woods
    * (11:52) Overarching narratives about technological progress and AI
    * (14:20) Setting up the claim: Arjun on what “transformative” means
    * (15:52) What enables transformative economic growth?
    * (21:19) From GPT-3 to ChatGPT; is there something special about AI?
    * (24:15) Zhengdong on “real AI” and divisiveness
    * (27:00) Arjun on the independence of bottlenecks to progress/growth
    * (29:05) Zhengdong on bottleneck independence
    * (32:45) More examples on bottlenecks and surplus wealth
    * (37:06) Technical arguments: what are the hardest problems in AI?
    * (38:00) Robotics
    * (40:41) Challenges of deployment in high-stakes settings and data sources / synthetic data, self-driving
    * (45:13) When synthetic data works
    * (49:06) Harder tasks, process knowledge
    * (51:45) Performance art as a critical bottleneck
    * (53:45) Obligatory Taylor Swift Discourse
    * (54:45) AI Taylor Swift???
    * (54:50) The social arguments
    * (55:20) Speed of technology diffusion: “diffusion lags” and dynamics of trust with AI
    * (1:00:55) ChatGPT adoption, where major productivity gains come from
    * (1:03:50) Timescales of transformation
    * (1:10:22) Unpredictability in human affairs
    * (1:14:07) The economic arguments
    * (1:14:35) Key themes: diffusion lags, different sectors
    * (1:21:15) More on bottlenecks, AI trust, premiums on human workers
    * (1:22:30) Automated systems and human interaction
    * (1:25:45) Campaign text reachouts
    * (1:30:00) Counterarguments
    * (1:30:18) Solving intelligence and solving science/innovation
    * (1:34:07) Strengths and weaknesses of the broad applicability of Arjun and Zhengdong’s argument
    * (1:35:34) The “proves too much” worry: how could any innovation have ever happened?
    * (1:37:25) Examples of bringing down barriers to innovation/transformation
    * (1:43:45) What to do with all of this information?
    * (1:48:45) Outro
    Links:
    * Zhengdong’s homepage and Twitter
    * Arjun’s homepage and Twitter
    * Why transformative artificial intelligence is really, really hard to achieve
    * Other resources and links mentioned:
      * Allan-Feuer and Sanders: Transformative AGI by 2043 is
      * On AlphaStar Zero
      * Hardmaru on AI as applied philosophy
      * Robotics Transformer 2
      * Davis Blalock on synthetic data
      * Matt Clancy on automating invention and bottlenecks
      * Michael Webb on 80,000 Hours Podcast
      * Bob Gordon: The Rise and Fall of American Growth
      * OpenAI economic impact paper
      * David Autor: new work paper
      * Baumol effect paper
      * Pew Research Center poll, public concern on AI
      * Human premium Economist piece
      * Callum Williams: London tube and AI/jobs
      * Culture Series book 1, Iain Banks
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    2023-09-21
    1:49:33
  • Miles Grimshaw: Benchmark, LangChain, and Investing in AI
    In episode 90 of The Gradient Podcast, Daniel Bashir speaks to Miles Grimshaw. Miles is a General Partner at Benchmark. He was previously a General Partner at Thrive Capital, where he helped the firm raise its fourth and fifth funds, and sourced deals in Lattice, Mapbox, Benchling, and Airtable, among others.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (02:48) Miles’ background (note: Miles is now the second newest GP at Benchmark)
    * (06:07) Miles’ investment philosophy and previous investments
    * (12:25) Investing in the “decade of deep learning” and how Miles became interested in AI
    * (18:53) Miles’ / Benchmark’s investment in LangChain
    * (24:29) On AI advances and adoption
    * (39:25) Hardware shortages, radically changing UX for LLMs
    * (48:12) Opportunities for AI applications in new domains
    * (50:15) Miles’ advice for potential founders in AI
    * (1:00:00) Outro
    Links:
    * Miles’ Twitter
    * Benchmark homepage
    * LangChain homepage
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    2023-09-14
    1:00:47
  • Shreya Shankar: Machine Learning in the Real World
    In episode 89 of The Gradient Podcast, Daniel Bashir speaks to Shreya Shankar. Shreya is a computer scientist pursuing her PhD in databases at UC Berkeley. Her research interest is in building end-to-end systems for people to develop production-grade machine learning applications. She was previously the first ML engineer at Viaduct, did research at Google Brain, and software engineering at Facebook. She graduated from Stanford with a B.S. and M.S. in computer science with concentrations in systems and artificial intelligence. At Stanford, she helped run SHE++, an organization that helps empower underrepresented minorities in technology.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (02:22) Shreya’s background and journey into ML / MLOps
    * (04:51) ML advances in 2013-2016
    * (05:45) Shift in Stanford undergrad class ecosystems, accessibility of deep learning research
    * (09:10) Why Shreya left her job as an ML engineer
    * (13:30) How Shreya became interested in databases, data quality in ML
    * (14:50) Daniel complains about things
    * (16:00) What makes ML engineering uniquely difficult
    * (16:50) Being a “historian of the craft” of ML engineering
    * (22:25) Levels of abstraction, what ML engineers do/don’t have to think about
    * (24:16) Observability for Production ML Pipelines
    * (28:30) Metrics for real-time ML systems
    * (31:20) Proposed solutions
    * (34:00) Moving Fast with Broken Data
    * (34:25) Existing data validation measures and where they fall short
    * (36:31) Partition summarization for data validation
    * (38:30) Small data and quantitative statistics for data cleaning
    * (40:25) Streaming ML Evaluation
    * (40:45) What makes a metric actionable
    * (42:15) Differences in streaming ML vs. batch ML
    * (45:45) Delayed and incomplete labels
    * (49:23) Operationalizing Machine Learning
    * (49:55) The difficult life of an ML engineer
    * (53:00) Best practices, tools, pain points
    * (55:56) Pitfalls in current MLOps tools
    * (1:00:30) LLMOps / FMOps
    * (1:07:10) Thoughts on ML Engineering, MLE through the lens of data engineering
    * (1:10:42) Building products, user expectations for AI products
    * (1:15:50) Outro
    Links:
    * Papers:
      * Towards Observability for Production Machine Learning Pipelines
      * Rethinking Streaming ML Evaluation
      * Operationalizing Machine Learning
      * Moving Fast With Broken Data
    * Blog posts:
      * The Modern ML Monitoring Mess
      * Thoughts on ML Engineering After a Year of my PhD
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    2023-09-07
    1:16:36
  • Stevan Harnad: AI's Symbol Grounding Problem
    In episode 88 of The Gradient Podcast, Daniel Bashir speaks to Professor Stevan Harnad. Stevan Harnad is professor of psychology and cognitive science at Université du Québec à Montréal, adjunct professor of cognitive science at McGill University, and professor emeritus of cognitive science at the University of Southampton. His research is on category learning, categorical perception, symbol grounding, the evolution of language, and animal and human sentience (otherwise known as “consciousness”). He is also an advocate for open access and an activist for animal rights.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (05:20) Professor Harnad’s background: interests in cognitive psychobiology, editing Behavioral and Brain Sciences
    * (07:40) John Searle submits the Chinese Room article
    * (09:20) Early reactions to Searle and Prof. Harnad’s role
    * (13:38) The core of Searle’s argument and the generator of the Symbol Grounding Problem, “strong AI”
    * (19:00) Ways to ground symbols
    * (20:26) The acquisition of categories
    * (25:00) Pantomiming, non-linguistic category formation
    * (27:45) Mathematics, abstraction, and grounding
    * (36:20) Symbol manipulation and interpretation language
    * (40:40) On the Whorf Hypothesis
    * (48:39) Defining “grounding” and introducing the “T3” Turing Test
    * (53:22) Turing’s concerns, AI and reverse-engineering cognition
    * (59:25) Other Minds, T4 and zombies
    * (1:05:48) Degrees of freedom in solutions to the Turing Test, the easy and hard problems of cognition
    * (1:14:33) Over-interpretation of AI systems’ behavior, sentience concerns, T3 and evidence of sentience
    * (1:24:35) Prof. Harnad’s commentary on claims in The Vector Grounding Problem
    * (1:28:05) RLHF and grounding, LLMs’ (ungrounded) capabilities, syntactic structure and propositions
    * (1:35:30) Multimodal AI systems (image-text and robotic) and grounding, compositionality
    * (1:42:50) Chomsky’s Universal Grammar, LLMs and T2
    * (1:50:55) T3 and cognitive simulation
    * (1:57:34) Outro
    Links:
    * Professor Harnad’s webpage and skywritings
    * Papers:
      * Category Induction and Representation
      * Categorical Perception
      * From Sensorimotor Categories to Grounded Symbols
      * Minds, machines and Searle 2
      * The Latent Structure of Dictionaries
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    2023-08-31
    1:58:21
  • Terry Winograd: AI, HCI, Language, and Cognition
    In episode 87 of The Gradient Podcast, Daniel Bashir speaks to Professor Terry Winograd. Professor Winograd is Professor Emeritus of Computer Science at Stanford University. His research focuses on human-computer interaction design and the design of technologies for development. He founded the Stanford Human-Computer Interaction Group, where he directed the teaching programs and HCI research. He is also a founding faculty member of the Stanford d.school and a founding member and past president of Computer Professionals for Social Responsibility.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (03:00) Professor Winograd’s background
    * (05:10) At the MIT AI Lab
    * (05:45) The atmosphere in the MIT AI Lab, Minsky/Chomsky debates
    * (06:20) Blue-sky research, government funding for academic research
    * (10:10) Isolation and collaboration between research groups
    * (11:45) Phases in the development of ideas and how cross-disciplinary work fits in
    * (12:26) SHRDLU and the MIT AI Lab’s intellectual roots
    * (17:20) Early responses to SHRDLU: Minsky, Dreyfus, others
    * (20:55) How Prof. Winograd’s thinking about AI’s abilities and limitations evolved
    * (22:25) How this relates to current AI systems and discussions of intelligence
    * (23:47) Repetitive debates in AI, semantics and grounding
    * (27:00) The concept of investment, care, trust in human communication vs machine communication
    * (28:53) Projecting human-ness onto AI systems and non-human things and what this means for society
    * (31:30) Time after leaving MIT in 1973, time at Xerox PARC, how Winograd’s thinking evolved during this time
    * (38:28) What Does It Mean to Understand Language? Speech acts, commitments, and the grounding of language
    * (42:40) Reification of representations in science and ML
    * (46:15) LLMs, their training processes, and their behavior
    * (49:40) How do we coexist with systems that we don’t understand?
    * (51:20) Progress narratives in AI and human agency
    * (53:30) Transitioning to intelligence augmentation, founding the Stanford HCI group and d.school, advising Larry Page and Sergey Brin
    * (1:01:25) Chatbots and how we consume information
    * (1:06:52) Evolutions in journalism, progress in trust for modern AI systems
    * (1:09:18) Shifts in the social contract, from institutions to personalities
    * (1:12:05) AI and HCI in recent years
    * (1:17:05) Philosophy of design and the d.school
    * (1:21:20) Designing AI systems for people
    * (1:25:10) Prof. Winograd’s perspective on watermarking for detecting GPT outputs
    * (1:25:55) The politics of being a technologist
    * (1:30:10) Echoes of the past in AI regulation and competition and learning from history
    * (1:32:34) Outro
    Links:
    * Professor Winograd’s Homepage
    * Papers/topics discussed:
      * SHRDLU
      * Beyond Programming Languages
      * What Does It Mean to Understand Language?
      * The PageRank Citation Ranking
      * Stanford Digital Libraries project
      * Talk: My Politics as a Technologist
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    2023-08-24
    1:33:21
