Description
Professor Christopher Summerfield, a leading neuroscientist at Oxford University, Research Director at the UK AI Safety Institute, and former Senior Research Scientist at Google DeepMind, discusses his new book, These Strange New Minds, which explores how large language models learned to talk, how they differ from the human brain, and what their rise means for control, agency, and the future of work.
We discuss:
The real risk of AI: losing control, not extinction
How AI agents act in digital loops humans can’t see
Why agency may be more essential than reward
Fragility, feedback loops, and flash-crash analogies
What AI is teaching us about human intelligence
Augmentation vs. replacement in medicine, law, and beyond
Why trust is the social form of agency — and why humans must stay in the loop
🎧 Listen to more episodes: https://www.youtube.com/@TheNickStandleaShow
Guest Notes:
Professor of Cognitive Neuroscience
🌐 Human Information Processing Lab (Oxford)
🏛 UK AI Safety Institute
Department of Experimental Psychology, University of Oxford
The Human Information Processing (HIP) Lab in the Department of Experimental Psychology at the University of Oxford, run by Professor Christopher Summerfield: https://humaninformationprocessing.com/
📘 These Strange New Minds (Penguin Random House): https://www.amazon.com/These-Strange-New-Minds-Learned/dp/0593831713
Christopher Summerfield Media:
https://csummerfield.github.io/personal_website/
https://flightlessprofessors.org
Twitter: @summerfieldlab
Bluesky: @summerfieldlab.bsky.social
🔗 Support This Podcast by Checking Out Our Sponsors:
👉 Build your own AI Agent with Zapier (opens the builder with the prompt pre-loaded): https://bit.ly/4hH5JaE
Test Prep Gurus
Website: https://www.prepgurus.com
Instagram: @TestPrepGurus
Connect with The Nick Standlea Show:
YouTube: @TheNickStandleaShow
Podcast Website: https://nickshow.podbean.com/
Apple Podcasts: https://podcasts.apple.com/us/podcast/the-nick-standlea-podcast/id1700331903
Spotify: https://open.spotify.com/show/0YqBBneFsKtQ6Y0ArP5CXJ
RSS Feed: https://feed.podbean.com/nickshow/feed.xml
Nick's Socials:
Instagram: @nickstandlea
X (Twitter): @nickstandlea
TikTok: @nickstandleashow
Facebook: @nickstandleapodcast
Ask questions, don't accept the status quo, and be curious.
🕒 Timestamps / Chapters
00:00 Cold open: control, agency, and AI
00:31 Guest intro: Oxford → DeepMind → UK AI Safety Institute
01:02 The real story behind AI "takeover": loss of control
03:02 Is AI going to kill us? The control problem explained
06:10 Agency as a basic psychological good
10:46 The Faustian bargain: efficiency vs. personal agency
13:12 What are AI agents and why are they fragile?
20:12 Three risk buckets: misuse, errors, systemic effects
24:58 Fragility & flash-crash analogies in AI systems
30:37 Do we really understand how models think? (Transformers 101)
34:16 What AI is teaching us about human intelligence
36:46 Brains vs. neural nets: similarities & differences
43:57 Embodiment and why robotics is still hard
46:28 Augmentation vs. replacement in white-collar work
50:14 Trust as social agency: why humans must stay in the loop
52:49 Where to find Christopher & closing thoughts