What happens when the people building artificial intelligence quietly believe it might destroy us?
On this episode of Digital Disruption, we’re joined by Gregory Warner, Peabody Award–winning journalist, former NPR correspondent, and host of the hit AI podcast The Last Invention.
Gregory Warner is a versatile journalist and podcaster whose work has earned a Peabody Award along with Edward R. Murrow, New York Festivals, AP, and PRNDI honors, plus a Best News Feature award from the Third Coast International Audio Festival. As NPR's East Africa correspondent, he covered the region's economic growth and terrorism threats, and he previously served as a senior reporter for American Public Media's Marketplace, focusing on the economics of American health care.
Gregory sits down with Geoff for an honest conversation about the AI race unfolding today. After years spent interviewing the architects, skeptics, and true believers behind advanced AI systems, Gregory has come away with an unsettling insight: the same people racing to build more powerful models are often the most worried about where this technology is heading. This episode explores whether we’re already living inside the AI risk window, why AI safety may be even harder than nuclear safety, and why Silicon Valley’s “move fast and fix later” mindset may not apply to superintelligence. It also examines the growing philosophical divide between AI doomers and AI accelerationists. This conversation goes far beyond chatbots and job-loss headlines. It asks a deeper question few are willing to confront: are we building something we can’t control, and doing it anyway?
In this video:
00:00 Why AI’s creators fear what they’re building
01:00 The origins of The Last Invention podcast
03:00 AI models that already behave like elite hackers
05:00 Why the AI risk window may already be open
06:30 What AI safety actually means (and why it’s so hard)
08:00 Why guardrails don’t really “fix” AI models
10:00 The role everyday users unknowingly play in AI safety
12:00 Human-in-the-loop: safety feature or illusion?
13:30 The danger of anthropomorphizing AI systems
15:00 AI as an alien intelligence, not a human one
17:00 Why AI failure won’t look like sci-fi movies
19:00 The Silicon Valley AI arms race explained
21:00 OpenAI, DeepMind, Anthropic, xAI: who’s racing and why
23:00 The accelerationist worldview vs AI safety advocates
25:00 The “Compressed Century” and radical AI optimism
27:00 Can AI actually solve humanity’s biggest problems?
29:00 Is AI utopia hope… or hype?
31:00 Risk, responsibility, and what comes next
Connect with Gregory:
LinkedIn: https://www.linkedin.com/in/radiogrego/
Instagram: https://www.instagram.com/radiogrego/
Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG