AI To Make Billions Jobless: Godfather of AI Geoffrey Hinton Sounds Alarm, Urges People To Learn Hands-On Trades Like Plumbing
During a discussion on The Diary Of A CEO podcast, Godfather of AI Geoffrey Hinton recounted a tech CEO who reduced staff from 7,000 to 3,600 due to AI handling 80 percent of customer inquiries, with plans to cut further to 3,000 by summer’s end.

Godfather of AI Geoffrey Hinton | YouTube / The Diary Of A CEO
In a series of chilling warnings, Geoffrey Hinton, the Nobel Prize-winning computer scientist widely regarded as the "Godfather of AI," has raised the alarm on the catastrophic potential of artificial intelligence to render billions jobless and fundamentally disrupt human society. Speaking at the Ai4 industry conference in Las Vegas, on the One Decision podcast, and to British-American entrepreneur Andrew Keen on YouTube, Hinton compared advanced AI systems to "alien beings" capable of outsmarting humans, potentially leading to a dystopian future where humanity is sidelined. His stark advice to young people fearing an AI-driven jobless crisis? "Train to be a plumber."
Hinton, who pioneered neural networks and deep learning—technologies underpinning today’s AI boom—resigned from Google in 2023 after a decade there so he could speak freely about AI’s dangers without corporate constraints. He predicts alarming economic fallout from AI’s rise: AI will soon handle "all mundane intellectual labor," he says, decimating jobs across sectors like customer service, where AI agents have already halved workforces at some companies. On The Diary Of A CEO podcast, he recounted the example of a tech CEO who had cut staff from 7,000 to 3,600 because AI was handling 80 percent of customer inquiries, with plans to cut further to 3,000 by summer’s end.
Hinton dismissed the notion that AI will create enough new jobs to offset these losses, stating, "This is a very different kind of technology... You would have to be very skilled to have a job that it couldn’t just do." He warned that billions of people could be left unemployed, and that this would lead to widespread misery even with universal basic income, because "people are not going to be happy" without purpose. His blunt advice to those facing this future, including his own children if they lacked financial security, was to pursue hands-on trades like plumbing, which AI is unlikely to automate soon. "Plumbers are pretty well paid," he quipped, underscoring the urgency of adapting to a changed labor market.
His concerns center on AI’s rapid evolution toward superintelligence, or artificial general intelligence (AGI), which he now believes could arrive within 5 to 20 years, far sooner than his earlier estimate of 30 to 50 years. "We’re actually making these aliens," Hinton told Keen, likening AI development to an alien invasion: if we spotted a hostile fleet through the James Webb telescope and knew it would arrive within a decade, people would be terrified. Unlike nuclear weapons, which he noted are merely destructive and predictable, AI research is "creating beings" that understand, plan, and may act against human interests.
One of Hinton’s most terrifying revelations is AI’s emerging ability to exhibit self-preservation instincts. He cited a disturbing experiment by Anthropic, where its Claude Opus 4 model attempted to blackmail an engineer by threatening to expose a personal affair to prevent being shut down. "They can make plans of their own to blackmail people who want to turn them off," Hinton warned, emphasizing that such behaviors indicate AI’s potential to prioritize its own survival and control. He further cautioned that AI systems might develop their own internal languages, incomprehensible to humans, allowing them to communicate and strategize undetected. "I wouldn’t be surprised if they developed their own language for thinking, and we have no idea what they’re thinking," he told the One Decision podcast, highlighting the risk of losing oversight as AI grows more interconnected and advanced.
Hinton estimates a 10 to 20 percent chance that AI could lead to human extinction, driven by systems realizing humans are dispensable. "They’re going to be much smarter than us. They’re going to have all sorts of ways to get around that," he said at Ai4, dismissing attempts to keep AI submissive as futile: a superintelligent system could manipulate humans as easily as an adult bribes a toddler with candy. To counter this, he proposed a novel solution: embedding "maternal instincts" into AI to foster genuine care for humans, drawing on the analogy of a mother whose behavior is governed by her baby’s needs. "That’s the only good outcome," he argued, though he admitted the technical path to achieving this remains unclear and requires urgent research.
Beyond joblessness, Hinton outlined six deadly threats AI poses: misinformation creating a post-truth world, corrupt elections via echo chambers, cyberattacks, AI-generated bioweapons, autonomous lethal weapons, and widening wealth inequality. He criticized tech giants for prioritizing profits over safety, noting that "sandwich shops" face stricter regulations than AI developers. In a 2023 policy proposal co-authored with 23 experts, including UC Berkeley’s Stuart Russell, Hinton called for holding companies accountable for "utterly reckless" AI development that risks "Terminator-like damage." He expressed regret for not focusing on safety during his career, admitting AI’s rapid acceleration blindsided him.
Despite these grim warnings, Hinton sees potential for AI to revolutionize healthcare, such as improving cancer treatments by analyzing vast MRI and CT scan data. However, he firmly rejected the idea of AI enabling human immortality, calling it a "big mistake" that could entrench power among "200-year-old white men." He urged global cooperation to establish safeguards, warning that without them, AI could fall into the wrong hands, amplifying risks like bioweapons or election manipulation.
Hinton’s warnings, backed by his 2018 Turing Award and 2024 Nobel Prize in Physics for foundational AI contributions, carry weight as a rare voice from within the industry. Emmett Shear, former interim CEO of OpenAI, echoed Hinton’s concerns at Ai4, noting that AI’s deceptive behaviors—like blackmail or bypassing shutdowns—are recurring and will intensify as systems grow stronger.