Geoffrey Hinton, a British-Canadian computer scientist who is often referred to as the “godfather” of artificial intelligence (AI), has raised concerns that the technology may lead to human extinction in the next 30 years.
Prof Hinton, awarded the Nobel Prize in Physics earlier this year for his work in the field, estimates a “10% to 20%” chance that AI could result in human extinction over the next three decades. This is an increase from his earlier prediction of a 10% likelihood.
In an interview with BBC Radio 4’s Today programme, Prof Hinton was asked whether his views on a potential AI apocalypse had changed. He responded, “Not really, 10% to 20%.” When asked if the odds had increased, he said, “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before.”
He added, “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”
Prof Hinton, who is also a professor emeritus at the University of Toronto, described humans as toddlers compared with advanced AI systems. “I like to think of it as: imagine yourself and a three-year-old. We’ll be three-year-olds,” he said.
His concerns about the technology first became widely known in 2023, when he resigned from his role at Google to speak more freely about the dangers of unregulated AI development. He warned that “bad actors” could exploit AI to cause harm.
Reflecting on the rapid progress of AI development, Prof Hinton said, “I didn’t think it would be where we (are) now. I thought at some point in the future we would get here.”
He expressed concern that experts in the field now predict AI systems could become smarter than humans in the next 20 years, saying it’s “a very scary thought.”
Prof Hinton underscored the need for government regulation, noting that the pace of development was “very, very fast, much faster than I expected.” He warned that relying only on big companies driven by profit motives would not ensure the safe development of AI. “The only thing that can force those big companies to do more research on safety is government regulation,” he added.