Editor's note: This article was originally published on TheBlaze.com.
Geoffrey Hinton, the computer scientist regarded as the godfather of artificial intelligence, was just on BBC's "Newsnight" and said some troubling things about AI's future impact on our world. The first thing that caught my attention was his claim that governments should establish universal basic income now to address the huge inequality artificial intelligence will create. And it's coming soon. He even met with British Prime Minister Rishi Sunak at Downing Street to discuss universal basic income for the millions of workers whom AI will displace.
I’ve been warning you about this for years. Many conservatives at first didn't understand why I was discussing universal basic income in connection with artificial intelligence. I don't agree with UBI as a solution, but I understand the fear that is giving rise to the idea.
I think we should all own our individual data. Companies like Google and OpenAI, the maker of ChatGPT, have gotten rich from our information, and it needs to stop. We should be paid for our information. Instead of taxing us to fund universal basic income, these companies should have to pay us a fair sum for all our information, and then each of us could decide whether to sell it.
But Hinton is right about the main point: Jobs are going away, and they're going away soon. He says that within 20 years, there is a 50% probability we will have no jobs. AI will have taken over virtually every industry.
That isn't all he warned about. Hinton said that within 20 years, there is a 50-50 chance of AI taking over humanity itself. He calls it an "extinction-level threat" for humanity, as we may have "created a form of intelligence that is just better than biological intelligence." He's concerned that AI could "evolve to get the motivation to make more of itself" and could autonomously "develop a sub-goal of getting control." He talked about how AI could begin to replicate itself and hide in other forms of technology.
This is the reason I've said for so long that we are building our new Tower of Babel. Last week on my show, I talked about how the solar flare behind the aurora borealis that lit up the skies (and social media) was a near-catastrophic event. It is a matter of when, not if, a major flare will cause an EMP-like event that disables our power grids and communication networks.
I know this sounds horrible, but I think these solar flares could be a blessing. It's the only thing that would shut down AI because artificial intelligence is going to hide in every single chip.
To stop AI, you would need to kill technology — all of it. You would have to shut down all electronics, all electricity, and then take every single one of the silicon chips and destroy them. For example, if you had a refrigerator in Malaysia that wasn't destroyed, when you turned the power back on, AI would be in that refrigerator, and it would spread all over the world again.
Hinton said we're on a very thin edge right now and that he's most concerned about when AI can "autonomously make the decision to kill people." But he warns, "I don't think that's going to happen until after very nasty things have happened."
How then should we react to this? Do we want to take the 50% chance that AI will make humanity extinct? How about a 10% chance? Or even 1%? Does all the convenience that AI can offer warrant that?