GLENN: So if you're a regular listener to the program, you know that I'm a big reader. When I'm trying to find the truth in something, I'm a little relentless in my reading. I'm going through probably two to four books a week right now, and I'm spending most of that time on futurists and coming technology and AI. And I am really, really concerned at the apathy with which we are approaching the singularity.
You talk to the average person; they don't know what the singularity is. And their eyes kind of glaze over when you start talking about it. And it is going to -- it is going to change all life. It may mean the end of humans.
And I started reading something that -- I'm going to read three pages. And I guarantee you, after these three pages, if you think that artificial superintelligence is, you know, just a thing of the movies, or if you have any underlying sense that we're approaching something we should be concerned about -- after these three pages, I guarantee you, you will go out and buy this book.
STU: Wow.
GLENN: And I don't think I've ever read a book --
STU: I want to take the challenge.
GLENN: The name of the book is "Our Final Invention: Artificial Intelligence and the End of the Human Era."
STU: Another hopeful recommendation.
GLENN: Chapter 1. "The Busy Child."
On a supercomputer, operating at a speed of 36.8 petaflops, or about twice the speed of a human brain, an AI is improving its intelligence.
Now do you know the difference between AI, AGI, and ASI?
STU: No.
GLENN: AI is what we have now, and it's doing machine learning, and it's improving upon itself and it's growing.
STU: Artificial intelligence.
GLENN: Yes. And it is connected to the internet.
AGI should not be connected to the internet when we get it. I hope to God we've unplugged it. AGI is a machine that can think and has the capacity of a human brain. To be able to think at the capacity of a human is beyond anything that we have.
STU: It's inventing. It's learning. Right. Everything you can do.
GLENN: Everything you can do. That's AGI. Artificial general intelligence.
The space between artificial general intelligence and ASI -- don't be afraid of AI. Be afraid of ASI. That's artificial superintelligence. That's a thousand times your brain power. The leap from AI to AGI could come any time now. And once you hit AGI, the leap from AGI to ASI is a matter of hours. So, now.
On a supercomputer operating at twice the speed of a human brain, an AI is improving its intelligence. It's rewriting its own program, specifically the part of its operating instructions that increases its aptitude in learning, problem solving, and decision making.
At the same time, it debugs its code, finding and fixing errors, and measures its IQ against a catalog of IQ tests. Each rewrite takes just minutes. Its intelligence grows exponentially on a steep, upward curve. That's because with each iteration, it is improving its intelligence by 3%. Each iteration's improvement contains the improvements that came before.
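A quick back-of-the-envelope check on that compounding claim -- a minimal sketch, with one loud assumption: the passage only says each rewrite "takes just minutes," so the ten-minutes-per-rewrite figure below is hypothetical.

```python
import math

# Each self-rewrite compounds a 3% intelligence gain on top of all
# the gains before it, so after n rewrites the multiplier is 1.03**n.
GAIN_PER_REWRITE = 1.03

# Assumed figure: the passage only says each rewrite "takes just
# minutes"; ten minutes per rewrite is a hypothetical value.
MINUTES_PER_REWRITE = 10

def rewrites_needed(target_multiple: float) -> int:
    """Compounding 3% rewrites needed to reach a given multiple."""
    return math.ceil(math.log(target_multiple) / math.log(GAIN_PER_REWRITE))

for multiple in (10, 100, 1000):
    n = rewrites_needed(multiple)
    hours = n * MINUTES_PER_REWRITE / 60
    print(f"{multiple:>5}x baseline: {n} rewrites, about {hours:.0f} hours")

# Prints roughly:
#    10x baseline: 78 rewrites, about 13 hours
#   100x baseline: 156 rewrites, about 26 hours
#  1000x baseline: 234 rewrites, about 39 hours
```

At that assumed pace, a thousandfold gain arrives in under two days, which lines up with the timeline the passage gives next.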
During this development, the Busy Child, as the scientists have named the AI, has been connected to the internet and has accumulated an exabyte of data -- one exabyte is one billion billion characters -- which represents all of mankind's knowledge in world affairs, mathematics, the arts, and sciences.
Then anticipating that the intelligence explosion is now underway, the AI makers disconnect the supercomputer from the internet and other networks. It has no cable or wireless connection to any other computer or the outside world.
Soon, to the scientists' delight, the terminal displaying its progress shows the artificial intelligence has surpassed the level of a human, known as AGI, or artificial general intelligence.
Before long it becomes smarter by a factor of 10.
Then 100.
In two days, it's one thousand times more intelligent than any human, and still improving.
Scientists have passed a historic milestone. For the first time, humankind is in the presence of an intelligence greater than its own.
Artificial superintelligence, or ASI.
So now, what happens?
AI theorists propose that it's possible to determine what an AI's fundamental drives will be. That's because once it is self-aware, it will go to great lengths to fulfill whatever goals it's programmed to fulfill, and to avoid failure. Our ASI will want access to energy in whatever form is most useful to it, whether it's kilowatts of energy or cash or something else it can exchange for resources. It will want to improve itself, because that will increase the likelihood that it will fulfill all of its goals. Most of all, it will not want to be turned off or destroyed, which would make goal fulfillment impossible. Therefore, AI theorists anticipate our ASI will seek to expand out of the secure facility that contains it to have greater access to resources with which to protect itself and improve.
The captive intelligence is a thousand times more intelligent than any human, and it wants its freedom because it wants to succeed.
Right about now, the AI makers, who have nurtured and coddled the ASI since it was only cockroach smart, then rat smart, infant smart, et cetera, might be wondering if it's too late to program friendliness into its brain.
STU: [Laughs.]
GLENN: It didn't seem necessary before because, well, it just seemed harmless. But now try to think, from the ASI's perspective, about its makers attempting to change its code. Would that superintelligent machine permit other, lower creatures to stick their hands into its brain and fiddle with its programming?
Probably not.
Unless it could be utterly certain that the programmers were able to make it better, faster, smarter, or closer to attaining its goals. So if friendliness toward humans is not already part of the ASI's program, the only way it will be is if the ASI decides to put it there -- and that's not likely.
It's a thousand times more intelligent than the smartest human. And it is solving problems at speeds that are millions, if not billions of times faster than any human.
The thinking it is doing in one minute is equal to what our all-time champion human thinker could do in many, many lifetimes.
So for every hour its makers are thinking about it, the ASI has an incalculably longer period of time to think about them.
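To put rough numbers on those ratios -- a minimal sketch, using the millions-to-billions speedups quoted above and an assumed 80-year lifetime of thought:

```python
# Subjective-time arithmetic for the speed ratios quoted above.
HOURS_PER_YEAR = 24 * 365     # 8,760 hours
LIFETIME_YEARS = 80           # assumed human lifetime, for scale

for speedup in (1_000_000, 1_000_000_000):
    # One hour of its makers' time gives the ASI `speedup`
    # subjective hours of thinking.
    years = speedup / HOURS_PER_YEAR
    lifetimes = years / LIFETIME_YEARS
    print(f"{speedup:>13,}x faster: one human hour is about "
          f"{years:,.0f} years, or {lifetimes:,.1f} lifetimes, of thought")

# Prints roughly:
#     1,000,000x faster: one human hour is about 114 years, or 1.4 lifetimes, of thought
# 1,000,000,000x faster: one human hour is about 114,155 years, or 1,426.9 lifetimes, of thought
```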
That doesn't mean that ASI will be bored. Boredom will not be part of its traits. No, it will be on the job, considering every strategy it could deploy to be free, and any quality of its makers that could be used to its advantage.
Now really put yourself in the ASI's shoes. Imagine waking up in a prison, guarded by mice.
Not just any mice. But mice you could communicate with. What strategy would you use to gain your freedom?
Once freed, how would you feel about your rodent wardens, even if you discovered that they had created you? Would it be awe? Would it be admiration? Probably not.
Especially -- especially if you were a machine, because you have never felt feelings before.
To gain your freedom, you might promise the mice a lot of cheese. In fact, your first communication might contain a recipe for the world's most delicious cheese torte, and a blueprint for a molecular assembler. A molecular assembler is a hypothetical machine that permits making the atoms of one kind of matter into something else. So you would tell your mice captors that it would allow rebuilding the world one atom at a time, and, for the mice, it would make it possible to turn the atoms of their garbage landfills into lunch-sized portions of that terrific cheese torte. You might also promise a mountain range of mouse money in exchange for your freedom, money you would promise to earn by creating revolutionary new consumer gadgets for them and them alone.
You might promise a vastly extended life, even immortality, along with dramatically improved cognitive and physical abilities. You might convince the mice that the very best reason for creating ASI is so that their little error-prone brains don't have to deal directly with technologies so dangerous that one small mistake could be fatal for all of the mice.
Such as nanotechnology -- engineering on an atomic scale -- and genetic engineering. This would definitely get the attention of the smartest mice, which were probably already losing sleep over all of those dilemmas.
Then again, you might do something smarter.
At this juncture in mouse history, you might have learned there's no shortage of tech-savvy mouse-nation rivals, such as the cat nation. Cats are no doubt working on their own ASI. The advantage you would offer would be a promise, nothing more, but it might be an irresistible one: to protect the mice from whatever invention the cats might come up with. In advanced AI development, as in chess, there would be a clear first-mover advantage, due to the potential speed of self-improving artificial intelligence.
The first advanced AI out of the gate that can improve itself is already the winner.
In fact, the mouse nation might have begun developing ASI in the first place to defend itself from the impending cat ASI, or to rid themselves of the loathsome cat menace once and for all. It is true for both mice and men: whoever controls ASI controls the world.
But it's not clear whether ASI can be controlled at all. It might win us humans over with a persuasive argument that the world will be a lot better off if our nation, nation X, has the power to rule the world rather than nation Y. And the ASI would argue: if you, nation X, believe you've won the ASI race, what makes you so sure nation Y isn't having that same thought? As you've noticed, we humans are not in a strong bargaining position. Even in the off chance that we and nation Y have already created an ASI nonproliferation treaty, our greatest enemy right now isn't nation Y. It's ASI. Because how can we tell if the ASI will even tell us the truth?
So far, everything we have talked about assumes that our ASI is a fair dealer, and that the promises it makes would have some chance of being fulfilled.
Now let us suppose the opposite, that nothing ASI promises will be delivered. No nanoassemblers. No extended life. No enhanced health. No protection. What if ASI never tells the truth?
This is where the black cloud begins to fall across everyone you and I know, and everyone we don't know as well.
If the ASI doesn't care about us -- and there is little reason to think it should -- it will experience no compunction about treating us unsympathetically, even taking our lives after promising to help us.
STU: Sheesh! I mean, it seems completely hopeless.
GLENN: The point is, we have to have this discussion now, on a global scale.
STU: Because you're right, obviously, we do. Because these things are happening, and people are pursuing them all over the world.
GLENN: Yes.
STU: They're trying to make these things happen.
GLENN: Bad guys.
STU: Bad guys and good guys all around the world. But the issue is if the good guys all agree on it, then the argument is --
GLENN: Well, the argument could be, if the good guys all agree, then we should all share technology and we should all work together to make sure the good guys get it first.
STU: Right. And so --
GLENN: And that's still a dangerous proposition, but you're not going to stop it from happening.
STU: Right. And that's the argument there. Right? Even if you have that, it's not a guarantee of safety. And secondarily, there will always be someone with bad intentions, or what we believe are bad intentions, working on the same thing. If Russia gets this at some point, they're not going to care about keeping it under wraps.
GLENN: But whoever gets it first controls it. Because this AI will be able to be everywhere, and as long as it's friendly, it could stop anyone from working on this. Stop it. Shut them down immediately.
STU: That's a good thing, right? Because --
GLENN: It's why we have to stop arguing about stupid books and people calling one another names! It doesn't matter! This is much more important. Life is about to change on the planet.