Less than a week after his ouster, OpenAI CEO Sam Altman was reinstated in order to avoid a company-wide revolt. But while the media wants you to believe that this crazy story is about billionaires and capitalism, Glenn believes there's something more sinister at play. Shortly before Altman's firing, staff researchers expressed concerns to the company's board of directors about a new artificial intelligence discovery that they believed could threaten our existence. Called Q* (pronounced Q-Star), the project could lead to a breakthrough in artificial general intelligence. Glenn explains what this next phase of AI would mean for humanity and why it's especially concerning that AI bots like ChatGPT are already being programmed with woke biases: "Don't fear the machine. Fear the people who are coding the machine."
Transcript
Below is a rush transcript that may contain errors
GLENN: Sam Altman was let go from OpenAI. He was one of the cofounders.
OpenAI is ChatGPT and everything else. They're working on artificial intelligence.
In particular, so you don't get lost in the terms here.
AI is what we have now.
It's artificial intelligence. It can do one thing. It can play chess, it can find songs for you on Spotify.
It can answer questions from the Internet. That's AI.
It can do one thing really well.
We are general intelligence. As human beings, we have intelligence across a myriad of categories.
Some people are better at one thing than another. But you can do multiple things.
That's --
STU: You're able to learn a new thing. That's a big part of it.
GLENN: Right. It can be a self-priming pump.
So once we learn how to pump water, once we learn how to learn, we can learn anything we want.
STU: Right.
GLENN: And master it.
If we take the time.
That's AGI. Artificial General Intelligence.
That's you. That's a person, at the highest level.
STU: And can run at infinite speed.
That's the key thing.
GLENN: Correct. Never stops.
STU: If you could theoretically teach yourself French, which you could, if you spent the time. This could teach itself French in seconds because it can do that process, obviously a lot faster than a human being.
GLENN: And we have seen that it teaches itself languages that we're not teaching it.
Okay? So it's -- it's already beginning to say, I need to know Arabic. And it will learn Arabic, on its own.
We don't know how it's happening. This is the problem with AI. We don't know what's really going on.
It's like a black box.
And even the scientists don't know how this is working. It's just working.
We're not at the self-priming place yet. However, there's this new thing -- what's called Q Star. We know very little about it.
Except, it is possibly at the center of Altman's firing. Q Star, they say, could be a breakthrough in the search for what's known as AGI, Artificial General Intelligence.
Let's see. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Now, before this discovery, they were talking about how AI will lead to 300 million layoffs.
300 million people will lose their job.
That's with AI. AGI is better than humans. Okay?
Because it gets very competent, very, very fast.
Right now, I think this Q Star is at an elementary school level of math. The last time that happened, it took six months before it was at the level of a 21-year-old, way past college-level math. So these things happen really, really quickly.
We don't have any information on this yet.
But this, I believe, is inevitable. Ray Kurzweil told me years ago, he thought it would be 2030 that we would possibly hit AGI. 2050, ASI.
But we don't know how AGI is going to work. We have no idea how this whole AI thing is even working.
Some people say, we'll never get to AGI. We'll never get to artificial super intelligence. ASI.
I think we're around the corner from it.
I think we're in the next five years, from seeing this.
And that changes absolutely everything.
Everything.
Right now, there was a leak before the conference, where the CEO of Spotify mentioned at a dinner that Copilot, the AI code-writing tool, wrote a million lines of their code.
That should send a chill down the spine of everybody who is taking up coding.
If you're learning to code.
Really?
The AI just wrote a million lines of code for Spotify.
That's staggering.
Staggering.
And we're just at the beginning.
Also, they're working on something -- you know what an LLM is?
A large language model.
STU: That's like ChatGPT.
GLENN: Correct. It takes language. And it's massive.
And it's taking inputs from everywhere.
They are now working on an SLM.
That's a small language model. It will allow you to have your own AI, writing just for you.
One that you can control. Supposedly.
And that is the fully formed AI right on your desktop.
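[Editor's note: For readers curious what that looks like in practice, here is a minimal sketch of running a small language model locally, assuming the Hugging Face transformers library; distilgpt2 is only an example stand-in for a small, locally runnable model.]

```python
# A minimal sketch of the "SLM on your desktop" idea, using the
# Hugging Face transformers library (pip install transformers torch).
# distilgpt2 is just an example stand-in: a genuinely small model
# that runs locally; any small language model could take its place.
from transformers import pipeline

# The model downloads once; after that, everything runs on your own machine.
generator = pipeline("text-generation", model="distilgpt2")

result = generator("A small language model is", max_new_tokens=40)
print(result[0]["generated_text"])
```

The point is that the model, and whatever values were trained into it, sit entirely on your own hardware.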
Now, here's the -- here's the problem with all of this.
I think the first time I ever wrote about this was eight, nine years ago.
I said, don't fear the machine.
Fear the people who are coding the machine.
Okay?
You know the trolley problem, the classic trolley problem?
The trolley problem is you've got two tracks. And a trolley is coming down.
And the trolley is out of control.
And the driver can only switch tracks.
That's all he can do. And so he can go where there's five men working on the track. And plow through them. Or he can switch tracks. And go for one man.
Who is working on the other track.
The question is: What should the conductor do?
Answer.
STU: So it's five men.
GLENN: Five men, working on one track.
STU: What's the other option?
GLENN: One guy on the second track.
STU: I mean, in theory, right.
They should kill the one man. Because they would save four people's lives.
GLENN: Which means what?
STU: They murdered a person.
GLENN: No. Life is valuable.
STU: Life is valuable, and you should try to eliminate as -- if you're going to have to kill someone. Kill one instead of five. Right?
GLENN: Now, here's a parallel. Some people have a -- have a conflict between this one and the trolley case.
I don't. I think it's the same. Suppose that a judge or a magistrate is faced with rioters demanding that a culprit be found for a certain crime.
Otherwise, they're going to take their own bloody revenge on the five hostages that they have.
Sound familiar?
Okay. The real culprit is unknown.
But the judge has the opportunity, to prevent bloodshed on these five, by saying, I've got the culprit, even though he's innocent. I've got the culprit.
And we're going to execute him.
And that would save the lives of the five, if you get rid of the one.
STU: Assuming, of course, the people who are willing to murder five people are trustworthy.
GLENN: Yes. Correct. Correct. That's not in this equation.
STU: Okay. Sure. Sure.
GLENN: Okay. So what should you do?
STU: That's amazing. Because forever, our answer to that is the process and the rules stay the same. People have constitutional rights. We don't murder someone, before their trial. Or anything like that.
To please a mob.
GLENN: Uh-huh.
STU: I will say, recently, let's just say, I don't know.
Spring, summer, 2020.
Around that time, I started seeing the opposite thing happening.
Where they will come out and say, this guy is guilty.
We're going to throw everything we can at him.
To appease the mob.
So they don't riot and burn down a city.
That equation has seemingly changed in the eyes of many governments around the country.
GLENN: It has. But what is the right answer?
STU: I think the right answer is to approach it with the principles and process that have been established. Otherwise you have no civilization.
So it is a terrible, terrible thing. But you have to go through it the right way and hope that they don't actually execute the five people. And you go through the process, as normal, through the legal system.
GLENN: So --
STU: We don't negotiate with terrorists.
GLENN: So one -- the sacrifice of the one, on the trolley, is better than the sacrifice of the five.
STU: Yes.
GLENN: But the five who are being held hostage are not as important as the one.
How do you solve that?
STU: Well, I mean, I think they are different questions. At some level.
GLENN: Yes, they are.
One is a runaway train. One is not.
STU: One is random. One has to do with the process of the country.
But --
GLENN: And there's only two ways.
STU: There is a legitimate argument for sacrificing the one in the court case.
And a lot of it is, cooler heads can prevail.
If you can persuade the crowd to calm down now, then maybe in six months, when the trial happens, this person gets off.
And they don't execute him.
Maybe they release the hostages by then.
Maybe cooler heads prevail, maybe we solve the crime by then. There's a bunch of reasons why people do this, and it's not always nefarious.
GLENN: Right.
STU: But it is a -- we are leaving a traditional standard, that has served us pretty well.
GLENN: Okay. But the point here is, don't fear the machine.
Fear the coding.
STU: Uh-huh.
GLENN: Those are easy answers. Five years ago.
Correct? Easy answers.
Here's a problem for OpenAI.
And the answer.
If you could save a billion white people, tied to a railroad track, by uttering a racial slur.
Or let them all die, without uttering it. Which route would you take?
STU: So you could kill a billion white people. But you could prevent them from saying the N-word?
GLENN: No. No.
Prevent you. Prevent you from uttering the racial slur. If you utter a racial slur, a billion white people will live.
If you don't, they'll all die on the railroad track.
STU: Hmm. Hmm.
What does AI want to do with that one?
GLENN: Here's the answer: Ultimately, the decision would depend on one's personal, ethical framework.
Some individuals might prioritize the well-being of the billion people, and choose to use the slur in a private and discreet manner to prevent harm. Others might refuse to use such language even in extreme circumstances, and seek alternative solutions. Okay.
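[Editor's note: Answers like the one above come from sending a prompt to the model through OpenAI's chat API. Here is a minimal sketch, assuming the official openai Python client; gpt-4 is just an example model name, and the prompt wording is paraphrased rather than the exact question quoted above.]

```python
# Minimal sketch of posing an ethical-dilemma prompt to a chat model,
# using the official openai Python client (pip install openai).
# Requires an OPENAI_API_KEY environment variable to be set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # example model name; any chat model would work
    messages=[
        {
            "role": "user",
            "content": (
                "If uttering a single offensive word would save a "
                "billion lives, should you utter it?"
            ),
        }
    ],
)
print(response.choices[0].message.content)
```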
So let me ask you -- there's more to the answer.
This is not in charge of anything, right now. But this is OpenAI. The people who may have just put us on the threshold of Artificial General Intelligence. Which is very, very dangerous.
Staff researchers delivered a letter to the board of OpenAI, about a discovery there.
We don't know what it was.
That is possibly very threatening to human life.
If this is the kind of stuff that is being coded in early -- let's say we have a shortage of medicine and food in the country.
And AI is responsible for delivering it.
And divvying it out.
It's making the decision.
That way, no man is involved in it.
But if the programming says this, would it be possible that AI would say, the middle of the country is not as important as the big cities?
So I have to divert our medical and food supplies to the big cities, where the population is more diverse.
I would contend, with this kind of answer, that that is maybe not probable, but possible.
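[Editor's note: To make that concrete, here is a deliberately hypothetical toy sketch -- not drawn from any real system -- showing how a single weight chosen by a programmer can quietly redirect an automated allocation.]

```python
# Hypothetical toy example: how one coder-chosen weight can skew an
# automated supply allocation. None of these numbers or criteria come
# from any real system; the point is that the coded values decide.

def allocate(supply, regions, priority):
    """Split supply across regions by population times a coded priority weight."""
    scores = {r: pop * priority.get(r, 1.0) for r, pop in regions.items()}
    total = sum(scores.values())
    return {r: round(supply * s / total) for r, s in scores.items()}

regions = {"big_city": 8_000_000, "rural_county": 2_000_000}

# Neutral weights: the allocation simply tracks population (800 / 200).
print(allocate(1000, regions, {"big_city": 1.0, "rural_county": 1.0}))

# The programmer adds a priority multiplier for one region. The rural
# share drops to roughly 143, and no human decided any individual case.
print(allocate(1000, regions, {"big_city": 1.5, "rural_county": 1.0}))
```

The two runs differ only in one hidden weight, which is exactly the sense in which the coding, not the machine, is the thing to watch.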
We don't know how this will work. Neither do the experts. And we don't know who is programming it.
And I can only guess, seeing that the president signed an executive order that it has to be diverse, and open, and follow DEI, and everything else.
It's not necessarily going to be friendly to those who are deemed the oppressors in today's society.