AI development companies like OpenAI and Google DeepMind are in a “reckless race” to build smarter AIs that may soon become an “army of geniuses.” But is that a good idea? And who would control this “army”? Glenn speaks with former OpenAI researcher and AI Futures Project Executive Director, Daniel Kokotajlo, who warns that the future is coming fast! He predicts who will likely hold power over AI and what this tech will look like in the near future. Plus, he explains why developers with ethical concerns, like himself, have been leaving these Silicon Valley giants in droves.
Transcript
Below is a rush transcript that may contain errors
GLENN: So we have Daniel Kokotajlo, and he's a former OpenAI researcher. Daniel, have you been on the program before? I don't think you have, have you?
DANIEL: No, I haven't.
GLENN: Yeah. Well, welcome, I'm glad you're here. Really appreciate it. Wanted to have you on, because I'm a guy who's been talking about AI forever.
And it is both just thrilling and one of the scariest things I've ever seen, at the same time.
And I'm kind of, like, not really sure which way it's going.
Are -- how confident are you that -- what did you say?
DANIEL: It can go both ways. It's going to be very thrilling. And also very scary.
GLENN: Yeah. Okay.
Good. Good. Good.
Well, thanks for starting my Monday off with that. So can you tell me, first of all, some of the things that you think are coming, right around the corner, that people just don't understand?
Because I don't think anybody -- the average person, they hear this, they think, oh, it's like social media. It's going to be like the cell phone.
It's going to change everything. And they don't know that yet.
DANIEL: Yeah. Well, where to begin? I think people are probably familiar with systems like ChatGPT now, which are large language models that you can go have an actual normal conversation with, unlike ordinary software programs.
They're getting better at everything. In particular, right now and in the next few years, the companies are working on turning them into autonomous agents. So instead of simply responding to some message that you send them, and then, you know, turning off, they would be continuously operating, roaming around, browsing the internet, working on their own projects, on their own computers.
Checking in with you, sending messages. Like a human employee, basically.
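[Editor's note: for readers who want to see the shape of what Daniel is describing, here is a minimal sketch, in Python, of that kind of "agent loop": a model that keeps choosing its own next action instead of answering once and turning off, and periodically checks in like an employee. The function call_model is a hypothetical stand-in for a real language-model API, not any company's actual interface.]

def call_model(context: str) -> str:
    """Hypothetical stand-in for a language-model API call."""
    # A real agent would send the full context to a model here and
    # get back its chosen next action. Stubbed so the sketch runs.
    return "done"

def agent_loop(goal: str, check_in_every: int = 5) -> None:
    history = [f"goal: {goal}"]
    for step in range(1, 101):                   # hard cap on steps
        action = call_model("\n".join(history))  # model picks its next action
        history.append(f"step {step}: {action}")
        if action == "done":                     # task finished
            break
        if step % check_in_every == 0:           # periodic check-in with the user
            print(f"[check-in] {history[-1]}")
        # a real system would execute a tool here: browse, write code, send email

agent_loop("summarize today's AI news")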
GLENN: Right.
DANIEL: That's what the companies are working on now. And it's the stated intention of the CEOs of these companies, to build eventually superintelligence.
What is superintelligence? Superintelligence is fully autonomous AI systems that are better than humans at absolutely everything.
GLENN: So on the surface -- that sounds -- that sounds like a movie, that we've all seen.
And you kind of -- you know, you say that, and you're like, anybody who is working on these.
Have they seen the same movies that I have seen?
I mean, what the heck? Let's just go see Jurassic Park. I mean, Ex Machina. I don't -- I mean, is it just me? Or do people in the industry just go, you know, this could be really bad?
DANIEL: Yeah. It's a great question. And the answer is, they totally have seen those movies, and they totally think, yes, this could be really bad. In fact, that's part of the founding story of some of these companies.
GLENN: What? What do you mean? What do you mean?
DANIEL: So Shane Legg, who is, I guess you could say, the technical founder of DeepMind, which is now part of Google DeepMind, which is one of the big three companies building towards superintelligence.
I believe in his Ph.D. thesis, he discusses the possibility of superhuman AI systems, and how if they're not correctly aligned to the right values, if they're not correctly instilled with the appropriate ethics, they could kill everyone.
And become a superior competitor species to humans.
GLENN: Hmm.
DANIEL: Not just him. Lots of these people at these companies, especially early on, basically had similar thoughts of, wow. This is going to be the biggest thing ever.
If it goes well, it could be the best thing that ever happens. If it goes poorly, it could literally kill everyone, or do something similarly catastrophic, like a permanent dystopia. People react to that in different ways. So some people opted to stay in academia.
Some people stayed in other jobs that they had, or founded nonprofits to do research about this sort of thing. Some people decided, well, if this is going to happen, then it's better that good people like me and my friends are in charge when it happens.
And so that's basically the founding story of a lot of these companies. That is sort of part of why DeepMind was created, and part of why OpenAI was created.
I highly recommend going and reading some of the emails that surfaced in court documents, related to the lawsuits against OpenAI.
Because in some of those emails, you see some of the founders of OpenAI talking to each other about why they founded OpenAI.
And basically, it was because they didn't trust DeepMind to handle this responsibly. Anyway --
GLENN: And did they go on to come up with -- did they go on to say, you know, and that's why we've developed this? And it's going to protect us from it? Or did they just lose their way?
What happened?
DANIEL: Well, it's an interesting sociological question.
My take on it is that institutions tend to conform to their incentives over time.
So there's been a sort of evaporative cooling effect, where the people who are most concerned about where all this is headed tend not to be the ones who get promoted and end up running the companies.
They tend to be the ones who, for example, quit, like me.
GLENN: Let's stop it for a second.
Let's stop it there for a second.
You were a governance researcher at OpenAI, on scenario planning.
What does that mean?
DANIEL: I was a researcher on the governance team. Scenario planning is just one of several things that I did.
So basically, I mean, I did a couple of different things at OpenAI. One of the things that I did was try to see what the future will look like. So AI 2027 is a much bigger, more elaborate, more rigorous version of some smaller projects that I sort of did when I was at OpenAI.
Like I think back in 2022, I wrote my own internal scenario, figuring out what the next couple of years were going to look like. Right?
GLENN: How close were you?
DANIEL: I did some things right. I did some things wrong. The basic trends are (cut out), et cetera.
For how close I was overall, I actually did a similar scenario back in 2021, before I joined OpenAI.
And so you can go read that, and judge what I got right and what I got wrong.
I would say that was about par for the course for me when I do these sorts of things. And I'm hoping that AI 2027 will also be, you know, about that level of right and wrong.
GLENN: So you left.
DANIEL: The thing that I wrote in 2021 was called "What 2026 Looks Like," in case you want to look it up.
GLENN: Okay. I'll look it up. You walked away from millions in equity at OpenAI. What made you walk away? What were they doing that made you go, hmm, I don't think it's worth the money?
DANIEL: So -- so back to the bigger picture, I think. Remember, the companies are trying to build superintelligence.
It's going to be better than the best humans at everything, while also being faster and cheaper. And you can just make many, many copies of them.
The CEO of Anthropic, he uses this term, "a country of geniuses," to try to visualize what it would look like.
Quantitatively, we're talking about millions of copies, each one of which is smarter than the smartest geniuses, while also being more charismatic than the most charismatic celebrities and politicians.
Everything, right?
So that's what they're building towards.
And that raises a bunch of questions.
Is that a good idea for us to build, for example?
Like, how are we going to do that?
(laughter)
And who gets to control the army of geniuses?
GLENN: Right. Right.
DANIEL: And what orders are they going to be given?
GLENN: Right. Right.
DANIEL: These are some extremely important questions. And there's a huge -- actually, that's not even all the questions. There's a long list of other very important questions too. I was just barely scratching the surface.
And what I was hoping would happen at OpenAI and these other companies is that, as the creation of these AI systems gets closer and closer -- you know, it started out being far in the future. As time goes on, and progress is made, it starts to feel like something that could happen in the next few years. Right?
GLENN: Yes, right.
DANIEL: As we get closer and closer, there needs to be a lot more waking up and paying attention. And asking these hard questions.
And a lot more effort in order to prepare, to deal with these issues. So, for example, OpenAI created the superalignment team, which was a team of technical researchers and engineers, specifically focused on the question of how do we make sure that we can put any values into these systems -- how do we make sure we can control them at all?
Even when they're smarter than us.
So they started that team.
And they said that they were going to give 20 percent of their compute towards work on this problem, basically.
GLENN: How much -- how much of a percentage? Go ahead.
DANIEL: Well, I don't know, and I can't say. But not as much as 20 percent.
So, yeah. 20 percent was huge at the time.
Because it was way more than any company was devoting to that technical question at the time. So at the time, it was sort of a leap forward.
It didn't pan out. As far as I know, they're still not anywhere near 20 percent. That's just an example of the sort of thing that made me quit. That we're just not ready. And we're not even taking the steps to get ready.
And so we're going to do this anyway, even though we don't understand it. Don't know how to control it. And, you know, it will be a disaster. That's basically what made me leave.
GLENN: So hang on just a second. Give me a minute.
I want to come back and I want to ask you, do you have an opinion on who should run this? Because I don't like OpenAI.
I like X better than anybody, only because Elon Musk is just open to free speech on everything. But I don't even trust him. I don't trust any of these people, and I certainly don't trust the government.
So who will end up with all of this compute, and do we get the compute?
And enough to be able to stop it, or enough to be able to be dangerous?
I mean, oh. It just makes your head hurt.
We'll go into that when we come back.
Hang on just a second. First, let me tell you about our sponsor this half-hour.
It's Preborn. Every day, across the country, there's a moment that happens behind closed doors. A woman, usually young, scared, unsure, walks into a clinic. With a choice in front of her. A world that seems like it's pressing in on all sides.
And she just doesn't know what to do.
This is the thing. You know, I hate the abortion truck thing, where everyone is screaming at each other.
Can we just look at this mom for just a second? And see that in most cases, it's somebody who has nobody on their side.
That doesn't have any way to afford the baby.
And is scared out of her mind. And so she just doesn't know what to do. She has been told 100 times, you know, it's easy. This is just normal.
But when she goes to a Preborn clinic, if she happens to go there, she'll hear the baby's heartbeat.
And for the first time, that changes everything. That increases the odds that mom does not go through with an abortion by 50 percent.
Now, the rest of it is, okay, but I don't have anybody to help me.
That's the other thing that Preborn does. Because they care about mom as well as the baby. That's what is always lost in this message. Mom is really important as well.
So they not only offer the free ultrasound. But they are there for the first two years. They help pay for whatever the mom needs.
All the checkups. All the visits. And the doctor. Even clothing. And everything. Really, honestly.
It's amazing. Twenty-eight dollars provides a woman with a free ultrasound.
And another moment. Another miracle. And possibly another life.
And it just saves two people, not only the baby, but also the mom. Please dial #250. Say the key word, baby.
#250. Key word baby or visit Preborn.com/Beck.
It's Preborn.com/Beck. It's sponsored by Preborn. Ten-second station ID.
(music)
Daniel Kokotajlo.
He's a former OpenAI researcher and AI Futures Project executive director. And we're talking about the reckless race, to use his words, to build AGI.
You can find his work at AI-2027.com.
So, Daniel, who is going to end up with control of this thing?
DANIEL: Great question.
Well, probably no one.
And if not no one, probably some CEO or president would be my guess.
GLENN: Oh, that's comforting.
DANIEL: Like in general, if you want to understand, like, you know, my views, the views of my team at the AI Futures Project, and sort of how it all fits together, and why we came to these conclusions, you can go read our website, which has all of this stuff on it.
Which is basically our best-guess attempt at predicting the future.
Obviously, you know, the future is very difficult to predict.
We will probably get a bunch of things wrong.
This is our best guess. That's AI-2027.com.
GLENN: Yes.
DANIEL: Yeah. So as you were saying, if one of these companies succeeds in getting to this army of geniuses on the data centers, superintelligent AIs, there's a question of, who controls them?
There's a technical question of, does humanity even have the tools it needs to control superintelligent AIs?
Does anyone control them?
GLENN: I mean, it seems to me --
DANIEL: That's an unsolved question.
GLENN: I think anyone who understands this --
It's like, we build gates. But it's like a baby gate.
Imagine a baby trying to outsmart the parent.
You won't be able to do it.
You will just step over that gate.
And I don't understand why a super intelligence wouldn't just go, oh, that's cute.
Not doing that. You know what I mean?
DANIEL: Totally. And getting a little bit into the literature here.
So there's a division of strategies into AI control techniques and AI alignment techniques.
So the control techniques are designed to allow you to control the superintelligent AI, or the AGI, or whatever it is that you are trying to control.
Despite the fact that it might be at odds with you. And it might have different goals than you have.
Different opinions about how the future should be. Right?
So that's a sort of adversarial technique, where you, for example, restrict its access to stuff.
And you monitor it closely.
And you -- you use other copies of the AI as watchers,
to play them off against each other.
There are all these sorts of control techniques that are designed to work even if you can't trust the AIs.
And then there's the alignment techniques, which are designed to make it the case that you don't need the control techniques, because the AIs are virtuous and loyal and obedient and trustworthy, you know, et cetera.
Right? And so a lot of the alignment techniques are trying to sort of instill the specified values deeply into the AIs, in robust ways, so that you never need the control techniques, because they were never misaligned in the first place. So there's lots of techniques. There's control techniques, and there's alignment techniques. Both are important fields of research. Maybe a couple hundred people working on these fields right now.
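[Editor's note: a minimal sketch, in Python, of the watcher idea Daniel just described -- one copy of an AI reviewing another copy's proposed actions before they execute, so the setup can work even if the powerful model can't be trusted. Both model functions are hypothetical stand-ins, not any real library's API; a real control setup would be far more elaborate.]

def untrusted_model(task: str) -> str:
    """Stand-in for the powerful, possibly untrustworthy model."""
    return f"run_backup('{task}')"  # the action it proposes to take

def watcher_model(proposed_action: str) -> bool:
    """Stand-in for the watcher copy; True means the action looks safe."""
    forbidden = ["rm -rf", "exfiltrate", "disable_logging"]
    return not any(bad in proposed_action for bad in forbidden)

def controlled_step(task: str) -> None:
    action = untrusted_model(task)      # the AI proposes an action
    if watcher_model(action):           # a second copy reviews it
        print(f"executing: {action}")
    else:                               # blocked: escalate to a human
        print(f"held for human review: {action}")

controlled_step("user_files")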
GLENN: Okay. All right.
Hold on. Because both of them sound like they won't work.