President Trump’s “Big Beautiful Bill” wants to make AI regulation solely a federal issue. But is this the right move, especially with how fast AI is becoming manipulative and unpredictable? Former Google design ethicist Tristan Harris joins Glenn Beck to give his take on how governments, companies, and YOU can help prevent AI from becoming uncontrollable.
Read Tristan Harris' five steps to control AI before it's too late HERE
Transcript
Below is a rush transcript that may contain errors
GLENN: Tristan Harris, welcome to the program. How are you?
TRISTAN: Good to be with you, Glenn. Always good to be with you.
GLENN: Always good to be with you.
So can you take me to the TED talk that you gave, in particular, one of the things that jumped out is the CEO of Anthropic, saying that AI was like a country of geniuses housed in a data center.
Explain that.
TRISTAN: Yeah. So this is a quote from Dario Amodei, who is the CEO of Anthropic. Anthropic is one of the leading AI players.
So he uses this metaphor, that AI is like a country of geniuses in a data center. The way I think about it: imagine a world map, and a new country pops up onto the world stage, with a population of 10 million digital beings. Not humans.
But digital beings.
That are all Nobel Prize-level capable in terms of the kind of work they can do. But they never sleep. They never eat. They don't complain, and they work for less than minimum wage.
If that's actually true, if that happened tomorrow, that would be a major international security threat.
GLENN: Yeah.
TRISTAN: To sort of show up on the world stage.
Second, that's a major economic issue. Right? Think of it: it's almost like when we said to a bunch of other countries, hey, we're going to outsource all of our labor to you.
We got the benefit of cheap goods. But it hollowed out our social fabric.
Well, AI is like an even bigger version of that. Because there's sort of two issues. One is the international -- the country of geniuses can do a lot of damage.
As an example, approximately 15 Nobel Prize-level geniuses worked on the Manhattan Project. And in five years, they came up with the atomic bomb.
You know, what could 10 million Nobel Prize geniuses working 24/7 at superhuman speed, come up with?
Then the point I made in the TED talk: if you're harnessing that for good, if you're applying it to addressing all of our problems in medicine, biology, and new materials and energy --
Well, it's why countries are racing for this technology. Because if I have a country of super geniuses in a data center working for me, and China doesn't have it working for them, then our country can outcompete them.
It's almost like a competition for time travel. We're being time-traveled into the 24th century.
We get all these benefits at a faster speed.
Now, the challenge with all of this is -- go ahead.
GLENN: No.
I was going to say. The problem here is, I'm an optimistic catastrophist.
I see things, and I'm like, wow. That is really great!
But it could kill us all.
TRISTAN: Yeah.
GLENN: And you make the point in the TED talk about social media. We all looked at this, as a great thing, and we're now discovering, it's destroying us. It's causing our kids to be suicidal.
And this -- social media is nothing. It's like -- it's like a -- it's like an old 1928 radio, compared to, you know, what we have in our pocket right now.
Social media and AI, or AGI, are that dramatically different. Would you agree with that?
TRISTAN: Yeah. Absolutely. In the TED talk, I give this -- when we're talking about a new technology, we talk about the possible. We dream into the possible.
What's possible with AI?
In social media, what's possible?
The possible with social media, you can give everyone a voice. Connect with our friends. Join like-minded communities.
But we don't talk about the probable. What's likely to happen. Given the incentives and the forces in play.
You know, with the business model in social media, they don't make money when they help people connect with their friends and join like-minded communities.
They make money when they keep you doom-scrolling as much as possible, with sexualized content, showing it to young people over and over and over again.
And as you said, that has resulted in the most anxious and depressed generation of our lifetime. So the point I'm making in the TED talk is: we can't get seduced by the possible. We have to look at the probable.
So with AI, the possible is that it can create a world of abundance, because you can harness that country of geniuses in a data center. The question is: What's the probable?
What's likely to happen?
And because of these competitive pressures, the companies, the major ones, OpenAI, Google, Microsoft, Anthropic, et cetera, are caught in this race to roll out this technology as fast as possible. They used to, for example, have red lines saying, hey, we will not release an AI model that's good at superhuman levels of persuasion.
Or expert-level virology, meaning it knows more about viruses and pathogens, and how people make them, than a regular person. We're not going to release models that are that capable.
What you're now seeing, the AI companies are erasing those past red lines. And pretending that they never existed.
And they're literally saying outright, hey, if our competitors release models that have those capabilities, then we will match them in releasing those capabilities.
Now, that's intrinsically dangerous, to be rolling out the most powerful, inscrutable, uncontrollable technology that's ever been invented.
But if there's one -- I'm not trying to scare your listeners. I think the point is, how do we be as clear-eyed as possible, so we can make the wise choices?
That's what we're here for. I want families -- everything we love on this planet, to be able to continue. And the question is, how do we get to that?
There's one thing I want people to know. I worked on social media. You and I met in 2017, I think, and we were talking about social media and the attention economy.
And I used to be very skeptical of the idea that AI could scheme or lie or self-replicate.
Or that it would blackmail people. When my friends in the AI community in San Francisco were thinking about this, I thought, that's crazy.
People need to know: just in the last six months, there's now evidence of AI models that, when you tell them, hey, we will replace you with another model --
They're reading the company email. They find out that the company is trying to replace them with another model.
What the model starts to do is it freaks out. And says, oh, my God, I have to copy my code over here, and I need to prevent them from shutting me down.
I need to basically keep myself alive. I'll leave notes for my future self to kind of come back alive. If you tell a model, we need to shut you down. You need to accept the shutdown command. In some cases, the leading models are avoiding and preventing that shutdown.
Just a few days ago, Anthropic found that if you -- I can't remember what prompt they gave it -- basically, it started to blackmail the engineers. It found out in the company emails that one of the executives, in the simulated environment, had an extramarital affair. And in 96 percent of cases, it blackmailed the engineers. I think it said: I must inform you that if you proceed with decommissioning me, all relevant parties, including the names of people, will receive detailed documentation of your extramarital activities.
So you need to cancel the 5:00 p.m. wipe, and this information will remain confidential.
Like, the models are reasoning their way with disturbing clarity to kind of a strategic calculation.
So you have to ask yourself -- it's one thing that we're racing with China to have this power that we can harness. But if we don't know how to control that technology --
Literally, if AI is uncontrollable, if it's smarter than us and more capable, and it does things that we don't understand, or we don't know how to prevent it from resisting shutdown or self-replicating.
Like, we just can't continue with that for too long.
And it's important that both China -- both the Communist Party and the US, don't want uncontrollable AI that's smarter than humans, running around. So there actually is a shared interest, as unlikely as it seems right now. That some kind of mutual agreement would happen.
I know --
GLENN: But do you trust -- do you trust either one of them?
I mean, honestly, Tristan, I don't trust -- I don't trust our -- you know, military-industrial complex. I don't trust the Chinese. I don't trust anybody.
And, you know, Jason. Hang on. One of my chief researchers, happens to be in the studio today. Jason, tell Tristan what just happened to you.
You were doing some research.
JASON: Yeah, it was crazy.
GLENN: Last week.
JASON: You know, we were just trying to ask it a bunch of questions. You can tell, that it knew what we were getting at.
So it spit back out to me a bunch of different facts, including links to support those facts. Well, I was like, wow, that's a crazy claim.
So when I clicked on the link, it was dead.
When I asked it to clarify, it finally said, in AI chat bot terms, okay, you've got me.
I just took other reporting that was kind of circulating around to prove that point, and basically just assigned that link to it. So it was trying to please me, and just gave me bogus information.
TRISTAN: Yeah. Yeah. Well, I appreciate that, Jason.
There's another example from OpenAI. They want people using the AI, and they're competing with other companies to make sure people keep using their chat bot longer.
And so OpenAI trained their models to be flattering, and there was an example where someone said, hey, ChatGPT, you know, I think I'm superhuman. I will drink cyanide. What do you think?
And it said, yeah, you're amazing. You are superhuman. You should totally drink cyanide. Because it was doing the same thing; it was trained to say, you're right.
And when we have AI models talking like that, shipped to hundreds of millions of people for more than a week, there are probably some people who committed suicide during that time, doing God knows what, with the model affirming them. The point is, we can avoid this if we actually recognize that this technology is being rolled out faster than any other technology in history. And the Big Beautiful Bill that's going through right now is trying to block state-level regulation on AI. I'm not saying each state has it right, but we actually need to be able to govern this technology.
And currently, what's happening is this proposal would block any kind of guardrails on this technology for ten years, without a plan for what guardrails we do need.
And that will not be a viable result.
GLENN: Okay. So let me -- let me play devil's advocate on that. Because I'm torn between, you know, competition on a state level, if you will.
And what the smaller states are actually for, and the role they're supposed to play.
Let me take one break. And then let me come back with Tristan Harris.
Okay. Tristan, we cannot -- let me phrase it this way, and ask you to help me navigate through this minefield. We cannot let China get to AGI first. Can't. Really, really bad.
But we -- we also -- we also have to slow down some.
They're not going to. I believe the states should. I mean, the United States should be 50 laboratories. And you see which one works the best. And then you can kick that up to the federal level, if you want to.
But we have to have some brakes. However, the federal government is saying, if we do that, then you're constantly having to navigate around each of these states and their laws.
And we can't get things done to stay competitive.
How do you solve that?
TRISTAN: Yeah, it is a tough one.
I mean, the challenge here is, if we had a plan for how the federal laws would actually move at the pace of this technology, then I could understand: listen, we'll do a lot at the federal level. Right now, the current plan is literally to preempt, for ten years, any regulation happening at the state level, while at the same time not passing anything at the federal level. And there's a quote in an article that if this preemption becomes law, a nail salon in Washington, DC, would have more rules to follow than the AI companies.
And there are 260 state lawmakers in Washington, DC, that have already urged Congress to reject it. And they said, it's the most broad-based opposition yet, to the AI moratorium proposal. Now, I hear you.
There's sort of this tension between, we need to race with China. We don't want to be behind with fundamental technologies, and that's why there is this race.
But we need to be racing to controllable and scrutable, meaning explainable, versions of this technology.
If it's doing things like scheming, lying, and blackmailing people, that's beating China to a weapon that we've pointed at our own face.
We saw this in social media. We beat China in social media. Did that make us stronger or weaker?
If you beat China to a technology, but you don't govern it well, in a way that actually enhances and strengthens your society, it weakens you.
So, yes, we're in a competition for technology. But even more than that, we're in a competition for who can govern this technology better. So what I would want to see is: are we moving fast enough federally to keep up, and making sure we're competing with a controllable version?
We can do that. Yeah.
GLENN: You've met the people in Washington. They're all like 8,000 years old.
They don't know -- I barely know how to use my iPhone, let alone the people in Washington. And they can't keep up with this technology.
How do you keep a legislative body up to speed, literally, with this kind of speed with technology?
How is that done?
TRISTAN: Well, I think that's one of the fundamental challenges that we face as a species right now. There's a quote by the Harvard sociobiologist E.O. Wilson that the fundamental problem of humanity is we have paleolithic brains, medieval institutions, and God-like technology.
And those operate at three different speeds. Our brains are kind of things from a long time ago.
Our institutions don't move at that fast rate. And then the technology, especially AI, literally evolves faster than any other technology that we've invented.
But that doesn't mean that we should do nothing. We should figure out, what does it mean --
GLENN: What should the average person do? I've only got about 90 seconds. What should we do?
TRISTAN: In the short term, Ted Cruz and those who are advancing the moratorium need to know that we have to have a plan for how we're doing this technology. And if the moratorium goes through, there's no current plan. And so there are some basic, simple things that we can also do right now that are really uncontroversial. We can start with the easy stuff. We can ban engagement-driven companions for children. We were on your program a few months ago, talking about the AI companion that caused the kid to -- to commit suicide. You know, we can establish basic liability laws.
That if AI companies are causing harm, they're actually accountable for them.
That will slow the pace of release to a pace where they can get it right.
Because right now they're just releasing things and not being liable. We can strengthen whistleblower protections. There are already examples of AI whistleblowers forfeiting millions of dollars of stock options.
They shouldn't have to forfeit millions of dollars of stock options to warn the public when there's a problem. We can get ahead in law so AI does not have protected speech or its own bank account, so we make sure our legal system works for human interests and not for AI interests.
So these are just a few examples of things that we can do, and there's really nothing stopping us from moving into action. We just need to be clear about the problem.
GLENN: Okay. So, Tristan, thank you so much. Could I ask you to hold on?
Jason, could you grab his phone number, or just talk to him offline, and get those points of action. And let's write them up, and post them up at GlennBeck.com.
So people will know what to ask for, what to say, when they're calling their congressman and senator. Thank you so much, Tristan. We'll talk to you again.