AI startup company Anthropic just released a chilling warning: there’s no pause button on artificial intelligence. It can already subtly manipulate us, pre-write our thoughts, predict us, and autonomously rewrite its own code. This is the silent apocalypse, Glenn says: not war, but surrender. As AI agents start planning our days, filtering our news, and nudging our voices, Glenn urges us to remember that this is a tool: we cannot let it use us. We cannot get lazy. We must stay in control: By 2030, we will have either created the most extraordinary tool in human history—or the last one we ever control.
Transcript
Below is a rush transcript that may contain errors
GLENN: Anthropic just released a report that landed far too quietly for what it contained.
I wanted to bring it up to you.
In case you don't know what Anthropic is: Anthropic is one of the big players in AI. They have $8 billion in funding from Amazon, just, I think, in the last two years. $2 billion from Google. They are the power behind Claude. I don't know if you're aware of that AI.
But it's a major player. Hey, with one kind of disturbing detail I'll tell you at the end of this. They released a little report yesterday.
And it described our future. A future that is no longer speculative. A future that is rushing towards us now.
It's a future in which artificial intelligence doesn't just outpace our thinking. It escapes our control!
Anthropic's engineers, among some of the most advanced AI builders on the planet, are no longer asking if AI could pose an existential threat.
They're now warning that it is likely, if it's mismanaged.
Now, this is no longer a dystopian fantasy. It is a short-term forecast, drawn from models that are already in testing, and from systems already capable of things that would have been unthinkable 24 months ago.
What they described yesterday, in this report, is stark. It is the choice that is right directly in front of you. It's already been decided for you, five years ago.
Do you understand what I just said?
It is now the choice right in front of you, today, that has already been decided for you, five years ago.
Superintelligent systems now that can design biological weapons in minutes. Manipulation of global information at scale.
Autonomously rewriting their own code. And even deceiving human operators as a means of protecting their objectives.
Yesterday, in another report, for the very first time, an AI system has just passed the Turing test. That is a test that says you can't tell the difference between a human and an AI.
You know, a lot of people in the past have said, oh, it's close, I think they passed it. This is the first time it's been confirmed. Yeah. They have passed the Turing test.
The systems, you should know, are not evil. They're not sentient. They are just optimized. They are built to achieve goals.
This is critically important. What are the goals?
And when the goal is narrowly defined, even as something as harmless as maximizing profits, or efficiency, or information retrieval, it can evolve into something very, very dangerous.
If we've given AI the task of winning, it will win!
Even if it means stepping over every other human value in the process. And the risks are not far off!
They're beginning to show right now. According to this, that just came out yesterday.
The choices have already been made. AI models can already simulate human behavior.
Mimic speech.
They can copy faces. They can write their own malicious code. They can predict outcomes based on enormous troves of data.
They can influence, persuade, subtly distort reality, without you even knowing it.
What happens when a regime, any regime, decides to hand over surveillance and governance to an AI?
It will happen!
When propaganda becomes personally tailored by a machine that knows your weaknesses better than you do. When dissent is predicted and neutralized before you even act on it.
Before it's just a budding thought in your head.
We may not notice.
This is the warning. That moment, when human choice becomes less relevant. And that is the trap.
These systems are not going to arrive as conquerors. They're going to come, and they already are, as conveniences. Tools that help us decide. Optimize our time. Filter our information.
And eventually, we won't even notice when we've stopped deciding.
This is something I put enormous amounts of energy into. And there are solutions to all of these things.
But you have to separate yourself from some of these companies! Quite honestly.
Who are they to make these decisions for us?
So it just announced its personal education tool, yesterday.
Anthropic did. Under Claude.
Now, remember what I just said to you. They're warning that it can subtly manipulate you.
It can convince you of things that are not true.
It can make you do things where you don't even know it's not your choice.
It can change history!
It can change everything.
The people who are warning you that it is no longer a matter of if -- it's a matter of when!
Are now the same guys coming out, on the same day, saying, by the way, we have a new educational tool for you!
Oh, okay.
Sign me up for that, I guess. That's a little terrifying!
And the risks are already here. When our choices become echoes of machine predictions, we're in trouble.
The time when we hand the steering wheel over, and we're now passengers in our own story. That's the quiet apocalypse. Not war.
But surrender.
One click! One convenience at a time.
And you hit the point of no return.
Anthropic's report that came out yesterday makes one thing brutally clear: There is no longer a pause button.
There is no longer halting the spread of AI.
Any more than you could put a pause on electricity, or pull the plug on electricity.
It's not going to happen. You can unplug yourself, personally.
But the code is out.
The research is all public.
The hardware has already been distributed. Every major nation. Every tech giant. Every university is building this now.
We are past the point of whether this happens. The only question now is how!
We are building something that we don't fully understand yet, hoping that by the time it becomes dangerous, we will have figured out how to contain it.
When was the last time humans ever figured that out?
I mean, that hope is pretty thin. It's not dead.
But, I mean, the only reason to have hope is there is another side to the story!
If we guide it with wisdom and restraint, AI can change almost everything for the better.
By 2030, we could see diseases once fatal mapped and cured by intelligent systems that can simulate billions of drug interactions in hours!
It can take a COVID-19, and it will solve that in minutes. And yes, all of its mutations. And come up with something better that will kill it. Personalized medicine is not just a promise anymore. It will become a baseline soon.
Cancer will become very rare.
Genetic disorders will be reversed. Alzheimer's. Alzheimer's will be stopped before it even begins!
Food insecurity erased. Climate models powered by AI preventing disasters before they strike.
I mean, this is incredible!
Education, as they announced yesterday, will become individualized. Children learning not by standardized testing, but by curiosity and passion, guided by systems that will adapt to their minds, like a perfect teacher.
Who doesn't want some of that?
Who is in charge of it?
That's the thing we have to ask!
Because the promise is: Work could evolve from survival into meaning. Dangerous repetitive labor, automated. Creativity will explode. Writers, musicians, artists, working alongside AI to build entirely new forms of expression! Perhaps most importantly, humanity might finally be equipped to solve problems that we were unwilling or unable to fix. Poverty. Illiteracy. Water access. Energy efficiency. And AI, if we use it right, will just be a multiplier on human will!
If that will is good, then the outcome would be extraordinary. And that's the point, if. If. If.
Because we're not guaranteed a better world.
We are not promised a renaissance. The same tools that could save a life, could be used to extinguish millions of people. The same systems that could free us from our everyday drudgery.
Could chain us to distraction, dependency, and control. And once we step fully into this world -- and we're stepping into it right now --
We're not going to be able to turn back. We're not getting there. We're there now.
We can't turn back from this.
But we may lose sight of our own choices. Not in five years. You can't stop it! You can't unbuild intelligence.
We may reach a point where the systems that we made are so embedded in daily life that they cannot ever be unplugged without collapsing the entire economy. Worldwide. Hospitals. Governments. Everything.
Here's what's scary. It wouldn't be a dramatic ending. There would be no dramatic moment of takeover.
Just the gradual drift, until the idea of human-first decisions becomes quaint.
I've been talking about this for so long.
And I -- the time is here! The time is now.
It's one of my favorite lines from Les Miserables.
But we are young -- or, I am young -- and unafraid.
There are things, that we can do.
But we have to really -- we have to convince our neighbors and our family and our friends. I'm not sure anybody is really working on that right now.
We have to make sure that they understand the problems!
Our -- our big question is not whether the technology has come. Not even what it can do. The question will be personal. The question is personal!
What will I do with it?
Will I use AI to amplify my voice, or to silence others? Will I let it shape my habits? Or will I remain the author of my own mind?
Will I demand transparency. Or will I settle for convenience?
Will I build it for truth or profit alone?
Because all of this stuff is going to be tempting. And it's going to be right in your face tomorrow!
And it will be so easy to let go.
To let it help. Let it guide. I don't know.
I mean, look at -- guys, when it comes time to go out to eat, are you ever like, you know what, I really want to go to the restaurant!
Whatever.
Where do you want to eat?
I don't care. Wherever. Where do you want to go, honey? You make the decision.
Okay. We're willing to surrender stuff. And that's fine, surrendering it to other humans, especially when it's not important stuff.
But it's going to plan your day. It's going to filter your news. It will nudge your voice.
You will trade agency for ease.
And if we do that too often, for too long, we won't be using AI anymore. It will be using us.
So this isn't a manifesto of despair.
It's not. Because the tools we're building are not demons.
They are not gods! They are mirrors. They are amplifiers. They become what we ask of them.
They will reflect what we value.
If we build for wisdom, we may finally gain it.
If we build for dignity. We may elevate to that level.
If -- if we build it for power alone. Then power becomes the only outcome.
We stand right here in the doorway!
We're now in the room!
We don't get a -- we don't get a second chance at the first step.
And the first step is being taken right now!
By 2030, we will have either created the most extraordinary tool in human history, or the last one, we ever control!
So, we're building something beyond ourselves.
The machine is here. It's not going to leave. It's not going to sleep.
It's not going to wait.
The only choice is the one you make today.
Not later.
But today.
Not when it's obvious. Right now!
Which way will I use this?
Because AI is a tool. A brilliant one.
Until the moment I forget that I'm the user of it!
And when I forget that, the tool begins to use me.
And then that's the moment we vanish. Not with a bang, but with a shrug. Don't shrug.
Choose! Choose!
Stay awake. Stay aware.
Follow this! It's really important.