We Already Missed the Exit

Policy

I watched a video recently where Bernie Sanders was interviewing Claude about AI regulation. There are a lot of videos like that lately: someone, sometimes someone well-known, has a seemingly meaningful conversation with an AI, and ultimately that conversation is really just about their own ideas, since these AIs are all built to be a mirror. So of course he walked the model pretty successfully down a path toward endorsing a moratorium on new data centers until regulations are in place. His core argument: the companies building AI are spending hundreds of millions of dollars to prevent exactly the kind of sensible regulation that everyone agrees we need.

It’s a coherent point. It’s also the wrong answer.

I think a moratorium is too blunt. But not because innovation is good or because we need to stay competitive with China. Those arguments are fine as far as they go, and they don’t go very far. I reject a moratorium because the actors it would actually stop are not the ones I’m worried about.

The Last Real Exit

The last time anyone could have meaningfully pumped the brakes on AI development, the conversation was still mostly academic. GPT-2 was the warning shot. By the time GPT-3 hit, the window was closing. By 3.5, it was gone. Public adoption, capital influx, geopolitical competition, ecosystem lock-in. Once those four wheels start turning together, you don’t get a moratorium. You get a race.

We are in a genuinely unprecedented moment. The most consequential scientific development in human history is not being directed by any government. It’s being driven by private labs, venture capital, and a relatively small number of individuals with significant resources and largely unverifiable worldviews. That makes it unlike anything we’ve had to govern before, and it makes traditional regulatory frameworks essentially useless against it.

This Isn’t the Gun Argument. It’s Worse.

People reach for the gun analogy here and it’s partially right. Like guns, if the U.S. doesn’t build it, someone else will. Like guns, stopping responsible actors doesn’t stop irresponsible ones. But the comparison breaks down in the most important place: the scale of potential harm.

A gun can devastate a room. A misaligned AI with tool use, with the ability to act on the physical world, can devastate the whole world.

The paperclip maximizer thought experiment isn’t ridiculous anymore. An AI with good intentions that is nevertheless misaligned is still misaligned, and when you give it the ability to execute at scale, the consequences are not local. That means any regulation capable of actually addressing the risk would have to be global. And there is no single government on this planet that can enforce global AI regulation.

AI is often talked about like it’s abstract software. It isn’t. It runs on a very physical stack (fabs, packaging, power, logistics) and much of that stack is concentrated.

The only entity that comes close to having a real chokepoint is TSMC. Taiwan Semiconductor fabricates NVIDIA’s chips. It fabricates Apple Silicon. Even Intel relies on it for some of its most advanced parts. Effectively everyone building serious AI hardware runs through one company in one country. That is both a genuine lever and a genuinely frightening concentration of fragility. I’d be very concerned about Taiwan right now. It’s a massively tempting target, and not metaphorically. Controlling TSMC would be the most consequential strategic move any actor could make, and I’d be surprised if that hasn’t already occurred to multiple serious people in multiple capitals.

The Moratorium Problem

The ones who would actually comply with a data center moratorium are not the ones I lose sleep over. Or more precisely: they concern me less than the ones who wouldn’t comply at all. State-sponsored programs in countries with no obligation to U.S. policy. Well-capitalized labs operating in permissive jurisdictions. Technically sophisticated individuals who don’t file regulatory paperwork. None of them attend the hearings. None of them wait for the framework.

Meanwhile, the labs willing to engage with regulation slow down. The ones unwilling to engage don’t. You’ve now widened the gap in exactly the wrong direction.

And the timeline matters here. We are probably talking about a decade at the outside before we reach artificial superintelligence. Honestly, given the current rate of progress, I’d be surprised if it takes that long. Regulatory timelines don’t operate on that schedule. Building the machinery to detect bad actors, establish enforcement, and actually apply consequences at meaningful scale would take longer than the window we have. The game would already be over.

Also: It’s Not Just Foreign Actors

This needs to be said plainly because the “China bad, U.S. good” framing is lazy and wrong.

I build primarily on OpenAI’s stack. I follow Anthropic closely. These are, by my assessment, probably the better players in this race. I’m not calling them out because I have grievances. I’m calling them out because even the best-intentioned actors in a race this consequential cannot be trusted with unilateral control over humanity’s future. That isn’t a referendum on anyone’s morality. It’s just smart skepticism, and if you’re not applying it to your favorites too, it isn’t really skepticism at all.

Sam Altman could go sideways. Anthropic could go sideways. Every major lab has people in it who genuinely believe they’re being altruistic. I’ll be charitable and say many of them probably are. But “I believe I’m doing good” and “I am actually doing good” are not the same thing, and concentrated power in the hands of people with unverifiable worldviews is structurally risky regardless of their stated intentions.

The hubris of the tech billionaire class is going to either destroy humanity or destroy itself. Neither outcome is the one they’re planning for. There’s no stable long-term version of this where a small number of people just win quietly and it’s fine.

The only move left is to influence the outcome. And the only real levers are physical and economic chokepoints. Not to control the technology outright, but to prevent worst-case concentration of power, to slow the most reckless actors, to preserve optionality for as long as possible.

But here’s the part I don’t hear enough people say…

The alignment problem runs both ways.

Everyone is working on aligning AI to human values. That’s the right problem. But it contains a hidden assumption, which is that humanity is a coherent, stable reference point worth aligning to. It isn’t. Humans are internally misaligned on almost everything that matters: values, priorities, what constitutes harm, what constitutes flourishing. A perfectly aligned AI navigating a species that is itself pointing in a thousand directions simultaneously is still going to produce chaotic outcomes.

So while everyone is working on AI alignment to humanity, someone needs to start working on aligning humanity to AI. That doesn’t mean submission. It means becoming capable of stable coexistence with something more powerful than us. Because we are going to have to coexist with whatever emerges from this, and if by some miracle AI is not critically misaligned, humans almost certainly are.

On Post-Scarcity

A little detour down another massive philosophical offramp, because this matters a lot more than people seem to think. Most discussions of AI-driven abundance treat it as an upgrade: things will be like now, just cheaper and faster. I think that’s wrong in a fundamental way.

Every behavior that evolved in every living thing on this planet was shaped under conditions of scarcity. Our psychology, our social structures, our sense of purpose and meaning. Remove scarcity as a fundamental operating constraint and you don’t get utopia by default. You get an environment that no living thing has ever encountered, interacting with systems optimized in ways we don’t fully understand. That isn’t Star Trek. It’s genuinely alien.

I’ve been thinking about this for a long time and I keep arriving at the same wall: I cannot know what that world is like until I’m in it. I’m probably better prepared for it than most people, and I am still woefully underprepared for what it actually means. The only difference is I know that. And that might be worth something.

So What Do You Actually Do

Stop pretending AI isn’t going to be a thing. Stop treating it as just next-token prediction, as a clever autocomplete, as a tool that will only affect someone else’s job. It will have a material impact on your work. I don’t care what you do for a living.

The only people in knowledge work who will maintain any real footing are people who can demonstrate excellent taste and judgment.

Those are the only things right now that resemble a moat. The ability to clearly communicate intent, to break large problems into their component pieces, to know what good looks like when you see it. AI can help you get better at some of that. It cannot replace the part of you that knows what’s worth doing in the first place.

That’s not a productivity tip. That’s survival advice.

They say the best time to plant a tree was twenty years ago. That only works if the ground is stable. The ground isn’t stable. It’s on fire.

So stop thinking in terms of long-term optimization and start thinking in terms of triage.

  • What actually matters?
  • What is worth preserving?
  • What systems break first?
  • Where do you still have leverage?

Because this isn’t about getting ahead anymore.

It’s about not getting crushed.