So what's next?
If we extrapolate the growth curve of AI outward for just a few more years, it’s going to change everything. What should a rational person do to get ready?

I’ve been on a bit of a Dario Amodei kick this weekend. I listened to his chat with Ross Douthat while driving, and I re-read his two essays, Machines of Loving Grace and The Adolescence of Technology, last night. Given some of my own personal and professional experience with generative AI’s impact on software engineering, plus some back-of-the-napkin extrapolation of current trends, I think Amodei’s “country of geniuses in a datacenter” is both plausible in the relatively near term and massively disruptive to, well, just about everything.
So the pressing question is: how should one act in the face of such a belief?
The situation
Let’s start by examining some of Amodei’s claims and concerns. The idea of “the compressed 21st century” is a compelling one–a century of change in 5-10 years. Imagine that in 2016 we had been living at the technological level of 1916, and that ten years later we were living in the modern world, at least technologically. In some ways it would feel like magic: there was a massive polio outbreak in 1916, and of course today polio is virtually eradicated, with only Afghanistan and Pakistan still suffering from endemic poliovirus. In other ways it would be terrifying: look at how close we came to nuclear armageddon during the Cold War, and that was with decades to develop safeguards and lines of communication. If the imperial powers who were slaughtering each other in the trenches of Europe in 1916 had been handed ICBMs, I’m fairly certain I wouldn’t be here writing this today.
Suddenly finding ourselves in the world of 2126 ten years from now would be just as magical and terrifying. All of our diseases cured, but autonomous weapons that don’t refuse illegal orders. Millions or billions of genius-level intelligences working on the hardest problems across every intellectual field (like AlphaFold on protein folding, but for everything), while millions or billions of people are put out of work. Nearly perfect systems of surveillance and control could cement authoritarian regimes in ways that would be almost impossible to resist. The vast majority of productive assets could be controlled by just a few actors (both nation-states and corporations), making today’s worsening inequality look positively egalitarian in comparison.
The speed at which these changes might happen makes it much more likely that shortcuts will be taken and that local maxima will be locked in, in ways that don’t benefit the majority of people. The way emergency measures tend to outlast their emergencies is illustrative–think of the PATRIOT Act after 9/11, or how the system China built for monitoring COVID-19 continues to be used to monitor the population.
Of course, not everyone is convinced. The same revolution playing out over 50 years instead of 5 would still be a tremendous advance for humanity, without so many of the downsides that speed will bring. And it might not happen at all–would a billion instances of Claude Opus 4.6 or GPT 5.2 or Gemini 3 be transformative? Maybe, but probably not in all of the ways that Amodei and others are predicting.
What should I do?
I’d like to answer a “what should we as a collective whole do?” question, but I don’t think I’m qualified. Instead, I’ll look at what I–a mid-to-late-career software engineer with two teenagers (one in middle school, one about to go to college)–should do.
- Invest, specifically in public markets and private companies with exposure to the productivity gains that “a country of geniuses in a datacenter” might bring. Eventually everything might have that exposure, but at least at first it will be unevenly distributed. A world where I lose my primary income but my investments increase 10x is actually not a bad world at all (of course, I’m fortunate to be seeing this from my perch as someone who has already been able to accumulate wealth–if I were 23 right now I’d have a very different viewpoint).
- Maintain optionality, specifically by reducing fixed obligations and staying as liquid as possible, while avoiding any bets that assume the world will stay the same, or will go in one particular direction to the exclusion of all others. In all likelihood some mix of the best- and worst-case scenarios is what will actually happen, and being as flexible as possible, with reserves to invest in promising new opportunities, is probably the right meta-strategy.
- Embrace the centaur phase of software engineering, even if it’s short. Looking at how different my job is today compared to just six months ago, I think software engineering will be hugely impacted by even the short-run gains we’re likely to see. That doesn’t mean there’s no opportunity, but leaning into the tools and learning to be proficient in this new and rapidly changing world is going to be key. At some point, though, it’s possible that continuing as a software engineer will have negative expected value compared to investing my time elsewhere.
- Engage more heavily in politics. If anything like the AI-maximalist case occurs, the social contract will either adapt very rapidly or break in potentially very violent ways, and the quality of our political leadership is likely to be key in determining which road we end up on. Our current leadership in the United States is, hrm, not inspiring, with its persistent focus on increasingly obsolete industries–and this is easy mode compared to what may be coming.
- Help my kids navigate the new world. This is the bullet with which I’m the least comfortable. My eldest will be graduating college in 2030, but my youngest will be graduating college in 2035. Even at a slower-than-projected rate of change, he will be graduating into a very different world than the one we’re in today. What mix of skills will be useful? How should they think about STEM when so many knowledge-worker jobs may not exist anymore? How should they hedge against the possibility that LLMs hit a wall and nothing follows quickly behind to maintain the rate of change? Honestly, I don’t have great answers to any of these questions, and it worries me deeply.
Don’t panic

There are no certainties in life. This might all fizzle and end up being a 0.5% to 1% increase in global productivity–nothing to sneeze at, but not world-changing, either. But humans are universally quite bad at thinking about exponentials, and AI growth is following such a curve. It feels like something big is happening. It makes sense to think about your own future and how it might look in a world as alien to us today as our lives would look to the people of the late Edwardian era.
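To make the exponential point concrete, here’s a minimal sketch. The 7-month doubling time is a purely hypothetical figure chosen for illustration, not a measured property of any real system:

```python
# Minimal sketch: steady exponential growth vs. linear intuition.
# DOUBLING_MONTHS is a purely hypothetical figure for illustration,
# not a measured property of any real AI system.

DOUBLING_MONTHS = 7

def capability_multiple(years: float) -> float:
    """Capability relative to today after `years` of steady doubling."""
    return 2 ** (years * 12 / DOUBLING_MONTHS)

for years in (1, 2, 5, 10):
    print(f"{years:>2} years -> {capability_multiple(years):,.0f}x")

# Prints roughly: 3x at 1 year, 11x at 2 years, 380x at 5 years,
# and ~145,000x at 10 years.
```

Anchoring on the first year or two and extrapolating linearly would predict something like 30x over a decade; the exponential lands more than three orders of magnitude higher. That gap is exactly where intuition fails.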
And, as always, know where your towels are.
Sources
- AI Will Transform the Global Economy. Let's Make Sure It Benefits Humanity. (article)
- Anthropic's CEO Says We're in the 'Centaur Phase' of Software Engineering (article)
- Anthropic's CEO: 'We Don't Know if the Models Are Conscious' (video)
- Coded Social Control: China's Normalization of Biometric Surveillance in the Post-COVID-19 Era (paper)
- DeepMind's AlphaFold Changed How Researchers Work (article)
- Don't Believe the AI Hype (article)
- Great Polio Epidemic (article)
- Machines of Loving Grace (article)
- Meta's AI Chief Yann LeCun on AGI, Open-Source, and AI Risk (article)
- Something Big Is Happening (article)
- The Adolescence of Technology (article)
- The Simple Macroeconomics of AI (paper)
- The World After Coronavirus (article)
- Trump Wins Another Fake Award — but He Actually Deserves This One (article)