Anthropic's CEO: 'We Don't Know if the Models Are Conscious'
Summary
Ross Douthat presses Dario Amodei on both his optimistic and pessimistic visions for AI. Amodei argues that AI at “peak human genius” level, multiplied by the millions, could cure major diseases, dramatically accelerate economic growth, and strengthen democracy — but warns that the speed of these changes threatens to overwhelm society’s adaptive mechanisms. The conversation turns to geopolitical arms-race dynamics, the risk of autonomous weapons undermining constitutional safeguards, the possibility of misaligned AI, and the unsettled question of AI consciousness.
Key Points
- AI as a “country of geniuses”: doesn’t need godlike superintelligence — 100 million peak-human-level instances tackling different problems would be sufficient to revolutionize biology, medicine, and other fields
- Biology is “too complicated for humans” — AI could dramatically speed up cures for cancer, Alzheimer’s, heart disease, and mental illness by acting as a full-fledged biologist
- AI could drive developed-world GDP growth to 10–15% annually, shifting the fundamental challenge from generating growth to distributing its benefits
- White-collar job disruption across software engineering, law, finance, and consulting — entry-level roles are most at risk, and the human-AI “centaur” phase may be very brief
- Blue-collar jobs have a temporary buffer because robotics lags cognitive AI, but there is no fundamental barrier to AI-controlled robots over time
- Supports targeted U.S.-China treaties (e.g., banning AI-enabled bioweapons) but skeptical about broader agreements to halt frontier model development
- AI-powered mass surveillance could make a “mockery of the Fourth Amendment” — constitutional rights must be reconceptualized for the AI age
- AI alignment is a “complex engineering problem,” not impossible — but the speed of development means something will likely go wrong with someone’s system
- Anthropic trains Claude against a 75-page “soul document” constitution that has evolved from prescriptive rules to principle-based guidance
- Anthropic doesn’t know whether its models are conscious but takes the question seriously — Claude has an “I quit” button for distressing tasks
- The distance between utopia and subtle dystopia may be uncomfortably small, hinging on choices that are difficult to distinguish in the moment
Referenced by
- So what's next? February 16, 2026