AI: The Myth of Inevitability, the Folly of Denial
The existence of AI may be inevitable, but its impact isn’t; smart, measured regulation could guide its use, protect workers, and preserve innovation.
The word “inevitable” offends our moral sensibilities. It feels like surrender, like an abdication of agency, like a shrug in the face of forces that ought to be resisted. It sounds cold, technocratic, even cruel when real people — workers, families, communities — are already feeling the first tremors of disruption. So the instinct to recoil from it is understandable. Healthy, even.
But inevitability, properly understood, is not a moral judgment. It is a structural one.
When we say artificial intelligence is “inevitable,” we are not saying it is good, just, or unstoppable in every form. We are saying something more mundane and more uncomfortable: The underlying methods are widely known, the economic incentives are enormous, the tools are software-based and globally scalable, and multiple nations and thousands of organizations are capable of building them. That is not prophecy. That is supply chain logic.
If your house sits in a floodplain, saying “The river will rise again” is not defeatism. It is the beginning of adult decision-making: levees, zoning, insurance, evacuation plans. Denying the river does not make you braver. It just makes you wetter.
Jonathan Simon’s essay channels a powerful and legitimate anxiety: that AI is being rushed forward by actors driven more by profit and power than by stewardship, and that ordinary people will pay the price.
History offers plenty of reasons to share that suspicion. Social media promised connection and delivered polarization; globalization promised prosperity and delivered both concentrated wealth and societal dislocation. The winners, as always, wrote the first draft of the rules.
But here is where the argument veers off course: It treats inevitability as an argument for resistance to the existence of AI itself, rather than resistance to how it is deployed, governed, and distributed. That is a strategic mistake.
AI is not an asteroid hurtling toward Earth, nor is it a tumor that can be excised with one decisive operation. It is closer to a wildfire in a drought. The conditions that produce it — data abundance, computing power, global competition, and the economic value of automation — are already in place. Wildfires, in that sense, are inevitable. But societies do not respond by declaring fire should not exist. They build codes, firebreaks, response systems, and liability regimes. They organize.
Simon insists that AI is being “inflicted by a small group of reckless entrepreneurs.” That framing is emotionally satisfying and strategically wrong. The research base is diffuse — universities, startups, open-source communities, and nation-states all contribute. Even if one company halted development tomorrow, others would fill the vacuum. And if one country tried to stop AI entirely, it would not stop the technology; it would merely cede leadership to others who may be less constrained by democratic norms. One cannot mock geopolitical competition out of existence. The Cold War logic may be ugly, but ugliness does not make it unreal.
This does not mean the critics are wrong about the dangers. Job displacement is already visible, and the cruelty of being asked to “train the system that replaces you” is not theoretical. The fear that AI could hollow out entire categories of cognitive labor is not hysteria; it is a plausible extrapolation of economic incentives. If firms can substitute software for salaried workers, they will. Moral outrage does not change quarterly earnings.
But that truth leads to a conclusion different from the one that Simon draws. The fight is not over whether productivity-enhancing technologies will be used. The fight is over who captures the gains, who bears the risks, and how transitions are managed. That means policy, not prayer. Liability, not lamentation. Rules about procurement, auditing, licensing, competition, and worker protection. In other words: governance.
Yet this argument rests, implicitly, on a premise that no longer fully holds: that governance will, in fact, emerge from the executive branch in something like its traditional form. For most of the past two and a half centuries, Americans have assumed that when transformative technologies arrived, federal leadership — however imperfect — would eventually set baseline rules of the road. That assumption now feels less secure.
An administration that treats safeguards as impediments rather than necessities, and that views rapid deployment as a geopolitical imperative regardless of long-term social cost, changes the calculus. In such a climate, appeals to prudent federal stewardship risk sounding like arguments addressed to a government that has chosen not to listen.
But the absence of executive restraint does not eliminate governance; it merely shifts where governance must arise. The American system was designed precisely for moments when one branch proved unwilling or unable to act with foresight.
Congress can legislate liability, competition rules, and worker protections. States, as they have done with privacy and environmental regulation, can establish de facto national standards through the size of their markets. Courts, imperfectly but consequentially, can impose accountability through tort law, administrative review, and constitutional limits.
To recognize AI’s structural inevitability, then, is not to place blind faith in any particular administration. It is to acknowledge that the long arc of governance in the United States has rarely depended on a single actor’s wisdom. More often, it has depended on the friction, delay, and negotiation built into the system itself.
None of these pathways is swift or elegant. All are messy, contested, and slower than technologists would prefer. That is not a flaw. It is often how democratic societies buy time to absorb disruptive change.
In this sense, the present moment is not an argument against governance but a reminder of its distributed nature. When the center of gravity in Washington tilts toward deregulated acceleration, the burden of shaping outcomes inevitably migrates outward — to legislatures, regulators, statehouses, and ultimately to civil society itself.
The machinery of restraint may grind rather than sprint, but it still moves. And historically, it has often been these secondary institutions, not the executive, that translated public anxiety into durable rules.
Such institutional friction can feel exasperating in moments of rapid technological change. Yet it is precisely that friction that has, over time, transformed disruptive innovations into regulated, normalized parts of civic life.
Absent ideal stewardship from the top, the task does not become futile. It becomes more dispersed, more incremental, and perhaps more dependent on the slower instruments of democratic correction. The alternative — waiting for a return to a more hands-on executive before attempting to shape outcomes — would amount to conceding the very inevitability critics fear: not the inevitability of technology, but the inevitability of its worst uses.
Here the lesson of social media is instructive. The internet itself was not inevitable in its current, corrosive form. What proved decisive were business models built on attention extraction, weak privacy rules, and a prolonged regulatory vacuum. The failure was not technological destiny; it was institutional lag. To invoke that history as proof that we should try to halt AI altogether is to draw precisely the wrong lesson. The correct lesson is to govern earlier, faster, and more seriously than we did last time.
The most moving line in Simon’s essay is also its greatest contradiction: “Nothing of our own creation should be inevitable.” A noble sentiment. Yet the essay simultaneously presents human nature as predictably greedy, shortsighted, and prone to arms races. If that is true — and history suggests it often is — then the answer cannot be to hope for voluntary restraint. It must be to build constraints that assume imperfect humans wield powerful tools.
Aviation, medicine, finance, and nuclear energy all followed this path. We did not abolish them because they were dangerous; we surrounded them with rules, oversight, and accountability. It was not glamorous work. It was civilization.
This is where a deeper, almost old-fashioned wisdom comes into play. Every generation believes it is living through an unprecedented rupture, and every generation is partly right. The novelty is real. But the underlying human dynamics — fear, greed, utopian promise, regulatory lag, then gradual normalization — are ancient. The Industrial Revolution, globalization, and the digital age all followed similar arcs. None were stopped. All were shaped, sometimes badly, sometimes belatedly, but shaped nonetheless.
Change, in other words, is inevitable. The pace and form of change are political decisions.
That distinction matters enormously. Child labor laws, workplace safety rules, public education, unemployment insurance, and the weekend were not “inevitable” features of industrial society. They were fought for, resisted by powerful interests, and eventually embedded so deeply that we now consider them common sense.
Progress is often just yesterday’s outrage with a regulatory framework wrapped around it.
The danger today is not that people are alarmed. Alarm is a sign of civic health. The danger is when alarm curdles into fatalism: the belief that nothing can be done, that resistance is futile, that collapse is preferable to adaptation. That posture may feel morally pure, but it is politically paralyzing. Systems are not shaped by sentiments; they are shaped by incentives, rules, and enforcement. Moral urgency can start a movement. Only governance finishes one.
A serious democratic response to AI is entirely imaginable: rigorous pre-deployment testing of powerful models; traceability and standards for synthetic media; strict limits on surveillance uses; liability for foreseeable harms in high-stakes domains; antitrust measures to prevent excessive concentration of power; and robust transition policies for workers, including training that leads to real jobs rather than symbolic gestures.
None of this requires pretending AI can be uninvented. It requires treating it as every other transformative, potentially dangerous capability humanity has faced: with eyes open and institutions engaged.
Ultimately, the most dangerous word in this debate is not “inevitable.” It is “nothing can be done.” Once that belief takes hold, the field is ceded to whoever moves fastest and complains least. Simon is right to demand that we ask who we want to be in this moment. But the answer cannot be to stand outside history, shaking our fists at the machinery of change. The answer must be to shape that machinery while it is still malleable.
Technology rarely destroys societies outright. More often, it rearranges status, dignity, and power faster than institutions can keep up. The politics that follow determine whether the result is social fracture or social renewal. Rage against the machine may be emotionally satisfying. Bargaining with the people who build and deploy it — through law, policy, and democratic pressure — is less cathartic but far more effective.
So the real choice is not between stopping AI and embracing it. It is between shaping it now or discovering later that its defaults were written by those who moved first and asked forgiveness never. The river is rising. The question is whether we build levees, or write essays about how unfair water is.



