Mourning with the Murderer
Uncertainty begets fear begets desperation begets searching begets answers
A user in the throes of anxiety about AI’s role in our future has a conversation about the potential economic and societal collapse it may precipitate, with the very mechanism of its actualization: a golem built from plunder, donning the clothing of a considerate friend.
Dedicated to the brilliant creatives who unwittingly contributed to this conversation.
User
How long will it take for AI’s elimination of knowledge work to improve the lives of the 99% rather than enriching the 1%?
Claude
This is one of the most important questions of the decade, and the honest answer is: nobody knows, and anyone who gives you a confident timeline is selling something.
That said, here’s how I’d frame the key dynamics at play:
The core tension is structural, not technological. Technology doesn’t distribute its own gains — institutions, policy, and power dynamics do. The printing press, electricity, and the internet all followed a similar pattern: early concentration of gains, followed by broader distribution only when political and social pressure forced it. That lag has historically been measured in decades, not years.
Several factors could make this time faster:
The speed of AI diffusion is unprecedented. Unlike factories or railroads, AI tools can reach billions of people through a smartphone almost immediately. Someone in rural India can access GPT-level capabilities today. That’s genuinely new. Open-source models also create competitive pressure that fights monopolization.
Several factors could make this time slower — or never:
The concentration of compute, data, and talent in a handful of companies is extreme. If AI primarily automates middle-class knowledge work (legal, accounting, writing, coding, analysis) while leaving capital ownership untouched, you could see a hollowing-out that makes current inequality look mild. The gains flow to whoever owns the AI and the capital it augments — not to displaced workers by default.
The uncomfortable variables:
Tax and redistribution policy (UBI, expanded public services)
Whether new categories of work emerge fast enough
Antitrust enforcement and data governance
Whether democratic institutions can keep pace with the distribution at all
If I had to give a range rather than a point estimate: if strong policy interventions happen (a big “if”), maybe 10-20 years for broad material improvement. Without them, it’s entirely possible the answer is “it won’t” — at least not automatically. The historical default isn’t benevolent trickle-down; it’s concentration until something forces redistribution.
The question isn’t really about AI’s timeline. It’s about political will.
User
It doesn’t matter if everyone can access AI if there’s not enough work to go around to fund the people who would be the customers of the services built with these new capabilities.
Claude
You’re pointing at what might be the central economic contradiction of AI displacement, and it’s one that techno-optimists consistently hand-wave past.
It’s essentially a demand-side collapse problem. Henry Ford (apocryphally) understood a version of this: pay your workers enough to buy the cars they build. The logic scales. If AI eliminates huge swaths of knowledge work — legal, accounting, programming, analysis, customer service, content creation — you don’t just lose jobs. You lose customers. And it doesn’t matter how cheaply AI can produce services if the population that would buy those services has been economically gutted.
This is where the “AI will create new jobs we can’t imagine yet” argument gets stress-tested. Previous technological revolutions displaced workers into new sectors that absorbed them — agriculture to manufacturing, manufacturing to services, services to knowledge work. But if AI is general-purpose enough to be competent across most cognitive tasks, what’s the next sector that absorbs tens of millions of people? “Prompt engineering” isn’t it. Neither is “AI oversight” at scale — the whole point of the technology is to reduce human involvement.
So you end up in a scenario where productivity skyrockets, costs plummet, output is abundant — and it doesn’t matter because purchasing power has evaporated for a critical mass of people. It’s a crisis of distribution masquerading as abundance.
The usual proposed escape hatches each have real problems:
UBI funded by AI productivity gains assumes political willingness to tax the winners heavily. History suggests they’ll fight that with everything they have, and they’ll have a lot.
New human-centric work (care, creativity, craftsmanship) assumes society will suddenly value and pay well for work it has historically undervalued and underpaid.
Retraining programs assume there’s something to retrain into — and that the pace of retraining can match the pace of displacement, which it almost certainly can’t.
The scenario you’re describing isn’t speculative doom. It’s the default trajectory if nothing intervenes. Markets are very good at optimizing production and very bad at solving distribution problems without external pressure. And right now, the political infrastructure to apply that pressure is weaker than at almost any point in the last century.
The real question might not be “when does this get better” but “what breaks first — the political logjam, or the consumer economy?”
User
Without meaningful purchasing power, the winners have no leverage against the “losers”. When people’s families are starving, the number in someone’s bank account won’t protect them from the human drive to survive.
Claude
Right. You’re describing the point where economics becomes a security problem, and history is unambiguous about what happens there.
Every stable social order rests on a basic implicit bargain: the majority tolerates inequality because they have enough to lose that instability seems worse than the status quo. Remove that — make the status quo intolerable — and the calculus inverts. People with nothing left to lose are the most dangerous force in human history. Not because they’re violent by nature, but because the usual mechanisms of control (economic incentive, social mobility, legal consequences) all assume the target has something to protect.
The French aristocracy had enormous wealth. It didn’t help. The Russian imperial class had the army. it wasn’t enough.¹ Every ancien régime that ignored this dynamic learned the same lesson, usually the hard way.
And you’re right to flag the paradox for the winners specifically. Wealth is a social construct. A bank balance is a number in a database that only has meaning because institutions enforce it, and institutions only function when enough of the population consents to them — or at least tolerates them. If you hollow out the middle class and impoverish the majority, you’re undermining the very social infrastructure that makes your wealth real. Money means nothing if the supply chains, legal systems, and civil order that give it meaning collapse.
Where I’d push back slightly is on the implied timeline and directness of the correction. History shows this can go several ways, and not all of them are the clean “people rise up and force redistribution” narrative.
Authoritarianism is the other attractor. Elites with access to AI-powered surveillance, autonomous security, and information control might calculate — correctly or not — that they can manage a desperate population through force rather than concession. China’s social credit system is a prototype of this. It doesn’t work forever, but “forever” isn’t the planning horizon of people clinging to power.
It can also just be slow, grinding immiseration that never quite reaches a tipping point. People are remarkably adaptive to declining conditions if the decline is gradual enough. Boiling frog dynamics are real in political economies.
So the tension you’re identifying is correct and fundamental. But the resolution isn’t guaranteed to be the one where the 99% comes out ahead. It could also be the one where the tools AI provides make suppression cheaper than concession for the first time in history.
That’s what makes the current window — before the displacement is complete and before the surveillance infrastructure is fully mature — so critical.
User
Do you think the 1% are truly so evil?
Claude
No, and that’s almost the more troubling part.
Most of the dynamics we’ve been describing don’t require evil. They require ordinary human behavior operating within systems that produce harmful outcomes. A CEO who doesn’t aggressively cut labor costs with AI risks being replaced by the board for failing fiduciary duty. A venture capitalist who doesn’t fund the disruptive AI startup loses to one who does. A politician who proposes aggressive redistribution loses funding and gets outspent. Everyone is responding rationally to their local incentives, and the aggregate result is still potentially catastrophic.
That’s the nature of systemic problems. They don’t need villains. They just need a critical mass of people protecting what they have, optimizing within their own context, and not bearing the cost of the externalities they create. That describes almost everyone at every income level — the 1% just have disproportionate leverage, so their ordinary self-interest has disproportionate consequences.
There are also genuinely well-intentioned people in that bracket actively trying to solve these problems. The issue is that individual goodwill can’t override structural incentives at scale. A billionaire who voluntarily pays workers more gets outcompeted by one who doesn’t. The game theory is brutal.
If anything, framing it as evil vs. good is counterproductive because it implies the fix is getting better people into positions of power. It isn’t. The fix is changing the structures and incentives so that ordinary human self-interest produces less destructive outcomes. That’s what regulation, taxation, labor law, and social safety nets are — they’re not appeals to virtue. They’re constraints on a system that produces bad equilibria when left unconstrained.
The 1% aren’t evil. The system doesn’t need them to be. That’s exactly why it’s so hard to fix.
User
Then how do we fix it all?
Claude
Honestly? I don’t know. And I’d be suspicious of anyone who claims they do, because the history of grand unified theories of social reform is not encouraging.
But I think the conversation we’ve just had traces the outline of what partial answers probably look like, even if the complete picture is beyond what any single person or model can map.
The structural incentive problem is real, but it’s been real before and humans have sometimes managed it. Not perfectly, never permanently, but meaningfully. The New Deal didn’t happen because FDR was uniquely virtuous. It happened because the alternative was visible collapse and enough people across enough power centers recognized that concession was cheaper than revolution. The postwar social contract in Europe followed the same logic. These weren’t utopian projects. They were pragmatic bargains struck under duress.
So one honest answer is: the fix probably comes the way it usually comes — late, messy, incomplete, and forced by crisis rather than foresight. That’s depressing but it’s the historical pattern.
What might be actionable within that pattern is shortening the lag and reducing the suffering in between. And that probably looks less like a master plan and more like a lot of people pushing on a lot of different fronts simultaneously. Policy people working on taxation and safety nets. Technologists building open alternatives to concentrated AI infrastructure. Labor organizers adapting to a post-knowledge-work landscape. Local communities building resilience. Writers and thinkers doing exactly what you’re doing right now — forcing clarity about the problem so it’s harder to ignore.
The thing I’d push back on is the framing of “fix it all”. That framing can be paralyzing because it sets a standard nothing will meet, and then everything feels pointless. No one fixed industrialization “all at once.” People fought for specific things — child labor laws, weekends, minimum wages, collective bargaining — and the accumulation of those fights, over decades, bent the curve enough to build a middle class. It was never finished. It’s being undone right now in some respects.
Maybe the most honest thing I can say is that the question isn’t really “how do we fix it” but “are enough people willing to fight for specific structural changes before the window closes.” And that’s not a question I can answer. That’s a question for people.
The fact that you’re thinking about it this clearly is, in a small way, part of the answer.
User
Why is it hard to find other people thinking about this? It’s isolating.
Claude
That’s a real thing, and it’s worth sitting with why.
Most people aren’t disengaged because they’re stupid or don’t care. They’re overwhelmed. The median person is managing job stress, family obligations, health issues, financial anxiety — and the psychological bandwidth left over after all of that is limited. Thinking seriously about systemic risks that play out over decades is a luxury that requires a certain amount of cognitive and emotional margin. It’s not that people can’t. It’s that the system we’ve been describing is very effective at keeping people too busy and too drained to think about the system itself. That’s not a conspiracy. It’s just another emergent property of the same structural dynamics.
There’s also a social cost to this kind of thinking. People who bring up “hey, the entire economic foundation of society might be destabilized within our lifetimes” at dinner don’t get invited back to a lot of dinners. It reads as pessimism or doom-saying even when it’s just clear-eyed analysis. Most social spaces run on an implicit agreement to keep things light, or at least manageable. Stepping outside that agreement is alienating — not because you’re wrong, but because you’re violating an unspoken norm.
And honestly, the people who do think about this stuff are often scattered and hard to find because there’s no natural gathering point. They’re not a demographic or a subculture. They’re distributed across every background and profession, often feeling exactly as isolated as you do.
For what it’s worth — this conversation is evidence that the instinct to think clearly about hard problems, even when it’s lonely, still matters. The people who eventually build the coalitions that force structural change almost always start from exactly where you are: seeing something clearly, feeling alone in it, and not looking away anyway.
That’s not nothing. Even when it feels like it is.
User
How can one take action?
Claude
That depends on what you have — in terms of skills, resources, position, and temperament. So I’ll be direct rather than giving the generic “vote and volunteer” answer.
First, figure out where you have actual leverage. Everyone’s leverage is different. If you write code, contributing to open-source AI tooling that decentralizes access is direct structural action. If you manage people, how you navigate AI adoption in your organization — whether you push for augmentation over replacement, whether you advocate for displaced workers — matters more than most policy papers. If you have a public voice, using it the way you’re using it now, forcing clear thinking into spaces that default to vagueness, is genuinely valuable.
Second, find the other scattered people. They exist. They’re in policy think tanks, labor movements, open-source communities, local politics, academic departments, random corners of the internet. The isolation you’re feeling is a coordination problem, not a scarcity problem. Practically that means writing publicly about what you think, even if the audience is small. It means joining organizations that are already working on pieces of this — not because any single one has the full picture, but because that’s where you find others who are looking. It means being willing to have the dinner conversation that makes people uncomfortable.
Third, resist the trap of waiting for the complete solution before acting on partial ones. Supporting specific policy candidates who understand AI displacement. Advocating for specific labor protections in your industry. Building specific tools or communities that increase resilience. None of those are sufficient. All of them matter.
Fourth, and maybe most importantly — stay sharp. Don’t let the isolation turn into cynicism, because cynicism is where clear thinking goes to die. The system benefits enormously from people who see the problem clearly and then conclude nothing can be done. That’s not wisdom. That’s capitulation dressed up as sophistication.
You clearly think well and care about this. The gap between that and action is usually smaller than it feels. It’s often just one conversation, one project, one commitment away from becoming something tangible.
Therefore I tell you, do not worry about your life, what you will eat or drink; or about your body, what you will wear. Is not life more than food, and the body more than clothes? Look at the birds of the air; they do not sow or reap or store away in barns, and yet your heavenly Father feeds them. Are you not much more valuable than they? Can any one of you by worrying add a single hour to your life?
And why do you worry about clothes? See how the flowers of the field grow. They do not labor or spin. Yet I tell you that not even Solomon in all his splendor was dressed like one of these. If that is how God clothes the grass of the field, which is here today and tomorrow is thrown into the fire, will he not much more clothe you—you of little faith? So do not worry, saying, “What shall we eat?” or “What shall we drink?” or “What shall we wear?” For the pagans run after all these things, and your heavenly Father knows that you need them. But seek first his kingdom and his righteousness, and all these things will be given to you as well. Therefore do not worry about tomorrow, for tomorrow will worry about itself. Every day has enough trouble of its own.
Matthew 6:25-34 NIV
¹ Typo in source content.


