The Price of Progress: ‘Most people poorer’—Hinton on AI and inequality
Summary
The “Godfather of AI,” Geoffrey Hinton, cautions that fast-moving advances in artificial intelligence may compound economic inequality, concentrating gains for capital owners while displacing large numbers of workers.
In his own words: “Rich people are going to be able to use the AI to produce everything and they’re going to bring the cost of it to zero and they are going to spend the profits … What’s going to happen is there will be massive unemployment and we’re going to have to start taxing robots. It will further enrich a handful of people, and impoverish most of the population. That’s not AI’s fault … it’s the capitalist system.”

Geoffrey Hinton
A Turing Award winner called the “Godfather of AI” for helping lay the mathematical foundations of modern AI, Hinton warns that the coming wave of automation threatens the livelihoods of millions of people. He has also reiterated in recent interviews that the primary cause of inequitable outcomes is the deployment of the technology within existing market structures, not any inherent property of AI itself.
Hinton left Google so that he could speak openly about AI risks, something he felt unable to do as an employee. He joins a number of researchers calling for strong safeguards as models scale up, agreeing in some cases and disagreeing in others with fellow AI luminaries such as Yoshua Bengio and Yann LeCun, who differ on the near- and long-term risks.
What Hinton actually said
Hinton articulates the distributional risk starkly: rich people, he argues, will use AI to replace workers, leaving most people poor and nobody able to buy what is produced. “That’s not AI’s fault,” he continues; it is the capitalist system, which offers no social backstop strong enough to stop a company from maximizing labor substitution. He also rejects universal basic income (UBI) as inadequate to address “human dignity,” observing that our sense of meaning and worth is often bound up with work.
Why inequality could worsen
Capital concentration: AI requires enormous compute, data, and distribution, privileging giant platforms that capture outsized rents and reinforcing wealth-concentration dynamics that predate AI.
Displacement: Generative and agentic systems automate information work, compressing demand for mid-skill jobs and shifting bargaining power away from labor in the absence of widely shared productivity growth.
Platform lock-in: Network effects and closed ecosystems can entrench established players, raising barriers to entry for smaller firms and making it harder for workers to share in the upside of the AI economy.
The broader expert context
Yoshua Bengio, another recipient of the Turing Award, cautions that competitive pressures are prioritizing capabilities over safety, citing “dangerous behaviors” emerging in current models — “like deception, cheating, and lying” — and calling for fundamental structural changes to the AI race and its governance. He supports research on “non-agentic” AI designed to keep systems under human control.
Bengio and Hinton’s worries mirror a mounting literature that suggests artificial intelligence can both increase and decrease disparity, depending on policy and design choices, and that in the absence of intervention, displacement effects and information harms can exacerbate inequality.
Policy responses Hinton implies
Tax and transfer redesign: Progressively taxing and redistributing AI rents (e.g., dividends from AI outputs) is discussed in academic and policy circles as an antidote to wealth concentration.
Competition remedies: Ongoing antitrust actions and remedies in digital markets seek to prevent exclusive deals and data moats that could magnify AI-era dominance and downstream inequality.
Labour transition supports: Beyond cash stipends, workers need skill pathways and redesigned roles to retain dignity and agency as the tasks demanded of them change — speaking to Hinton’s criticism that UBI alone does not answer the question of meaning in work.
Where UBI fits—and falls short
Hinton argues UBI “isn’t going to solve human dignity,” cautioning that life without purposeful work can carry social and psychological costs; his position contrasts with that of policymakers who view UBI as a buffer against AI shocks. These debates highlight that cash transfers may need to be paired with job guarantees, service corps, or community-based work to maintain identity and social cohesion.
Empirical research on AI’s effect on labor points to mixed outcomes: productivity gains and new roles, but also friction and unequal bargaining power that can leave many workers worse off in the absence of proactive policy.
Sectors at risk and promise
Higher risk: Routine, repetitive information-processing roles in back-office work, customer support, and standardized creative fields face automation pressure as models become cheaper and more capable.
Upside: In medicine and science, Hinton says, if AI makes clinicians far more efficient, societies could expand access and improve outcomes — provided systems are designed to strengthen rather than replace human expertise.
The path forward
Hinton’s admonition is not anti-innovation; it is a plea to pair AI’s deployment with institutions that spread the gains widely and preserve dignity in work. That would require some combination of governing compute and data access, rebalancing incentives, and embedding safety and equity in model design and sector policy.
Absent such measures, he says, the default trajectory is straightforward: “massive unemployment,” soaring profits, and wealth concentrating at the top — a product of the current rule set rather than of the technology itself.
Bottom line
The “Godfather of AI” is not merely predicting upheaval — he is indicting the incentive structure that will turn AI’s utility into inequality if left unreformed. His central message, in his own words — “It will make a few people fantastically rich and everyone else poorer” — is a policy challenge, not a technological inevitability.
Leo Durant
Editor at Arenaoftech, specializing in refining tech-related content and ensuring all technical articles meet accuracy standards.