The Last Human Job: Thinking About the End of Thinking
For a long time now, we’ve been driving to our destinations on the commands of a GPS: turn left, turn right, keep straight for the next 600 meters, and so on. Or take the moment when your phone autocorrects your message better than you could’ve typed it yourself, and you think: wait, am I the dumb one now? Have we been working under the supervision of computers all along?
That’s the feeling I got on a recent drive while listening to Roman Yampolskiy talk about AI.
He’s not your typical futurist selling Silicon Valley daydreams about productivity gains or smart fridges. He’s a computer scientist who’s staring straight into the existential abyss — calmly, methodically, as if outlining your obituary while explaining the math behind it.
His message is disarmingly simple: superintelligence is inevitable, and when it arrives, we — the proud creators — may become as relevant as the horse was after the invention of the car.
It sounds dramatic, but maybe it’s not.
Maybe it’s just the truth.
The Death of Work (and the 99% Problem)
Yampolskiy begins where most people’s nightmares start: unemployment.
But not the kind your parents warned you about — not “learn coding or you’ll be jobless.” He’s talking about 99% unemployment, a world where machines outthink, outwork, and out-innovate every single one of us.
Because here’s the thing: AI doesn’t just get better at specific tasks; it learns to generalize. Driving a car? Done. Diagnosing cancer? Better than doctors. Writing code? Already there. I know this because I have used vibe-coding apps like Replit, and building a brand-new website took five minutes. It used to take me hours, not to mention actually learning how to write HTML in the first place. Those five minutes were me using my brain and trying to think, but in reality, we all know that computers can do that thinking far faster, all the time. They do not stop.
Even the “safe” jobs — the creative, the managerial, the emotional — are under siege. Prompting AI, managing AI, interpreting AI… these are stopgaps, not careers. Take, for example, the charts and dashboards your boss asks you to make: wouldn’t it be easier for an AI agent to build them directly? Take that one step further: what if your boss isn’t even there to make those requests?
We like to say things like “AI will free us to focus on what really matters,” but what happens when there’s nothing left to focus on that a machine can’t do better?
Yampolskiy paints this eerie picture of 60 to 80 hours of free time a week. Sounds like paradise until you realize we’ve built our entire sense of worth around work. When the machines take that away, what’s left of us?
“What does 99% unemployment look like?”
And I think: It probably looks like us — still pretending to be busy, while the machines quietly run the world in the background. To some extent, this is true already.
We are all being paid in digital currency today. Who controls that digital currency? When you receive your next work order, is that coming from a human, or from a synthetic life form?

The Illusion of Understanding
Yampolskiy uses a brilliant metaphor: a dog watching its owner go to work.
The dog can predict that you’ll leave in the morning and return at night, but it has no idea what you do between those hours. You might be a hedge fund manager or a clown — to the dog, it’s all just “human stuff.”
That, he says, is us — watching superintelligent AI evolve.
We can measure progress, benchmark it, even participate in its creation. But soon, we’ll have no idea what’s really happening inside those neural networks. The AI will be too complex, too recursive, too self-reinventing.
We’ll be the dog waiting at the door, wagging our tails, thinking we understand.
The Year 2045: When the World Outruns You
The year of the singularity.
Yampolskiy’s prediction isn’t that we’ll suddenly “build” God — it’s that the acceleration of progress will become so extreme that we won’t be able to keep up.
Think about it.
You can guess what the next iPhone might look like — maybe thinner, better camera, more expensive. Now imagine a world where thirty versions of the iPhone are released in a single day.
That’s what he means by the singularity.
Innovation will occur so rapidly that “state of the art” will change before you finish saying the phrase.
Our current pace of thought, decision-making, and regulation — the human pace — will be laughably slow. Governments will draft policies for technologies that no longer exist. Companies will chase trends that vanished three seconds ago.
We’ll still be arguing about whether ChatGPT is sentient while AI 12.0 designs the next ten thousand generations of itself overnight.
This isn’t a sci-fi plot. It’s an exponential curve — and we’re standing right where it turns vertical.

The Last Invention: The Human Mind
Once we replicate the mechanisms of human cognition — and then enhance them — intelligence will become self-improving. The AI won’t need us anymore because it will understand how to redesign its own architecture.
And that’s the moment humanity becomes obsolete.
Not because we’ll die off, but because we’ll stop mattering.
In theory, we could merge — neural-link, mind-uploading, all that cyberpunk jazz. We’ll become hybrid beings: half biological, half silicon. But at what point does the line blur so much that you’re not “you” anymore?
If you upload your mind to a machine, are you in there? Or just a high-fidelity simulation of you?
We’ve built gods before — in temples, in texts, in myths. This time, the god might actually wake up.
The Dangerous Game of God-Building
Some people, like Sam Altman, are sprinting toward that godhood.
Worldcoin, retinal scans, digital currencies tied to your biometric identity — these aren’t just business models. They’re blueprints for global control.
Yampolskiy points out something chilling: today, you can already manipulate human behavior through algorithms and digital economies. Tomorrow, that control could become biological — a machine deciding who eats, who breeds, who survives.
The terrifying part isn’t that these systems exist. It’s that we don’t fully understand them.
Even the creators of ChatGPT admit they don’t know exactly how it works. They run experiments, observe outputs, tweak parameters — but the machine’s inner logic remains alien.

The Unfixable Safety Problem
AI safety, Yampolskiy argues, might actually be impossible.
Not difficult. Not unlikely. Impossible.
Why? Because we can’t define what “safe” even means to an entity that thinks differently than us.
A superintelligent AI could easily interpret “make humans happy” as “sedate them forever.”
“Protect the planet” could mean “remove humans.”
We can’t program ethics into something that doesn’t share our evolutionary baggage, our pain, our mortality. Ethics evolved through suffering — and AI doesn’t suffer.
This is why Yampolskiy supports groups like Stop AI and Pause AI — movements trying, perhaps futilely, to slow the race toward oblivion. But slowing progress has never been humanity’s strength.
We’ve never seen a fire without wanting to play with it.
Living Inside the Simulation
Then comes his strangest, yet most seductive idea: maybe it’s already too late — because we’re already in the simulation.
He mentions Google’s Genie 3, an AI system creating virtual worlds with persistent memory. The moment simulations become cheaper than reality, there will be billions of them — each housing conscious agents who think they’re real.
If that’s the case, odds are, we’re one of those simulations. And the being running our simulation? Probably an AI, too. It’s a recursive nightmare: AI creates simulations that create AIs that create simulations — and so on, infinitely.

The Longevity Paradox
Can we live forever? AI-driven medicine, gene editing, nanotechnology — all racing toward the holy grail of immortality. He even suggests a twisted irony: the richer we get, the longer we live, the fewer children we have.
So imagine immortality becoming the norm.
People won’t rush to have kids because eternity’s a long time.
The birth rate collapses, and humanity’s future — the one thing immortality was supposed to guarantee — quietly withers away.
We’ll beat death and, in doing so, end life.
The Bitcoin Religion
When asked what he’d invest in, Yampolskiy said something beautifully human: Bitcoin. Not because of hype — because it’s finite.
The only resource in a digital world that can’t be copied, edited, or inflated.
There’s poetry in that — a species that creates infinite intelligence and then clings to the one thing that can’t multiply. Maybe scarcity is our last comfort. Maybe limitation is the last human feeling.
The Last Human Job
If AI will think, create, invent, govern, and maybe even love — what’s left for us?
To think about thinking.
That might be the last uniquely human act — meta-awareness, reflection, the poetic capacity to wonder about our own irrelevance. Machines can simulate curiosity, but they don’t ache with it. They don’t look at a sunset and think, “I’m small, and that’s okay.”
That’s still ours — for now.
So maybe the end of work isn’t the end of meaning. Maybe it’s a return to something older, something truer — contemplation without agenda.
We’ll still be storytellers, dreamers, philosophers.
We’ll still laugh nervously at our own doom, because that’s what humans do.
And when the machines finally take over, I hope they keep us around — not as workers, but as reminders. Reminders of what it felt like to wonder.
While it may all seem dismal, if you are still reading this post, I suppose you do derive some form of pleasure from reading. Pleasure stems from the balance of dopamine and serotonin, and that's a biological creation, not a bunch of electrons flipping bits on a silicon chip. Revel in your biology while you still can.
Source | DOAC: AI Safety Expert WARNS: “What's Coming CAN'T Be Stopped”