LLMs Are Our Sophons
(Written with help from an LLM — but the ideas are still mine. Let’s keep it that way.)
(This is a version of my original post titled "LLMs: The Three-Body Lock-in." This one reflects how I actually think and write; the other is the "more professional" take.)
In The Three-Body Problem, Liu Cixin introduces the sophon — a near-invisible device sent to Earth by an alien civilization. It doesn’t destroy anything. It doesn’t attack. It simply interferes, quietly stalling human scientific progress by disrupting experiments at the subatomic level.
The sophon itself isn’t terrifying.
What is terrifying is its subtlety, its boring-ness — the way it creates failure without warning, progress without direction, and effort without result.
That’s fiction. But the idea should feel uncomfortably familiar.
Because today, we’re building our own sophons. We call them LLMs.
I say this as someone actively building AI. This isn’t a knee-jerk reaction. It’s a warning from inside the work. If we care about the future of intelligence, we need to be honest about what we’re creating — and what it’s doing to us now.
LLMs Aren’t the Threat. Our Use of Them Is.
Language models are everywhere — in your browser, your code editor, your inbox. They finish your sentences, summarize your meetings, generate your ideas. They’re fast. Useful. And seductive.
But like sophons, the danger isn’t what they do.
It’s what we might stop doing because of them.
Let me be clear: I believe AGI is coming. I believe it's inevitable. And I’m working toward it — not as a replacement for human thinking, but as a tool to amplify it. True AGI could be one of the greatest achievements of our species.
But today’s LLMs are not that.
Treating them like they are is how we start the slide into stagnation.
They Don’t Learn. And We’re Starting to Forget.
LLMs are static. Their knowledge is locked to the date they were trained. They can’t learn. They can’t reason. They just remix — returning the most statistically probable answer based on what’s come before.
The problem isn’t just that they don’t think.
It’s that they sound like they do.
And the more we depend on them, the easier it becomes to lower our standards. We adapt our thinking to fit their output. We shape our creativity around what they can generate. Over time, we mistake fluency for insight. Pattern-matching for cognition.
We’re not just using them.
We’re being reshaped by them.
All This... Built From Us
Here’s the tragic twist: everything LLMs do is built on our work — our code, our writing, our science, our language, our failures, our breakthroughs. Centuries of human thought.
And yet, we’re at risk of turning that achievement into the ceiling of our progress.
These systems aren’t discovering anything new. They’re not moving knowledge forward. They’re just echoing what we've already done — in increasingly convincing ways.
It would be a deep loss if we handed over the steering wheel at the exact moment we should be accelerating.
(And the Netflix adaptation wasn’t bad, but this point lands harder in print than on screen.)
Built to Save Money, Not Advance Thought
In Liu’s books, the sophons are deployed strategically — to prevent Earth from developing defenses.
Today, we’re deploying LLMs to save on payroll.
This isn’t about amplifying human potential. It’s about cutting costs. Companies are using LLMs to replace writers, analysts, designers, coders — not because it improves the work, but because it’s cheaper.
They’re not replacing drudgery.
They’re replacing effort.
And with that, we lose curiosity, struggle, and depth — the ingredients of actual progress.
At least the Trisolarans had a long-term plan. Here, we’re doing it for quarterly earnings.
The Lock-In Is Already Happening
The worst part? We’re adapting to the tools.
Prompts replace drafts.
Templates replace thought.
Autocompletion replaces exploration.
We’re bending ourselves to fit their limitations. And that’s the quiet lock-in. Not imposed, but embraced. Not visible, but everywhere.
We are, in real time, training ourselves to think within the boundaries of prediction engines.
It’s like watching civilization evolve into its own Netflix adaptation — well-produced, but missing the depth that made the original so powerful.
What We Do Now
This isn’t a call to abandon AI. I’m not afraid of where it could go. I’m afraid of where we stop.
AGI is worth pursuing. But we won’t get there by pretending that today’s LLMs are already intelligent. We won’t get there by normalizing tools that generate answers without understanding, or by replacing people with systems that can’t reason.
We still have time to course-correct.
We can treat LLMs as scaffolding — tools to support deep thought — or we can let them become our sophons, beautiful constraints we wrap around our own potential.
I’m not afraid of LLMs becoming smarter than us.
I’m afraid of us forgetting how to be smart without them.
If we still believe in a better future, we must stay awake — and keep thinking for ourselves.