The Fragility of Borrowed Intelligence
Aishwarya Khanduja sheds light on the subtle patterns that give modern LLMs their peculiar writing style. There is a certain hypnotic quality to them; Aishwarya wonders—as we export more and more of our thinking to these AIs, will their style of language reshape our own thought patterns? And could this convergence between human and machine cognition introduce new kinds of fragility into our economy and broader culture?
You can spot AI-influenced writing by its cadence before you even notice its content.
Tweetable aphorisms: Every few paragraphs conclude with a tidy maxim you could drop into X or Instagram without context.
Stock metaphors: Everything becomes an ecosystem, a veneer, a scaffolding, a shadow.
Premise-to-punchline rhythm: Long dependent clause, then dash, then quip.
Parallel rhetorical escalations: “What happens when X? What happens when Y? What happens when Z?”
Canon name-drops: McLuhan, Ship of Theseus, Plato’s cave—appearing like seasoning rather than substance.
Authority sprinkling: An economist here, a cognitive scientist there, but never integrated deeply into the argument.
The result tends to be a hypnotic cadence. Like the hook of an Olivia Rodrigo bop, it smooths complexity into something familiar, comforting, and consumable. We nod along because the rhythm is recognizable. It quiets our insecurity about whether we truly understand.
That, of course, is the point.
The same rhythm that lulls us into agreement also lulls us into dependence. What feels smooth and familiar on the page feels smooth and familiar in life. We begin trusting cadence over content, and convenience over resilience. We’re drawn to rhythms that shelter us from uncertainty. That’s how cadence turns into dependence, and dependence turns into fragility.
It feels like we are sleepwalking into digital feudalism through our dependence on Artificial Intelligence. I see my friends optimizing their workflows around these tools: writing, analysis, decision-making, therapy. It feels obviously good—until you zoom out.
Your productivity becomes tied to AI access. Your company’s competitiveness depends on your productivity. Scale this across the economy and suddenly entire sectors are hostage to whoever controls the algorithms.
What happens when they dial back capabilities? Restrict access? Or the tech just breaks? Millions who rebuilt their work processes around GPT-5 suddenly find themselves stuck with GPT-4. Companies that structured operations around AI assistance hit walls they never learned to climb.
We’re losing the institutional knowledge of how to function without these “tools”. Each integration creates a civilization-level single point of failure.
***
Here’s the twist: everything you just read—my warning about AI dependency—was written using the very patterns I identified as AI-like.
The aphorisms.
The ecosystem metaphors.
The premise-punchline cadence.
The McLuhan name-drop.
The sweeping civilization-level claims.
If you found the argument compelling, was it the logic that persuaded you—or the cadence itself?
This is the recursive trap. Either I’m unconsciously infected by these patterns, or I’m consciously manipulating you by using them, or the patterns themselves have become inescapable. Each possibility leads to the same unsettling conclusion: we may already be too deep inside the system to think our way out of it.
And if you didn’t notice until I pointed it out—well, again, that’s exactly the point.
This trap shows us that once the patterns of thought themselves are captured, fragility multiplies. It’s not just that our work depends on the tool; it’s that our way of thinking bends around it. And fragility takes root in both dimensions.
ORIGINALLY PUBLISHED SEPTEMBER 2025
To read Aishwarya’s predictions on where things may be heading, check out our collection: Life in the Scroll Age.