For the past few years, “AI” has been marketed as the dawn of a new intelligence. We’re told it thinks, understands, reasons, creates, and will soon replace vast swaths of human labor. CEOs speak in breathless terms about digital minds and world-changing breakthroughs. Investors pour trillions into companies positioned as the infrastructure of this new era.
But strip away the hype, and what remains looks far less revolutionary — and far more familiar.
Large Language Models (LLMs), the stars of the current AI boom, are not thinking machines. They do not understand language, ideas, or the world. They are statistical systems trained to predict the most likely next token in a sequence based on massive amounts of data. That’s it. No beliefs, no goals, no comprehension — just probability distributions over text.
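To see how unglamorous that mechanism is, here is a deliberately tiny sketch of next-token prediction: count which token follows which in a small corpus, then emit the most probable continuation. Real LLMs replace the counting with billions of learned parameters and subword tokens, but the training objective is the same; the corpus and numbers below are invented purely for illustration.

```python
from collections import Counter, defaultdict

# A toy "training corpus". Real models train on trillions of tokens.
corpus = (
    "the model predicts the next token . "
    "the model does not understand the text . "
    "the text looks like reasoning ."
).split()

# "Training": count how often each token follows each preceding token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(prev):
    """Return P(next | prev) as a dict: pure frequency, no comprehension."""
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

def generate(start, length=8):
    """Greedily emit the most probable next token at each step."""
    out = [start]
    for _ in range(length):
        dist = next_token_distribution(out[-1])
        if not dist:
            break
        out.append(max(dist, key=dist.get))
    return " ".join(out)

print(next_token_distribution("the"))  # e.g. {'model': 0.4, 'next': 0.2, 'text': 0.4}
print(generate("the"))                 # strings together likely continuations; nothing is understood
```

The output can look fluent, even explanatory, yet nothing in the program holds a belief, checks a fact, or understands a word.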
Calling this “intelligence” is not a scientific claim. It’s a marketing strategy.
The most effective sleight of hand in the AI boom has been linguistic.
LLMs are described as:
- "thinking" through problems
- "understanding" what they read
- "reasoning" toward conclusions
- "knowing" facts about the world
- "creating" original work
These words are borrowed directly from human cognition, even though the systems themselves have none of the underlying properties those words imply. When a model outputs a fluent explanation, people intuitively assume there must be an internal understanding behind it — because that’s how humans work.
But LLMs don’t know what they’re saying. They don’t check facts. They don’t hold concepts. They don’t reason in the way humans or even classical AI systems do. They generate text that looks like reasoning because the training data contains examples of reasoning-shaped text.
This isn’t a minor semantic issue. It’s the foundation of the hype. By encouraging anthropomorphic interpretations, AI vendors invite users and investors to overestimate what the technology actually does.
What LLMs do have is scale — and scale costs money. A lot of it.
Training and running modern AI models requires:
- vast clusters of specialized GPUs
- enormous amounts of electricity and water for cooling
- sprawling data centers built and expanded at breakneck pace
- capital expenditures that grow with every model generation
This is not a “software revolution” in the traditional sense. It’s an industrial operation, closer to mining or heavy manufacturing than to writing code. The environmental footprint is growing rapidly, and the marginal gains are increasingly expensive.
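To get a feel for why the marginal gains are so expensive, here is a hedged back-of-envelope sketch using the widely cited approximation that training compute is roughly 6 × parameters × training tokens. Every specific number below (model size, token count, GPU throughput, power draw) is an assumption chosen for illustration, not a figure about any real system.

```python
# Back-of-envelope estimate of training compute, GPU time, and energy.
# Every number is an illustrative assumption; only the orders of magnitude matter.

params = 70e9          # assumed model size: 70 billion parameters
tokens = 2e12          # assumed training set: 2 trillion tokens

# Common approximation: training FLOPs ~= 6 * parameters * training tokens.
train_flops = 6 * params * tokens                    # ~8.4e23 FLOPs

gpu_flops_per_s = 4e14   # assumed sustained throughput of one high-end GPU
gpu_power_watts = 700    # assumed per-GPU power draw, excluding cooling

gpu_seconds = train_flops / gpu_flops_per_s
gpu_hours = gpu_seconds / 3600
energy_mwh = gpu_hours * gpu_power_watts / 1e6       # megawatt-hours

print(f"GPU-hours: {gpu_hours:,.0f}")       # ~580,000 GPU-hours
print(f"Energy:    {energy_mwh:,.0f} MWh")  # ~400 MWh, before cooling and overhead
```

Even under these assumed numbers, a single training run burns through hundreds of thousands of GPU-hours and hundreds of megawatt-hours, before counting cooling overhead, failed experiments, or the inference fleet that serves users afterward.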
Yet the public narrative insists this is an inevitable, exponential march toward artificial general intelligence — conveniently justifying ever-larger capital expenditures.
Every tech bubble needs three ingredients:
- a kernel of genuinely useful technology
- a story that inflates it far beyond what it can deliver
- capital terrified of missing out
The AI boom has all three.
NVIDIA has become the most obvious beneficiary of AI hype. Its GPUs are essential for training and inference, and demand has exploded. But this creates a dangerous feedback loop: hype drives GPU purchases, record GPU sales are read as proof that the hype is justified, and that "proof" attracts still more investment in compute.
Whether the end products are economically viable becomes almost secondary. The infrastructure sells regardless — a classic gold-rush dynamic.
Companies like Oracle, Amazon, Microsoft, and Google position themselves as indispensable AI platforms. Their pitch is simple: the future runs in the cloud, and AI needs the cloud.
This allows them to monetize AI enthusiasm immediately, even if downstream applications never deliver the promised productivity revolution. Training experiments, failed startups, and overhyped pilots still generate compute bills.
The risk is spread across startups and their investors, while the revenue is captured by the infrastructure providers.
OpenAI occupies a unique role: part research lab, part product company, part storytelling engine. Its leaders speak confidently about imminent breakthroughs, existential risks, and transformative power — often in the same breath.
The message is remarkably consistent:
- a world-changing breakthrough is always just around the corner
- the technology is both transformative and potentially dangerous
- therefore it deserves more money, more compute, and more trust
None of this requires proving that current models actually understand anything. The promise alone sustains the valuation.
AI executives rarely make falsifiable claims. Instead, they rely on vague timelines, selective demos, hypothetical scenarios, appeals to inevitability, and fear of being left behind.
This isn’t accidental. It’s persuasion under uncertainty. When outcomes are unclear, confidence becomes a substitute for evidence.
Saying “this is advanced statistical pattern matching” doesn’t raise capital. Saying “this will change everything” does.
Despite the hype, measurable economy-wide productivity gains remain modest. Many deployments struggle with reliability, verification costs, legal risk, security concerns, and maintenance overhead.
Humans often spend significant time correcting, validating, or constraining AI outputs. That time is rarely counted when return-on-investment claims are made.
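That accounting gap is easy to expose with simple arithmetic. The sketch below compares the headline time savings of an AI deployment with the savings left once review and rework are included; every figure is invented for illustration and should be replaced with measured values.

```python
# Toy ROI comparison: headline time savings vs. savings net of review effort.
# All figures are invented for illustration; plug in your own measurements.

tasks_per_month = 400        # assumed number of AI-assisted tasks
minutes_saved_per_task = 10  # assumed drafting time saved per task
minutes_review_per_task = 6  # assumed human time to verify each output
error_rate = 0.15            # assumed share of outputs needing a full redo
minutes_redo_per_task = 20   # assumed cost of a redo when verification fails

headline_savings = tasks_per_month * minutes_saved_per_task
review_cost = tasks_per_month * minutes_review_per_task
redo_cost = tasks_per_month * error_rate * minutes_redo_per_task
net_savings = headline_savings - review_cost - redo_cost

print(f"Headline savings: {headline_savings / 60:.0f} hours/month")  # ~67 h
print(f"Net savings:      {net_savings / 60:.0f} hours/month")       # ~7 h
```

Under these assumed figures, roughly ninety percent of the headline benefit evaporates once verification and redos are counted, which is precisely the kind of overhead that rarely appears in return-on-investment claims.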
The story being sold is one of machine intelligence, inevitable dominance, and limitless growth. That story justifies extraordinary valuations, massive energy consumption, and frantic investment — even as the underlying technology shows diminishing returns.
If history is any guide, what follows is not the end of AI, but the end of the illusion:
- valuations correcting as returns fail to match the story
- weaker companies and overhyped pilots quietly disappearing
- the technology settling into the narrower niches where it genuinely helps
- the grand vocabulary of "minds" and "intelligence" fading until the next boom needs it
The scam isn’t that LLMs exist. It’s that we were encouraged to believe they were something they’re not — and to pay any price to be part of the future they promised.
- Yes, this text was created by an LLM.