One of the things that consistently keeps me up at night is the sheer irrationality of the approach global executives and investors are taking toward AI today. We are witnessing a historic misallocation of resources.
Here is the top-level problem: AI as it exists today has a massive, unprecedented capability to change the world right now. The ROI is sitting right in front of us: high-probability wins in efficiency, personalization, and discovery.
But somehow, the global executive brain has collectively decided to ignore these near-certain wins to chase the AGI (Artificial General Intelligence) fairy tale on a timeline that is currently colliding with the hard floor of physical reality.
We are ignoring the "boring" billions to chase the "imaginary" trillions, and in doing so, we are creating a systemic risk that could freeze the industry for a decade when the hype eventually hits the wall!
1. The Energy Wall: Math Doesn’t Care About Your Vision Board
The energy story alone should have ended half the AGI fantasies. We are attempting to build a god-like intelligence on a 20th-century power grid that is already wheezing.
Data centers consumed approximately 415 TWh of electricity in 2024, accounting for roughly 1.5% of global electricity use [1]. While that sounds manageable, the trajectory is vertical. IEA projections show global electricity demand from data centers potentially doubling to over 1,000 TWh by 2030 in their base case—equivalent to the entire electricity consumption of Japan [2].
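The implied growth rate is easy to sanity-check. A back-of-the-envelope sketch, using only the 2024 baseline and 2030 base-case figures cited above:

```python
# Back-of-the-envelope: what compound annual growth rate takes data
# centers from ~415 TWh (2024) to ~1,000 TWh (2030)?
baseline_twh = 415.0   # 2024 estimate (IEA)
target_twh = 1000.0    # 2030 base-case projection (IEA)
years = 2030 - 2024

# Solve target = baseline * (1 + r)^years for r
cagr = (target_twh / baseline_twh) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # ~15.8% per year
```

That is roughly 16% compounded annual growth in a sector the grid already struggles to serve.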
Now, layer the “AI everywhere” mandate on top of that. We aren't just talking about training a model once; we are talking about:
- Continuous fine-tuning.
- Trillions of inference calls.
- Autonomous agent loops running 24/7.
- Background enterprise rollouts.
- The 300th consumer app that generates "motivational quotes for founders who can't sell."
The hardware isn't helping as much as you'd think. A single NVIDIA H100 GPU can draw up to 700W at peak TDP [3]. That is one chip. Not a server, not a rack—one piece of silicon. When you scale that to clusters of 100,000+ GPUs (like the ones currently being built), you aren't just building a software product; you are building a sovereign-level utility crisis.
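The cluster-level arithmetic is brutal. In this sketch, only the 700W per-GPU TDP is a published figure; the GPU count and the overhead multiplier are illustrative assumptions:

```python
# Rough cluster power estimate. Only the 700W per-GPU max TDP is a
# published NVIDIA figure; the other numbers are illustrative
# assumptions, not vendor data.
gpus = 100_000            # hypothetical frontier training cluster
gpu_tdp_w = 700           # NVIDIA H100 SXM max TDP
overhead = 1.5            # assumed PUE-style multiplier for cooling,
                          # networking, host CPUs, power conversion

total_mw = gpus * gpu_tdp_w * overhead / 1e6
print(f"Facility draw: ~{total_mw:.0f} MW")   # ~105 MW

# Annual energy if the cluster runs flat-out
hours_per_year = 24 * 365
gwh_per_year = total_mw * hours_per_year / 1000
print(f"Annual energy: ~{gwh_per_year:.0f} GWh")
```

Around 100 MW of continuous draw is the scale of a small city, for one cluster, before the next one gets built.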
Unless your AGI plan includes “inventing abundant, cheap, fusion-level energy” as a prerequisite or you secretly hired "Tony Stark" to help with that, you aren't building intelligence. You are building a billing problem!
2. Silicon is Not an Asset—It’s Milk
Even if we solve the power crisis, the economics of the frontier path are fundamentally broken. We treat these massive H100 clusters like real estate assets on balance sheets, but they behave like perishable goods.
The newest AI accelerators are like milk:
- Insanely valuable for a very short window.
- The next generation (Blackwell and beyond) arrives, offering massive jumps in performance-per-watt.
- Your previous multi-billion-dollar fleet becomes a "yesterday's performance" joke that costs more to power than it returns in value.
This creates a Capex Treadmill. The AGI path demands more power, more specialized cooling, more grid upgrades, and more frequent hardware refreshes. Meanwhile, the "business value" promised to shareholders remains, for many, a very expensive PowerPoint presentation.
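The treadmill can be made concrete with a toy depreciation model. All figures here are illustrative assumptions, not any vendor's actual pricing or performance:

```python
# Toy model: how a perf-per-watt jump in the next hardware generation
# erodes the economic value of the installed fleet. All numbers are
# illustrative assumptions.
fleet_cost = 2_000_000_000        # hypothetical cluster capex, USD
fleet_perf_per_watt = 1.0         # normalize the current gen to 1.0
next_gen_perf_per_watt = 2.5      # assumed generational jump

# If compute is priced per unit of work and power dominates opex,
# the old fleet's competitive value scales roughly with its
# relative efficiency.
relative_value = fleet_perf_per_watt / next_gen_perf_per_watt
writedown = fleet_cost * (1 - relative_value)
print(f"Implied write-down: ${writedown/1e9:.1f}B of ${fleet_cost/1e9:.0f}B")
```

Under those toy assumptions, a single generational jump vaporizes most of the fleet's economic value; that is the "milk" dynamic on a balance sheet.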
3. The LLM Delusion: Probability is Not Consciousness
Let’s stop the religion: LLMs are not "AI" in the way sci-fi promised us; they are gigantic, sophisticated probability engines with world-class PR.
They are useful, sometimes brilliant, and certainly transformative. But they lack the basic traits of "intelligence" that executives casually throw around in boardrooms:
- They do not self-start: They require a prompt, an event, or a trigger.
- They do not continuously learn: Training and inference are separated by a "frozen" state for stability and cost reasons.
- They do not ground themselves: They don't go out and "learn" the world; they ingest a static snapshot and hallucinate the gaps. Someone always needs to "handhold" them when feeding in data (e.g., tokenization, pre-processing).
When an executive says "we are close to AGI," they usually mean "we are close to spending a lot more of your money."
4. The Data Desert and the "Synthetic Sludge" Problem
The first wave of LLMs ate the "low-hanging fruit" of human knowledge: the entire public internet. Now, the industry is hitting two brutal walls.
A) We are running out of humans.
Epoch AI estimates the effective stock of high-quality, human-generated public text at roughly 300 trillion tokens. They project that frontier models will fully utilize this entire stock between 2026 and 2032 [4]. You cannot "just scale the data" if the data doesn't exist.
B) The "Model Collapse" Paradox.
As we flood the internet with AI-generated content, the "training buffet" is becoming poisoned. Peer-reviewed research has demonstrated Model Collapse: when models are trained on recursively generated synthetic data, they lose the ability to represent the "tails" of the distribution. They degrade, simplify, and eventually become useless [5].
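The tail-loss mechanism has a simple statistical core. As a toy Gaussian sketch (not the paper's full experiment): the maximum-likelihood variance estimator on n samples is biased low by a factor of (n - 1)/n, so resampling from the fitted model and refitting, generation after generation, shrinks the expected variance geometrically, and the tails vanish first:

```python
# Toy illustration of model collapse: expected variance under
# repeated fit-then-resample with the (biased) MLE variance
# estimator. Each generation multiplies the expected variance by
# (n - 1) / n, so the distribution's tails decay away.
def expected_variance(sigma2: float, n: int, generations: int) -> float:
    """Expected variance after `generations` rounds of fitting a
    Gaussian to n samples and resampling from the fit."""
    return sigma2 * ((n - 1) / n) ** generations

initial = 1.0
for gens in (0, 100, 500, 1000):
    v = expected_variance(initial, n=100, generations=gens)
    print(f"after {gens:4d} generations: expected variance {v:.5f}")
```

With n = 100, the expected variance drops to roughly a third of its original value after 100 generations; the "rare but important" cases at the edges are the first casualties.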
We are heading into a world where good human data is a luxury, cheap data is "synthetic sludge," and the internet is becoming a hall of mirrors.
5. The Economic Paradox: Who Buys the Products?
Let’s say the "AGI utopia" happens. We automate 50% of the workforce. Costs drop. Efficiency skyrockets.
Then what?
The standard pitch is:
- Companies slash salaries and headcount.
- Margins explode.
- ...But purchasing power collapses because the "former employees" are now the "unemployed."
- Demand for the product implodes.
This isn't a victory; it’s a demand-side implosion. Even the IMF frames the "magical" solution of Universal Basic Income (UBI) as a policy nightmare with massive financing implications and macro-stability risks [6]. It is not a "patch" you just slap onto a broken economy.
Chasing AGI to replace humans is a race to a market where no one can afford your product!
The Alternative: Real ROI via Horizontal Value
Instead of chasing a lose-lose fantasy, we should be pushing AI value horizontally. We don't need a new global energy grid or synthetic minds to create massive wealth today. We just need to stop being lazy with the tech we already have.
1. True Personalization (Beyond the "Sticker" Phase)
Today, "personalization" is a joke. It’s a dark mode setting or a "Recommended for You" list that is 40% wrong. That’s not personalization; that’s a sticker.
True personalization means the UI/UX is fluid. If you and I own the same phone, and I pick up yours, I should struggle to use it because it has genuinely shaped itself around your specific cognitive load, terminology, and stress patterns.
Not just layout. The experience, your workflows, your terminology, your timing, your stress patterns and your habits.
This isn't sci-fi. It's product execution, governance, and good taste. And that is exactly why it is hard: it requires competence, not hype.
Before, true personalization was almost impossible because it did not scale. The cost of building bespoke experiences was a bottomless ROI hole.
But with today’s AI, it becomes feasible to generate and maintain personalized UX at scale.
For example:
- The OS as a Partner: Your phone knows you have a high-stakes meeting and adapts the notification layer to "Shield Mode" without you asking, or it knows an exam result is coming and adapts the experience to support you before you even open the email.
- Biometric Safety: Your car detects an "angry driving" heart-rate pattern and de-escalates the cabin environment or restricts high-performance modes until you're centered.
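A first step toward the "OS as a Partner" idea can be sketched as a context-driven policy layer. Everything here, the event types, the thresholds, the mode names, is invented for illustration:

```python
# Hypothetical sketch: a notification policy that adapts to user
# context instead of a static do-not-disturb toggle. All fields,
# thresholds, and mode names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Context:
    in_high_stakes_meeting: bool = False
    stress_level: float = 0.0  # 0.0 (calm) .. 1.0 (high), consented signal

def notification_mode(ctx: Context) -> str:
    """Pick a notification policy from the current context."""
    if ctx.in_high_stakes_meeting:
        return "shield"        # hold everything but emergencies
    if ctx.stress_level > 0.7:
        return "soften"        # batch and delay non-critical pings
    return "normal"

print(notification_mode(Context(in_high_stakes_meeting=True)))  # shield
print(notification_mode(Context(stress_level=0.9)))             # soften
```

The point is not the toy rules; it is that the policy is driven by the person's live context rather than a global setting every user shares.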
2. Individual-Level Analytics: The Death of the "Cohort"
Analytics has existed forever. Humans have always wanted to understand behaviour. Even dictatorships run on it.
Modern analytics is a series of averages. We group people into "buckets" because we couldn't handle the complexity of the individual. It's still:
- Aggregated
- Grouped
- Averaged
- Dumbed down enough to fit charts
Now imagine we take only publicly available signals:
- Your text posts
- Your videos/images
- Your social connections
- Your professional history
- Your reaction patterns
- Your reviews
- Your public interests
Then we ask:
- Would this product, in this shape, in this color, at this price, convert for you?
Not by guessing. By modelling your preferences, your constraints, your timing, your psychology. Then add consensual first-party data:
- Purchase history in-platform
- Interaction history
- Returns and complaints
- Support chats
Then optionally add biometric signals with explicit consent:
- Sleep
- Heart rate variability
- Stress markers
- Routine stability
With that and with current AI, we can move to Decision Support:
- Instead of asking "Will this 18-35 demographic buy this?", we ask "Does this product, at this price, reduce regret for this specific person?"
- By modeling individual preferences, constraints, and even biometric stress markers (with explicit consent), we move from extracting value to creating fit.
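What individual-level "fit" scoring could look like can be sketched as a toy logistic model. The feature names, weights, and the regret proxy are all hypothetical, chosen only to show the shape of the idea:

```python
# Sketch of individual-level "fit" scoring: a toy logistic model that
# asks whether a specific offer reduces regret for a specific person.
# Feature names, weights, and the regret proxy are all hypothetical.
import math

def fit_score(features: dict, weights: dict, bias: float = 0.0) -> float:
    """Probability-like score that this offer fits this person."""
    z = bias + sum(weights[k] * features[k] for k in weights)
    return 1 / (1 + math.exp(-z))

# Hypothetical person-level signals, consented and normalized.
person = {
    "price_vs_budget": -0.4,   # offer sits below their typical spend
    "past_return_rate": 0.1,   # rarely returns purchases
    "category_affinity": 0.8,  # strong interest in this category
}
weights = {
    "price_vs_budget": -1.5,   # overspend raises predicted regret
    "past_return_rate": -2.0,  # frequent returns lower fit
    "category_affinity": 1.2,
}
score = fit_score(person, weights)
print(f"fit score: {score:.2f}")  # above 0.5: likely reduces regret
```

A cohort model answers "will the 18-35 bucket convert?"; a model like this answers "is this a good decision for this one person?", which is a different question entirely.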
And yes, this can be abused. That is why governance matters, but the existence of risk is not an argument for stupidity. It is an argument for doing it properly!
The Final Warning: The Risk of the Over-Promise
My biggest fear is that the "AGI or Bust" mentality will poison the entire planet. When the physical boundaries of power, data, and economics finally force a correction, the bubble won't just leak or even burst; it will explode like a nuke!
When it explodes, boards will panic. Budgets will freeze. And tragically, the "Good AI", the tools that increase human capability and unlock new experiences, will be defunded alongside the fantasies.
We don't need machines that replace us. We need machines that allow us to do things that were previously impossible. In the industrial era, the winners weren't the ones who just cut headcount; they were the ones who used machines to create entirely new categories of existence and build things that never existed or were impossible to build before!
It’s time to stop chasing the AGI fairy tale and start building the AI reality.
Please, let’s focus on win-win scenarios rather than lose-lose ones.
There is no sustainable “win-lose” here. Even if the short-term spreadsheet looks green, the long-term system will collect the debt.
Either everyone wins, or everyone loses. Take your pick.
Sources
[1] International Energy Agency (IEA), Energy and AI: Executive Summary. Data centre electricity consumption estimated at ~415 TWh, roughly 1.5% of global electricity in 2024.
[2] International Energy Agency (IEA), Electricity 2024: Analysis and Forecast to 2026. Projections for data center supply rising to over 1,000 TWh by 2030 (Base Case).
[3] NVIDIA, H100 Tensor Core GPU Specifications. Technical documentation showing Max TDP up to 700W.
[4] Epoch AI, Will we run out of data? Limits of LLM scaling based on human-generated data (2024). Analysis of the 300T token limit and exhaustion timelines.
[5] Shumailov et al., AI models collapse when trained on recursively generated data, Nature (2024). Peer-reviewed study on the degradation of generative models.
[6] International Monetary Fund (IMF), Universal Basic Income: Debate and Impact Assessment (Working Paper). Framing UBI as a high-trade-off policy with significant financing and incentive challenges.