Why Our AI Hype Cycle Keeps Eating Itself


Walk through any tech conference today, and you can feel it: the hum of inevitability. AI will cure diseases. It will drive cars and write novels. AI will run governments. If you believe the BS booth graphics, it will probably solve loneliness, too.

The problem is, we’ve been here before.

Artificial intelligence has been promising to change everything since the 1950s. And every decade or so, we rediscover the same fundamental truth: machines don’t magically create wisdom. They just scale whatever understanding, or misunderstanding, we feed them.

I know this because I spent years building the early stuff. Neural networks, parallel distributed processing, and automatic feature extraction. The “deep learning” of 1987 with less RAM and worse haircuts. The technology was exciting, even miraculous. But back then, as now, it struggled to live up to the grand claims that surrounded it.

And today, the hype is louder than ever.


When Hype Outruns Understanding

Let’s start with the uncomfortable bit: most people celebrating AI today understand it only vaguely. They know it’s “big” and “powerful” and “kind of like magic,” but not much more.

Twenty-first-century deep learning is extraordinary for recognising patterns. But recognising is not understanding. And the hype machine keeps breezily erasing that distinction.

That’s how we end up with billion-dollar systems that can’t explain their decisions, data scientists who learned just enough statistics to be dangerous, and corporate boards convinced that “more data” automatically equals “better decisions.” (It doesn’t. Sometimes more data just means more noise, only faster.)

Every AI boom looks different on the surface, but under the hood, the failures rhyme.


The First AI Rollercoaster

In the 1980s and early 90s, I worked with Sperry and then Unisys at the European Centre for AI. We pushed the boundaries of what was technically possible. We trained neural nets on small datasets to recognise handwriting or classify behaviour. Sometimes the results were promising. Sometimes the models collapsed under their own weight. Give them too little data, and they hallucinate patterns; give them too much, and they drown in it.

Once, my team fed a network so much training data it effectively forgot how to learn. We had created an idiot savant with impressive complexity and absolutely no discernment.
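
To make that concrete, here is a minimal sketch, in present-day Python rather than anything like our 1980s tooling, of that failure in miniature: a model flexible enough to fit every quirk of a tiny training set memorises the noise and generalises badly. The data and model below are invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny, noisy training set: the true relationship is just y = 2x.
    x_train = np.linspace(0, 1, 10)
    y_train = 2 * x_train + rng.normal(0, 0.3, size=10)

    # Held-out points drawn from the clean underlying signal.
    x_test = np.linspace(0, 1, 100)
    y_test = 2 * x_test

    for degree in (1, 9):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train error {train_err:.3f}, test error {test_err:.3f}")

    # The degree-9 fit threads every training point (near-zero training error)
    # yet does worse on unseen data: pattern matching with no discernment.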

IBM’s global data mining centre in Dublin hit the same wall. Everyone did, eventually. The problem wasn’t the hardware. It was the assumption that intelligence would just emerge if we added enough layers, neurons, memory, or CPUs.

In those days, AI fell out of fashion as fast as it had risen. Companies that rushed to commercialise half-baked research declared the whole field useless. Investors bailed. The industry went cold.

Sound familiar?


AI’s Recurring Amnesia

Today’s AI moment is bigger and noisier, but the patterns remain eerily familiar:

  • Complex systems are introduced before anyone understands the risks.
  • Users mistake opacity for intelligence instead of recognising it as error-prone guesswork.
  • Statistical illiteracy is rebadged as “data science.”
  • Business leaders buy tech because competitors are buying tech.
  • Grand narratives drown out boring necessities like governance and verification.

And perhaps the most dangerous myth:
If AI works for small problems, it must scale to big ones.
(Hint: no.)

Every decade produces a new tribe of tech evangelists convinced they’re witnessing the dawn of machine intelligence, and that this time it’s for real. They’re brilliant, enthusiastic, and often ahistorical, treating the past as irrelevant instead of instructive.

But complex systems don’t care about enthusiasm. They care about mathematical reality.


The Missing Ingredient: Explainability

Back when expert systems were still fashionable, we knew something critical: if a machine’s recommendations affect human lives, it must be able to explain its reasoning. That was non-negotiable.

Today? We’ve somehow abandoned that standard. Algorithms now decide creditworthiness, medical eligibility, policing priorities, and, ironically, who’s qualified to work on algorithms.

And when asked why a model made a decision, the industry’s default answer is effectively:
“Look, it works. Trust the math.”

Except sometimes it doesn’t work. And when it fails, it fails at scale.

Deep learning is incredible for pattern recognition. But without explanation, it becomes a black box. In critical systems, black boxes are just sophisticated ways of saying “we don’t really know.”

If a human expert can’t justify a decision, we fire them. If an AI can’t explain a decision, we brand it “innovative.”
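
For contrast, here is a minimal sketch, in the spirit of those old expert systems, of what “must explain its reasoning” can mean in practice: every decision carries the rules that fired. The rules, thresholds, and names below are invented for illustration, not real lending policy.

    def assess_credit(income: float, debt: float, missed_payments: int):
        """Return a decision plus the human-readable reasons behind it."""
        reasons = []
        approved = True

        if income < 20_000:
            approved = False
            reasons.append(f"income {income:,.0f} is below the 20,000 minimum")
        if income > 0 and debt / income > 0.5:
            approved = False
            reasons.append(f"debt-to-income ratio {debt / income:.2f} exceeds 0.5")
        if missed_payments > 2:
            approved = False
            reasons.append(f"{missed_payments} missed payments exceeds the limit of 2")

        if approved:
            reasons.append("all rules passed")
        return approved, reasons

    decision, trace = assess_credit(income=35_000, debt=21_000, missed_payments=1)
    print("approved" if decision else "declined")
    for reason in trace:
        print(" -", reason)

A deep model may well predict better than three hand-written rules; the point is that this system can be audited, challenged, and corrected, and a black box cannot.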


Where Do We Go From Here?

Let’s pretend, briefly, that we decide to take our lessons seriously. What would it look like?

  • AI systems that must explain themselves, or be shut off.
  • Tools that reduce complexity instead of burying it in more layers.
  • AI-driven data governors that filter noise rather than amplify it.
  • Rule-based and expert systems embedded where people actually work, like Excel, rather than hidden in research labs.
  • Less buzzword alchemy; more clarity, more literacy, more realism.

Most importantly:
We start treating data as an asset that can hold positive, zero, or negative value.
Some datasets illuminate the world. Some just add entropy. Some actively mislead.
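
One way to make that concrete is to price a dataset by what it does to held-out error rather than by its row count. The toy sketch below (invented numbers, with a deliberately biased second batch) shows a bigger dataset making a simple model worse: negative value in action.

    import numpy as np

    rng = np.random.default_rng(1)

    def held_out_error(x, y, x_test, y_test):
        slope, intercept = np.polyfit(x, y, 1)      # fit a simple linear model
        pred = slope * x_test + intercept
        return np.mean((pred - y_test) ** 2)        # mean squared error on new data

    # Clean data drawn from the true relationship y = 3x.
    x_clean = rng.uniform(0, 1, 200)
    y_clean = 3 * x_clean + rng.normal(0, 0.1, 200)

    # A larger batch from a miscalibrated source: systematically shifted by +2.
    x_bad = rng.uniform(0, 1, 400)
    y_bad = 3 * x_bad + 2.0 + rng.normal(0, 0.1, 400)

    x_test = rng.uniform(0, 1, 100)
    y_test = 3 * x_test

    print("clean only     :", held_out_error(x_clean, y_clean, x_test, y_test))
    print("clean + biased :", held_out_error(np.concatenate([x_clean, x_bad]),
                                              np.concatenate([y_clean, y_bad]),
                                              x_test, y_test))
    # More rows, worse predictions: for this task the extra dataset has negative value.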

In the future, the most critical AI systems may not be those that generate predictions. They might be the ones telling us when not to use predictions at all.


The Real Big Picture

Deep learning isn’t a breakthrough so much as a mirror. It magnifies everything we give it: insight, noise, brilliance, bias, wishful thinking, statistical sloppiness, and sometimes actual value.

The risk isn’t that AI becomes too powerful.
The risk is that we build systems we don’t understand, deploy them where they don’t belong, and trust them simply because they’re complicated.

The tech world loves revolutions. But maybe what we need is a bit more evolution—methodical, slow, explainable.

One day, someone will build an AI tool that produces genuine value without spectacle or snake oil. No cult of personality. No TED-friendly prophecy. No trillion-parameter bragging rights. Just something that works, is understandable, and solves a real problem.

It’ll probably arrive quietly, without a launch event.

And when it does, maybe—just maybe—we won’t hype it to death.

Many thanks for reading. And may your God go with you!

