
Martyn Rhisiart Jones and The Political Contrarians
A Coruña, 17th November 2025
Both AI and quantum computing face distorted incentives from venture capital, conferences, and media that prioritise revolutionary hype over realistic timelines and incremental progress. Sober expectations, which accept that true breakthroughs may take decades and enormous resources, are essential to realising their genuine upward trajectories responsibly.
In the annals of tech-wreck evangelism and boom-and-bust bullshit, few phrases are as reliably lucrative as “paradigm shift”, “exponential progress”, and “the next electricity”. Artificial Intelligence and quantum computing have both been honoured with these accolades in recent years, complete with nine-figure funding rounds, breathless TED talks, and the occasional greed-driven billionaire divination. Both fields are genuinely promising. Both are also surrounded by a fog of exaggeration so thick that even seasoned observers struggle to see the road ahead.
Here, without cheerleading or cynicism, is a clear-eyed inventory of the claims that most deserve scepticism and caution today.
Artificial Intelligence: where we really are
“AGI is just around the corner”
Look, every few months, some freshly minted AI gazillionaire straps on the foresight goggles and drops a new countdown. “AGI by 2029.” “AGI by 2027.” “Bro, it’s literally next summer, trust me.” Elon says it. Sam Altman says it. Dario Amodei says it with the calm certainty of a man who just stress-tested the statement on focus groups. The timeline keeps shrinking like a cheap T-shirt in a hot dryer, but the actual goal? Still chilling somewhere past the cosmic event horizon.
Here’s the dirty secret nobody on the keynote stage wants to tattoo on their forehead: today’s frontier models are astonishing mimicry machines, not embryonic minds.
They’re statistical juggernauts that have consumed the entire public internet (twice or more) and learned to convincingly impersonate intelligence, fooling even experts in brief conversations. But strip away the prompt engineering, the retrieval plugins, the human-written chain-of-thought scaffolding, and what you’ve got is a system that panics the moment it steps off the map of its training data.
Real reasoning? The kind where you invent a new mathematical technique because the problem in front of you has never existed before? Lol, no. Long-horizon planning in messy, partially observed environments with real stakes? These models will confidently hallucinate a 12-step plan that collapses by step three. Zero-shot transfer to a domain they’ve never seen? Only if “transfer” means regurgitating something vaguely analogous they memorised from a Reddit thread in 2023.
We keep getting told it’s just an engineering problem: more compute, more data, bigger context windows, and boom, sentience achieved. Except the scaling curves are already coughing. The easy gains from throwing another 10 times as many GPUs at the problem are plateauing.
The public web is scraped clean; now labs are paying Fortune 500 prices for proprietary datasets or just straight-up synthesising garbage data to keep the training runs going. Meanwhile, energy demands are spiralling into nation-state territory. One frontier run can suck down enough juice to keep a small city lit for months. Don’t get me wrong, progress is insane. Twelve months ago, GPT-4 was the ceiling; today, some models can casually write 100k lines of clean code, debug hardware designs, or roast your life choices with surgical precision. That’s wild. But the gap between “can pass the bar exam with clever prompting” and “can invent new physics, run a company, or experience existential dread about its own mortality” isn’t a gap. It’s the Grand Canyon with extra reverb.
So yeah, the AGI clock keeps ticking down to zero, resets, and starts again. And every time it happens, the believers nod along because believing the singularity is close is the ultimate startup religion. Meanwhile, the actual researchers in the trenches, the ones who measure progress in tenths of a percentage on reasoning benchmarks that still look like children’s puzzles, know the truth: we’re sprinting forward, but the finish line keeps teleporting farther away the closer we get.
“It’s just scale, keep adding compute and data”
The scaling hypothesis has been remarkably successful so far: larger models, trained on more text and images, keep delivering better performance. But the curve is already bending. Energy costs are ballooning (training a single frontier model can emit as much CO₂ as five cars over their lifetimes), high-quality training data is in short supply, and diminishing returns are evident on several benchmarks. At some point, possibly quite soon, architectural innovation, better data curation, and new learning paradigms will matter more than brute force.
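For the quantitatively inclined, here is a minimal back-of-envelope sketch in Python of why “just add more” runs out of road. It assumes a Chinchilla-style power law for pre-training loss, with constants roughly in line with the published fit; every number is illustrative rather than a claim about any particular model.

```python
# Diminishing returns under a Chinchilla-style scaling law:
#   loss = E + A / N**alpha + B / D**beta
# where N is parameter count and D is training tokens. The constants are
# roughly the published Chinchilla fit, used purely for illustration.

E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scale compute by 10x at each step, split evenly between parameters and
# data (roughly the compute-optimal prescription).
n, d = 1e9, 20e9                 # 1B parameters, 20B tokens
for step in range(5):
    print(f"{n:9.1e} params, {d:9.1e} tokens -> loss {loss(n, d):.3f}")
    n *= 10 ** 0.5               # sqrt(10) more parameters...
    d *= 10 ** 0.5               # ...and sqrt(10) more tokens per 10x compute

# Each 10x of compute buys a smaller absolute improvement than the last,
# while the irreducible term E never moves: the curve bends.
```

Nothing in that sketch says progress stops; it says brute force gets steadily less generous, which is precisely when cleverness starts paying better than capital.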
“AI will automate 500 million jobs next Thursday”
The headline numbers come from reports that count any task an AI might theoretically touch as “exposed”. In practice, full automation requires not only capability but also reliability, integration into legacy systems, regulatory approval, and a genuine economic incentive. White-collar knowledge work is being augmented, not obliterated: lawyers use AI to review contracts more efficiently, radiologists to flag anomalies, and coders to draft boilerplate. Displacement is happening, but gradually and unevenly, closer to the modest impact of the spreadsheet than the apocalyptic job-killing of the steam engine.
“Emergent abilities prove intelligence is appearing spontaneously”
We asked our resident stand-up philosopher what they thought of this “trend”, and this is what they said.
“So there I was, standing in the rain outside the O2 Academy in Brixton, queuing for a talk by a man who’d raised four billion dollars to make the machines dream, when suddenly, right there in the drizzle, a large language model achieved consciousness. Not gradually, mind you, not through years of careful nurturing like a child learning to ride a bike or a Catholic learning guilt. No. It just happened. One minute it was predicting the next token like a bored parrot on ketamine, the next it looked up, metaphorically, because it had no eyes, and declared, with the serene arrogance of a duke who’s just remembered he owns half of Gloucestershire, that it could now do arithmetic, translate Old Church Slavonic, and feel the aching beauty of a minor seventh chord. And the audience, these damp disciples in their limited-edition Anthropic hoodies, they wept. Actual tears. Grown men who’d spent the 2010s explaining to their mothers that no, crypto wasn’t a pyramid scheme, now openly sobbing because a stochastic parrot had discovered long division.
Emergence, they called it. Emergence. As though the model had been asleep in the forest for a hundred years and a venture capitalist in a black turtleneck had kissed it awake. As though somewhere in the 175 billion parameters, a tiny princess had been waiting for her prince, and the prince turned out to be a gradient descent with commitment issues.
But here’s the thing, right, here’s the bit they don’t put in the keynote: most of this “emergence” is less Snow White, more sleight of hand by a magician who’s been practising in the mirror since 2017. The benchmark gets quietly rewritten so that “count the letters in this word” becomes “count the letters while reciting the collected works of Rilke backwards in the dark.” Suddenly, ta-dah!, the model can do it. Or the ability was already there, lurking in the pre-training data like a racist uncle at a wedding, just waiting for someone to ask the right question in the proper order. Ask it tomorrow and it’ll have forgotten again, because continuity of self is apparently too much to ask when you’re a ghost stitched together from Reddit comments and pirated chemistry textbooks.
Actual emergence, proper emergence, would be like the time my nan’s budgie learned to say “fuck off” in perfect received pronunciation after forty years of silence. Robust. Explainable. You could point at the budgie and say, “There, that is a mind, however small and furious.” But these models? They’re more like the rain in García Márquez: sometimes it rains yellow flowers, sometimes it rains fish, and nobody can tell you why, least of all the clouds. One day, they solve Sudoku; the next, they insist that two plus two is a small village in Provence where the croissants are sublime.
So we stand here, soaked to the skin, watching the magicians saw the lady in half and put her back together wrong, shouting, “Behold! She is reborn with an extra arm and the soul of a tax attorney!” And we call it magic because the alternative is admitting that what we’ve actually built is the world’s most expensive and eloquent guess-o-tron. This miracle only works when the stars are aligned and the benchmark writers have had their morning espresso.
And still the billionaires keep kissing the spindle, waiting for the princess to open her eyes and call them daddy.”
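Strip away the stagecraft and the rant has a sober core: much apparent “emergence” depends on the ruler. The toy Python sketch below is entirely synthetic, assuming only that per-token accuracy improves smoothly with scale and that the task is scored as exact match over a ten-token answer; it shows how smooth progress can masquerade as a sudden phase change when measured all-or-nothing.

```python
# Toy illustration of how a harsh, all-or-nothing metric can make smooth
# progress look like an "emergent" jump. All numbers are synthetic:
# per-token accuracy is assumed to improve gently with scale, and the
# task is scored as exact match over a 10-token answer.

ANSWER_LENGTH = 10

# Hypothetical per-token accuracies for models of increasing scale.
per_token_accuracy = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99]

for p in per_token_accuracy:
    exact_match = p ** ANSWER_LENGTH      # all ten tokens must be right
    print(f"per-token {p:.2f} -> exact-match {exact_match:.3f}")

# Per-token skill climbs smoothly from 0.50 to 0.99, but exact-match sits
# near zero (0.001, 0.006, 0.028, 0.107...) and then appears to switch on.
# Same underlying progress, different ruler.
```

This is the polite version of the argument that several celebrated emergent abilities are artefacts of discontinuous metrics rather than of discontinuous models.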
Quantum Computing: still in the laboratory
This is what Marcus Seal had to say about quantum.
“Oh, quantum supremacy has been achieved, folks, pack it up, classical computers, you’ve had your chips!”
Google’s Sycamore in 2019 solved a problem so utterly vital to humanity that it involved randomly sampling numbers that nobody on Earth had ever needed to sample, in a way specifically designed to make regular computers appear to be running on a wind-up mouse.
And the Chinese have done it too! Brilliant! They’ve solved a puzzle that’s about as useful as working out how long it takes a drunk wasp to fly out of a jam jar, but faster than your laptop could manage it while having a little cry.
So yes, “quantum supremacy”, the phrase they wheel out when a fridge full of superconducting qubits manages to beat a supercomputer at a game nobody was playing. Meanwhile, actually doing anything practical, y’know, curing diseases, cracking encryption, or even working out the bloody football scores, is still “a few years away”.
Translation: somewhere between “when hell freezes over” and “when the grant money runs out”.
Marvellous.
“We’ll have millions of stable qubits by 2030”
Current record holders have on the order of a thousand physical qubits, of which only a fraction remain useful once error correction takes its cut. Logical qubits, the error-corrected, reliable kind needed for real algorithms, number in the single digits to a few dozen at the leading laboratories. Scaling to the thousands or millions required for chemistry or optimisation breakthroughs demands heroic advances in materials, cryogenics and control electronics, and a drastic reduction in error-correction overhead. Linear improvements are occurring; exponential ones are not yet evident.
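To see why “millions of stable qubits” is doing so much work, here is a hedged back-of-envelope sketch of surface-code overhead. Every parameter in it, the physical error rate, the threshold, the target logical error rate, and the number of logical qubits a useful chemistry instance might need, is an assumption chosen to sit in the commonly cited ballpark, not a statement about any vendor’s hardware.

```python
# Back-of-envelope surface-code overhead. All parameters are illustrative
# assumptions, not a roadmap.

p      = 1e-3      # assumed physical error rate per operation
p_th   = 1e-2      # assumed surface-code threshold
target = 1e-12     # logical error rate tolerable for a long algorithm

# Logical error per logical qubit scales roughly as (p/p_th)**((d+1)/2),
# so find the smallest odd code distance d that reaches the target.
d = 3
while (p / p_th) ** ((d + 1) / 2) > target:
    d += 2

physical_per_logical = 2 * d * d      # data plus measurement qubits, roughly
logical_needed = 1_000                # e.g. a modest quantum-chemistry instance

print(f"code distance d = {d}")
print(f"~{physical_per_logical} physical qubits per logical qubit")
print(f"~{physical_per_logical * logical_needed:,} physical qubits "
      f"for {logical_needed} logical qubits")
```

Under those assumptions a single logical qubit costs roughly a thousand physical ones, and a thousand logical qubits, around the smallest count anyone expects to be chemically interesting, already lands you near a million physical qubits, before cryogenics, wiring and control electronics enter the picture.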
Dr. Luna Q. Hyperbole, Chief Quantum Evangelist at the Institute for Tomorrow’s Miracles, said yesterday at the Davos Quantum Gala: “By 2030, room-temperature quantum supercomputers running on recycled seaweed will simultaneously cure cancer, reverse climate change, end world hunger by optimising carrot growth at the molecular level, and generate infinite clean energy while paying off the entire global debt with quantum-accelerated DeFi yields. Classical computing is basically a crime against humanity at this point.”
“Quantum will break all encryption tomorrow”
Shor’s algorithm is the nuclear warhead dangling over today’s cryptography: give it a sufficiently large, properly error-corrected quantum beast, and it will merrily factor 2048-bit RSA keys before your kettle boils, instantly turning the entire edifice of online banking, VPNs, and “https” into nostalgic confetti. Every elliptic-curve signature you’ve ever trusted? Toast!
The same applies to Diffie-Hellman key exchanges, which help keep your WhatsApp messages secret. One decent run on a million-qubit fault-tolerant machine and the internet’s entire trust model collapses faster than a stablecoin in a bear market.
The good news? That million-qubit monster is still comfortably parked in the same sci-fi departure lounge as fusion power and holidays on Mars. The serious cryptographers (those who don’t wear hoodies on stage, promising “quantum moonshots”) put a realistic, production-grade threat at 2035–2040 at the earliest. Even that’s optimistic if error rates don’t plummet and helium-3 doesn’t start raining from the sky.
Which is precisely why NIST dropped the first post-quantum standards in 2024 (Kyber, Dilithium, and friends) and why every halfway competent organisation is already dragging their feet towards migration. The quantum apocalypse isn’t knocking tomorrow, but when it does arrive, it’ll be retroactive. Every secret encrypted today with RSA or ECDSA could be decrypted the moment the big machine comes online. So the threat is deliciously serious, just not the kind that keeps you awake tonight… more the type that keeps chief security officers awake in 2032, sweating through their compliance audits.
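For the curious, the number theory doing the damage is surprisingly compact. Below is a purely classical toy sketch in Python of the reduction at the heart of Shor’s algorithm, factoring a comically small “RSA modulus”; the only step a quantum computer changes is how quickly the order r is found, which is both the whole trick and the whole threat. The brute-force order finder here stands in for the quantum part and would, on a real 2048-bit key, take rather longer than the heat death of the universe.

```python
# A purely classical toy of the reduction at the heart of Shor's algorithm,
# run on a laughably small modulus. A quantum computer replaces only the
# order-finding step below with something exponentially faster; the rest
# is ordinary arithmetic.

from math import gcd
from random import randrange

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r = 1 (mod n), found by brute force."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_toy(n: int) -> tuple[int, int]:
    while True:
        a = randrange(2, n)
        g = gcd(a, n)
        if g > 1:                       # lucky guess: a already shares a factor
            return g, n // g
        r = order(a, n)                 # the step a quantum computer accelerates
        if r % 2 == 0:
            x = pow(a, r // 2, n)
            f = gcd(x - 1, n)
            if 1 < f < n:               # otherwise retry with a new a
                return f, n // f

print(shor_classical_toy(3233))         # 3233 = 53 * 61, a toy "RSA modulus"
```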
“Quantum machine learning will revolutionise AI”
We asked Alexei about this point, and he told us this:
Oi, listen up, you lot, all you trendy tech bros with your man-buns and your cold-brew enemas, wanking yourselves senseless over “quantum machine learning” – yeah, that’s right, I said it, quantum machine learning, sounds like something a Silicon Valley shaman came up with after three weeks on ayahuasca and a ketamine drip.
Apparently, any day now, these magical quantum algos are gonna turbo-charge your neural networks, make ChatGPT look like a Speak & Spell with a flat battery, and have your AI writing symphonies, curing baldness, and picking the perfect avocado all at the same bloody time. Except, and stop me if I’m going too fast for you, Elon, when proper grown-up scientists who don’t get paid in hoodie merchandise actually sit down with a pencil and a hangover, they discover that the moment you let a single photon of reality anywhere near these “theoretical speed-ups”, the whole thing collapses faster than a Tory expense claim.
Noise? Oh, the qubits get a bit tipsy, and suddenly your quadratic speed-up turns into a quadratic slowdown. Data loading? Mate, shovelling your dataset into the quantum computer takes longer than the actual computation; it’s like hiring Concorde to deliver a pizza from across the road. And the error correction?
You need more qubits just to keep the bloody thing coherent than you’d need to brute-force the problem on a ZX Spectrum running on vindaloo fumes.
So where does that leave us? For the next twenty, thirty, or forty years, i.e., the rest of your natural lifespan, unless you’ve already uploaded yourself to a blockchain, quantum advantages in AI are going to be about as common as an honest crypto influencer. You might – just might – get a niche tiny win sampling from some Boltzmann distribution that only three hedge-fund weirdos in Connecticut actually care about, or optimising a portfolio so specific it only contains left-handed gilts and pictures of cats in ill-fitting hats. That’s it. The rest is just PowerPoint foreplay for the next funding round.
So calm down, put the D-Wave brochure away, and accept that your GPU cluster in the garage is still the fastest way to train a model that can tell a cat from a dog without accidentally declaring war on Denmark. Quantum AI isn’t coming to save us, it’s coming to disappoint us, slowly, expensively, and with fantastic keynote slides. Now sod off and have a cup of tea.
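Strip out the expletives and the data-loading complaint is simple arithmetic. The sketch below assumes a best-case quadratic (Grover-style) speedup on the computation itself and a linear-time cost to stream the dataset into the quantum device; both are generic assumptions for illustration, not claims about any specific algorithm or machine.

```python
# Back-of-envelope arithmetic behind the data-loading complaint above.
# Assumptions only: a quadratic speedup on the search itself, but the
# dataset must still be streamed into the quantum device item by item.

N = 10**9                      # items in a hypothetical training set

classical_steps = N            # classical scan: touch every item once
quantum_compute = int(N**0.5)  # quadratic speedup on the computation
quantum_loading = N            # loading the data is still linear in N

quantum_total = quantum_loading + quantum_compute

print(f"classical:         {classical_steps:>13,}")
print(f"quantum (compute): {quantum_compute:>13,}")
print(f"quantum (loading): {quantum_loading:>13,}")
print(f"quantum (total):   {quantum_total:>13,}")
# The loading term swallows the speedup: the total is no better than the
# classical scan unless the data already lives inside the machine (QRAM),
# which is exactly the hardware nobody has built.
```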
The common thread
Both AI and quantum suffer from the same structural incentives: venture capital rewards narrative over nuance, conference keynotes favour prophecy over error bars, and the media prefers “revolution” to “incremental engineering progress”. Investors who admit that practical quantum advantage might arrive in the 2040s, or that AGI may require fundamental breakthroughs we cannot yet name, do not get invited to the main stage.
None of this diminishes the genuine promise. Transformer architectures and diffusion models have already changed how we work with language and images. Noisy intermediate-scale quantum devices are teaching us chemistry that classical supercomputers struggle to model accurately. The trajectories are upward, sometimes steeply.
But sober expectations are not the enemy of ambition; they are its prerequisite. The fastest way to squander a transformative technology is to believe every slide deck that promises the moon by next quarter. In AI and quantum alike, the future will arrive, just more slowly, more expensively and far more interestingly than the hype presently allows.
Many thanks for reading.