Quantum algorithms and classical envy

Quantum ML algorithms are the kind of thing that gets announced in the University’s Great Hall with trumpets and excessive confidence, usually by wizards who’ve had too much coffee and not enough sleep. They promise shortcuts through impossible problems, hidden patterns in chaotic data, and the sort of computational advantages that make investors reach for their chequebooks before asking awkward questions about error rates.

In practice, they’re more like the Alchemists’ Guild on a productive day: occasionally brilliant, frequently baffling, and prone to producing results that require extensive interpretation, generous assumptions, and a willingness to ignore the smell of burning mathematics.

The search for cheap beer

Quantum annealing is the art of finding the lowest energy state in a system, which is essentially asking the universe to do your optimisation homework while you wait. Imagine you’re searching every pub in Ankh-Morpork for the establishment serving the cheapest beer that won’t actually blind you. A classical algorithm would visit each pub methodically, possibly dying of liver failure before completing the survey.

Quantum annealing, by contrast, tries to slide gracefully down the energy landscape like a drunk rolling downhill, hoping to settle in the deepest valley rather than getting stuck in some mediocre ditch halfway down. The quantum system explores multiple routes simultaneously through the magic of superposition, guided by carefully tuned magnetic fields that encourage it toward the global minimum.

Does it work? Sometimes. When it does, it’s genuinely impressive. When it doesn’t, you’ve just spent considerable money and effort to discover that your qubits got distracted by a local minimum that corresponds to the third-cheapest pub, which is run by Dibbler’s cousin and serves something called “beer” only by the most charitable definition.

The D-Wave quantum annealers are the most prominent example, purpose-built for this task and not particularly good at anything else. They’re the quantum equivalent of a device that can only answer questions of the form “which arrangement of these things minimises this particular metric?” Useful if that’s your question. Less useful if you were hoping for general-purpose quantum supremacy.
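
If you want to see the shape of the idea without buying a dilution refrigerator, here’s a minimal sketch in Python: the cheapest-pub hunt phrased as a QUBO (quadratic unconstrained binary optimisation, the format annealers actually consume), with classical simulated annealing standing in for the quantum hardware. The pub prices and the penalty weight are invented for illustration.

```python
import math
import random

# Toy "cheapest pub" problem, phrased the way annealers consume it:
# as a QUBO. One binary variable per pub; the "pick exactly one pub"
# rule becomes a quadratic penalty term in the energy function.
prices = [3.0, 1.5, 2.2, 0.9, 4.1]   # invented prices per pint
PENALTY = 10.0                        # weight on violating "exactly one pub"

def energy(x):
    """Total price of selected pubs, plus a penalty if we picked != 1."""
    return sum(p * xi for p, xi in zip(prices, x)) + PENALTY * (sum(x) - 1) ** 2

def anneal(steps=5000, t_start=5.0, t_end=0.01):
    """Classical simulated annealing: a room-temperature stand-in
    for the quantum annealer's slide down the energy landscape."""
    x = [random.randint(0, 1) for _ in prices]
    e = energy(x)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # cooling schedule
        i = random.randrange(len(x))
        x[i] ^= 1                                          # flip one pub in/out
        e_new = energy(x)
        # Always accept downhill moves; accept uphill ones with Boltzmann
        # probability, which is what gets you out of mediocre ditches.
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new
        else:
            x[i] ^= 1                                      # reject: flip back
    return x, e

selection, e = anneal()
print("selection:", selection, "energy:", round(e, 2))
```

The quantum version swaps the temperature schedule for a slowly weakening transverse magnetic field, but the shape of the gamble, trading guaranteed optimality for speed, is the same.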

Negotiating with mathematics

The Variational Quantum Eigensolver, or VQE, is what happens when you realise your quantum computer can’t actually solve the problem directly, so you make it negotiate with classical optimisation until something acceptable emerges. It’s quantum computing meets compromise, with all the satisfaction and frustration that implies.

VQE is designed to find the lowest eigenvalue of a mathematical object called a Hamiltonian, the operator that encodes a system’s total energy, which in quantum chemistry means finding the ground state energy of a molecule. This matters because understanding molecular behaviour requires knowing which energy state the system naturally settles into, and classical computers struggle with this once molecules get even moderately complicated.

The quantum part prepares candidate quantum states and measures their energy. The classical part looks at those measurements and suggests adjustments: “Try making that qubit rotate a bit more, and entangle those two less.” Back and forth they go, like Vetinari and the Guild leaders negotiating the annual rat quota, until they converge on something both sides can live with.
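
Here’s a toy version of that loop in plain NumPy and SciPy, assuming a single-qubit Hamiltonian invented for illustration. A real VQE would prepare the ansatz on actual quantum hardware; only the negotiate-until-converged structure survives the simplification.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices: the building blocks of qubit Hamiltonians.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# A made-up single-qubit Hamiltonian; real VQE targets molecular ones.
H = 0.5 * Z + 0.3 * X

def ansatz(theta):
    """'Quantum' part: prepare a candidate state |psi(theta)>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(params):
    """Measure the energy expectation <psi|H|psi> of the candidate."""
    psi = ansatz(params[0])
    return float(np.real(psi.conj() @ H @ psi))

# Classical part: an off-the-shelf optimiser nudges theta until the
# measured energy stops improving.
result = minimize(energy, x0=[0.1], method="COBYLA")
exact = np.linalg.eigvalsh(H)[0]   # brute force, feasible only for tiny H
print(f"VQE estimate: {result.fun:.4f}, exact ground state: {exact:.4f}")
```

The division of labour is visible in the code: `minimize` never sees the quantum state, only energies, and the ansatz never sees the optimisation strategy, only angles.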

It’s hybrid quantum-classical computing, which sounds sophisticated until you realise it means your quantum processor is less solving the problem and more providing informed opinions while classical computers do the actual optimising. The quantum advantage here is real but modest: you can study slightly larger molecules than before, assuming your quantum hardware cooperates and your classical optimiser doesn’t spend three weeks circling the same mediocre solution.

QAOA: good enough, probably

The Quantum Approximate Optimisation Algorithm, mercifully abbreviated to QAOA, has given up on perfection from the start. It’s right there in the name: approximate optimisation. The algorithm knows it’s probably not finding the absolute best solution, and it’s made peace with that. In Ankh-Morpork terms, it’s the “close enough for government work” approach to quantum computing.

QAOA tackles combinatorial optimisation problems, finding the best arrangement of discrete things from an enormous number of possibilities. Think route planning, resource allocation, or determining the optimal way to stack boxes in a warehouse so that the one you need isn’t inevitably at the bottom.

The algorithm works by alternating between two types of quantum operations: one that encodes the problem you’re trying to solve, and one that mixes up the quantum states to help explore possibilities. Run this back and forth several times, measure the result, and you get a candidate solution. Is it the best solution? Unclear. Is it a good solution? Possibly. Will it do? Depends how desperate you are.
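
A minimal sketch of one such round, assuming a toy MaxCut problem on a three-node triangle (the graph is invented, and the statevector is simulated exactly, which is only possible because the problem is tiny):

```python
import numpy as np
from itertools import product

# MaxCut on a triangle: three nodes, edges between all pairs.
n = 3
edges = [(0, 1), (1, 2), (0, 2)]

def cut_value(bits):
    """Number of edges 'cut' by a given 0/1 assignment of the nodes."""
    return sum(bits[i] != bits[j] for i, j in edges)

# Cost of each of the 2^n possible bit assignments.
costs = np.array([cut_value(b) for b in product([0, 1], repeat=n)], dtype=float)

def qaoa_expectation(gamma, beta):
    """One layer of QAOA, simulated exactly with a statevector."""
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # uniform superposition
    state = np.exp(-1j * gamma * costs) * state                  # cost unitary: a phase per bitstring
    # Mixer unitary: the same RX rotation on every qubit, built via Kronecker products.
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    mixer = rx
    for _ in range(n - 1):
        mixer = np.kron(mixer, rx)
    state = mixer @ state
    return float(np.abs(state) ** 2 @ costs)                     # expected cut size

# Crude classical outer loop: grid-search the two angles.
best = max((qaoa_expectation(g, b), g, b)
           for g in np.linspace(0, np.pi, 25)
           for b in np.linspace(0, np.pi, 25))
print(f"expected cut: {best[0]:.3f} (optimum is {costs.max():.0f})")
```

Swap the grid search for a proper optimiser and stack more layers and you have QAOA as actually practised; whether that beats a classical heuristic is the question taken up below.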

The real question with QAOA is whether its approximate solutions are better than what classical approximate algorithms already produce, and whether the quantum overhead is worth whatever marginal improvement you might get. The jury remains out, conducting lengthy deliberations while classical algorithms continue doing the actual work.

Cosmic dice rolling

Quantum sampling is the algorithm equivalent of asking the universe to roll a very complicated set of dice for you, weighted according to quantum probability amplitudes. The goal is to draw samples from probability distributions that are murderously difficult to sample from classically, distributions with so many dimensions and such intricate correlations that classical methods would still be chugging away when the sun expands to engulf the Earth.

This matters for problems in statistics, machine learning, and physics where you need representative samples from complex probability distributions. Thermal equilibration of quantum systems, Bayesian inference, certain types of generative models: all are potential applications, assuming the quantum sampler actually produces samples from the distribution you wanted rather than some decoherence-mangled approximation.

The challenge is verification. When you ask a quantum computer to sample from a distribution so complicated that classical computers can’t handle it, how do you check whether it’s done the job correctly? You can’t exactly sample from the distribution classically to compare. You’re left with statistical tests, plausibility arguments, and a vague hope that the quantum computer hasn’t just generated expensive random numbers while you weren’t looking.
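
One of the statistical tests actually used is linear cross-entropy benchmarking (XEB): score each sample by the ideal probability of the bitstring it produced, which you can only compute for systems small enough to simulate. A toy sketch, with a random state vector standing in for a random circuit’s output distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                       # small enough that ideal probabilities fit in memory
dim = 2 ** n

# Stand-in for the ideal output distribution of a random quantum circuit:
# the squared amplitudes of a random complex state vector.
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
p_ideal = np.abs(amps) ** 2
p_ideal /= p_ideal.sum()

def linear_xeb(samples):
    """Linear cross-entropy benchmark: F = 2^n * mean(p_ideal(sample)) - 1.
    Close to 1 if samples follow the ideal distribution, close to 0 if
    they're just uniform noise."""
    return dim * p_ideal[samples].mean() - 1

good = rng.choice(dim, size=100_000, p=p_ideal)    # a faithful "quantum" sampler
noise = rng.integers(dim, size=100_000)            # expensive random numbers
print(f"faithful sampler XEB: {linear_xeb(good):.3f}")
print(f"uniform noise XEB:    {linear_xeb(noise):.3f}")
```

Note the catch: `p_ideal` is only computable while the system is small enough to simulate classically, so the benchmark certifies quantum samplers exactly up to the point where they would start being interesting. Beyond that, it’s extrapolation and plausibility arguments all the way down.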

Google’s quantum supremacy experiment was essentially a quantum sampling demonstration: they sampled from a distribution so complicated that, by Google’s estimate, the best classical supercomputers would have needed around ten thousand years to match it (an estimate classical-algorithm researchers promptly set about whittling down). Impressive as a technological milestone. Less clear as a practical application, since nobody actually wanted samples from that particular distribution except as proof of concept.

When classical algorithms are better (most of the time)

The uncomfortable truth that quantum computing evangelists tend to mumble through quickly before changing the subject: classical algorithms are better at almost everything, almost all of the time. They’re faster, cheaper, more reliable, better understood, and they don’t require refrigeration units that consume enough electricity to power a small town.

Classical machine learning has had decades of optimisation. The algorithms are mature, the hardware is spectacular, and the software ecosystem is vast. GPUs can perform trillions of operations per second, and they do it while remaining at temperatures compatible with human survival. Classical optimisation algorithms have been refined through countless applications across every industry, and they work.

Quantum algorithms, by contrast, are largely theoretical, phenomenally expensive to run, and prone to errors that require extensive correction to maybe, possibly produce something useful. For actual deployed machine learning systems (recommendation engines, image recognition, natural language processing), classical algorithms aren’t just better; they’re in an entirely different league. It’s not close.

The quantum advantage, where it exists, is narrow and specific: certain optimisation problems, particular sampling tasks, specialised quantum simulations. These are real advantages! They matter! But they’re not replacing classical ML any time soon. More likely, quantum algorithms will find niche applications where their specific strengths align with genuine needs, while classical computing continues handling the vast majority of actual work.

Classical algorithms stand at the edge of the quantum laboratory like the Watch observing a complicated incident at the Alchemists’ Guild. They’re unimpressed by the excitement, sceptical of the claims, and confident that when the smoke clears, someone will need to call them in to sort out the mess using reliable, boring, room-temperature methods that actually work. They’re usually right.