Quantum supremacy and other tall tales¶
Quantum computing has accumulated more hype than a Dibbler product launch, with a similarly questionable relationship to reality. Every few months, another press release announces that quantum computers are about to revolutionise everything from drug discovery to financial modelling, conveniently timed with funding rounds or grant applications. The actual capabilities of current quantum computers receive less attention, presumably because “we successfully ran a calculation on fifty-three noisy qubits for twenty microseconds before decoherence ruined everything” lacks the inspirational quality investors prefer.
The term “quantum supremacy” itself exemplifies the marketing problem. It originally meant demonstrating a quantum computer performing any task, however useless, faster than any classical computer. Google achieved this in 2019 by sampling from a probability distribution that nobody actually needed samples from. This was a genuine technical milestone but got reported as though quantum computers had suddenly become useful, which they hadn’t.
The gap between what quantum computers can do and what breathless press releases claim they can do resembles the gap between the Patrician’s actual policies and what people think his policies are. Both involve extensive speculation, wishful thinking, and a tendency to credit quantum mechanics with capabilities it doesn’t possess. Understanding what quantum computers genuinely offer requires separating the technology from its publicity department.
What quantum computers can actually do now¶
Current quantum computers can perform very specific calculations on very limited problem sizes under very controlled conditions. They can sample from certain probability distributions, simulate small quantum systems, and run variational algorithms that might, occasionally, produce useful results for optimisation problems. These are real capabilities and they matter for research, but they’re nowhere near general-purpose utility.
Google’s Sycamore processor demonstrated quantum supremacy by sampling from random quantum circuits. This showed that quantum computers can do something faster than classical computers, which was the point. It didn’t show that quantum computers are ready to solve real problems because sampling from random circuits isn’t a real problem. It’s a benchmark, like testing whether your car can exceed two hundred kilometres per hour on a test track. Impressive, but not directly relevant to driving to the shops.
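To make the benchmark concrete, here is a toy sketch (in NumPy, purely illustrative) of what “sampling from random quantum circuits” means: a few layers of random single-qubit rotations and entangling gates, followed by drawing bitstrings from the resulting output distribution. The qubit count, depth, and gate set below are arbitrary choices, not Sycamore’s; with four qubits a laptop simulates this instantly, and the supremacy claim rested on circuits large enough that this brute-force statevector approach becomes infeasible.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 4   # toy size; Sycamore used 53
depth = 8      # layers of random gates

def random_single_qubit_unitary():
    # A roughly Haar-random 2x2 unitary via QR decomposition
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_single_qubit(state, gate, qubit):
    # Reshape the statevector so the target qubit is its own axis, then contract
    psi = state.reshape([2] * n_qubits)
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [qubit])), 0, qubit)
    return psi.reshape(-1)

def apply_cz(state, q1, q2):
    # CZ flips the sign of amplitudes where both qubits are |1>
    psi = state.reshape([2] * n_qubits).copy()
    idx = [slice(None)] * n_qubits
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

# Start in |0...0>, apply alternating layers of random rotations and CZ gates
state = np.zeros(2 ** n_qubits, dtype=complex)
state[0] = 1.0
for layer in range(depth):
    for q in range(n_qubits):
        state = apply_single_qubit(state, random_single_qubit_unitary(), q)
    for q in range(layer % 2, n_qubits - 1, 2):
        state = apply_cz(state, q, q + 1)

# The "supremacy" task is drawing bitstrings from this output distribution; with
# four qubits there are only sixteen amplitudes, so this is trivial classically
probs = np.abs(state) ** 2
probs /= probs.sum()
samples = rng.choice(2 ** n_qubits, size=10, p=probs)
print([format(int(s), f"0{n_qubits}b") for s in samples])
```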
IBM’s quantum processors can run algorithms like the variational quantum eigensolver (VQE) and the quantum approximate optimisation algorithm (QAOA) on small instances of optimisation and chemistry problems. They’ve been used to calculate molecular ground states for simple molecules, solve toy optimisation problems, and demonstrate quantum machine learning concepts. These experiments work but don’t outperform classical computers on practically relevant problems. They’re proof-of-concept demonstrations, showing what might be possible once quantum computers improve.
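For a sense of what these variational experiments involve, here is a minimal VQE-style sketch: a small parameterised circuit, the expectation value of a toy two-qubit Hamiltonian, and a classical optimiser adjusting the parameters to push the energy down. The Hamiltonian coefficients and the ansatz are invented for illustration, and the whole thing runs as an exact classical simulation; on real hardware the energy would be estimated from repeated, noisy measurements.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# A toy two-qubit Hamiltonian (illustrative coefficients, not a real molecule)
H = 0.5 * kron(Z, I2) + 0.5 * kron(I2, Z) + 0.25 * kron(X, X)

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def ansatz(params):
    # Hardware-efficient style ansatz: Ry on each qubit, entangle, Ry again
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0
    state = kron(ry(params[0]), ry(params[1])) @ state
    state = CNOT @ state
    state = kron(ry(params[2]), ry(params[3])) @ state
    return state

def energy(params):
    psi = ansatz(params)
    return float(np.real(np.conj(psi) @ H @ psi))

# A classical optimiser minimises the energy; real devices would estimate it
# from many noisy shots rather than an exact statevector
result = minimize(energy, x0=np.full(4, 0.1), method="COBYLA")
exact = np.min(np.linalg.eigvalsh(H))
print(f"VQE estimate: {result.fun:.4f}, exact ground-state energy: {exact:.4f}")
```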
IonQ, Rigetti, and other quantum computing companies offer cloud access to their quantum processors. These systems have dozens of qubits with varying connectivity and error rates. Researchers use them to test quantum algorithms, explore quantum phenomena, and publish papers. Businesses use them to understand quantum computing and prepare for eventual quantum advantage. Nobody uses them for production workloads because the error rates, decoherence times, and limited qubit counts make classical computers vastly more practical.
The most genuine current application is quantum simulation, where quantum computers model quantum systems. This is useful because classical computers struggle with quantum simulations as the system size grows. Even current noisy quantum computers can simulate quantum phenomena that classical computers find difficult. This matters for materials science, quantum chemistry, and fundamental physics research. It’s not revolutionary yet, but it’s genuinely useful work that classical computers handle poorly.
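The reason classical computers struggle is brute arithmetic: the full state of n qubits requires 2^n complex amplitudes. A back-of-the-envelope calculation shows how quickly that gets out of hand.

```python
# Rough memory cost of storing a full n-qubit statevector classically,
# assuming 16 bytes per complex128 amplitude: it doubles with every qubit
for n in (30, 40, 50, 60):
    size = (2 ** n) * 16
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"):
        if size < 1024:
            break
        size /= 1024
    print(f"{n} qubits: {size:.0f} {unit}")
# 30 qubits fits on a laptop; 50 already needs ~16 PiB; 60 is ~16 EiB
```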
Everything else quantum computers claim to do is either demonstration, aspiration, or marketing. The hardware exists, the basic functionality works, but practical advantages remain elusive for nearly all real-world problems. We’re in the era of noisy intermediate-scale quantum devices, which is exactly as underwhelming as it sounds.
What they might do eventually¶
Once quantum computers improve substantially, with error correction, thousands of logical qubits, and gate operations accurate to one part in ten million or better, they might actually become useful for specific applications. These are genuine possibilities backed by theoretical understanding, but the timeline ranges from “decades away” to “maybe never” depending on who you ask and how recently they’ve sought funding.
Breaking encryption through Shor’s algorithm is the most famous potential application. A large-scale, error-corrected quantum computer could factorise large numbers efficiently, breaking RSA encryption and similar schemes. This would have dramatic implications for cryptography and security. The required quantum computer would need millions of physical qubits implementing thousands of logical qubits performing trillions of gate operations accurately. We currently have dozens of noisy qubits performing thousands of operations before errors dominate. The gap is substantial.
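The number theory Shor’s algorithm exploits can be demonstrated classically at toy scale: pick a random a, find the order r of a modulo N, and gcd(a^(r/2) ± 1, N) usually reveals a factor. The sketch below brute-forces the order-finding step, which is precisely the part a quantum computer would do efficiently via the quantum Fourier transform; the modulus is a toy value chosen for illustration, and this classical version scales hopelessly.

```python
import math
import random

def find_order(a, N):
    # Brute-force order finding: smallest r > 0 with a^r ≡ 1 (mod N).
    # This is the step a large quantum computer would do efficiently.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_toy(N, attempts=20):
    for _ in range(attempts):
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                      # lucky guess already shares a factor
        r = find_order(a, N)
        if r % 2 == 0:
            for shift in (+1, -1):
                candidate = math.gcd(pow(a, r // 2, N) + shift, N)
                if 1 < candidate < N:
                    return candidate
    return None

N = 3233                                  # toy "RSA modulus" (61 * 53)
factor = shor_classical_toy(N)
if factor:
    print(f"{N} = {factor} * {N // factor}")
```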
Quantum simulation should genuinely improve as quantum computers scale. Simulating quantum chemistry, materials science, and quantum field theories becomes exponentially harder for classical computers as system size increases but only polynomially harder for quantum computers. A sufficiently large quantum computer could simulate molecular dynamics, material properties, and drug interactions that classical computers cannot handle. This could accelerate drug discovery, materials design, and fundamental physics research. Timeline uncertain, but the physics supports this optimism.
Optimisation through QAOA and quantum annealing might eventually outperform classical approaches for certain combinatorial problems. Logistics, scheduling, resource allocation, and portfolio optimisation could potentially benefit once quantum computers become large and reliable enough. The advantage depends on whether quantum speedups survive realistic error rates and whether classical algorithms improve faster than quantum hardware. Current evidence is mixed, with some problems showing promise and others showing that classical methods remain superior.
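As an illustration of where this currently stands, here is a depth-one QAOA sketch for MaxCut on a four-node ring, simulated exactly in NumPy. The graph, the angles searched, and the grid resolution are illustrative choices; the expected cut it reports falls short of the true maximum, which a brute-force check over all sixteen assignments finds instantly.

```python
import numpy as np

# A toy MaxCut instance: four nodes in a ring (edges chosen for illustration)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

def cut_value(bits):
    # Number of edges whose endpoints land on different sides of the cut
    return sum(bits[i] != bits[j] for i, j in edges)

# Diagonal cost: the cut value of every one of the 2^n bit assignments
costs = np.array([cut_value([(z >> (n - 1 - q)) & 1 for q in range(n)])
                  for z in range(2 ** n)], dtype=float)

def apply_mixer(state, beta):
    # e^{-i beta X} on every qubit: cos(beta) I - i sin(beta) X
    gate = np.array([[np.cos(beta), -1j * np.sin(beta)],
                     [-1j * np.sin(beta), np.cos(beta)]])
    psi = state.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def qaoa_expectation(gamma, beta):
    # Depth p = 1: phase-separate with the cost, mix, then measure the cost
    plus = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)
    psi = np.exp(-1j * gamma * costs) * plus
    psi = apply_mixer(psi, beta)
    return float(np.real(np.sum(np.abs(psi) ** 2 * costs)))

# Crude grid search over the two angles; real experiments use a classical optimiser
best = max((qaoa_expectation(g, b), g, b)
           for g in np.linspace(0, np.pi, 40)
           for b in np.linspace(0, np.pi, 40))
print(f"QAOA (p=1) expected cut: {best[0]:.3f}, true maximum cut: {costs.max():.0f}")
```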
Machine learning applications remain speculative. Quantum kernel methods, quantum neural networks, and quantum sampling might eventually offer advantages for specific ML tasks. These advantages would likely be marginal rather than revolutionary and limited to particular problem structures. Most ML workloads will remain classical because classical ML hardware is spectacularly capable and quantum computers will remain specialised, expensive, and difficult to use even after substantial improvements.
The critical assumption underlying all these possibilities is that quantum error correction works at scale, that we can build quantum computers with millions of qubits maintaining coherence for extended periods, and that the engineering challenges of scaling quantum hardware prove tractable. These are reasonable assumptions but not guaranteed. We might hit fundamental barriers. We might find that quantum computers remain niche tools for quantum simulation while classical computers continue dominating everything else.
What they will never do¶
Quantum computers will never replace classical computers for general-purpose computing. They’re specialised devices for specific problems where quantum effects provide advantages. Everything else remains better suited to classical hardware, which is faster, cheaper, more reliable, and doesn’t require cooling to millikelvin temperatures.
They will not solve P versus NP, which is a question about the fundamental nature of computational complexity and has nothing to do with quantum mechanics. Quantum computers provide polynomial speedups for some problems and exponential speedups for very specific problems like factoring. They don’t magically solve NP-complete problems in polynomial time: the best known general-purpose tool, Grover’s search, gives only a quadratic speedup over brute force, and the square root of an exponential is still an exponential. Anyone claiming otherwise has fundamentally misunderstood either quantum computing or computational complexity, quite possibly both.
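A few lines of arithmetic show what that quadratic speedup does and doesn’t buy; the problem sizes below are arbitrary illustrations.

```python
# Grover's algorithm gives a quadratic speedup for unstructured search:
# roughly sqrt(2^n) quantum queries instead of 2^n classical ones.
# The square root of an exponential is still an exponential.
for n in (40, 60, 80, 100):
    classical = 2 ** n
    grover = 2 ** (n / 2)
    print(f"n = {n:3d}: classical ~ 2^{n} = {classical:.1e}, "
          f"Grover ~ 2^{n // 2} = {grover:.1e}")
```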
They will not achieve consciousness, develop artificial general intelligence, or otherwise become sentient. Quantum mechanics doesn’t grant consciousness any more than classical mechanics does. Confusion about quantum measurement and the role of observers has led to unfortunate speculation about quantum computers and consciousness. This speculation is rubbish. Quantum computers are calculating machines that happen to use quantum effects. That’s all.
They will not break the second law of thermodynamics, enable time travel, or otherwise violate fundamental physics. Quantum mechanics is strange but it’s still physics, bound by the same conservation laws and thermodynamic principles as everything else. Quantum computers manipulate quantum states within entirely normal physical constraints.
They will not make tea, do your washing up, or handle any physical task that requires interaction with the macroscopic world. Quantum computers compute. They don’t have actuators, don’t interface with teapots, and would decohere immediately if you tried to integrate them with kitchen appliances. This should be obvious but apparently requires stating given some of the claims floating about.
They will not solve every problem faster than classical computers. Most problems either don’t have known quantum algorithms that provide speedups or have quantum algorithms that provide only modest improvements not worth the overhead of quantum hardware. Quantum advantage exists for specific problem classes. For everything else, classical computers remain superior and will continue being superior even after quantum computers improve dramatically.
The limitations aren’t temporary technical problems to be overcome with better engineering. Some are fundamental to how quantum computers work. Others reflect the reality that classical computing has had eighty years of optimisation while quantum computing is still in early development. The hype cycle suggests quantum computers are nearly ready to replace classical computing. The reality is that quantum computers are specialised tools that might, eventually, complement classical computers for particular applications.
Spotting snake oil quantum offerings¶
The quantum computing field has attracted the usual collection of opportunists, optimists, and con artists that any emerging technology accumulates. Distinguishing legitimate quantum computing work from elaborate nonsense requires scepticism, some technical knowledge, and attention to warning signs.
Red flag the first is vague claims about quantum advantage without specifics. “Our quantum algorithm solves X faster” requires details about problem size, error rates, comparison with state-of-the-art classical methods, and realistic assessment of overhead. If the marketing materials mention quantum speedups without these details, assume the claims are aspirational at best.
Red flag the second is quantum solutions to problems that don’t need quantum computers. If someone proposes quantum machine learning for standard classification tasks on classical data, they’re either experimenting for research purposes or selling snake oil. Classical ML works excellently for these problems. Quantum computers add expense and complexity without advantages. Anyone claiming otherwise should demonstrate their quantum approach outperforming classical methods on realistic problems with realistic hardware.
Red flag the third is misunderstanding basic quantum mechanics. Claims about quantum computers “trying all possibilities simultaneously” or “exploring parallel universes” indicate the speaker has learned quantum mechanics from pop science articles rather than textbooks. This doesn’t automatically mean their product is fraudulent, but it suggests they don’t understand what they’re selling, which rarely ends well.
Red flag the fourth is no access to actual quantum hardware. Many “quantum” services run classical simulations of quantum algorithms, which is fine for research and development but not quantum computing. If the vendor can’t specify which quantum hardware platform they’re using, how many qubits, what connectivity, and what error rates, they’re probably running classical simulations and calling them quantum for marketing purposes.
Red flag the fifth is aggressive timelines. Legitimate quantum computing researchers speak in terms of decades for practical quantum advantage in most applications. Anyone promising quantum solutions to your business problems within months is either selling consulting services to explore quantum computing (which is honest) or selling fictional quantum capabilities (which isn’t).
Red flag the sixth is quantum solutions to problems that aren’t primarily computational. “Quantum-inspired algorithms” for business strategy, quantum approaches to organisational management, or quantum frameworks for creative thinking are using “quantum” as a synonym for “sounds impressive” rather than actual quantum mechanics. These offerings are harmless if cheap but indicate intellectual dishonesty if expensive.
The legitimate quantum computing companies are IBM, Google, IonQ, Rigetti, PsiQuantum, and similar firms actually building quantum hardware or developing real quantum algorithms. They’re honest about limitations, clear about timelines, and focused on genuine research rather than immediate commercial applications. Consultancies helping businesses prepare for eventual quantum computing can be legitimate if they’re honest about uncertainty and don’t promise near-term advantages that don’t exist.
Why your startup doesn’t need quantum ML¶
Unless your startup is specifically in quantum computing research, quantum algorithm development, or has a problem that classical computers genuinely cannot handle, you don’t need quantum machine learning. You need normal machine learning, which works, scales, and doesn’t require refrigeration units.
Classical ML has been optimised extensively. Neural networks train on GPUs with trillions of operations per second. Classical optimisation algorithms handle billions of parameters reliably. The software ecosystem is mature with libraries, frameworks, and tools for every common task. Engineers understand classical ML, debugging works normally, and deployment doesn’t require physics PhDs.
Quantum ML is experimental technology that doesn’t yet provide practical advantages. The algorithms are theoretical or proof-of-concept demonstrations. The hardware is limited to dozens or hundreds of noisy qubits. The software ecosystem is nascent. Error rates are high. Results are unreliable. Deployment is impossible outside research settings. Nothing about this suggests readiness for production use.
The consultant telling you to explore quantum ML is either genuinely excited about emerging technology and wants to help you prepare for the future, which is well-intentioned but premature, or they’re selling consulting hours and know that quantum computing sounds impressive to executives, which is less honest. Either way, the advice is probably wrong for your actual needs.
If you’re curious about quantum computing, by all means experiment. Use cloud-based quantum processors to run small algorithms, learn how quantum circuits work, understand what quantum computers might eventually offer. This is reasonable preparation for a future where quantum computers become practical. Just don’t confuse experimentation with deployment or mistake research projects for business advantages.
The time to adopt quantum ML is when it demonstrably outperforms classical ML on problems you actually face, when the hardware is reliable enough for production use, when the cost is justifiable, and when you have staff capable of maintaining quantum systems. None of these conditions currently hold. They might hold in ten years, twenty years, or never. Until then, focus on classical ML, which already works brilliantly for nearly everything.
The occasional exception exists. If you’re specifically working on quantum chemistry simulations, certain cryptography problems, or specialised optimisation where quantum computers genuinely offer advantages, then quantum computing might be relevant now. These are niche applications. Most startups aren’t in these niches. Most businesses aren’t. Most problems are best solved classically and will remain so for the foreseeable future.
The quantum computing revolution is coming, possibly, eventually, maybe. When it arrives, it will be gradual rather than sudden and specialised rather than general-purpose. Quantum computers will become useful tools for specific applications while classical computers continue handling everything else. This is less exciting than the hype suggests but considerably more realistic. In the meantime, your startup should focus on technologies that work now rather than technologies that might work eventually, assuming you’re interested in actually solving problems rather than impressing investors with buzzwords about quantum supremacy.