What comes next (probably chaos)¶
Predicting the future of quantum computing resembles predicting the weather in Ankh-Morpork during the rainy season. You know something wet and unpleasant is coming, but the specifics remain determinedly uncertain until they arrive. The field has been promising revolutionary breakthroughs within five to ten years for approximately the last thirty years, and shows every sign of continuing this tradition well into the future.
Several trajectories are plausible. Quantum computers might mature into practical tools that genuinely transform certain computational domains while classical computers continue dominating everything else. They might remain perpetually experimental, useful for research but never quite achieving the practical advantages required for widespread adoption. They might experience sudden breakthroughs that dramatically accelerate timelines, or they might hit fundamental barriers that relegate them to niche applications. Nobody knows which trajectory we’re on because we’re still in the early chapters and quantum mechanics hasn’t revealed its plot.
What’s certain is that the next decade will involve continued hardware improvements, ongoing research into quantum algorithms, persistent overhyping of incremental progress, and gradual clarification of where quantum computers actually provide advantages versus where they’re expensive distractions. Also certain is that whatever timeline researchers currently predict will prove optimistic, because quantum computing has consistently required more time than expected to achieve less than hoped.
Quantum internet or entanglement-based networking¶
The quantum internet is a proposed network where quantum states are transmitted between nodes using entanglement rather than classical data transmission. This would enable quantum key distribution across long distances, distributed quantum computing where multiple quantum processors collaborate on computations, and quantum sensor networks that exploit entanglement for enhanced precision. It’s scientifically fascinating, technically challenging, and might actually happen eventually.
Current quantum communication uses quantum key distribution over optical fibres or free-space links between two points. This works for distances up to a few hundred kilometres but doesn’t scale to global networks because quantum signals decay too quickly. Quantum repeaters, which would extend quantum signals over long distances much as classical repeaters regenerate classical signals, are theoretically possible but practically difficult. Conventional amplification requires copying quantum states, which the no-cloning theorem forbids, so quantum repeaters must instead rely on entanglement swapping and quantum error correction, which adds substantial complexity.
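The core repeater primitive, entanglement swapping, can be illustrated with a small statevector calculation: a sketch in plain NumPy rather than any particular networking stack. Two Bell pairs (A–B and C–D) are combined, and a Bell-basis measurement on the middle qubits B and C leaves the outer qubits A and D entangled, even though they never interacted directly.

```python
import numpy as np

# |Phi+> = (|00> + |11>) / sqrt(2), the Bell pair a repeater link distributes
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Two independent pairs: A entangled with B, C entangled with D.
# Qubit order in the 16-dimensional state is (A, B, C, D).
state = np.kron(bell, bell).reshape(2, 2, 2, 2)

# A Bell measurement on B and C projects them onto a Bell state
# (here the |Phi+> outcome); contract those middle indices away.
ad = np.einsum('abcd,bc->ad', state, bell.reshape(2, 2))

# This particular outcome occurs with probability 1/4.
prob = np.sum(np.abs(ad) ** 2)

# Renormalise: qubits A and D now form a Bell pair themselves.
ad = ad.flatten() / np.sqrt(prob)
print(prob, ad)
```

The other three Bell outcomes leave A and D entangled as well, up to local corrections; in a real repeater those corrections are applied using the classically communicated measurement result, which is why repeater links can never beat classical signalling on latency.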
China has demonstrated quantum communication between satellites and ground stations, establishing quantum links over thousands of kilometres using space-based relays. This proves long-distance quantum communication is possible but doesn’t constitute a quantum internet. It’s point-to-point communication rather than a network where arbitrary nodes can communicate quantumly. Building actual quantum networks requires quantum repeaters, quantum routers, and quantum network protocols, none of which exist in practical form yet.
The timeline for quantum internet deployment depends on solving multiple technical challenges simultaneously. Quantum memory must store quantum states reliably while waiting for network operations. Quantum repeaters must work at sufficient speeds and fidelities to enable practical communication. Quantum network protocols must handle routing, congestion, and reliability in ways that account for quantum mechanical constraints. These problems are being actively researched but remain far from solved.
Applications of quantum internet include secure communications using quantum key distribution extended to global scale, distributed quantum computing where multiple quantum processors work together on large problems, and quantum sensor networks that exploit entanglement for precise measurements. These applications are genuinely useful if quantum internet becomes practical. Whether it does depends on whether the technical challenges prove tractable and whether the advantages justify the substantial infrastructure costs.
The realistic timeline is decades. Demonstrating quantum internet principles in laboratory settings is already happening. Building metropolitan-scale quantum networks is plausibly achievable within ten to fifteen years. Deploying global quantum internet infrastructure comparable to today’s classical internet requires solving problems that don’t yet have solutions and building infrastructure that costs far more than classical networking. It might happen eventually, but “eventually” is measured in decades at minimum, possibly longer.
Fault-tolerant quantum computers or decades away, always¶
Fault-tolerant quantum computing means quantum computers with sufficient error correction that they can run arbitrarily long computations reliably. This requires error rates below specific thresholds, sufficient physical qubits to implement error correction with adequate redundancy, and error correction protocols that work in practice, not just in theory. Achieving fault tolerance has been the goal since quantum computing began, and it remains the goal decades later with timelines that perpetually hover around “ten to twenty years away.”
Current quantum computers are noisy intermediate-scale quantum devices. They have dozens to hundreds of qubits with error rates around 0.1 to 1 percent and no error correction. These systems can run short quantum circuits producing approximate results for limited problems. They’re useful for research and algorithm development but not for practical applications requiring reliable computation.
The path to fault tolerance requires multiple advances simultaneously. Physical qubit error rates must decrease to around 0.01 percent or better. Systems must scale to thousands or millions of physical qubits. Error correction must be implemented efficiently without consuming all quantum resources for correction overhead. Control electronics must manage these larger systems with sufficient precision. None of these requirements has been fully met yet.
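The overhead arithmetic behind these requirements can be sketched with a commonly quoted rough surface-code model; the prefactor, threshold, and qubit-count formula below are illustrative assumptions, not figures for any specific device. The logical error rate is modelled as roughly 0.1 · (p/p_th)^((d+1)/2) for code distance d, with about 2d² physical qubits per logical qubit.

```python
def required_distance(p_phys, p_threshold, p_logical_target, prefactor=0.1):
    """Smallest odd code distance d whose modelled logical error rate
    prefactor * (p_phys / p_threshold) ** ((d + 1) / 2)
    falls at or below the target. Illustrative rough model only."""
    d = 3
    while prefactor * (p_phys / p_threshold) ** ((d + 1) / 2) > p_logical_target:
        d += 2
    return d

# Physical error rate 1e-3, assumed threshold 1e-2,
# target of roughly 5e-12 errors per logical operation.
d = required_distance(1e-3, 1e-2, 5e-12)
physical_per_logical = 2 * d * d   # rough surface-code footprint
print(d, physical_per_logical)     # d = 21, ~882 physical qubits here
```

Under these assumed numbers, a thousand logical qubits would need nearly a million physical ones, which is why the overhead, not the qubit count alone, dominates the conversation.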
Google, IBM, IonQ, and other quantum computing companies are making steady progress. Qubit counts increase gradually, error rates decrease slowly, and error correction demonstrations become more sophisticated. This progress is real but incremental. Moving from today’s noisy systems to fault-tolerant quantum computers requires improvements in multiple areas by factors of ten to a thousand, which takes time even with sustained research investment.
Optimistic timelines suggest fault-tolerant quantum computers might arrive in the 2030s. Pessimistic timelines suggest they might never arrive if fundamental barriers emerge or if engineering challenges prove intractable. Realistic timelines acknowledge massive uncertainty and suggest that fault tolerance is achievable in principle but might take considerably longer than optimists hope.
The implication for quantum ML is that practical quantum machine learning requires fault-tolerant quantum computers for most applications. Current noisy quantum computers can demonstrate quantum ML algorithms on toy problems but can’t outperform classical ML on realistic tasks. Once fault tolerance arrives, quantum computers might provide genuine advantages for specific ML problems. Until then, quantum ML remains largely experimental regardless of how many research papers describe quantum neural networks or quantum kernel methods.
Quantum machine learning maturity or lack thereof¶
Quantum machine learning is currently in the phase of scientific development where researchers demonstrate proof-of-concept algorithms, publish papers showing theoretical speedups, and occasionally run small demonstrations on quantum hardware. This is valuable research that advances understanding of what quantum computers might eventually do for machine learning. It’s not mature technology ready for practical deployment.
The data loading problem remains fundamental. Classical ML datasets contain millions or billions of classical data points. Loading this data into quantum states requires time and quantum operations that often eliminate theoretical quantum speedups. Until this problem has practical solutions, quantum ML algorithms with impressive theoretical complexity remain impractical for real-world datasets.
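The asymmetry is easy to state concretely. Amplitude encoding packs a length-N classical vector into the amplitudes of only ⌈log₂ N⌉ qubits, but preparing an arbitrary such state generally takes on the order of N gates, so the loading step alone can cost as much as touching every data point classically. A NumPy sketch of the encoding arithmetic (the state, not the circuit that would prepare it):

```python
import numpy as np

def amplitude_encode(x):
    """Map a classical vector to a normalised amplitude vector and report
    how many qubits it occupies. Note: this computes the *target* state;
    a circuit preparing it generally needs O(len(state)) gates."""
    x = np.asarray(x, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    state = np.zeros(2 ** n_qubits)
    state[:len(x)] = x                  # zero-pad up to a power of two
    state /= np.linalg.norm(state)      # quantum states are unit vectors
    return state, n_qubits

state, n = amplitude_encode([3.0, 4.0])
# Two features fit in one qubit, with amplitudes (0.6, 0.8).

_, n_million = amplitude_encode(np.ones(1_000_000))
# A million features fit in just 20 qubits, but state preparation
# still scales with the million amplitudes, not with the 20 qubits.
```

The exponential compression is real, and so is the exponential preparation cost; quantum ML papers that count only the former are the ones hiding the caveat.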
Quantum advantage for ML remains unproven on practically relevant problems. Demonstrations of quantum ML algorithms running on quantum hardware generally involve tiny datasets, simplified problems, or comparisons against naive classical algorithms rather than state-of-the-art classical ML. Papers claiming quantum advantages often hide caveats in assumptions about data access models or problem structure that don’t hold for real ML applications.
Classical machine learning continues improving rapidly. Neural network architectures become more sophisticated, training algorithms more efficient, and hardware more powerful. GPUs, TPUs, and specialised AI accelerators provide enormous computational power for classical ML. Quantum ML must not only work but must work well enough to justify its substantial additional complexity and cost compared to classical alternatives that keep getting better.
The realistic trajectory is that quantum ML remains primarily a research field for the next decade or two. Researchers will continue exploring quantum algorithms, understanding their fundamental capabilities and limitations, and identifying specific problem structures where quantum advantages might emerge. Practical deployment awaits fault-tolerant quantum computers, solutions to the data loading problem, and clear demonstrations that quantum approaches outperform state-of-the-art classical methods on problems people actually care about.
Exceptions might emerge for specialised applications. Quantum ML for processing data from quantum sensors could provide advantages because the data is already quantum. Quantum ML for certain quantum chemistry or materials science applications might prove useful once quantum computers can handle relevant problem sizes. These niche applications are plausible before general quantum ML maturity.
The broader field of machine learning will remain overwhelmingly classical. Neural networks will continue training on GPUs and making predictions on CPUs or edge devices. Classical algorithms will continue handling the vast majority of ML workloads because they work, they scale, and they’re economically viable. Quantum ML, if it matures at all, will be a specialised tool for specific problems rather than a general replacement for classical ML.
Integration with classical ML pipelines¶
Any practical quantum ML deployment will involve extensive classical computing infrastructure with quantum computers handling specific subroutines. The integration challenges are substantial and often underestimated in research focusing on quantum algorithms without considering deployment practicalities.
Classical preprocessing must prepare data for quantum computation, which involves encoding classical data into quantum states, selecting relevant features, and potentially reducing dimensionality to fit limited qubit counts. This preprocessing must run efficiently because if it takes longer than classical ML would take to solve the entire problem, the quantum advantage disappears. Optimising classical preprocessing for quantum consumption is a substantial engineering challenge distinct from quantum algorithm development.
Hybrid quantum-classical algorithms require repeated communication between classical and quantum systems. Each iteration involves classical optimisation, quantum circuit execution, measurement, classical post-processing, and parameter updates. The latency of this communication loop affects algorithm performance significantly. Cloud-based quantum processors add network latency. On-premises quantum computers require careful software architecture to minimise communication overhead. None of this is automatic; all of it requires careful engineering.
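The shape of that loop can be shown with a deliberately tiny example: a one-qubit variational circuit simulated exactly in NumPy, with gradients from the parameter-shift rule. In a real deployment the `energy` call would dispatch a circuit to quantum hardware, and every evaluation would pay the round-trip latency just described.

```python
import numpy as np

def energy(theta):
    """<Z> for the state RY(theta)|0>. On real hardware this would be a
    circuit execution plus measurement, not a closed-form cosine."""
    return np.cos(theta)

def parameter_shift_grad(theta):
    # Exact gradient from two extra circuit evaluations (parameter-shift rule).
    return (energy(theta + np.pi / 2) - energy(theta - np.pi / 2)) / 2

# Classical optimiser driving the quantum subroutine: plain gradient descent.
theta, lr = 0.1, 0.4
for _ in range(100):        # each iteration = one classical-quantum round trip
    theta -= lr * parameter_shift_grad(theta)

print(theta, energy(theta))  # converges towards theta = pi, energy = -1
```

Even this toy needs two hundred extra circuit executions for its gradients. Scale that to thousands of parameters with network latency on every execution, and the communication loop, not the quantum computation, becomes the bottleneck.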
Classical post-processing must interpret quantum measurement results, apply error mitigation, validate outputs, and integrate quantum results into downstream systems. This post-processing is often more complex than the quantum computation itself because quantum outputs are noisy probability distributions rather than clean results. Building robust post-processing that handles quantum noise gracefully is essential for reliable systems.
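One standard post-processing step, readout error mitigation by confusion-matrix inversion, illustrates the flavour. The calibration numbers below are invented for the sketch; in practice the matrix would come from measuring known basis states on the actual device.

```python
import numpy as np

# Calibration: column j holds the measured-outcome distribution when the
# true state was |j>. These particular numbers are illustrative.
confusion = np.array([[0.95, 0.02],
                      [0.05, 0.98]])

def mitigate_readout(measured_probs, confusion_matrix):
    """Invert the calibrated confusion matrix to estimate the true outcome
    distribution, then clip and renormalise, because inverting finite-shot
    estimates can produce values outside [0, 1]."""
    est = np.linalg.solve(confusion_matrix, measured_probs)
    est = np.clip(est, 0.0, None)
    return est / est.sum()

true_probs = np.array([0.7, 0.3])
observed = confusion @ true_probs         # what the noisy readout reports
recovered = mitigate_readout(observed, confusion)
print(observed, recovered)                # recovered is back at [0.7, 0.3]
```

With exact probabilities the recovery is perfect; with finite shot counts it is only an estimate, which is why the clipping and renormalisation matter and why downstream systems must still treat mitigated outputs as noisy.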
ML pipeline orchestration tools like Apache Airflow, Kubeflow, or MLflow weren’t designed for quantum computing. Integrating quantum steps into these pipelines requires custom components, monitoring, and error handling. Quantum circuit execution is slower and less reliable than classical computation, which affects pipeline design. Retries, fallbacks, and graceful degradation must account for quantum-specific failure modes.
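The retry-and-fallback pattern itself is ordinary classical engineering. A minimal sketch, with `run_quantum` and `run_classical` as placeholders for whatever the real pipeline step would call, not any particular framework's API:

```python
import logging

logger = logging.getLogger("hybrid-pipeline")

def run_with_fallback(run_quantum, run_classical, max_retries=3):
    """Try the quantum step a bounded number of times, then degrade
    gracefully to the classical implementation."""
    for attempt in range(1, max_retries + 1):
        try:
            return run_quantum(), "quantum"
        except Exception as exc:
            logger.warning("quantum attempt %d/%d failed: %s",
                           attempt, max_retries, exc)
    return run_classical(), "classical-fallback"

# A quantum step that fails twice (queue timeout) before succeeding:
attempts = []
def flaky_quantum():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("queue timeout")
    return 42

result, source = run_with_fallback(flaky_quantum, lambda: 41)
print(result, source)  # prints: 42 quantum
```

Tagging each result with its source matters: downstream monitoring needs to know how often the pipeline actually ran quantumly versus quietly falling back, or the fallback path hides a dead quantum component indefinitely.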
Model serving and inference are predominantly classical. Even if quantum computers help train ML models, inference typically happens on edge devices, mobile phones, or cloud servers that won’t have quantum processors. This means quantum-trained models must export to classical formats, which limits what quantum training can achieve. The inference constraints must inform training design from the start.
Monitoring and observability for quantum-classical pipelines requires tracking both classical metrics like latency and throughput and quantum metrics like circuit fidelity and error rates. Standard monitoring tools don’t understand quantum computations. Custom monitoring that correlates quantum hardware status with ML pipeline performance is necessary but not straightforward to implement.
The realistic path forward involves incremental integration. Start with classical ML pipelines that work. Add quantum components for specific subroutines where quantum might help. Maintain classical fallbacks so the system remains operational when quantum components fail. Gradually expand quantum integration as hardware improves and as you develop expertise. Avoid big-bang quantum migrations that replace working classical systems with unproven quantum alternatives.
Why we’re still probably ten to twenty years from practical quantum ML¶
The timeline for practical quantum ML deployment is at least a decade and more likely two or more, assuming progress continues and no fundamental barriers emerge. This timeline frustrates stakeholders expecting faster progress but reflects the accumulated challenges that must be overcome for quantum ML to become genuinely useful.
Quantum hardware must improve by factors of ten to a thousand across multiple metrics simultaneously. Qubit counts must increase from hundreds to millions. Error rates must decrease from around 0.1 percent to 0.01 percent or better. Coherence times must extend from microseconds to milliseconds or longer. Connectivity must improve so qubits can interact more flexibly. Each of these improvements is being pursued actively, but achieving them all requires sustained research investment over many years.
Error correction must become practical reality rather than theoretical possibility. Current error correction demonstrations show the principles work but consume most quantum resources for correction overhead. Practical error correction requires crossing thresholds where correction overhead becomes manageable and where systems scale to thousands of logical qubits. This requires both hardware improvements and algorithmic advances, neither of which happens quickly.
Quantum algorithms must demonstrate clear advantages over classical methods on problems people actually face. Current demonstrations compare quantum algorithms against classical baselines that are often not state-of-the-art. Demonstrating quantum advantages requires rigorous comparison against the best classical algorithms running on modern hardware, which is challenging because classical ML is extraordinarily effective. Finding problems where quantum genuinely helps requires extensive exploration that’s barely begun.
The data loading problem requires solutions that work for realistic datasets. Theoretical quantum algorithms assume efficient quantum data access that doesn’t exist for classical datasets. Practical solutions might involve quantum random access memory, clever encoding schemes, or hybrid approaches that avoid full data loading. These solutions are actively researched but remain immature. Without solving data loading, quantum ML algorithms remain theoretical exercises.
Software ecosystems must mature. Classical ML benefits from frameworks like TensorFlow, PyTorch, and scikit-learn that make ML accessible to practitioners without deep algorithmic expertise. Quantum ML needs equivalent frameworks, debugging tools, and educational resources. Building this ecosystem takes years of community effort, which is only beginning for quantum ML.
Economic viability must be demonstrated. Quantum computers are expensive to build and operate. Quantum ML must provide sufficient advantages to justify these costs compared to classical alternatives. This requires not just technical quantum advantages but advantages large enough that the total cost of quantum deployment is less than classical alternatives. Achieving this economic viability might never happen for many ML applications.
Industry adoption requires risk tolerance that most organisations lack. Deploying quantum ML to production systems means accepting higher failure rates, less reliability, and more complexity than classical systems. Most organisations adopt new technology cautiously, waiting for others to prove viability. Quantum ML is far from the proven reliability required for mainstream adoption.
Training and expertise must develop. Quantum ML requires knowledge spanning quantum mechanics, quantum computing, and machine learning. This expertise is rare and takes years to develop. Educational programmes are emerging but will take time to produce the workforce needed for widespread quantum ML deployment. Without this workforce, adoption is limited regardless of hardware maturity.
The accumulated timeline for hardware improvements, algorithmic development, software ecosystem maturity, economic validation, and workforce development is measured in decades. Optimists might argue for ten years, but twenty is more realistic given historical technology adoption patterns and the specific challenges facing quantum computing. The timeline might be shorter if breakthroughs occur or longer if fundamental barriers emerge.
This doesn’t mean quantum ML research should stop. Research is necessary to determine what’s possible and what isn’t. But realistic planning should assume that practical quantum ML deployment remains distant, that most ML workloads will remain classical, and that quantum ML will likely be a specialised tool for specific problems rather than a general ML platform.
The future of quantum ML is genuinely uncertain in ways that make prediction difficult. We might be at the beginning of a transformative technology that reshapes computing over the next few decades. We might be exploring an interesting but ultimately niche capability that finds limited practical application. We might discover fundamental barriers that prevent quantum ML from ever achieving practical advantages for most problems. All of these outcomes remain plausible, and distinguishing between them requires patience, continued research, and honest assessment of progress versus hype as both unfold over the coming years and decades.