Quantum computing has always had a strange relationship with belief.
For some, it’s the next great shift in computing. The thing that could change materials science, drug discovery, optimisation, cybersecurity, and parts of artificial intelligence. For others, it sits in the same mental folder as fusion power.
Always promising. Always expensive. Always just far enough away to make the next funding round sound reasonable. That scepticism isn’t foolish. In many ways, it’s healthy.
Enterprise leaders have seen enough technology hype cycles to know how this usually goes. A breakthrough becomes a headline. The headline becomes a market narrative. The market narrative becomes a boardroom question.
Then someone in IT has to explain why the thing everyone is suddenly excited about isn’t ready to be deployed across a live business environment by Friday. Quantum computing is very much living inside that tension.
The technology is progressing. Google’s Willow chip showed meaningful progress in quantum error correction, while IBM has set out a roadmap toward a fault-tolerant quantum computer by 2029. But the gap between a major research milestone and a commercially useful enterprise system is still wide.
Google’s own research blog noted that many useful quantum applications would require billions, if not trillions, of reliable operations, while current devices still need major performance improvements. So the real question isn’t whether quantum computing doubters are right or wrong.
That’s too simple, and simple is where this topic starts getting sloppy. The better question is this: which doubts are grounded in reality, and which ones could leave enterprises underprepared for a risk that’s already moving closer?
Why Quantum Computing Still Feels Distant
Quantum computing still feels distant because the technology hasn’t crossed the line from impressive demonstration to broad enterprise utility. It can do things classical computing can’t easily do in controlled settings, but most organisations still can’t point to a clear, production-ready business case.
That’s the uncomfortable middle ground. Quantum is real. The science is real. The progress is real. But for most enterprises, the practical value is still hard to hold in your hand.
The Organisation for Economic Co-operation and Development (OECD) described quantum computing in March 2026 as “highly promising and highly uncertain.” Its report found that business readiness is still held back by limited technological maturity, unclear use cases, high costs, training needs, and shortages of talent that combines quantum expertise with industry knowledge.
That’s a fairly neat summary of why scepticism is rising. It’s not because people don’t understand the potential. It’s because potential doesn’t automatically translate into value.
The gap between research breakthroughs and usable systems
Quantum research often moves in milestones. A chip improves. An error rate drops. A benchmark is beaten. A new roadmap is published. These are meaningful developments, but they don’t mean the average enterprise can suddenly hand its logistics problems, financial modelling, or cybersecurity workload to a quantum machine.
That difference matters.
A research breakthrough proves that the field is moving. A usable system proves that the movement matters to the business.
Right now, most quantum computing challenges still come down to stability, scale, and error correction. Qubits, the quantum version of classical bits, are fragile. They’re affected by noise, interference, and tiny environmental changes. That means quantum computers can make errors long before they complete the kind of calculation that would matter commercially.
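To make that fragility concrete, here’s a rough, illustrative calculation of how per-operation errors compound. The 0.1% error rate below is a made-up figure for illustration, not a benchmark for any particular machine:

```python
# Illustrative arithmetic only: how small per-operation error rates
# compound over a long computation. The 0.1% rate is hypothetical.

def error_free_probability(per_op_error: float, num_ops: int) -> float:
    """Chance that num_ops operations all succeed, assuming independent
    errors at a fixed per-operation rate."""
    return (1 - per_op_error) ** num_ops

p = 0.001  # hypothetical 0.1% error per operation
for n in (1_000, 10_000, 1_000_000):
    print(f"{n:>9,} ops: {error_free_probability(p, n):.6f}")

# ~0.37 after 1,000 operations, ~0.00005 after 10,000, effectively zero
# after a million. Useful algorithms need billions of operations, which
# is why error correction matters more than raw qubit counts.
```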
This is why headlines about “quantum supremacy” or “quantum advantage” need to be read carefully. A machine can beat a classical computer on a narrow benchmark and still not solve a real business problem better than the systems already in use.
That doesn’t make the achievement meaningless. It just means the achievement isn’t the finish line.
Why fault tolerance is the real milestone that hasn’t been reached yet
The real milestone is fault-tolerant quantum computing.
In plain English, fault tolerance means a quantum computer can detect and correct its own errors while it keeps working. That’s essential because useful quantum computing will require long, complex calculations. If the machine can’t manage errors as it goes, the final answer becomes unreliable.
Think of it like trying to copy a book by hand while someone keeps shaking the table. You might get a few words right. You might even get a page right. But if the shaking continues, you’re not going to produce a clean manuscript. Fault tolerance is the system that steadies the table, spots the mistakes, and keeps the work usable.
IBM defines fault-tolerant quantum computing as the ability to detect and correct errors in real time so the computer can continue delivering accurate results. It also says useful fault-tolerant systems need to run hundreds of millions of logical operations, not just small experimental demonstrations.
That’s why qubit count alone isn’t enough. More qubits sound impressive, but more fragile qubits can also mean more opportunities for errors. What matters is whether those qubits can be organised into stable logical qubits that support reliable computation.
And that’s still the hard part.
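For intuition, here’s a deliberately simplified, classical sketch of the redundancy idea behind logical qubits. Real quantum error correction, such as the surface codes used in current research, is far more involved because quantum states can’t simply be copied, but the underlying trade-off is similar: spend several noisy physical units to get one more reliable logical unit.

```python
# Toy classical repetition code: rough intuition for logical qubits.
# Real quantum codes are far more complex (quantum states can't be
# copied), but the trade-off is similar: several noisy physical units
# buy one more reliable logical unit.
import random

def noisy_copy(bit: int, error_rate: float) -> int:
    """Flip the bit with probability error_rate."""
    return bit ^ (random.random() < error_rate)

def logical_read(bit: int, copies: int, error_rate: float) -> int:
    """Encode one logical bit as `copies` noisy physical bits, then
    decode by majority vote."""
    votes = sum(noisy_copy(bit, error_rate) for _ in range(copies))
    return int(votes > copies / 2)

random.seed(0)
trials = 100_000
for copies in (1, 3, 7):
    errors = sum(logical_read(0, copies, 0.05) for _ in range(trials))
    print(f"{copies} physical per logical: error rate ~{errors / trials:.4f}")

# More redundancy pushes the logical error rate below the physical one,
# but only while the physical error rate stays under the code's threshold.
```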
The commercial use case problem
The commercial case for quantum computing is strongest where classical systems hit real limits. That includes molecular simulation, advanced materials, some optimisation problems, and certain finance or logistics scenarios.
But “could be useful” and “is worth enterprise investment now” aren’t the same sentence.
The OECD’s 2026 report argues that quantum readiness shouldn’t be treated as a rush toward near-term production deployment. It frames readiness as early exposure, capability building, and learning how the technology may become relevant as it matures.
That’s a sensible way to think about it.
Most enterprises don’t need a quantum computing department tomorrow. They do need to understand where quantum might affect their risk profile, which business problems may eventually fit, and what skills or partnerships they’d need if the technology matures faster than expected.
For now, the doubters have a point. The commercial use case is still uneven. The return on investment is still hard to prove. And for many teams already dealing with cloud complexity, artificial intelligence governance, cyber resilience, and budget pressure, quantum can feel like one more expensive future problem standing in the doorway.
Not wrong. Just early. Which, in enterprise technology, is often another way of saying “not yet funded.”
The Fusion Comparison Isn’t As Far Off As It Sounds
The comparison between quantum computing and fusion research sounds harsh, but it isn’t entirely unfair.
Both fields are scientifically serious. Both attract major investment. Both promise enormous impact if they reach maturity. And both have spent years living in the gap between technical progress and practical delivery.
That’s exactly why sceptics reach for the analogy.
Why long timelines and heavy investment fuel scepticism
Long timelines create doubt because they make progress harder to judge.
If a technology is always five, 10, or 20 years away, leaders eventually stop hearing the details. They just hear the pattern. Another breakthrough. Another roadmap. Another major claim about future impact. After a while, even legitimate progress starts to sound like background noise.
Quantum computing has also attracted heavy funding from governments, technology companies, and start-ups. That level of investment can be a sign of confidence. It can also raise expectations faster than the technology can meet them.
That’s where quantum computing hype becomes a problem.
The hype doesn’t usually come from the scientists doing the careful work. It comes from the translation layer between research, market excitement, and executive expectation. By the time a technical milestone reaches a non-specialist audience, it can start sounding much closer to commercial reality than it actually is.
That gap feeds scepticism. And honestly, it should.
Enterprise leaders shouldn’t be expected to treat every research advance as a buying signal. They need to ask what has changed, what hasn’t, and what the operational consequence actually is.
Where the comparison breaks down
The fusion comparison starts to break down because quantum computing doesn’t need one single “it works now” moment to matter.
Fusion is often framed as close to binary from a commercial perspective. Either it delivers practical, sustained, economically useful energy, or it doesn’t. Quantum is different.
Quantum progress can matter incrementally.
Better error correction matters. Better chips matter. Better hybrid quantum-classical workflows matter. Better quantum-safe cryptography matters. Even if full-scale fault-tolerant quantum computing is still years away, the surrounding ecosystem is already moving.
Google’s Willow chip, announced in December 2024, showed that error-corrected qubits can improve as the qubit array grows. Google described this as the first quantum processor where error-corrected qubits get exponentially better as they scale.
That doesn’t mean Willow is ready to break encryption or transform enterprise computing. It isn’t. But it does weaken the idea that nothing useful is happening until one dramatic breakthrough arrives.
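To see why “exponentially better” is the phrase that matters, here’s a back-of-envelope sketch. It assumes the logical error rate roughly halves with each step up in code distance, which is the behaviour Google reported at distances three, five, and seven; the starting figure and the assumption that the halving continues at larger distances are illustrative, not measurements.

```python
# Back-of-envelope sketch of exponential error suppression. Assumes the
# logical error rate roughly halves per code-distance step (the trend
# Google reported for Willow); the starting rate is a placeholder.

logical_error = 3e-3        # hypothetical rate at distance 3
suppression_per_step = 2    # approximate halving per step

for distance in (3, 5, 7, 9, 11, 15, 25):
    steps = (distance - 3) // 2
    rate = logical_error / suppression_per_step ** steps
    print(f"distance {distance:>2}: ~{rate:.1e}")

# Each extra step buys a constant factor, so reliability improves
# exponentially with scale, provided the hardware stays below the
# error-correction threshold. That is why the result matters even
# though no current chip is near commercial workloads.
```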
Quantum may not arrive like a switch being flipped.
It may arrive as a series of uncomfortable updates to the enterprise risk register.
Incremental Progress Is Already Changing The Conversation
The strongest argument against quantum scepticism isn’t that doubters are wrong about timelines. It’s that timelines aren’t the only thing that matters.
Quantum computing can still be immature while also being strategically relevant. That’s the part that gets missed when the debate becomes too binary.
Why incremental improvements still matter
Incremental improvements matter because quantum computing is an engineering problem as much as a science problem now.
The field doesn’t just need more theory. It needs better hardware, lower error rates, stronger control systems, scalable architectures, improved software, and useful ways to connect quantum resources with classical computing environments.
IBM’s 2025 roadmap says it plans to deliver IBM Quantum Starling, a large-scale fault-tolerant quantum computer, by 2029. The company says Starling is intended to run 100 million quantum gates on 200 logical qubits. It also states that roadmap details represent its current intent and are subject to change, which is exactly the kind of caveat enterprises should pay attention to.
That’s not a guarantee. It’s a signal.
The important part isn’t that every roadmap date will land perfectly. Roadmaps in emerging technology have a habit of walking around when no one’s looking. The important part is that major players are moving from vague promise toward specific engineering targets.
That changes the conversation from “is quantum real?” to “what happens if progress keeps compounding?”
Early enterprise-relevant use cases
The clearest near-term enterprise use cases are not general-purpose quantum computers replacing classical systems. That’s not where we are.
The more realistic path is hybrid.
Hybrid quantum-classical computing means quantum systems handle specific parts of a problem while classical systems do the rest. That matters because enterprises already run complex environments. They don’t need another isolated technology island. They need systems that can fit into existing workflows, or at least near them.
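In code terms, the hybrid pattern often looks like a loop: a classical optimiser proposes parameters, a quantum backend scores them, and the loop repeats. The sketch below is purely schematic; the “quantum” step is stubbed out with an ordinary function, and every name in it is illustrative rather than any vendor’s API.

```python
# Schematic of the hybrid quantum-classical pattern (a variational loop).
# `evaluate_on_quantum_backend` is a stand-in stub; in practice it would
# submit a parameterised circuit to hardware or a simulator via an SDK.
import random

def evaluate_on_quantum_backend(params: list[float]) -> float:
    """Stub for the quantum step: score a set of circuit parameters.
    Here it is just a classical toy cost function."""
    return sum((p - 0.5) ** 2 for p in params)

def classical_optimiser_step(params, cost_fn, step=0.1):
    """One crude random-search step: keep a perturbation if it helps."""
    candidate = [p + random.uniform(-step, step) for p in params]
    return candidate if cost_fn(candidate) < cost_fn(params) else params

random.seed(1)
params = [random.random() for _ in range(4)]
for _ in range(200):
    params = classical_optimiser_step(params, evaluate_on_quantum_backend)

print(f"final cost: {evaluate_on_quantum_backend(params):.5f}")

# The quantum device only handles the inner evaluation. Orchestration,
# optimisation, and data handling stay on classical infrastructure,
# which is what makes the pattern deployable inside existing estates.
```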
The OECD points to potential commercial applications across pharmaceuticals, advanced materials, energy, finance, and transportation. It also stresses that firms should avoid pursuing quantum solutions for problems that classical systems can already handle efficiently.
That’s the practical line.
Quantum shouldn’t be treated as a prestige project. If a classical system can solve the problem well enough, use the classical system. The strategic value sits in the bottlenecks: the problems that are too complex, too slow, or too costly for current methods.
That might include molecular modelling for drug discovery, new material design, portfolio optimisation, supply chain modelling, or energy system simulation. But each use case still needs to prove itself against cost, access, skills, and integration reality.
Why “mainstream adoption” won’t be a single moment
A lot of quantum conversations still circle around the idea of “Q-day.” In cybersecurity, this usually means the point when a quantum computer becomes powerful enough to break widely used public-key cryptography. In broader technology discussions, it can mean the day quantum becomes impossible to ignore.
But mainstream adoption probably won’t feel like a single day.
It’ll be slower and messier than that. Some industries will move first because their problems fit the technology better. Some security teams will move early because their data has long-term value. Some organisations will wait until vendors bake quantum-related capabilities into platforms they already use.
That’s usually how enterprise technology really arrives. Not as a trumpet blast. More like a leak in the ceiling. At first it’s small enough to ignore. Then one day there’s a bucket in the hallway and everyone’s pretending they saw it coming.
The practical point is simple: enterprise quantum adoption is likely to be uneven. Waiting for universal maturity may feel safe, but it can also delay the preparation that takes the longest.
The Real Risk Isn’t If Quantum Arrives But How It Arrives
For enterprises, the most immediate quantum issue isn’t whether they’ll use a quantum computer to solve business problems. It’s whether quantum progress changes the security assumptions their current systems depend on.
That’s where the conversation gets more serious.
Quantum computing doesn’t need to be commercially mainstream to create risk. It only needs to become strong enough, in the hands of the wrong actor, to weaken the cryptography protecting sensitive data.
Why encryption is at the centre of the quantum debate
Encryption is central to the quantum debate because much of today’s digital trust depends on maths that quantum computers may eventually solve much faster than classical computers.
This doesn’t mean every form of encryption breaks in the same way.
The biggest concern is public-key cryptography, including systems such as Rivest-Shamir-Adleman (RSA) and Elliptic Curve Cryptography (ECC). These are widely used for secure web traffic, digital signatures, identity, software updates, and key exchange. A sufficiently capable quantum computer running Shor’s algorithm could efficiently solve the hard maths problems, integer factorisation and discrete logarithms, that protect those systems.
The National Institute of Standards and Technology (NIST) explains that a mature “cryptographically relevant” quantum computer could solve some of these problems in days or hours, instead of the billions of years estimated for conventional computers. NIST also says nobody knows exactly when such a machine will appear, with estimates ranging from a few years to a few decades.
That uncertainty is the whole problem.
The risk isn’t that Google’s Willow chip, or any current public quantum system, is about to break modern encryption. It isn’t. The risk is that migration takes years, sensitive data can remain valuable for decades, and adversaries don’t need enterprise teams to be ready. They just need them to be late.
Symmetric encryption, including the Advanced Encryption Standard (AES), is a different case. Quantum search algorithms such as Grover’s may reduce its effective security level, but this is not the same as saying “quantum breaks AES” in the same way it threatens RSA or ECC. That distinction matters because vague fear creates bad decisions. Precise risk creates useful ones.
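The arithmetic behind that distinction is worth seeing once. Grover’s algorithm offers at most a quadratic speedup on brute-force key search, while Shor’s algorithm solves the maths behind RSA and ECC in polynomial time, an outright break rather than a weakening:

```python
# Rough arithmetic behind the RSA/ECC vs AES distinction. Grover's
# algorithm speeds up brute-force search quadratically, roughly halving
# a symmetric key's effective strength. Shor's algorithm breaks the
# factoring and discrete-log problems behind RSA/ECC outright.

for key_bits in (128, 256):
    grover_bits = key_bits // 2  # search space shrinks to ~its square root
    print(f"AES-{key_bits}: ~2^{key_bits} classical guesses, "
          f"~2^{grover_bits} with Grover's algorithm")

# AES-128 drops toward 2^64 effective strength; AES-256 still leaves
# ~2^128. Hence the usual symmetric advice is "prefer longer keys",
# while RSA and ECC need replacing with post-quantum algorithms.
```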
The “harvest now, decrypt later” problem
The phrase “harvest now, decrypt later” sounds dramatic, but the idea is simple.
An attacker steals encrypted data today and stores it. They can’t read it yet. But if quantum capability advances enough in the future, they may be able to decrypt it later.
That makes some data vulnerable before a cryptographically relevant quantum computer exists.
Not all data has the same shelf life. A password reset token from last year probably doesn’t matter much now. But government records, intellectual property, legal documents, healthcare information, financial records, merger details, critical infrastructure designs, and classified communications may still matter 10 or 20 years from now.
That’s why post-quantum cryptography matters now.
Post-quantum cryptography refers to encryption methods designed to resist attacks from both conventional and future quantum computers. NIST finalised its first three post-quantum cryptography standards in August 2024, covering key establishment and digital signatures. It encouraged system administrators to begin transitioning to the new standards as soon as possible.
This is where quantum doubters can get caught.
They may be right that useful quantum computers are still years away. But if the systems that need upgrading also take years to identify, test, replace, and govern, then “years away” isn’t a comfort. It’s the migration window.
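Security researchers often compress this into Mosca’s inequality: if the time data must stay confidential plus the time migration takes exceeds the time until a cryptographically relevant quantum computer arrives, the exposure has already begun. A minimal sketch, with placeholder numbers rather than predictions:

```python
# Mosca's inequality: if x + y > z, harvested data is already at risk.
#   x = years the data must stay confidential
#   y = years a full cryptographic migration takes
#   z = years until a cryptographically relevant quantum computer
# The figures below are illustrative placeholders, not forecasts.

def already_exposed(shelf_life: float, migration: float, years_to_crqc: float) -> bool:
    return shelf_life + migration > years_to_crqc

x, y, z = 15, 7, 12  # hypothetical: long-lived data, slow migration
print(already_exposed(x, y, z))  # True: the ciphertext outlives its protection
```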
Why organisations can’t wait for certainty
Waiting for certainty sounds responsible. In this case, it can become the opposite.
Cryptographic migration is slow because encryption is everywhere. It sits inside applications, operating systems, identity systems, cloud services, certificates, devices, vendors, embedded systems, and forgotten legacy tools that nobody wants to touch because the person who understood them left in 2017.
The United Kingdom’s National Cyber Security Centre (NCSC) describes post-quantum cryptography migration as a national, multi-year technology change programme. Its guidance sets target milestones for large organisations: complete discovery and initial planning by 2028, carry out highest-priority migration activities by 2031, and complete migration by 2035.
Those dates are useful because they make the risk concrete.
This isn’t “panic now.” It’s “start the boring work early enough that it doesn’t become panic later.”
And yes, the boring work is usually where resilience lives.
Where Quantum Doubters Are Right
Quantum doubters are right about several things. The technology is not mature enough for broad commercial use. Timelines remain uncertain. Many business cases are still speculative. And a lot of public discussion does skip over the hardest engineering problems.
That matters because healthy scepticism protects enterprises from waste.
It keeps boards from funding shiny experiments that don’t connect to business value. It gives security leaders room to prioritise actual risk over vendor noise. It stops emerging technology strategy from turning into a vibes-based procurement process, which is nobody’s finest hour.
Timelines are still uncertain
The first thing doubters are right about is timing.
Even when vendors publish clear roadmaps, those roadmaps are still projections. They depend on technical progress, manufacturing capability, funding, talent, supply chains, and breakthroughs that may not arrive on schedule.
IBM’s 2029 goal is one of the clearest timelines in the market, but IBM itself notes that roadmap information reflects current intent and may change.
That doesn’t make the roadmap weak. It makes it honest.
Emerging technology timelines are always partly evidence and partly ambition. Enterprise leaders should read them as directional signals, not delivery guarantees.
Most enterprises aren’t ready to use quantum yet
Most enterprises aren’t ready because readiness requires more than interest.
It requires people who understand the technology, business leaders who know which problems matter, technical teams that can run controlled experiments, and governance structures that can decide when a pilot is useful rather than decorative.
The OECD found that readiness barriers include lack of C-level executive buy-in, general scepticism, lack of investment, integration challenges, skills shortages, and unclear use cases.
That’s not a small list. It’s basically the enterprise adoption bingo card.
And it’s why “just start using quantum” is poor advice. Most organisations would be better served by targeted learning, cryptographic inventory work, and selective monitoring than by trying to force a quantum use case into a business problem that doesn’t need one.
Hype has outpaced practical value in some areas
Quantum hype often creates the impression that the technology is closer to general business transformation than it really is.
That’s a problem because hype doesn’t just mislead optimists. It also hardens sceptics.
When people are repeatedly told that a technology is about to change everything, and then it doesn’t, they don’t become more nuanced. They become more dismissive. That can be just as risky as blind enthusiasm.
The better position is much less exciting, which is usually a good sign.
Quantum computing is making real progress. It’s not broadly enterprise-ready. Its strongest near-term relevance may sit in cybersecurity planning, capability building, and targeted research partnerships. Its long-term value could be significant, but it won’t apply evenly across every organisation or every workload.
That’s not a headline designed to light up a conference stage. It is, however, much closer to reality.
Where Quantum Doubters Are Underestimating The Risk
The weakness in the doubter position is that it can confuse “not ready for deployment” with “not relevant yet.”
Those aren’t the same thing.
A technology can be too immature for mainstream adoption and still mature enough to change planning assumptions. Quantum computing sits in exactly that awkward space.
Security timelines don’t wait for perfect technology
Security planning has to move before the threat fully materialises.
That’s annoying, but it’s true.
If organisations wait until a cryptographically relevant quantum computer exists, they’ll already be behind. By then, sensitive historical data may have been collected. Long-lived systems may still depend on vulnerable algorithms. Vendors may not be ready. Internal teams may not know where cryptography is used. And migration will be competing with every other urgent security priority.
NIST’s position is clear: post-quantum encryption standards are ready for use, and organisations should begin the transition to protect data in the quantum era.
This doesn’t mean every system needs to be ripped apart immediately. That would be chaos with a budget line. It means organisations need a controlled plan, starting with visibility.
Geopolitical competition changes the risk profile
Quantum computing isn’t just a technology race. It’s a strategic race.
Governments care because quantum capability could affect defence, intelligence, economic competitiveness, secure communications, and critical infrastructure. That means enterprise leaders can’t think about quantum as only a vendor market or research field.
The geopolitical angle changes the risk because it introduces uneven access.
If a powerful state or well-funded actor reaches meaningful capability before the wider market is prepared, the first impact may not look like a product launch. It may look like intelligence advantage. Or quiet decryption. Or pressure on systems that were assumed to be safe because no public quantum computer had reached that level yet.
That’s why the uncertainty cuts both ways.
We don’t know exactly when quantum systems will become cryptographically relevant. But we also don’t know who will reach that point first, how visible it will be, or how quickly defensive migration will catch up.
Waiting for clarity can become a strategic mistake
Enterprises often wait for clarity because clarity feels safe. But in technology risk, clarity sometimes arrives after the cost has already gone up.
Cloud is a useful comparison. Many organisations waited until cloud adoption became unavoidable before dealing with governance, cost control, identity, data movement, and operational ownership. The result wasn’t that cloud failed. The result was that complexity arrived faster than the operating model.
Quantum could follow a similar pattern.
The risk isn’t that every enterprise needs to become quantum-native. The risk is that leaders dismiss it completely, then discover later that their cryptographic dependencies, vendor contracts, sensitive data protection, and long-term infrastructure plans all needed attention earlier.
That’s where scepticism becomes less useful.
Scepticism should sharpen the plan. It shouldn’t cancel the planning.
What Enterprise Leaders Should Actually Do Now
Enterprise leaders don’t need to choose between quantum hype and quantum denial.
The practical path sits between them. Treat quantum computing as a long-term capability with near-term security implications. Don’t overinvest in speculative use cases. Don’t ignore cryptographic readiness. And don’t let the lack of certainty become an excuse for standing still.
Start with cryptographic visibility and inventory
The first step is knowing where cryptography exists across the business.
That sounds simple. It usually isn’t.
Most organisations have encryption scattered across applications, identity systems, databases, websites, cloud platforms, certificates, devices, application programming interfaces, vendor products, backups, and legacy infrastructure. Some of it is documented. Some of it is hidden inside systems no one has reviewed in years.
A useful cryptographic inventory should identify:
- Which systems use public-key cryptography
- Which algorithms are in place
- Which data those systems protect
- How long that data needs to remain confidential
- Which suppliers or platforms control part of the migration path
- Which systems are too old, fragile, or critical to change quickly
This is not glamorous work. It is exactly the kind of work that prevents future panic.
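As a taste of what the discovery step looks like in practice, here’s a minimal sketch that probes a single TLS endpoint and reports which public-key algorithm its certificate presents. It assumes Python’s third-party `cryptography` package is installed, and it only covers one narrow slice; a real inventory also has to reach internal applications, code signing, VPNs, databases, and embedded systems.

```python
# Minimal sketch of one inventory probe: which public-key algorithm
# does a TLS endpoint present? Assumes the `cryptography` package.
import socket
import ssl
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def probe_tls_key(host: str, port: int = 443) -> str:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    key = x509.load_der_x509_certificate(der).public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"{host}: RSA-{key.key_size} (quantum-vulnerable)"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"{host}: ECC {key.curve.name} (quantum-vulnerable)"
    return f"{host}: {type(key).__name__}"

print(probe_tls_key("example.com"))
```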
Track post-quantum standards and migration timelines
Standards are now mature enough to track seriously.
NIST finalised Federal Information Processing Standards (FIPS) 203, 204, and 205 in 2024. These cover a key encapsulation mechanism for secure key establishment and digital signature algorithms for authentication and integrity. NIST also selected HQC in 2025 as an additional backup algorithm based on a different mathematical approach.
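For teams who want to see what the new standards look like in code, here’s a sketch of the ML-KEM (FIPS 203) key-establishment flow. It assumes the open-source liboqs-python bindings, which is an implementation choice for illustration rather than a NIST recommendation; production systems should rely on vendor-supported, validated implementations.

```python
# Sketch of the ML-KEM (FIPS 203) key-establishment flow, assuming the
# open-source liboqs-python bindings ("oqs" module). Illustrative only;
# use validated, vendor-supported implementations in production.
import oqs

with oqs.KeyEncapsulation("ML-KEM-768") as receiver:
    # Receiver generates a key pair and publishes the public key.
    public_key = receiver.generate_keypair()

    # Sender encapsulates: derives a shared secret plus a ciphertext
    # that only the receiver's private key can open.
    with oqs.KeyEncapsulation("ML-KEM-768") as sender:
        ciphertext, sender_secret = sender.encap_secret(public_key)

    # Receiver decapsulates the ciphertext to recover the same secret.
    receiver_secret = receiver.decap_secret(ciphertext)

assert sender_secret == receiver_secret
# The shared secret then keys ordinary symmetric encryption (e.g. AES),
# replacing the RSA/ECDH key exchange that Shor's algorithm threatens.
```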
For enterprise teams, the practical move is not to memorise every algorithm name. It’s to build a migration approach that can adapt as standards, products, and vendor support evolve.
That means staying close to guidance from bodies like NIST and the NCSC, asking suppliers about their post-quantum plans, and aligning internal security roadmaps with realistic migration windows.
Treat quantum as a long-term capability with near-term implications
The balanced view is simple.
Quantum computing is not ready to transform most enterprise operations today. But the preparation work, especially around security, is already relevant.
That means leaders should separate quantum into two tracks.
The first track is capability watching. This includes monitoring hardware progress, identifying possible use cases, supporting small experiments where there’s a clear business reason, and building enough internal literacy to avoid being led entirely by vendors.
The second track is risk readiness. This includes cryptographic inventory, post-quantum migration planning, supplier engagement, and data classification based on how long information needs to remain protected.
One track can move slowly. The other needs to start now.
That’s the balance quantum doubters sometimes miss. You don’t have to believe every promise to prepare for the parts that are already becoming unavoidable.
Final Thoughts: Scepticism Is Rational But Inaction Is Risky
Quantum scepticism makes sense.
The technology is still difficult, expensive, and unevenly useful. Fault-tolerant quantum computing hasn’t arrived at broad commercial scale. Most enterprises don’t yet have a clean business case. And the hype around quantum has, at times, moved faster than the machines themselves.
But scepticism only helps when it improves judgement.
The evidence now points to a more careful conclusion. Quantum computing isn’t ready to reshape every enterprise workload, but it’s already reshaping enterprise risk. Error correction is improving. Vendor roadmaps are becoming more specific. Standards bodies are preparing the cryptographic transition. Security agencies are publishing timelines because migration won’t be quick.
So the better posture is neither belief nor dismissal.
It’s readiness without theatre.
Enterprises don’t need to pretend quantum is arriving tomorrow. They also can’t afford to behave as if it will never matter. The work now is to understand where the risk is real, where the value is still speculative, and where preparation can be done calmly before urgency makes it expensive.
That’s the kind of technology conversation EM360Tech is built for: clear enough to cut through the noise, grounded enough to avoid the hype, and practical enough to help leaders make better decisions before the future becomes another rushed change programme.