Supremely What? Forget the Race - The Rules Just Changed
If you follow quantum computing, you’ve likely seen the headlines from the latest breakthrough: a 12-qubit quantum computer performed a task that would force any classical computer to use at least 62 bits of memory. On the surface, this sounds like another round in the ongoing “mine is bigger [or should it be smaller?] than yours” contest between quantum and classical machines.
But to focus on those numbers is to miss the real story.

This achievement, from a collaboration between the University of Texas at Austin and Quantinuum, isn’t just another milestone; it’s a fundamental shift in the entire conversation about quantum advantage. They didn’t just win a race; they changed the rules of the game. To understand why, we need to look back at what “quantum supremacy” used to mean.
The Old Definition: A Race on a Slippery Track
The term “quantum supremacy,” coined by physicist John Preskill in 2011, originally described a simple goal: get a quantum computer to perform any task, even a useless one, that no classical computer could feasibly complete in a reasonable time. It was about speed.
The most famous example was Google’s 2019 announcement. Their Sycamore processor performed a calculation in 200 seconds that they claimed would take the world’s fastest supercomputer 10,000 years. It was a stunning claim, but it had a vulnerability. IBM quickly countered that by using a different classical algorithm, they could solve the same problem in just 2.5 days.
This back-and-forth revealed the core issue with defining supremacy as a race against time: you can never be sure you’re racing against the fastest classical runner. The classical “world record” is always subject to change with new algorithms and better hardware. The goalposts were constantly moving.
Why “12 vs. 62” Isn’t What You Think
This brings us back to the new result. The headlines comparing 12 qubits to 62+ bits are catchy, but a direct comparison of physical units is misleading. A qubit isn’t just a more powerful bit.
A classical bit stores a definite state: 0 or 1. A memory of 62 bits can hold exactly one of 2⁶² possible numbers at a time. A qubit, however, exists in a superposition: a complex blend of both 0 and 1 simultaneously. When you entangle 12 qubits, they don’t act as 12 individual units. They become a single, unified system described by one vector in a 2¹² = 4,096-dimensional space. The difference is between having access to a huge multidimensional vector space (quantum) versus a single point in a gigantic list (classical). I find it difficult to picture exponentials, but 2⁶² is a 19-digit number, roughly 4.6 quintillion: on the order of the estimated number of grains of sand on Earth’s beaches, and about ten times the number of seconds in the age of the universe. Sifting through that to find the one you are looking for is a big task.
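If it helps to see the arithmetic, here is a quick back-of-the-envelope check in Python. The sand-grain and universe-age figures are the commonly cited rough estimates, not precise measurements:

```python
# Back-of-the-envelope comparison of the scales discussed above.

n_qubits = 12
hilbert_dim = 2 ** n_qubits        # dimension of the joint 12-qubit state space
classical_states = 2 ** 62         # distinct values representable in 62 bits

seconds_since_big_bang = 13.8e9 * 365.25 * 24 * 3600   # ~4.4e17 seconds
grains_of_sand_estimate = 7.5e18                       # common rough estimate

print(f"12 entangled qubits live in a {hilbert_dim:,}-dimensional space")
print(f"2^62 = {classical_states:,} ({len(str(classical_states))} digits)")
print(f"  ~{classical_states / seconds_since_big_bang:.0f}x the seconds since the Big Bang")
print(f"  ~{classical_states / grains_of_sand_estimate:.1f}x a common estimate of Earth's sand grains")
```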
The real question isn’t “how many bits is a qubit worth?” but rather, “for this specific task, how much classical information is needed to achieve the same result?” The researchers proved that to replicate the statistical output of their 12-qubit system, any classical algorithm must store and process a minimum of 62 bits’ worth of information. The comparison isn’t about hardware; it’s about the fundamental information required to solve the problem.
A New Benchmark: “Quantum Information Supremacy”
This is where the genius of the new approach lies. The researchers deliberately shifted the contest from time to information. They introduced a new benchmark they call “quantum information supremacy”.
The concept is rooted in a field called communication complexity. Imagine two people, Alice and Bob, who need to solve a problem together, but they can only communicate a limited amount of information. The researchers designed a problem where the quantum solution is exponentially more efficient. In the quantum version, Alice can encode her information into a 12-qubit state and physically send that state to Bob. In the classical version, she would have to send a message of provably at least 62 bits.
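To give a feel for the quantum side of this setup, here is a toy simulation. To be clear, this is not the protocol from the paper: the sign-flip encoding and Bob’s random measurement basis are placeholders standing in for whatever the actual problem requires. What it does show is the shape of the idea: Alice’s “message” is a single 12-qubit state whose 4,096 amplitudes encode an exponentially long input, and Bob extracts his answer by choosing how to measure it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
dim = 2 ** n   # 4,096

# Alice's input: one bit per amplitude, i.e. an exponentially long string.
x = rng.integers(0, 2, size=dim)

# Alice encodes x into a 12-qubit state: equal magnitudes, signs carry the data.
psi = (-1.0) ** x / np.sqrt(dim)

# Bob, holding his own question, measures in a basis that depends on it.
# Here: a random orthonormal basis standing in for the protocol's measurement.
# (The QR step takes a few seconds at this size.)
q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
probs = np.abs(q.T @ psi) ** 2

outcome = rng.choice(dim, p=probs / probs.sum())
print(f"Alice sent {n} qubits; Bob's outcome {outcome} depends on x and his basis.")
```

The classical lower bound is the other half of the story: the theorem says no message shorter than 62 bits can reproduce those output statistics, no matter how cleverly it is constructed.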
They then cleverly reframed this two-person communication problem as a memory test on a single device. “Alice” became the quantum computer at the beginning of the calculation, and “Bob” was the same computer at the end. The 12 qubits themselves served as the “message” carried through time.
The crucial difference is that this advantage isn’t based on a hunch about classical algorithms. It’s grounded in a mathematical proof. There is no future algorithmic discovery or supercomputer that can ever bridge this gap. The goalposts are no longer moving; they are set in stone.
The Real Prize: Proof that Hilbert Space is Real
While establishing a permanent quantum advantage is a historic achievement, the most profound implication is what it tells us about the nature of reality itself.
For a century, quantum mechanics has been described by a mathematical framework called Hilbert space: an abstract, exponentially vast space where quantum states live. A system of n qubits, as we saw, exists in a space of 2ⁿ dimensions. For decades, some physicists have wondered if this was just a convenient mathematical tool, or if this immense complexity was a true feature of the physical world. Could it be that all quantum systems were “secretly classical,” describable by a much smaller set of parameters?
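One way to put a number on that vastness: describing a pure state of n qubits exactly takes 2ⁿ⁺¹ − 2 real parameters (2ⁿ complex amplitudes, less one constraint for normalization and one for the unobservable global phase). A quick tabulation makes the growth vivid:

```python
# Real parameters needed to exactly describe a pure n-qubit state:
# 2^n complex amplitudes = 2^(n+1) real numbers, minus one degree of
# freedom for normalization and one for the unobservable global phase.
for n in [1, 2, 4, 8, 12, 50]:
    print(f"{n:>2} qubits -> {2 ** (n + 1) - 2:,} real parameters")
```

A “secretly classical” world would mean nature somehow gets away with far fewer parameters than this count suggests.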
This experiment provides the most direct evidence to date that the answer is no.
By demonstrating a task that provably requires an exponential amount of classical information to replicate, the experiment confirms that the exponential vastness of Hilbert space is not just a theory. It is a physically accessible resource that we can harness for computation. We have sent a probe into this high-dimensional reality and confirmed that the map is real.
This doesn’t mean a quantum computer that can break encryption or design new medicines is right around the corner. But it provides a solid, unassailable foundation of verified capability. It proves that the arena for future quantum algorithms is as vast and powerful as the theory predicted.