Google Quantum AI Unveils Willow, a Quantum Processor That Proves Bigger Is Better
105-qubit chip showcases a new approach to error correction, delivering lower error rates the bigger it gets.
Google Quantum AI, Alphabet's quantum computing and artificial intelligence arm, has unveiled a new quantum processor, Willow, which the company claims delivers "state-of-the-art" performance — and an "exponential" improvement in error correction, a necessary step towards scaling the technology up into useful numbers of quantum bits or "qubits."
"The Willow chip is a major step on a journey that began over 10 years ago. When I founded Google Quantum AI in 2012," explains Hartmut Neven, the founder and lead of Google Quantum AI, "the vision was to build a useful, large-scale quantum computer that could harness quantum mechanics — the 'operating system' of nature to the extent we know it today — to benefit society by advancing scientific discovery, developing helpful applications, and tackling some of society's greatest challenges. As part of Google Research, our team has charted a long-term roadmap, and Willow moves us significantly along that path towards commercially relevant applications."
A traditional computer works with binary bits, capable of being either a zero or a one — true or false, on or off. A quantum computer uses quantum bits, or "qubits," which can be a zero, a one, or a superposition of the two. This gives them the potential, in theory at least, to dramatically outperform traditional computers at specific tasks — to the point that governments around the world are investing in "post-quantum" security systems, for fear that a theoretical and currently impossibly large future quantum computer could quickly factor the large composite numbers, built from pairs of primes, that underpin modern public-key cryptographic systems.
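The difference between a bit and a qubit can be sketched in a few lines of plain Python. The toy model below — entirely illustrative, and unrelated to how Willow's hardware works — represents a single qubit as a pair of amplitudes and simulates repeated measurement, where the probability of reading a one is the squared amplitude of the |1⟩ component:

```python
import random

# A single qubit |psi> = a|0> + b|1>, stored as the amplitude pair (a, b),
# with the constraint |a|^2 + |b|^2 = 1.
zero = (1.0, 0.0)              # behaves like a classical bit set to 0
plus = (2 ** -0.5, 2 ** -0.5)  # equal superposition of |0> and |1>

def measure(state, trials=10_000):
    """Simulate repeated measurement: each shot reads 1 with probability |b|^2."""
    a, b = state
    p_one = b * b
    ones = sum(random.random() < p_one for _ in range(trials))
    return ones / trials

print(measure(zero))  # always 0.0: a definite bit
print(measure(plus))  # ~0.5: half the shots read 0, half read 1
```

A classical bit only ever occupies one of the two basis states; the superposed qubit yields both outcomes across repeated shots, which is the property quantum algorithms exploit at scale.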
We're not quite at the point of having such a system yet, but Google claims it's getting closer to scaling the technology from a dozen or so qubits to hundreds or thousands — and that its Willow chip, built to demonstrate a new approach to error correction required to reach that scale, has already beaten the world's fastest supercomputer in a random circuit sampling benchmark, taking five minutes to complete what would have taken a claimed 10 septillion years.
"Today in Nature, we published results showing that the more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes," Neven claims. "We tested ever-larger arrays of physical qubits, scaling up from a grid of 3×3 encoded qubits, to a grid of 5×5, to a grid of 7×7 — and each time, using our latest advances in quantum error correction, we were able to cut the error rate in half. In other words, we achieved an exponential reduction in the error rate. This historic accomplishment is known in the field as 'below threshold' — being able to drive errors down while scaling up the number of qubits. You must demonstrate being below threshold to show real progress on error correction, and this has been an outstanding challenge since quantum error correction was introduced by Peter Shor in 1995."
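The arithmetic behind the "below threshold" claim is simple to sketch: if each step up in grid size cuts the logical error rate by a constant factor, the error rate falls exponentially with the code distance. The starting rate and suppression factor below are illustrative assumptions, not Willow's measured figures:

```python
# Back-of-the-envelope model of below-threshold scaling: each increase in
# code distance (3x3 -> 5x5 -> 7x7 grids) cuts the logical error rate by a
# constant factor, here the roughly 2x reported for Willow. The base rate
# is an assumed placeholder, not a measured value.
base_error = 3e-3   # assumed logical error rate at distance 3
suppression = 2.0   # error-rate reduction per distance step (d -> d + 2)

for distance in (3, 5, 7, 9, 11):
    steps = (distance - 3) // 2
    rate = base_error / suppression ** steps
    print(f"distance {distance:2d}: logical error rate ~{rate:.2e}")
```

The key point is the compounding: a fixed factor per step means that adding qubits keeps paying off, whereas an above-threshold code would see errors grow as the grid expands.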
Willow isn't a theoretical thought exercise: Google claims to have built it in silicon, delivering 105 qubits on a physical chip manufactured at the company's own Santa Barbara fabrication facility. The company admits, however, that one big milestone remains: demonstrating a use case for quantum computing in which a useful computation — rather than a simple benchmark — is proven to be delivered with performance beating that of classical binary computers.
More information on Willow is available on the Google blog; the company has also published a tour of its Quantum AI lab, complete with an explainer for quantum computing concepts. The paper referred to by Neven, meanwhile, has been published in the journal Nature under closed-access terms.