For quantum computing to ever fulfill its promise, it will have to deal with errors. That’s been a real problem until recently, because although scientists have come up with error-correction codes, the quantum machines available couldn’t use them. Now researchers have finally created a small quantum-computing array that for the first time performs with enough accuracy to allow for error correction, paving the way toward practical machines that could outperform ordinary computers.
Today’s classical computers perform calculations using bits, which can be either 1 or 0. Quantum computers get their potentially amazing ability to make many simultaneous calculations by using quantum bits, or qubits, which can exist as both 1 and 0 at the same time. The challenge is that such systems must use error correction to preserve the fragile quantum states of qubits long enough to run calculations.
Error correction in the quantum world is much stranger than in classical computing. In classical computing, you can simply make multiple copies of a bit at any stage of a calculation, and if the original goes awry, you use the copies to restore it. But physics doesn’t allow you to copy quantum information exactly, and any attempt to read it directly disrupts the computation. Quantum-computing pioneer Peter Shor discovered a way around this problem: the quantum information of one qubit can be spread across multiple qubits (hundreds or even thousands) that share a quantum connection called entanglement. Quantum error-correction codes exploit these extra qubits to uncover errors without ever directly reading or copying the value of the original qubit.
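The core trick, measuring something *about* the data rather than the data itself, has a simple classical analogue. The sketch below is not a quantum simulation; it illustrates the idea with a three-bit repetition code, where a single flip is located purely from parity checks on pairs of bits, never by reading any individual bit's value (the function names are illustrative, not from any real library):

```python
# Classical analogy of syndrome-based error correction:
# one logical bit is spread across three physical bits, and a
# flip is located by measuring PARITIES of pairs (the "syndrome"),
# roughly analogous to stabilizer measurements on extra qubits.

def encode(bit):
    # Spread one logical bit across three physical bits.
    return [bit, bit, bit]

def syndrome(bits):
    # Parity checks on adjacent pairs; reveals WHERE a flip is,
    # but nothing about the encoded value itself.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    # Map each nonzero syndrome to the index of the flipped bit.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

word = encode(1)
word[2] ^= 1                      # inject a single bit-flip error
print(correct(word))              # recovers [1, 1, 1]
```

A real quantum code such as Shor's nine-qubit code does this with entangled ancilla qubits so that the parity can be extracted without collapsing the encoded superposition.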
Most quantum error-correction schemes involve very simple classical processing but require quantum logic operations with an accuracy of more than 99.999 percent. But one method, known as the surface code, can get away with a lower accuracy threshold of 99 percent by shifting much more of the scheme’s complexity to the classical processing. An experimental system detailed recently in Nature demonstrated the first surface-code architecture to achieve that needed 99 percent accuracy.
“We made a significant advance in the fidelity that brought it to this important limit, and we did it in such a way that we know how we’re going to scale up to more and more qubits,” says one of the prototype’s creators, John Martinis, a professor of physics at the University of California, Santa Barbara.
Martinis and his colleagues used superconducting quantum circuits that represent one of several possible hardware architectures for quantum-computing systems. The qubits themselves are Josephson junctions—two layers of superconductor separated by a thin insulating layer.
By creating an arrangement of five qubits in a line, the researchers showed that they could perform the logic operations at the heart of modern computing with an accuracy of 99.92 percent for a quantum logic gate involving one qubit and 99.4 percent for a quantum logic gate involving two qubits.
“The surface code tolerates a lot of error and doesn’t ask much from the hardware,” says Austin Fowler, a staff scientist at UCSB who also worked on the quantum error-correction device.
The UCSB team’s success with running the surface code on a linear array of qubits could lead to a full 2-D grid of qubits capable of performing useful calculations. The qubits would be arranged in a checkerboard pattern in which “white squares” would hold data qubits for performing operations and “black squares” would contain measurement qubits that detect and correct errors in the neighboring data qubits. In this setup, the surface code can indirectly measure possible errors in the data qubits without disturbing their delicate quantum states.
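The checkerboard geometry described above can be sketched in a few lines. This is a layout illustration only, assuming a small 5×5 grid; the grid size and helper names are illustrative, and the actual parity checks in a surface code alternate between two stabilizer types, which this sketch glosses over:

```python
# Sketch of the surface code's checkerboard layout (assumed 5x5 grid):
# "white" squares hold data qubits (D); "black" squares hold
# measurement qubits (M) that check the parity of neighboring D qubits.

SIZE = 5

def role(row, col):
    # Checkerboard coloring: even-parity squares hold data.
    return "data" if (row + col) % 2 == 0 else "measure"

def neighbors(row, col):
    # Data qubits adjacent to a measurement qubit at (row, col).
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [(row + dr, col + dc) for dr, dc in steps
            if 0 <= row + dr < SIZE and 0 <= col + dc < SIZE]

for r in range(SIZE):
    print(" ".join("D" if role(r, c) == "data" else "M"
                   for c in range(SIZE)))
# Each M repeatedly checks the joint parity of its (up to four)
# neighboring D qubits; a change in that parity flags an error
# without ever reading a data qubit's state directly.
```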
IBM researchers have also done pioneering work in making surface-code error correction work with superconducting qubits. In research posted online to the arXiv repository last November, one IBM group demonstrated a smaller three-qubit system capable of running surface code, although that system had a lower accuracy—94 percent.
“Both our result and the one from UCSB are showing the promise for superconducting qubits, and that architectural and engineering challenges lie ahead of us and should begin to be addressed to get toward a fault-tolerant quantum computer,” says Jerry Chow, a research staff member at IBM’s Thomas J. Watson Research Center, in Yorktown Heights, N.Y.
Improved accuracy for superconducting qubits makes the technology a serious rival of other quantum-computer systems, such as the use of trapped ions as qubits.
Still, researchers must continue to improve accuracy rates before they can achieve highly reliable quantum computations, which may still require 1000 or even 10 000 physical qubits to encode a single logical qubit. The idea of connecting thousands of qubits without causing interference among neighboring qubits also presents a huge engineering challenge, though one that doesn’t run afoul of any physics.
“The physics of coupling and control is not going to change,” says Rami Barends, a postdoctoral fellow in physics at UCSB. “But what you’ll have to come up with is the wiring and control done in a 2-D system without hampering the fidelity.”
The next step for the UCSB team is to run simple error-correction experiments—a huge first for the quantum-computing field. Previously, researchers showed how to correct big errors that were deliberately injected into quantum-computing arrays. But the UCSB researchers want to show how to correct for natural errors that arise in the course of real quantum-computing operations. A combination of improved accuracy and rigorous error correction could eventually realize the dream of practical quantum computers.
“We have all the requirements for starting to tackle error correction for the first time,” says Julian Kelly, a graduate student in physics at UCSB. “People have gone through the motions before, but there has never been a practical way of reducing errors in the system.”
This article originally appeared in print as “Amazingly Accurate Quantum Computing.”