Researchers used Oak Ridge National Laboratory’s Quantum Computing User Program, or QCUP, to perform the first independent comparison test of leading quantum computers.
The study surveyed 24 quantum processors and compared results from each against performance figures published by vendors such as IBM, Rigetti and Quantinuum (formerly Honeywell). The research team concluded most of the machines yielded acceptable performance by current quantum standards and identified what may be a useful way to test vendors' claims.
“I think this study illustrates how difficult the task can be to capture a consistent benchmark for a technology as new and as volatile as quantum computing,” said Elijah Pelofske, the study’s lead author and a student researcher at New Mexico Tech and Los Alamos National Laboratory. “Our understanding of quantum computing continues to evolve, and so does our understanding of the appropriate benchmarks.”
Findings appeared in IEEE Transactions on Quantum Engineering.
Classical computers store information in bits equal to either 0 or 1. In other words, a bit, like a light switch, exists in one of two states: on or off.
Quantum computing uses the laws of quantum mechanics to store information in qubits, the quantum equivalent of bits. Qubits can exist in more than one state simultaneously via quantum superposition and carry more information than classical bits.
Quantum superposition means a qubit, like a spinning coin, can exist in two states at the same time: neither heads nor tails for the coin, neither one value nor the other for the qubit. Measuring the qubit forces it into one of the two possible values, with the odds of each outcome set by the superposition, much like stopping the coin on heads or tails.
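In standard notation (a textbook illustration, not drawn from the study itself), such a qubit state is a weighted combination of the two outcomes, and the weights fix the measurement probabilities:

|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad P(0) = |\alpha|^2, \quad P(1) = |\beta|^2, \quad |\alpha|^2 + |\beta|^2 = 1.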
The more qubits, the larger the possible superposition: each added qubit doubles the number of states a machine can hold at once. That difference from classical computing could fuel such innovations as vastly more powerful supercomputers, incredibly precise sensors and impenetrably secure communications, all elements of the quantum computing revolution hoped for by proponents.
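Concretely, a register of n qubits carries an amplitude for every one of the 2^n possible bit patterns (again a textbook statement rather than a finding of the study):

|\psi\rangle = \sum_{x \in \{0,1\}^n} \alpha_x |x\rangle, \qquad \sum_{x} |\alpha_x|^2 = 1,

so tracking the full state on a classical computer requires storage that doubles with each added qubit.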
But first, scientists must find ways to improve the consistency and accuracy of quantum computing. Current quantum computers have high error rates caused by noise that degrades qubit quality. The problem’s so common the current generation of quantum computers has become known as noisy intermediate-scale quantum, or NISQ. Various programming methods can help reduce these errors, but they have yet to be perfected.
Those noise rates haven’t slowed interest, as more scientists and companies every year seek to explore quantum computing’s possibilities.
“We’ve reached the point where quantum computers are starting to just appear all around us,” said Stephan Eidenbenz, a computer scientist at LANL and senior author of the study. “A lot of large companies, small startups and national laboratories are building different types of quantum computers. They’re increasingly becoming available to the general public. We as scientists would like to develop some system to rank these machines by using reliable benchmarks. Ours was the first study of this type we’re aware of.”
The team settled on quantum volume, which measures how reliably a quantum processor can execute a particular type of random quantum circuit, as a metric. The higher the quantum volume, the larger and deeper the circuits the machine can run successfully, at least in theory.
“This measure isn’t perfect, but it tells you which quantum computers will be able to execute quantum circuits of a certain size and depth reasonably well,” Pelofske said. “We’re going to have a certain number of errors in all the computations on these computers. Quantum volume gives us a measure that allows us to compare device capabilities across the board. Some of these vendors publish their machines’ quantum volume measures, so we wanted to see if we could verify those numbers.”
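As a rough sketch of how such a test works in practice (assuming the open-source Qiskit and qiskit-aer packages, which the article does not name; the function heavy_output_probability and the simulated backend are illustrative stand-ins, not the study's code), one can generate random quantum-volume circuits, mark the "heavy" outputs that an ideal machine would produce more often than the median, and then check how often a real or simulated device returns those outputs:

# Illustrative sketch, assuming Qiskit and qiskit-aer are installed; not the study's actual code.
import numpy as np
from qiskit import transpile
from qiskit.circuit.library import QuantumVolume
from qiskit.quantum_info import Statevector
from qiskit_aer import AerSimulator

def heavy_output_probability(num_qubits: int, trials: int = 20, shots: int = 1000) -> float:
    """Average fraction of 'heavy' outputs returned by square quantum-volume circuits."""
    backend = AerSimulator()  # stand-in for a real, noisy device
    fractions = []
    for seed in range(trials):
        circ = QuantumVolume(num_qubits, depth=num_qubits, seed=seed)
        # The ideal output distribution defines which bitstrings count as "heavy."
        ideal = Statevector(circ).probabilities_dict()
        median = np.median(list(ideal.values()))
        heavy = {bits for bits, p in ideal.items() if p > median}
        # Run the same circuit with measurements and count how often heavy outputs appear.
        meas = circ.copy()
        meas.measure_all()
        counts = backend.run(transpile(meas, backend), shots=shots).result().get_counts()
        fractions.append(sum(n for bits, n in counts.items() if bits in heavy) / shots)
    return float(np.mean(fractions))

# A device is credited with quantum volume 2**n if, with statistical confidence,
# its heavy-output probability on n-qubit circuits exceeds 2/3.
print(heavy_output_probability(4))

On an ideal, noiseless machine the heavy-output probability approaches roughly 0.85 for large circuits; noise pushes it down toward 0.5, the value a random guesser would achieve, which is what the quantum volume test is designed to detect.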
The team reviewed previous studies on quantum volume and obtained access to 24 quantum processors, including Quantinuum’s H1-2 computer, which had the largest quantum volume of those tested and was made available through an allocation of computing time via QCUP.
Results showed most of the machines performed close to their advertised quantum volume but seldom reached the top numbers claimed by vendors.
“We did indeed have trouble verifying the quantum volume for each device as reported by the vendors,” Eidenbenz said. “That’s not to imply the vendors have been untruthful. They have a better understanding of their devices than we or the average user do, so they can coax a little more performance out of the machine than we can. There were certain optimizations we did not try to make, for example. We wanted to get the basic performance an ordinary user could expect out of the box.”
The team found that more intensive quantum circuit compilation, the process of translating an abstract circuit into the native gates and qubit layout of a specific device, tended to pay off in higher quantum performance.
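For instance, with Qiskit's transpiler (an assumed toolchain; the article does not say which compiler the team used), raising the optimization level asks the compiler to work harder at rewriting a circuit into a device's native gate set, which often shortens the circuit the hardware actually runs:

# Illustrative only: compiling the same circuit with light versus heavy optimization effort.
from qiskit import transpile
from qiskit.circuit.library import QuantumVolume

circuit = QuantumVolume(5, seed=42)
circuit.measure_all()
basis = ["cx", "rz", "sx", "x"]  # a typical superconducting-device gate set

light = transpile(circuit, basis_gates=basis, optimization_level=1)
heavy = transpile(circuit, basis_gates=basis, optimization_level=3)
print("circuit depth:", light.depth(), "vs.", heavy.depth())  # heavier effort usually yields a shallower circuit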
“Quantum computers are still a new type of computation,” Pelofske said. “We’re still learning how current quantum computers work and how to make them work best, so we’re still learning how to measure them too. Sometimes a detail as simple as which qubits you use can affect your results. Some circuits perform better than others on the same machine. We want to figure out why. As we continue to refine our understanding of quantum computing, we’ll continue to refine these benchmarks and learn better ways to measure these machines.”
This work was supported by the Oak Ridge Leadership Computing Facility’s Quantum Computing User Program, a DOE Office of Science user facility. The researchers were supported by the DOE Advanced Scientific Computing Research program.
UT-Battelle manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science. — Matt Lakin