In late October, Google made the cover of *Nature* with its landmark claim of “quantum supremacy,” the ability to solve problems that classical computers would need tens of thousands of years to complete. Published several weeks after a leaked version of the paper found its way online, the research has made headlines around the world.

#### Readers want to know:

**What’s a qubit?**

**Qubit**: The basic unit of quantum information, analogous to the bit in a classical computer. By using quantum-mechanical variables like the spin of an electron or the polarization of a photon to encode data, quantum computing platforms can store far more information than traditional binary (0 and 1) systems.

Google’s feat of engineering, creating the first fully programmable quantum computer, has been met with both enthusiasm and skepticism. In fact, an October 21st preprint by researchers at IBM claims that the computation Google said would require 10,000 years to complete could be done in only two and a half days using a different approach.

In this Q&A, Lee Bassett, a scientist who works on quantum computing platforms, explains the science behind the headlines, what IBM’s counterargument means, and what the future holds for this fast-growing, and incredibly hyped, field of research.

**What is a quantum computer doing differently than a classical computer?**

A quantum computer is a machine that relies on the laws of quantum physics. Those laws say that the state of the machine is described by probability amplitudes, instead of probabilities, where the amplitudes can be negative or even imaginary numbers. Google built a machine that controls these probability amplitudes. At the end of a calculation, you measure the state of the computer, and that’s where the probability amplitude turns into a probability of a certain measurement outcome: Is the bit zero or is it one?
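The distinction between amplitudes and probabilities can be made concrete in a few lines of Python. This is a minimal sketch using NumPy (the particular state is just an example): a single qubit is described by two complex amplitudes, and only at measurement do those amplitudes turn into probabilities.

```python
import numpy as np

# A single qubit's state is a pair of complex probability amplitudes
# (alpha, beta) with |alpha|^2 + |beta|^2 = 1. Unlike ordinary
# probabilities, amplitudes can be negative or even imaginary.
state = np.array([1, -1j]) / np.sqrt(2)

# Measurement converts each amplitude into a probability via |amplitude|^2.
probabilities = np.abs(state) ** 2
print(probabilities)  # both outcomes equally likely; the imaginary phase is invisible here
```

Note that two very different states (for instance with `+1j` instead of `-1j`) give identical measurement probabilities; the phase information only matters when amplitudes interfere during a computation.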

**What about this process is so difficult for a classical computer?**

The essential part of quantum computation, and also quantum communication and quantum sensing, is that these probability amplitudes are preserved. They can interfere with each other during the calculation such that a negative amplitude cancels a positive one and the output probability is truly zero, whereas with regular probabilities that could never happen. To simulate it classically, you need to hold in memory all of the possibilities that the computer could take. If we wanted to classically encode this data in a regular computer, it requires a lot of memory to store all those intermediate possibilities.
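That exact cancellation can be seen in a tiny simulation. The sketch below (a textbook example, not Google’s circuit) applies the standard Hadamard gate twice to a qubit starting in the 0 state; the two paths leading to the 1 outcome carry amplitudes of opposite sign and cancel perfectly, so the output probability of measuring 1 is truly zero.

```python
import numpy as np

# The Hadamard gate: sends |0> to an equal superposition with positive
# amplitudes, and |1> to a superposition with one negative amplitude.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])   # qubit starts in the |0> state

after_one = H @ ket0          # amplitudes (1/sqrt(2), 1/sqrt(2)): both outcomes equally likely
after_two = H @ after_one     # the two paths to |1> carry amplitudes +1/2 and -1/2

print(after_two)              # [1. 0.] -- the negative amplitude cancels the positive one
print(np.abs(after_two) ** 2) # the probability of measuring |1> is exactly zero
```

With ordinary probabilities this could never happen: mixing two nonzero chances of an outcome can only make that outcome more likely, never impossible.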

Because Google’s quantum computer has 54 qubits, simulating that experiment with a regular computer requires storing 2^54 probability amplitudes at every step of the calculation. That’s more memory than any sort of RAM anywhere.

**What does Google’s setup look like?**

The core of the quantum computer is a silicon chip, about a square centimeter in size, and on top they fabricated aluminum shapes. The aluminum becomes superconducting at cold temperatures, so charges can move around without dissipating heat. The shapes are assembled into a structure of inductors and capacitors, and each qubit features a small aluminum island and a tunnel barrier for superconducting charges.

The quantum computer chip is in the bottom of a dilution refrigerator connected to a bunch of wires. The can around the refrigerator is full of liquid helium, at 4 Kelvin, and then the insert that goes inside has a coil that condenses a fluid and then evaporates the fluid, essentially the way an air conditioner works. All of that is just to get the computer cold, and the rest is classical electronics.

**What makes quantum computing so challenging?**

Real quantum systems are subject to a lot of noise, and the hard thing about quantum engineering is making devices that preserve the probability amplitudes. The low temperatures, a few thousandths of a degree above absolute zero, are all about removing noise, but Google’s device is still really noisy. What they measure is almost entirely a random signal with a small deviation, where the small deviation is coming from the quantum mechanics.
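Google quantifies that small deviation with a statistic called linear cross-entropy benchmarking (XEB) fidelity, which is near 0 for pure noise and near 1 for an ideal device. The following is a simplified toy version: the stand-in for the ideal output distribution, the qubit count, and the sample sizes are all illustrative assumptions, not Google’s actual analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 10
dim = 2 ** n_qubits

# Stand-in for the ideal output distribution of a random quantum circuit:
# exponentially distributed weights (a Porter-Thomas-like shape), normalized.
p_ideal = rng.exponential(size=dim)
p_ideal /= p_ideal.sum()

def xeb_fidelity(samples):
    # Linear XEB: F = 2^n * <p_ideal(x)> - 1 over the measured bitstrings x.
    # Uniform random noise gives F ~ 0; sampling the ideal distribution gives F ~ 1.
    return dim * p_ideal[samples].mean() - 1

ideal_samples = rng.choice(dim, size=200_000, p=p_ideal)  # "perfect device"
noisy_samples = rng.integers(dim, size=200_000)           # pure random noise

print(xeb_fidelity(ideal_samples))  # close to 1
print(xeb_fidelity(noisy_samples))  # close to 0
```

The real experiment operates in between: the measured fidelity is small but clearly nonzero, which is the "small deviation coming from the quantum mechanics."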

**Based on Google’s estimate in their *Nature* paper, a classical supercomputer would need 10,000 years to complete what the quantum computer did, but then IBM says it would only need a couple of days using a different method. Could you explain this discrepancy?**

IBM said they have an algorithm that could be faster than the 10,000 years Google stated, because they realized that it is *just* possible to store the full 2^54-amplitude state of 54 qubits on the hard drives of the Oak Ridge supercomputer, the largest in the world, operating for about two and a half days.

**Does IBM’s conjecture take away from the overall significance of what Google did?**

I don’t think it changes the fact that this demonstration is showing a clear separation in how hard it is to perform this calculation in a classical computer versus a quantum device. It’s absolutely true that people can come up with different ways of calculating things, and the performance of our classical supercomputers and algorithms will continue to improve.

IBM is absolutely right to point out this discrepancy and also to make the larger point that the quantum supremacy demonstration is not really useful, so we should continue to wait for devices that can run quantum algorithms with known applications. It’s also important for IBM to run the simulation to see if it really does take two days, because sometimes running things on supercomputers is not as straightforward as in a theorist’s head. Google posted the output from their quantum calculations, so we can check whether they really are measuring the quantum effects they believe they are.

Ultimately, I think this demonstration will go down in history as a landmark achievement. Although there are other quantum devices—or materials for that matter—that are hard to simulate classically, this is the first device matching that description that is an engineered, fully programmable quantum computer. That is an important distinction since there is a natural blueprint for how one “scales” the system into larger devices that can run more complex calculations. With a quantum computer, adding just one qubit doubles the computational capacity, so things can move quickly now.

**What comes next?**

We’re still a long way from having the types of quantum machines in many people’s heads, like ones that can simulate chemical reactions or break encryption schemes. The best estimates of what you would need in a quantum computer to break encryption codes are around 10 million qubits with the same properties as these 54.

Google’s quantum computer is in some ways analogous to ENIAC, the first general-purpose digital computer, which was built at Penn in the 1940s. ENIAC was built for a special purpose, using the best technology available at the time, but it ultimately found far wider applications and spawned the information age. It was a huge engineering feat to take something from a basic concept, in ENIAC’s case vacuum tubes that can perform logic gates, and put enough of them together to calculate something that was previously inaccessible.

That is very much what Google’s approach has been. They’ve known for several years that a device could be assembled into something of this scale, and it has really just been a matter of building it. It is important to note that there are many other ways to build quantum devices, and we do not yet know what form the “useful quantum computers” of the future will take.

It may be that these superconducting qubits continue to push the boundaries, but it also may be that there is some other technology—maybe yet to be discovered—that will push it forward. That is why it is so important to continue with basic research in this area. In the case of classical computers, ENIAC was completed in 1945, and the transistor was invented two years later.

Another difference between classical and quantum computing is that we do not have great ideas for what to do with machines like Google’s. The last sentence of Google’s paper essentially sums up the field: “We are only one creative algorithm away from valuable near-term applications.” They are acknowledging two things: That it’s not useful right now, and also there’s a lot of uncertainty. Tomorrow, somebody could publish an algorithm that uses a device like this for something useful, and that would be a game changer.

***Lee Bassett** is an assistant professor in the Department of Electrical and Systems Engineering in the School of Engineering and Applied Science at the University of Pennsylvania.*