Google’s quantum team bests key threshold
Google researchers have demonstrated that scaling up the number of physical qubits in an error-correcting code leads to an exponential reduction of error rates in quantum computing. Testing successively larger grids of qubits, from 3×3 to 5×5 to 7×7, they found that the logical error rate was cut in half at each step. This accomplishment is known in the field as operating “below threshold,” and it has been a key outstanding challenge for quantum computing ever since computer scientist Peter Shor introduced quantum error correction in 1995.
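To get a feel for what that halving implies, here is a minimal back-of-the-envelope sketch; the starting error rate is a made-up illustrative number, not Google’s measurement, and only the factor-of-two suppression per step comes from the reported result.

```python
# Back-of-the-envelope sketch: if each grid-size step cuts the logical error
# rate in half, the total suppression compounds exponentially with the steps.
base_error = 1e-2        # hypothetical error rate for the 3x3 grid (illustrative only)
halving_per_step = 2.0   # "cut in half at each step", per the reported result

for step, grid in enumerate(["3x3", "5x5", "7x7"]):
    error = base_error / halving_per_step ** step
    print(f"{grid} grid: logical error rate ~ {error:.1e}")
```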
Qubits are notoriously unstable, so one of the major challenges of quantum computing is dealing with that instability. A good way is to build logical qubits out of multiple physical qubits, ensuring that when a single qubit, or even a few, fails, the logical qubit does not and the calculation can continue. Obviously, this strategy only works if adding more physical qubits results in lower, not higher, error rates.
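As a deliberately simplified illustration of why redundancy helps, the toy sketch below uses a classical repetition code with majority voting rather than the surface code Google actually runs; the bit counts and function names are purely illustrative.

```python
# Toy illustration (a classical repetition code, not the surface code Google uses):
# one logical bit is stored redundantly in several physical bits, and a majority
# vote recovers it even if a minority of the physical bits have flipped.
from collections import Counter

def encode(logical_bit, n_physical=5):
    return [logical_bit] * n_physical

def decode(physical_bits):
    return Counter(physical_bits).most_common(1)[0][0]

encoded = encode(1)
encoded[0] ^= 1          # one physical bit fails...
encoded[3] ^= 1          # ...and even a second one
print(decode(encoded))   # the logical bit still reads 1
```

The real scheme has to protect fragile quantum states against several kinds of errors without directly reading out the encoded information, but the underlying idea is the same: spread one logical unit of information across many noisy physical carriers.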
The Google team has now demonstrated that this is possible. To be precise, it showed that errors are suppressed quickly enough, as qubits are added, to put the system below that crucial threshold. “As the first system below threshold, this is the most convincing prototype for a scalable logical qubit built to date. It’s a strong sign that useful, very large quantum computers can indeed be built,” writes Hartmut Neven, founder and lead of Google Quantum AI, on the company’s blog.