Physicists at Silicon Quantum Computing (SQC) have developed what they claim is the most accurate quantum computing chip ever built, based on a new kind of architecture.
Representatives of the Sydney-based startup say its silicon-based atomic quantum computing chips have advantages over other types of quantum processing units (QPUs). That's because the chip is based on a new architecture called "14/15" that places phosphorus atoms within silicon (so named because silicon and phosphorus are the 14th and 15th elements on the periodic table). They outlined their findings in a new study published Dec. 17 in the journal Nature.
SQC achieved fidelities of 99.5% to 99.99% on a quantum computer with nine nuclear qubits and two atomic qubits, in what the company calls the world's first demonstration of atomic silicon-based quantum computing across separate clusters.
Fidelity measures how closely a quantum operation or state matches its ideal, error-free target; the higher the fidelity, the less work error correction and mitigation techniques have to do. Company representatives say they achieved state-of-the-art error rates with a custom-built architecture.
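For a rough sense of what fidelity means in general (a toy sketch, not SQC's actual benchmark), state fidelity compares a noisy quantum state against the ideal one; values near 100% mean the hardware almost perfectly hit its target:

```python
import numpy as np

# State fidelity between an ideal pure state and a noisy one:
# F = |<ideal|noisy>|^2. A toy illustration, not SQC's benchmark.
ideal = np.array([1, 0], dtype=complex)   # qubit prepared in |0>

theta = 0.05                              # small over-rotation error (radians)
noisy = np.array([np.cos(theta / 2), -1j * np.sin(theta / 2)])

fidelity = np.abs(np.vdot(ideal, noisy)) ** 2
print(f"fidelity = {fidelity:.4%}")       # ~99.94% for this error size
```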
While this may not sound as appealing as a quantum computer with thousands of qubits, the 14/15 architecture is highly scalable, the researchers said in their study. They added that demonstrating peak fidelity across multiple clusters serves as a proof of concept for what could theoretically lead to fault-tolerant QPUs with millions of functional qubits.
The secret sauce is silicon (with a dash of phosphorus)
Quantum computing rests on the same basic goal as binary computing: using physical states to perform calculations. But instead of using electricity to flip switches, as is the case with traditional binary computers, quantum computing involves creating and manipulating quantum bits (qubits), the quantum equivalent of classical computer bits.
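For a concrete picture of the difference (a toy numpy sketch, not tied to any particular hardware), a qubit's state is a vector of complex amplitudes, and it can sit in a superposition of 0 and 1 until measured:

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a unit vector a|0> + b|1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Equal superposition (what a Hadamard gate produces from |0>).
qubit = (ket0 + ket1) / np.sqrt(2)

# Measurement collapses the state; |amplitude|^2 gives the outcome odds.
probs = np.abs(qubit) ** 2
print(probs)   # [0.5 0.5] -> 50/50 chance of reading 0 or 1
```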
Qubits come in many forms. Scientists at Google and IBM are building systems around superconducting qubits, tiny electrical circuits cooled to near absolute zero, while companies such as PsiQuantum have developed photonic qubits made from particles of light. Others, including IonQ, work with trapped ions: single charged atoms captured and held in electromagnetic traps.
The general idea is to exploit quantum mechanics to manipulate very small things and extract useful calculations from their possible states. SQC representatives say their approach is unique in that the QPU is built on the 14/15 architecture.
They create each chip by placing individual phosphorus atoms inside a pure silicon wafer.
"This is the smallest feature size on a silicon chip," SQC CEO Michelle Simmons told Live Science in an interview. "It's 0.13 nanometers, which is essentially a vertical bond length. That's two orders of magnitude smaller than the typical feature size TSMC produces as standard. That's a pretty dramatic accuracy improvement."
Scaling up the number of qubits
Each platform presents different obstacles that scientists must overcome or mitigate to scale up quantum computing.
One universal hurdle for all quantum computing platforms is quantum error correction (QEC). Quantum computing takes place in a highly fragile environment where qubits are sensitive to electromagnetic waves, temperature fluctuations and other stimuli. These disturbances "collapse" the qubits' superposition, causing quantum information to be lost mid-calculation and become unmeasurable.
To compensate, most quantum computing platforms dedicate many qubits to error mitigation. These work similarly to bit checks or parity checks in classical computing. However, as the number of computational qubits increases, so does the number of qubits required for QEC.
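To see the classical analogy (a toy sketch of a classical repetition code, not an actual quantum error-correcting code), parity checks over redundant copies can locate and fix a single flipped bit, at the cost of tripling the bit count:

```python
# Classical 3-bit repetition code: the same redundancy-and-parity idea
# that quantum error correction generalizes. Toy sketch, not a QEC code.
def encode(bit):
    return [bit, bit, bit]               # 1 logical bit -> 3 physical bits

def parity_checks(bits):
    # Two parity (syndrome) checks locate a single flipped bit.
    return bits[0] ^ bits[1], bits[1] ^ bits[2]

def decode(bits):
    s1, s2 = parity_checks(bits)
    if (s1, s2) == (1, 0): bits[0] ^= 1  # first bit flipped
    if (s1, s2) == (1, 1): bits[1] ^= 1  # middle bit flipped
    if (s1, s2) == (0, 1): bits[2] ^= 1  # last bit flipped
    return bits[0]                       # corrected logical value

word = encode(1)
word[2] ^= 1                             # inject a single bit-flip error
assert decode(word) == 1                 # error located and corrected
```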
"The coherence times of nuclear spins are very long, and so-called 'bit-flip errors' are almost nonexistent. As a result, the error-correcting code itself is much more efficient; you don't have to correct for both bit-flip and phase errors," Simmons said.
In other silicon-based quantum systems, less precise fabrication tends to reduce qubit stability, making bit-flip errors more pronounced. SQC's chips are built with atomic precision, which helps suppress errors that arise on other platforms.
"All we really need to do is correct for these phase errors," Simmons added. "So the error correction code is much smaller, and the overall overhead you take on for error correction drops quite a lot."
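A small linear-algebra check hints at why a phase-only error model keeps the correction overhead down (an illustrative sketch, not SQC's actual scheme): in the Hadamard basis, a phase-flip error looks exactly like a bit-flip, so a single repetition-style layer can handle it rather than stacking codes for both error types:

```python
import numpy as np

# In the Hadamard basis a phase-flip (Z) becomes a bit-flip (X),
# so correcting only phase errors reduces to one familiar problem.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# H Z H == X: a basis change turns phase errors into bit errors,
# instead of needing separate code layers for each error type.
assert np.allclose(H @ Z @ H, X)
```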
The race to beat Grover’s algorithm
The standard benchmark for testing the fidelity of quantum computing systems is a routine called Grover's algorithm. It was devised in 1996 by computer scientist Lov Grover to test whether quantum computers could demonstrate an "advantage" over classical computers in certain search tasks.
It is now used as a diagnostic tool to gauge how efficiently a quantum system is operating. Broadly speaking, fidelities of roughly 99% or higher are treated as the threshold a lab must cross on the way to error-corrected, fault-tolerant quantum computing.
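To show what the benchmark actually computes (a minimal statevector simulation, not SQC's hardware implementation), Grover's algorithm amplifies the probability of a "marked" database entry; on two qubits, a single iteration drives that probability to 100%:

```python
import numpy as np

# Minimal statevector simulation of Grover's algorithm on n = 2 qubits
# (4-entry database, 1 marked item). One iteration suffices for n = 2.
n = 2
N = 2 ** n
marked = 3                                 # index of the "winning" entry

state = np.full(N, 1 / np.sqrt(N))         # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1                # flip the phase of the marked item

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about the mean

state = diffusion @ (oracle @ state)       # one Grover iteration

probs = np.abs(state) ** 2
print(probs)                               # ~[0, 0, 0, 1]: marked item amplified
```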
In February 2025, SQC published a research paper in the journal Nature in which the team demonstrated Grover's algorithm at 98.9% fidelity on the 14/15 architecture.
In this respect, SQC has outperformed companies such as IBM and Google. However, those competitors posted their results with tens or even hundreds of qubits, compared with SQC's four.
IBM, Google and other prominent projects are still testing and iterating on their respective roadmaps. But scaling up the number of qubits means adapting error mitigation techniques along the way, and QEC has proven to be one of the most difficult bottlenecks to overcome.
But SQC scientists say their platform is so close to "error-free" that they were able to set the Grover's algorithm fidelity record without performing any error correction on the qubits.
"If you look at the Grover result we produced earlier this year, we have the highest-fidelity Grover's [algorithm]. That's 98.87% of the theoretical maximum, and on top of that, we don't do any error correction," Simmons said.
Simmons said the qubit "clusters" in the new 11-qubit system can be scaled up to represent millions of qubits. However, infrastructure bottlenecks could slow progress.
“Obviously, as you scale your system, you’re going to do error correction,” Simmons said. “Every company needs to do it. But the number of qubits needed is much smaller. So the physical system is smaller and the power requirements are smaller.”