We investigate how hardware specifications can impact the final run time and the required number of physical qubits to achieve a quantum advantage in the fault tolerant regime. Within a particular time frame, both the code cycle time and the number of achievable physical qubits may vary by orders of magnitude between different quantum hardware designs. We start with logical resource requirements corresponding to a quantum advantage for a particular chemistry application, simulating the FeMo-co molecule, and explore to what extent slower code cycle times can be mitigated by using additional qubits. We show that in certain situations, architectures with considerably slower code cycle times will still be able to reach desirable run times, provided enough physical qubits are available. We utilize various space and time optimization strategies that have been previously considered within the field of error-correcting surface codes. In particular, we compare two distinct methods of parallelization: Game of Surface Codes' units and AutoCCZ factories. Finally, we calculate the number of physical qubits required to break the 256-bit elliptic curve encryption of keys in the Bitcoin network within the small available time frame in which it would actually pose a threat to do so. It would require 317 × 10^6 physical qubits to break the encryption within one hour using the surface code, a code cycle time of 1 μs, a reaction time of 10 μs, and a physical gate error of 10^-3. To instead break the encryption within one day, it would require 13 × 10^6 physical qubits.
With the advent of quantum computers, the race to a quantum computational advantage has gained serious traction in both the academic and commercial sectors. Recently, quantum supremacy has been claimed1,2 on quantum devices with tens of qubits. However, the targeted problems solved were theoretical in nature, and not relevant to industrial applications. Quantum advantage, conversely, is a stronger form of supremacy that shows an industrially relevant computational problem solved in a reasonable time-frame that would be practically impossible to do using any classical supercomputer. There is a large physical qubit overhead associated with quantum error correction which is required to run some of the most powerful algorithms. There are many factors that will determine the ability for different quantum computing architectures to scale. Consequently, within a given time frame, the maximum size (qubit count) of a device could vary by orders of magnitude. The code cycle time (base unit of operation in the surface code) will also vary considerably between platforms. Therefore, it is of interest to investigate the interplay between the code cycle time and the number of achievable qubits and the resulting impact on the feasibility for a particular device to achieve a quantum advantage. We calculate the physical qubit and run time requirement for relevant problems in chemistry and cryptography with a surface code error-corrected quantum computer, comparing parameters typical to various types of hardware realizations.
Algorithms that are tailored to Noisy Intermediate Scale Quantum (NISQ) devices generally consist of a hybrid approach, where a low depth circuit is parameterized and iterated through a classical optimizer. These NISQ algorithms are more often heuristic in nature than their fault tolerant counterparts, and so, providing rigorous resource estimates can be more challenging. Many of the most powerful quantum algorithms require a circuit depth that greatly exceeds the capabilities of NISQ era devices, and for some applications, the number of required logical qubits and operations is known. The quantum threshold theorem states that a quantum computer using error correction schemes, and a physical error rate below a certain threshold, can suppress the logical error rate to arbitrarily low levels.3-5 Therefore, one could run an algorithm with an arbitrarily long circuit depth, provided enough qubits are available to perform the required level of error correction. There is a large time overhead for performing logical operations at the error-corrected level relative to operations performed on physical qubits. For the foreseeable future, classical computers will have a clock rate that is orders of magnitude faster than error-corrected quantum computers. To determine the problem size at which a quantum computer will outperform a classical computer, one must consider both the algorithmic speed up and the relative difference between their associated clock rates. By making use of parallelization schemes, quantum computers can speed up the effective rate of logical operations at the cost of additional qubits, and so the ability to scale to large enough device sizes will also play a role in determining the feasibility of reaching desirable run times.
The surface code6-8 is the most researched error correction technique, particularly in regard to end-to-end physical resource estimation. There are many other error correction techniques available, and the best choice will likely depend on the underlying architecture's characteristics, such as the available physical qubit-qubit connectivity. Superconducting qubits are one of the leading quantum computing platforms, and these devices generally consist of static qubits arranged on a grid where only nearest neighbor interactions are natively available. Long-distance connections must be enabled by sequences of nearest neighbor swap operations, which, in the context of a NISQ device, may limit their computational power.9,10
The limited connectivity of superconducting qubits in part motivated the continued research into the surface code, which relies only on nearest neighbor interactions between physical qubits arranged on a 2D grid. In the following, we briefly introduce some of the alternative error correction techniques to the 2D surface code. Error correction codes that rely on global interactions at the physical level have favorable encoding rates as a function of code distance,11
but enabling this global connectivity on large devices may be challenging, or the connectivity overheads may outweigh the benefits relative to closer-range-connectivity codes. Entanglement distribution may be a viable method of enabling distant connectivity for large-scale devices with limited physical connectivity in the limit of large reserves of quantum memory.12
Higher dimensional error correction codes can have access to a greater range of transversal gates,13,14
which may considerably improve final run-times, where transversal implies that each qubit in a code block is acted on by at most a single physical gate, and each code block is corrected independently when an error occurs. Realizing this 3D (or greater) physical connectivity could be challenging for many of the current quantum platforms; photonic-interconnected modules may be the most flexible architecture with regard to their possible connectivity graph;15
however, currently achieved connection speeds would present a considerable bottleneck.16
A variant of the 3D surface code may still be realizable with hardware that is only scalable in two dimensions because the thickness (extra dimension) can be made relatively small and independent of code distance.17
In Sec. II, we highlight the surface code in more detail and include relevant considerations for physical resource estimation.
Here, we highlight some of the leading quantum computing platforms, their relevant error correction strategies, and contrast their rate of operations. The surface code is the front-running proposal for error correction in superconducting devices, and their associated code cycle times (the base sequence of hardware operations) have been estimated to be in the range of 0.2-10 μs.6,18 There is a great variety within different implementations of trapped ion architectures, particularly with regard to the method of enabling connectivity. A proposed scalable trapped ion design that relies on shuttling to enable connectivity and microwave-based gates has estimated the code cycle time to be 235 μs.19
For this shuttling-based design, alternative error correction protocols may be more effective than the surface code due to the variable mid-range connectivity that is possible, but in this work, we constrain ourselves to the surface code. Small trapped ion modules connected via photonic interconnects have been envisaged with the surface code,20
but due to their extremely flexible connectivity, higher dimensional error correction codes may one day be utilized. The code cycle time for trapped ions with photonic interconnects would depend on how the physical qubits are distributed across modules; one approach advocates for two tiers of encoding to minimize the use of the slower interconnects.21
Trapped ion architectures have achieved some of the highest gate fidelities to date;22-26
the lower the base physical error rate, the lower the code distance will need to be in the error correction protocol, and therefore, fewer physical qubits would be needed per logical qubit. A fault tolerant silicon-based architecture has been proposed using the surface code with code cycles estimated to be 1.2 ms.27
The error correction choice for photonic devices will depend on the underlying design; the primary candidate for a particular fault tolerant proposal28
is the Raussendorf-Harrington-Goyal (RHG) lattice.29,30
The degree to which an architecture is scalable will vary greatly between architecture types and within particular stages of development.
In this work, we provide physical resource estimates to achieve a quantum advantage with a quantum computer using the surface code. Using some of the latest time optimization strategies,18,31,32 we investigate the interplay between the code cycle time and the number of achievable physical qubits in the underlying architecture. In Secs. I A and I B, we introduce quantum algorithms that have been hypothesized to offer a disruptive quantum advantage for industry-relevant applications. We first provide a brief overview of quantum computing for chemistry and highlight the logical resource requirements for a quantum advantage use case, simulating the ground state of the FeMo-co molecule, which we use as the starting point for our investigation. We then calculate the number of physical qubits that are required to break the elliptic curve encryption of Bitcoin keys within the time frame in which it actually poses a threat to do so, as a function of the code cycle time.
A. Fault tolerant quantum chemistry
When numerically simulating quantum chemistry problems, typically, a set of independent functions, known as a basis set, is introduced to describe the physical wave function of the system. This introduction does not remedy the exponential increase in parameters with system size but enables one to balance the achievable accuracy against the required computational resources. Feynman was perhaps the first to envisage a quantum computer and its application to the simulation of physics and chemistry.33 It is now expected that quantum computers will eventually be able to perform electronic structure calculations with a quality of solution typical of the most accurate classical methods but with run times comparable to the approximate techniques, such as density functional theory.
The quantum phase estimation (QPE) algorithm generates eigenvalues for a general unitary operator, and it can be applied to quantum chemistry to find the eigenenergies of chemistry Hamiltonians to FCI (full configuration interaction, i.e., exact) precision. Unlike the variational quantum eigensolver (VQE),34 which involves many iterations [O(1/ϵ²) with accuracy ϵ] of low depth circuits, the QPE algorithm requires O(1) iterations of a circuit with a depth scaling as O(1/ϵ). The large depth required in the QPE algorithm means that it will only be possible with error corrected devices because NISQ devices would lose their coherence long before the end of the circuit.
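To give a feel for these ϵ-scalings, the toy comparison below sets all constant prefactors to one (the function names and the choice of ϵ are ours, purely illustrative, not taken from the cited references):

```python
def vqe_repetitions(epsilon: float) -> float:
    """Estimating an energy to accuracy epsilon by repeated sampling of
    shallow circuits costs O(1/epsilon^2) repetitions (prefactor set to 1)."""
    return 1.0 / epsilon**2

def qpe_depth(epsilon: float) -> float:
    """Textbook QPE reaches accuracy epsilon with O(1) runs of a single
    circuit whose depth scales as O(1/epsilon) (prefactor set to 1)."""
    return 1.0 / epsilon

eps = 1e-3  # a round illustrative target accuracy
print(f"VQE: ~{vqe_repetitions(eps):.0e} shallow circuit repetitions")
print(f"QPE: one circuit of depth ~{qpe_depth(eps):.0e}")
```

The quadratic gap in the two scalings is exactly why QPE needs coherence across one deep circuit while VQE trades that depth for a very large number of shallow runs.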
Hamiltonian simulation is used as a subroutine in the quantum phase estimation (QPE) algorithm, and it involves constructing a quantum circuit that approximates the evolution of the input state according to the Hamiltonian. Two of the main paradigms for Hamiltonian simulation are trotterization and qubitization. Qubitization35,36 can be used to simulate the Hamiltonian evolution by using quantum signal processing,37
but more commonly, it is used to generate a quantum walk38
upon which one can directly perform phase estimation. Qubitization is perhaps the most favored method for simulating chemistry Hamiltonian dynamics because it achieves the provably optimal scaling in query complexity and approximation error albeit while requiring more logical qubits than other methods.
Previous work has investigated the potential for quantum computers to provide a quantum advantage by performing ground state energy estimations on the catalytic complex known as FeMo-co.39-42 FeMo-co is a large molecule expressed in biology, which reduces N2 from the atmosphere, and a better understanding of this process could provide a significant commercial advantage by improving the efficiency of nitrogen fixation for the production of ammonia for fertilizer. In this work, we start our investigation with the latest logical resource requirements for simulating FeMo-co and calculate the number of physical qubits required to reach a desirable run time as a function of the code cycle time of the hardware.
B. Breaking Bitcoin’s encryption
Bitcoin, the first decentralized cryptocurrency, is continuing to grow in popularity. Bitcoin has properties that make it desirable as a hedge against inflation; for example, the rate of supply is known, decreases with time, and is entirely independent of demand. The decentralized nature of the blockchain makes it censorship resistant, and it can operate in a trustless manner. There are two main ways in which a quantum computer may pose a threat to the Bitcoin network.43,44 The first and least likely is the threat to the proof of work mechanism (mining), for which a quantum computer may achieve a quadratic speedup on the hashing of the SHA256 protocol with Grover's algorithm.45
The algorithmic speedup is unlikely to make up for the considerably slower clock cycle times relative to state of the art classical computing for the foreseeable future.44 The second and more serious threat would be an attack on the elliptic curve encryption of signatures. Bitcoin uses the Elliptic Curve Digital Signature Algorithm (ECDSA) that relies on the hardness of the Elliptic Curve Discrete Log Problem (ECDLP), and a modified version of Shor's algorithm46-48
can provide an exponential speedup using a quantum computer for solving this problem. Bitcoin uses ECDSA to convert between the public and private keys, which are used when performing transactions. With best practices (using new addresses for each transaction), the only point at which a public key is available and relevant to an eavesdropper is after a transaction has been broadcast to the network but prior to its acceptance within the blockchain. In this window, transactions wait in the "mempool" for an amount of time dependent on the fee paid; the time taken for this process is on average 10 min, but it can often take much longer. Gidney and Ekerå estimated that it would require 20 × 10^6 noisy qubits and 8 h to break the 2048-bit Rivest-Shamir-Adleman (RSA) encryption,31
which is of a comparable difficulty to the EC encryption of Bitcoin. The maximum acceptable run time for breaking Bitcoin's encryption makes it particularly well suited to our investigation into the cost of parallelization and the interplay between the code cycle time and scalability, which we present in Sec. III.
In Sec. II, we introduce considerations for error correction and provide an overview of the space and time optimizations within the surface code that we make use of in this work.
In this section, we briefly introduce quantum error correction in the context of resource estimation and explain some of the available strategies within a surface code setup, which are selected based upon a preference for space (physical qubit requirement) or time (final run time of the algorithm).
A. The available gate set
An important consideration for quantum error correction is the available logical gate set, which is generally more restricted than the underlying physical gate set. The Clifford gates are those that map Pauli operators onto other Pauli operators, and the set can be generated by various combinations of the set {H, CNOT, S}, where the S gate is the square root of the Pauli Z gate (S = Z^(1/2)). The Gottesman-Knill theorem49 states that any Clifford circuit of finite size can be simulated in polynomial time (efficiently) with a classical computer. The Clifford gate set in combination with any non-Clifford gate is sufficient for universal quantum computation, and two of the most commonly considered non-Clifford gates are the T gate (T = Z^(1/4)) and the Toffoli gate (control-control-not). Any arbitrary angle single qubit gate can be decomposed into long sequences of the fixed angle H and T gates, with chain length scaling with the desired precision as per the Solovay-Kitaev theorem,50
N_{H,T} = O(log^c(1/ϵ)), with c ≈ 3.97.    (1)
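As a rough illustration of this polylogarithmic Solovay-Kitaev cost (the function name and unit prefactor are our own assumptions; only the exponent c ≈ 3.97 is the value quoted by Dawson and Nielsen in Ref. 50):

```python
import math

def sk_sequence_length(epsilon: float, c: float = 3.97) -> float:
    """Arbitrary single-qubit rotations compile into O(log^c(1/epsilon))
    H and T gates; the constant prefactor is set to 1, so only the
    relative growth between two accuracies is meaningful."""
    return math.log2(1.0 / epsilon) ** c

# Squaring the precision (epsilon -> epsilon^2) doubles log(1/epsilon),
# so the sequence length grows only by a factor of 2^c ~ 16.
print(sk_sequence_length(1e-4))
print(sk_sequence_length(1e-8))
```

This is the key practical point: tightening the synthesis accuracy by many orders of magnitude multiplies the H/T chain length by a modest constant, not by 1/ϵ.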
The surface code has transversal access to the CNOT, and the H and S gates can be realized with a low overhead using other techniques.51
The T gate is not transversal in the surface code, and it must be effectuated using methods that incur a large space-time volume overhead relative to the other gates listed here. The T gate can be constructed by consuming a magic state, |T⟩ = (|0⟩ + e^(iπ/4)|1⟩)/√2,52
where the magic state can be produced with an error proportional to the physical error, independent of the code distance.53
To create a sufficiently high-quality magic state, a distillation protocol54,55
can be used, which essentially involves converting multiple low fidelity states into fewer higher fidelity states. Due to the high time cost associated with magic state distillation (the production and consumption), we can make a simplifying assumption that the time required to perform the T-gates effectively determines the final run time of an algorithm, as the relative cost of performing the Clifford gates fault-tolerantly is negligible. Some algorithms more naturally lend themselves to being expressed in terms of Toffoli gates, and there exist distinct specialized distillation factories to produce the magic states required to effectuate both of these non-Clifford operations. A Toffoli gate can be decomposed using 4 T gates,56
whereas the control-control-Z (CCZ) states normally produced for the Toffoli gate can be efficiently catalyzed into two T states.57
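The bookkeeping implied by these two decompositions can be sketched as follows (the function names are ours; the costs, 4 T per Toffoli from Ref. 56 and two T states per catalyzed CCZ state from Ref. 57, are the ones quoted above):

```python
import math

def ccz_state_demand(n_toffoli: int, n_t: int) -> int:
    """Magic-state demand counted in CCZ states: each Toffoli consumes
    one CCZ state, and each catalyzed CCZ state yields two T states
    (Ref. 57), so n_t T gates need ceil(n_t / 2) CCZ states."""
    return n_toffoli + math.ceil(n_t / 2)

def t_state_demand(n_toffoli: int, n_t: int) -> int:
    """Demand counted in T states when each Toffoli is instead
    decomposed into 4 T gates (Ref. 56)."""
    return 4 * n_toffoli + n_t

# A hypothetical circuit with 100 Toffoli gates and 10 plain T gates.
print(ccz_state_demand(100, 10))  # 105 CCZ states
print(t_state_demand(100, 10))    # 410 T states
```

Which count matters depends on the factory type chosen: a CCZ-factory setup is best compared in CCZ states, a T-factory setup in T states.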
B. Error correction and logical error rate
A logical qubit in the surface code consists of data qubits, which store the quantum information, and ancillary qubits, which are used for stabilizer measurements that nondestructively extract error information from the data qubits. The distance, d, of a code represents the minimum size of physical error that can lead to a logical error, and in the surface code, the number of physical qubits per logical qubit scales as 2d^2. The logical error rate per logical qubit, pL, per code cycle as a function of the base physical error rate, p, can be approximated by58

pL = 0.1(100p)^((d+1)/2).    (2)
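The approximation of Ref. 58, pL ≈ 0.1(100p)^((d+1)/2), directly yields the code distance required for a given computation; a minimal sketch, assuming p is safely below the 1% threshold (function names and the example target are ours):

```python
def logical_error_rate(p: float, d: int) -> float:
    """Per-logical-qubit, per-code-cycle logical error rate,
    pL = 0.1 * (100 p)^((d + 1) / 2)  (Fowler and Gidney, Ref. 58)."""
    return 0.1 * (100.0 * p) ** ((d + 1) / 2)

def required_distance(p: float, target_pl: float) -> int:
    """Smallest odd code distance whose logical error rate falls at or
    below target_pl (only meaningful for p below threshold)."""
    d = 3
    while logical_error_rate(p, d) > target_pl:
        d += 2
    return d

p = 1e-3                            # base physical error rate of 10^-3
print(logical_error_rate(p, 9))     # ~1e-6 per logical qubit per cycle
print(required_distance(p, 3e-12))  # d = 21
```

Note how each increment of d by 2 suppresses pL by a factor of 100p (here, a factor of 10), which is why modest distance increases buy exponentially longer computations.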
The efficiency of the error protection decreases as the base physical error rate approaches, from below, the threshold of the code (here assumed to be 1%). For feasible final resource estimates, the base physical error rate will need to be close to or below 10^-3, where the necessary code distance is chosen based upon both the base physical error rate and the length of the desired computation. The physical error model here is the assumption that each physical operation has a probability p of also introducing a random Pauli error. The achieved gate fidelity (the go-to metric for experimentalists) cannot be directly converted with confidence to the physical error rate without further information. The cause of the gate infidelity, i.e., where it exists on the spectrum between the two extremes of depolarizing error (decoherent noise) and unitary error (coherent noise), will determine the corresponding gate error rate. The best-case situation is a one-to-one mapping between the gate error (one minus the fidelity) and the depolarizing error rate, and this has often been an assumption in the experimentally focused literature. A measure of gate fidelity alone cannot determine the unitarity of the noise, i.e., the relative contribution of coherent and decoherent errors. Coherent errors, such as an over rotation of an intended gate, can positively interfere with each other and, therefore, cause worse errors than those that are decoherent. The worst case scaling of the physical error rate, p, with gate fidelity F and dimension of gate D is p = D√(1 − F).59 To illustrate this worst case scenario on noise quality: to guarantee an error rate below 1%, a gate fidelity of 99.9995% would be required.59 To determine where on the unitarity spectrum the actual hardware noise exists, protocols based on randomized benchmarking can be used.60,61 This information can then be used to estimate the base physical error rate with confidence.59,62
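The gap between the two ends of the unitarity spectrum can be made concrete. Assuming the worst-case relation takes the form p = D√(1 − F), our reading of Ref. 59 that reproduces the 99.9995% example quoted above, a short sketch:

```python
import math

def depolarizing_error(fidelity: float) -> float:
    """Best case: purely decoherent (depolarizing) noise, where the
    physical error rate maps one-to-one to the infidelity 1 - F."""
    return 1.0 - fidelity

def worst_case_error(fidelity: float, dim: int) -> float:
    """Worst case: fully coherent noise, p = D * sqrt(1 - F) for a gate
    acting on a D-dimensional space (our reading of Ref. 59)."""
    return dim * math.sqrt(1.0 - fidelity)

# Two-qubit gate (D = 4) at the fidelity quoted in the text.
F = 0.999995
print(depolarizing_error(F))   # ~5e-6
print(worst_case_error(F, 4))  # ~8.9e-3, i.e. just below the 1% target
```

The three-orders-of-magnitude spread between the two numbers is exactly why a fidelity figure alone, without unitarity information from randomized benchmarking, cannot pin down the error rate that enters Eq. (2).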
C. Code cycle, reaction time, and measurement depth
The code cycle is the base unit of operation in the surface code, and it involves performing a full round of stabilizer measurements. As all operations in the computer are assumed to be subject to errors, including the stabilizer measurement process, the code cycle needs to be repeated d times before corrections can be applied. We will refer to the time it takes to perform these d rounds of code cycles as a "beat," where many surface code operations will have a time cost measured in beats. A fault tolerant quantum computer using the surface code can be envisaged as partitioned into two sections: data blocks that consume magic states to effectuate T gates for the desired algorithm and distillation blocks that produce high fidelity magic states. Each of these constructs in the surface code consists of a number of logical qubits, sometimes referred to as tiles, and each tile contains the ~2d^2 physical qubits of a logical qubit. The data block has a size scaling with the number of abstract qubits required for the algorithm, and this then sets the minimum required number of physical qubits when paired with a single magic state factory and given the code distance.
The T (or Toffoli) gates of an algorithm can be arranged into layers of gates (measurement layers), where all of the gates within a layer could potentially be executed simultaneously. Measurement layers are sometimes instead referred to as T layers when the relevant non-Clifford gate is the T gate, and the measurement (T) depth is the number of measurement layers in the algorithm. When a magic state is consumed by the data block, a Pauli product measurement is performed to determine whether an error has occurred, so that a (Clifford) correction can be applied if required. The algorithm cannot proceed to the next measurement layer until all of the necessary corrections have been applied in the current layer, and this process requires a classical computation (decoding and feed-forward). The characteristic time cost that includes the quantum measurement, classical communication, and classical computation is referred to as the "reaction time, RT." It is conjectured that the fastest an error-corrected quantum algorithm can run, i.e., the time optimal limit,63 is by spending only one reaction time per measurement layer, independent of code distance, and we will refer to this as reaction limited. In the case of superconducting devices that have relatively fast physical gates and measurements, the reaction time may be dominated by the classical communication and computation. A reaction time of 10 μs has been used in recent resource estimation work,31
as compared to the 1 μs code cycle time, which requires both physical two qubit operations and measurements. In this work, we have defined the reaction time (RT) as a fixed function of the code cycle time (CC), unless otherwise stated. This assumption is motivated by the fact that, generally, the quantum measurement within the code cycle represents only a fraction of the total time; for example, for the shuttling-based trapped ion architecture with a code cycle time of 235 μs, the quantum measurement is estimated to represent a small fraction of that total time. We include a code cycle independent additional cost to represent the (somewhat) hardware independent classical processing. With our resource estimation tool, which is available upon request, one could recreate the results of this paper with a different relationship between the code cycle time and reaction time. If a particular surface code setup contained only a single distillation factory, then the rate of computation would likely not be limited by the reaction time but instead by the rate of magic state production. We refer to the regime of being limited by magic state production as "tick" limited. There are then three relevant regimes for surface code strategies, which are separated by the limiting factor of the computation rate. Beat limited implies that the limiting factor is the rate of magic state consumption by the data block; tick limited, the rate of magic state production by the distillation blocks; and reaction limited, where the conjectured time-optimal limit is reached and one reaction time is spent per measurement layer.
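The three regimes can be sketched as simple rate bounds. All function names, the one-magic-state-per-beat consumption model, and the example parameters below are our illustrative assumptions, not results taken from the text:

```python
def beat_limited_time(t_count: int, d: int, code_cycle: float) -> float:
    """Run time when the data block's consumption rate dominates,
    assuming one magic state consumed per beat of d code cycles."""
    return t_count * d * code_cycle

def tick_limited_time(t_count: int, d: int, code_cycle: float,
                      n_factories: int, beats_per_state: float) -> float:
    """Run time when magic-state production dominates: each of
    n_factories emits one state every beats_per_state beats."""
    return t_count * beats_per_state * d * code_cycle / n_factories

def reaction_limited_time(measurement_depth: int, reaction_time: float) -> float:
    """Conjectured time-optimal limit (Ref. 63): one reaction time per
    measurement layer, independent of code distance."""
    return measurement_depth * reaction_time

# Hypothetical example: 10^9 T gates in 10^6 measurement layers,
# d = 27, a 1 us code cycle, and a 10 us reaction time.
print(beat_limited_time(10**9, 27, 1e-6))  # ~27000 s
print(reaction_limited_time(10**6, 1e-5))  # ~10 s
```

The gap between the first and last numbers is the run time that the parallelization strategies compared in this work spend additional qubits to close.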
In this work, we utilize and compare two distinct strategies of incrementally trading qubits for run-time up to the reaction limit and introduce them later in this section.
D. Distillation and topological errors
When choosing a surface code setup, one must decide upon the acceptable final success probability, where, in principle, a success probability greater than 50% would be sufficient to reach the desired precision by repeating the computation. The acceptable success probability then allows one an additional method of trading space for time, as the lower the acceptable success probability, the lower the various code distances would need to be. There are two contributions to the probability of failure: the topological error associated with the data block and the total distillation error. The topological error is the chance for at least a single logical error within the data block across the entirety of the algorithm, which can be calculated with the product of the number of logical qubits, the number of code cycles required for the algorithm, and the logical error rate, where the logical error rate is defined in Eq. (2) by the base physical error rate and the code distance on the data block. The total distillation error corresponds to the probability of at least a single faulty magic state, which is calculated by the product of the total required number of requi