This tutorial contains a few details on how to run Clifford Randomized Benchmarking that are not covered in the RB overview tutorial.
By Clifford randomized benchmarking we mean RB of the $n$-qubit Clifford group, as defined by Magesan et al. in *Scalable and Robust Randomized Benchmarking of Quantum Processes*. This protocol is routinely run on 1 and 2 qubits.
from __future__ import print_function #python 2 & 3 compatibility
import pygsti
The only aspects of running Clifford RB with pyGSTi that are not covered in the RB overview tutorial are some subtleties in generating a Clifford RB experiment design (and what those subtleties mean for interpreting the results). To cover these subtleties, here we go through the inputs used to generate a Clifford RB experiment design in more detail.
The first inputs to create an RB experiment design are the same as in all RB protocols, and these are covered in the RB overview tutorial. They are:

- The device to benchmark (`pspec`).
- The RB depths (`depths`). For Clifford RB on $n$ qubits, the RB depth is the number of (uncompiled) $n$-qubit Clifford gates in the sequence minus two. This convention is chosen so that zero is the minimum RB depth for all RB methods in pyGSTi.
- The number of circuits at each depth (`k`).
- The qubits to benchmark (`qubits`).

All other arguments to the Clifford RB experiment design generation function are optional.
n_qubits = 4
qubit_labels = ['Q0','Q1','Q2','Q3']
gate_names = ['Gxpi2', 'Gxmpi2', 'Gypi2', 'Gympi2', 'Gcphase']
availability = {'Gcphase':[('Q0','Q1'), ('Q1','Q2'), ('Q2','Q3'), ('Q3','Q0')]}
pspec = pygsti.obj.ProcessorSpec(n_qubits, gate_names, availability=availability,
qubit_labels=qubit_labels, construct_models=('clifford',))
depths = [0,1,2,4,8]
k = 10
qubits = ['Q0','Q1']
In the standard formulation of Clifford RB, the circuit should always return the all-zeros bit-string if there are no errors. But it can be useful to randomize the "target" bit-string (e.g., then the asymptote in the RB decay is fixed to $1/2^n$ even with biased measurement errors). This randomization is specified via the `randomizeout` argument, which defaults to `False` (the standard protocol).
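To see why the asymptote matters, consider the standard RB decay model $P(m) = A + B p^m$. The sketch below is purely illustrative (the function name `rb_decay` and the parameter values are invented for this example, not pyGSTi API); with randomized target bit-strings the asymptote $A$ is pinned to $1/2^n$:

```python
def rb_decay(m, A, B, p):
    """Expected success probability at RB depth m, under the
    standard exponential-decay model P(m) = A + B * p**m."""
    return A + B * p**m

n = 2          # number of qubits benchmarked (illustrative)
A = 1 / 2**n   # asymptote fixed to 1/2^n when randomizeout=True
B = 1 - A      # assumes perfect state preparation and readout
p = 0.98       # per-Clifford depolarizing parameter (illustrative)

# Success probabilities at the depths used in this tutorial.
probs = [rb_decay(m, A, B, p) for m in [0, 1, 2, 4, 8]]
```

With these parameters the depth-0 success probability is exactly 1, and the curve decays toward $1/2^n = 0.25$ rather than toward a value distorted by biased measurement errors.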
randomizeout = True
To generate a Clifford RB circuit in terms of native gates, it is necessary to decompose each $n$-qubit Clifford gate into the native gates. pyGSTi has a few different Clifford gate compilation algorithms, which can be accessed via the `compilerargs` optional argument. Note: the Clifford RB error rate is compiler-dependent! So it is not possible to properly interpret the Clifford RB error rate without understanding at least some aspects of the compilation algorithm (e.g., the mean two-qubit gate count in a compiled $n$-qubit Clifford circuit). This is one of the reasons that Direct RB is arguably a preferable method to Clifford RB.
None of the Clifford compilation algorithms in pyGSTi are a simple look-up table with some optimized property (e.g., minimized two-qubit gate count or depth). Look-up tables like this are typically used for 1- and 2-qubit Clifford RB experiments, but we instead use a method that scales to many qubits.
There are multiple compilation algorithms in pyGSTi, and the algorithm can be set using the `compilerargs` argument (see the `pygsti.algorithms.compile_clifford` function for some details on the available algorithms, and the `CliffordRBDesign` docstring for how to specify the desired algorithm). The default algorithm is the one that we estimate to be our "best" algorithm in the regime of 1 to roughly 20 qubits. This algorithm (and some of the other algorithms) is randomized. So when creating a `CliffordRBDesign` you can also specify the number of randomizations, via `citerations`. Increasing this will reduce the average depth and two-qubit gate count of each $n$-qubit Clifford gate, up to a point, making Clifford RB feasible on more qubits. But note that the time to generate the circuits can increase quickly as `citerations` increases (because a depth $m$ circuit contains $m+2$ $n$-qubit Clifford gates to compile).
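The effect of `citerations` can be pictured with a toy best-of-$N$ model. This is purely illustrative: it is not pyGSTi's compiler, and the gate-count distribution below is invented. Keeping the best of $N$ random compilation attempts lowers the expected two-qubit gate count, with diminishing returns as $N$ grows:

```python
import random

def best_of(citerations, rng):
    """Toy model: each compilation attempt yields a circuit whose
    two-qubit gate count is drawn uniformly from [50, 150]; we keep
    the cheapest of `citerations` attempts."""
    return min(rng.randint(50, 150) for _ in range(citerations))

rng = random.Random(0)
trials = 200
for cits in [1, 5, 20]:
    avg = sum(best_of(cits, rng) for _ in range(trials)) / trials
    print(cits, round(avg, 1))
```

The average cost drops sharply going from 1 to 5 attempts but much less from 5 to 20, while the compilation time grows linearly in the number of attempts; this is the tradeoff `citerations` controls.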
citerations = 20
From here, everything proceeds as in the RB overview tutorial (except for adding in the optional arguments).
# Here we construct an error model with 1% local depolarization on each qubit after each gate.
gate_error_rate = 0.01
def simulate_taking_data(data_template_filename):
    """Simulate taking data and filling the results into a template dataset.txt file"""
    pspec = pygsti.obj.ProcessorSpec(n_qubits, gate_names, availability=availability,
                                     qubit_labels=qubit_labels, construct_models=('TP',))
    noisemodel = pspec.models['TP'].copy()
    for gate in noisemodel.operation_blks['gates'].values():
        if gate.dim == 16:  # two-qubit gates (superoperator dimension 4^2)
            gate.depolarize(1 - pygsti.tools.rbtools.r_to_p(1 - (1 - gate_error_rate)**2, 4))
        if gate.dim == 4:   # one-qubit gates (superoperator dimension 2^2)
            gate.depolarize(1 - pygsti.tools.rbtools.r_to_p(gate_error_rate, 2))
    pygsti.io.fill_in_empty_dataset_with_fake_data(noisemodel, data_template_filename, num_samples=1000, seed=1234)
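The `r_to_p` conversion used above maps an RB error rate $r$ to a depolarizing parameter $p$. A minimal standalone sketch (not pyGSTi's implementation), assuming the common convention $r = (1-p)(d-1)/d$ for a $d$-dimensional system:

```python
def r_to_p(r, d):
    """Convert an RB error rate r to a depolarizing parameter p,
    assuming the convention r = (1 - p) * (d - 1) / d."""
    return 1 - r * d / (d - 1)

gate_error_rate = 0.01

# One-qubit gates (d = 2).
p1 = r_to_p(gate_error_rate, 2)

# Two-qubit gates (d = 4): the target error rate is that of two
# independent 1% depolarizations, 1 - (1 - 0.01)**2.
p2 = r_to_p(1 - (1 - gate_error_rate)**2, 4)
```

The amount passed to `depolarize` in the cell above is then $1 - p$, so each gate's RB error rate matches the intended 1%-per-qubit error model.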
design = pygsti.protocols.CliffordRBDesign(pspec, depths, k, qubit_labels=qubits, randomizeout=randomizeout,
citerations=citerations)
pygsti.io.write_empty_protocol_data(design, '../tutorial_files/test_rb_dir', clobber_ok=True)
# -- fill in the dataset file in tutorial_files/test_rb_dir/data/dataset.txt --
simulate_taking_data('../tutorial_files/test_rb_dir/data/dataset.txt') # REPLACE with actual data-taking
data = pygsti.io.load_data_from_dir('../tutorial_files/test_rb_dir')
protocol = pygsti.protocols.RB()
results = protocol.run(data)
ws = pygsti.report.Workspace()
ws.init_notebook_mode(autodisplay=True)
ws.RandomizedBenchmarkingPlot(results)