This tutorial is an overview of randomized benchmarking (RB) in pyGSTi. There are multiple flavours of RB, each with different strengths and weaknesses. pyGSTi contains end-to-end methods for Clifford RB, Direct RB, and Mirror RB:
from __future__ import print_function #python 2 & 3 compatibility
import pygsti
First, we specify the device to be benchmarked, so that pyGSTi can create circuits that use only the native gates in the device (including respecting the device's connectivity). We do this using a ProcessorSpec object (see the ProcessorSpec tutorial for details). Here we'll demonstrate RB on a device with:
n_qubits = 5
qubit_labels = ['Q0','Q1','Q2','Q3','Q4']
gate_names = ['Gxpi2', 'Gxmpi2', 'Gypi2', 'Gympi2', 'Gcphase']
availability = {'Gcphase':[('Q0','Q1'), ('Q1','Q2'), ('Q2','Q3'), ('Q3','Q4'),('Q4','Q0')]}
pspec = pygsti.obj.ProcessorSpec(n_qubits, gate_names, availability=availability,
                                 qubit_labels=qubit_labels, construct_models=('clifford',))
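As an aside, the ring connectivity written out by hand above can also be generated programmatically. A minimal sketch (plain Python, no pyGSTi required; it just reproduces the same labels and pairs):

```python
# Build the 5-qubit ring connectivity for the two-qubit gate programmatically,
# rather than writing the pairs out by hand.
n_qubits = 5
qubit_labels = ['Q{}'.format(i) for i in range(n_qubits)]

# Couple each qubit to its neighbour, with wrap-around to close the ring.
ring_pairs = [(qubit_labels[i], qubit_labels[(i + 1) % n_qubits])
              for i in range(n_qubits)]
availability = {'Gcphase': ring_pairs}

print(availability['Gcphase'])
# -> [('Q0', 'Q1'), ('Q1', 'Q2'), ('Q2', 'Q3'), ('Q3', 'Q4'), ('Q4', 'Q0')]
```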
All RB methods require a set of "RB depths" ($m$) and a number of circuits ($k$) to sample at each depth. For all RB methods in pyGSTi, we use a convention where the smallest allowed RB depth is $m=0$. So, in the case of Clifford RB on $n$ qubits, $m$ is the number of (uncompiled) $n$-qubit Clifford gates in the sequence minus two.
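To make the depth convention concrete: under the convention above, a depth-$m$ Clifford RB circuit contains $m+2$ uncompiled $n$-qubit Clifford gates in total ($m+1$ random Cliffords plus the final inversion Clifford), so $m=0$ is one random Clifford followed by its inverse. A trivial sketch of this bookkeeping:

```python
def num_clifford_gates(m):
    """Total number of (uncompiled) n-qubit Clifford gates in a Clifford RB
    circuit at pyGSTi depth m: m + 1 random Cliffords plus the final
    inversion Clifford."""
    assert m >= 0, "pyGSTi's smallest allowed RB depth is m = 0"
    return m + 2

# The depth-0 circuit is just a random Clifford followed by its inverse.
print(num_clifford_gates(0))   # -> 2
print(num_clifford_gates(16))  # -> 18
```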
We can also specify the qubits to be benchmarked (if this is not specified then it defaults to holistic benchmarking of all the qubits). Here, we'll create an experiment for running 2-qubit Clifford RB on qubits 'Q0' and 'Q1'.
depths = [0,1,2,4,8,16,32,64]
k = 10
qubits = ['Q0','Q1']
# To run direct / mirror RB change CliffordRBDesign -> DirectRBDesign / MirrorRBDesign
exp_design = pygsti.protocols.CliffordRBDesign(pspec, depths, k, qubit_labels=qubits)
Next, we just follow the instructions in the experiment design to collect data from the quantum processor. In this example we'll generate the data using a depolarizing noise model, since we don't have a real quantum processor lying around. The call to the simulate_taking_data function below should be replaced by the user filling out the empty "template" dataset file with real data. Note also that we set clobber_ok=True; this is just so the tutorial can be run multiple times without having to manually remove the dataset.txt file. We recommend leaving it set to False (the default) in your own scripts.
def simulate_taking_data(data_template_filename):
    """Simulate taking 2-qubit data and filling the results into a template dataset.txt file"""
    pspec = pygsti.obj.ProcessorSpec(n_qubits, gate_names, availability=availability,
                                     qubit_labels=qubit_labels, construct_models=('TP',))
    noisemodel = pspec.models['TP'].copy()
    for gate in noisemodel.operation_blks['gates'].values():
        gate.depolarize(0.001)
    pygsti.io.fill_in_empty_dataset_with_fake_data(noisemodel, data_template_filename,
                                                   num_samples=1000, seed=1234)
pygsti.io.write_empty_protocol_data(exp_design, '../tutorial_files/test_rb_dir', clobber_ok=True)
# -- fill in the dataset file in tutorial_files/test_rb_dir/data/dataset.txt --
simulate_taking_data('../tutorial_files/test_rb_dir/data/dataset.txt') # REPLACE with actual data-taking
data = pygsti.io.load_data_from_dir('../tutorial_files/test_rb_dir')
Now we just instantiate an RB protocol and .run it on our data object. This involves converting the data to the success/fail format of RB and then fitting it to an exponential decay ($P_m = A + Bp^m$, where $P_m$ is the average success probability at RB depth $m$). The .run method returns a results object that can be used to plot decay curves and display error rates.
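The exponential-decay fit itself is simple enough to sketch without pyGSTi. Below is an illustrative, pure-Python example of recovering $p$ from noiseless synthetic data $P_m = A + Bp^m$, assuming the asymptote $A = 1/4$ is known (for two qubits). This is not pyGSTi's fitter, which does a proper least-squares fit to noisy data with bootstrapped error bars:

```python
# Illustrative sketch only: recover the RB decay constant p from exact
# synthetic success probabilities P_m = A + B * p**m.
A_true, B_true, p_true = 0.25, 0.7, 0.98  # A = 1/4 is the 2-qubit asymptote
depths = [0, 1, 2, 4, 8, 16, 32, 64]
P = [A_true + B_true * p_true**m for m in depths]

# With the asymptote fixed at A, consecutive integer depths give
# p = (P_{m+1} - A) / (P_m - A), since the B factor cancels.
p_est = (P[1] - A_true) / (P[0] - A_true)
print(round(p_est, 6))  # -> 0.98
```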
# To run Mirror RB, set datatype = 'adjusted_success_probabilities' in this init.
protocol = pygsti.protocols.RB()
results = protocol.run(data)
ws = pygsti.report.Workspace()
ws.init_notebook_mode(autodisplay=True)
ws.RandomizedBenchmarkingPlot(results)
By default, pyGSTi uses an RB error rate ($r$) convention whereby
$$ r = \frac{(4^n - 1)(1 - p)}{4^n}, $$
where $n$ is the number of qubits (here, 2) and $p$ is the estimated decay constant obtained from fitting to $P_m = A + Bp^m$. This approximately corresponds to the mean entanglement infidelity of the benchmarked gate set (modulo some subtleties). A common alternative convention is to define $r$ by
$$ r = \frac{(2^n - 1)(1 - p)}{2^n}. $$
In this case, $r$ approximately corresponds to the mean average gate infidelity of the benchmarked gate set (modulo the same subtleties). This alternative convention can be obtained by setting the optional argument rtype = 'AGI' when initializing an RB protocol.
We use the entanglement infidelity convention as the default because it is more convenient when comparing RB error rates obtained from benchmarking different numbers of qubits (the entanglement fidelity of a tensor product of gates is the product of the constituent fidelities).
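Since both conventions are simple prefactors on $(1-p)$, one can always convert between them after the fact. A small sketch (plain Python; the function names are ours, not pyGSTi's):

```python
def r_entanglement_infidelity(p, n):
    """RB error rate in pyGSTi's default (entanglement infidelity) convention:
    r = (4**n - 1)(1 - p) / 4**n."""
    d2 = 4**n
    return (d2 - 1) * (1 - p) / d2

def r_average_gate_infidelity(p, n):
    """RB error rate in the alternative rtype='AGI' convention:
    r = (2**n - 1)(1 - p) / 2**n."""
    d = 2**n
    return (d - 1) * (1 - p) / d

# For 2 qubits the two conventions differ by a constant factor of
# (15/16) / (3/4) = 5/4, independent of the decay constant p.
p, n = 0.98, 2
ratio = r_entanglement_infidelity(p, n) / r_average_gate_infidelity(p, n)
print(round(ratio, 9))  # -> 1.25
```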
# We can also access the estimated error rate directly without plotting the decay
r = results.fits['full'].estimates['r']
rstd = results.fits['full'].stds['r']
rAfix = results.fits['A-fixed'].estimates['r']
rAfixstd = results.fits['A-fixed'].stds['r']
print("r = {0:1.2e} +/- {1:1.2e} (fit with a free asymptote)".format(r, 2*rstd))
print("r = {0:1.2e} +/- {1:1.2e} (fit with the asymptote fixed to 1/2^n)".format(rAfix, 2*rAfixstd))