In [1]:

```
import os
import sys
import time
import json
import pygsti
from pygsti.modelpacks.legacy import std1Q_XYI
%pylab inline
```

Populating the interactive namespace from numpy and matplotlib

In [2]:

```
#Get a GST estimate (similar to Tutorial 0)
# 1) get the target Model
target_model = std1Q_XYI.target_model()

# 2) get the building blocks needed to specify which operation sequences are needed
prep_fiducials, meas_fiducials = std1Q_XYI.prepStrs, std1Q_XYI.effectStrs
germs = std1Q_XYI.germs
maxLengths = [1, 2, 4, 8, 16]

# 3) generate "fake" data from a depolarized version of target_model
mdl_datagen = target_model.depolarize(op_noise=0.1, spam_noise=0.001)
listOfExperiments = pygsti.construction.create_lsgst_circuits(
    target_model, prep_fiducials, meas_fiducials, germs, maxLengths)
ds = pygsti.construction.simulate_data(mdl_datagen, listOfExperiments, nSamples=1000,
                                       sampleError="binomial", seed=1234)
results = pygsti.run_stdpractice_gst(ds, target_model, prep_fiducials, meas_fiducials,
                                     germs, maxLengths, modes="TP")
estimated_model = results.estimates['TP'].models['stdgaugeopt']
```

Here we perform parametric bootstrapping, as indicated by the 'parametric' argument below. The output is eventually stored in the "mean" and "std" Models, which hold, respectively, the mean and standard deviation of the set of bootstrapped models (after gauge optimization). It is this latter "standard deviation Model" that holds the collection of error bars. Note: due to print-precision settings, the outputs printed here will not necessarily reflect the true accuracy of the estimates.
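The idea can be sketched in miniature. The toy example below uses plain NumPy and a single coin-bias parameter (it is illustrative only, not the pyGSTi API): fit a model to the observed data, then repeatedly simulate *new* datasets from the fitted model, re-estimate on each, and take the spread of the re-estimates as the error bar.

```
import numpy as np

rng = np.random.default_rng(0)
n_flips = 1000

# "Observed" data: number of heads from a biased coin
observed_heads = rng.binomial(n_flips, 0.6)
p_hat = observed_heads / n_flips   # point estimate -- plays the role of estimated_model

# Parametric bootstrap: simulate new datasets from the *fitted* model (p_hat),
# re-estimate on each, and take the standard deviation of the re-estimates.
n_boot = 200
boot_estimates = rng.binomial(n_flips, p_hat, size=n_boot) / n_flips
error_bar = boot_estimates.std(ddof=1)
```

Here `error_bar` should land near the analytic standard error sqrt(p(1-p)/n), about 0.015 for these numbers.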

In [3]:

```
#The number of simulated datasets & models made for bootstrapping purposes.
# For good statistics, should probably be greater than 10.
numGatesets = 10
param_boot_models = pygsti.drivers.make_bootstrap_models(
    numGatesets, ds, 'parametric', prep_fiducials, meas_fiducials, germs, maxLengths,
    inputModel=estimated_model, startSeed=0, returnData=False,
    verbosity=2)
```

In [4]:

```
gauge_opt_pboot_models = pygsti.drivers.gauge_optimize_model_list(
    param_boot_models, estimated_model, plot=False)  # plotting support removed w/matplotlib
```

In [5]:

```
pboot_mean = pygsti.drivers.to_mean_model(gauge_opt_pboot_models, estimated_model)
pboot_std = pygsti.drivers.to_std_model(gauge_opt_pboot_models, estimated_model)
#Summary of the error bars
print("Parametric bootstrapped error bars, with", numGatesets, "resamples\n")
print("Error in rho vec:")
print(pboot_std['rho0'], end='\n\n')
print("Error in effect vecs:")
print(pboot_std['Mdefault'], end='\n\n')
print("Error in Gi:")
print(pboot_std['Gi'], end='\n\n')
print("Error in Gx:")
print(pboot_std['Gx'], end='\n\n')
print("Error in Gy:")
print(pboot_std['Gy'])
```

Here we perform non-parametric bootstrapping, as indicated by the 'nonparametric' argument below. As before, the output is eventually stored in the "mean" and "std" Models, which hold, respectively, the mean and standard deviation of the set of bootstrapped models (after gauge optimization); the "standard deviation Model" again holds the collection of error bars. Note: due to print-precision settings, the outputs printed here will not necessarily reflect the true accuracy of the estimates.
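In miniature (again a toy NumPy example, not the pyGSTi API), the non-parametric variant resamples the *observed* data with replacement rather than simulating from the fitted model:

```
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data: individual 0/1 outcomes of a biased coin
outcomes = rng.binomial(1, 0.6, size=1000)
p_hat = outcomes.mean()

# Non-parametric bootstrap: resample the observed outcomes with replacement,
# re-estimate on each resampled dataset, and take the spread of re-estimates.
n_boot = 200
boot_estimates = np.array([
    rng.choice(outcomes, size=outcomes.size, replace=True).mean()
    for _ in range(n_boot)
])
error_bar = boot_estimates.std(ddof=1)
```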

(Technical note: ddof = 1 is used by default when computing the standard deviation -- see numpy.std -- meaning that we compute the sample standard deviation, which divides by N-1, rather than the population standard deviation, which divides by N.)
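For example, the two conventions differ only in the divisor (N vs. N-1):

```
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

pop_std = np.std(x, ddof=0)    # divide by N   (numpy's default)
samp_std = np.std(x, ddof=1)   # divide by N-1 (what the bootstrap code uses)

# samp_std is always slightly larger, correcting the downward bias that comes
# from estimating the mean from the same sample.
```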

In [6]:

```
#The number of simulated datasets & models made for bootstrapping purposes.
# For good statistics, should probably be greater than 10.
numModels = 10
nonparam_boot_models = pygsti.drivers.make_bootstrap_models(
    numModels, ds, 'nonparametric', prep_fiducials, meas_fiducials, germs, maxLengths,
    targetModel=estimated_model, startSeed=0, returnData=False, verbosity=2)
```

In [7]:

```
gauge_opt_npboot_models = pygsti.drivers.gauge_optimize_model_list(
    nonparam_boot_models, estimated_model, plot=False)  # plotting removed w/matplotlib
```

In [8]:

```
npboot_mean = pygsti.drivers.to_mean_model(gauge_opt_npboot_models, estimated_model)
npboot_std = pygsti.drivers.to_std_model(gauge_opt_npboot_models, estimated_model)
#Summary of the error bars
print("Non-parametric bootstrapped error bars, with", numModels, "resamples\n")
print("Error in rho vec:")
print(npboot_std['rho0'], end='\n\n')
print("Error in effect vecs:")
print(npboot_std['Mdefault'], end='\n\n')
print("Error in Gi:")
print(npboot_std['Gi'], end='\n\n')
print("Error in Gx:")
print(npboot_std['Gx'], end='\n\n')
print("Error in Gy:")
print(npboot_std['Gy'])
```

In [9]:

```
loglog(npboot_std.to_vector(), pboot_std.to_vector(), '.')
loglog(np.logspace(-4, -2, 10), np.logspace(-4, -2, 10), '--')
xlabel('Non-parametric')
ylabel('Parametric')
xlim((1e-4, 1e-2)); ylim((1e-4, 1e-2))
title('Scatter plot comparing param vs. non-param bootstrapping error bars.')
```

Out[9]:

Text(0.5,1,'Scatter plot comparing param vs. non-param bootstrapping error bars.')
