The pygsti package provides multiple levels of abstraction over the core Gate Set Tomography (GST) algorithms. This tutorial will show you how to work with pygsti's highest level of abstraction, its "driver functions", to run GST algorithms with a minimal amount of effort. Running a GST algorithm requires three essential ingredients: 1) data specifying the experimental outcomes, 2) a desired, or "target", GateSet, and 3) multiple lists of gate sequences, specifying the gate sequences to use at each successive step of the GST optimization. There are currently only a few driver routines, which we'll cover in turn. Each driver function returns a single pygsti.objects.Results object, which contains the single input DataSet and one or more estimates (pygsti.objects.Estimate objects).
[The abbreviation "LSGST" (lowercase in function names to follow Python naming conventions) stands for "Long Sequence Gate Set Tomography", and refers to the more powerful flavor of GST that utilizes long sequences to find gate set estimates. LSGST can be compared to Linear GST, or "LGST", which only uses short sequences and as a result provides much less accurate estimates.]
from __future__ import print_function
import pygsti
First, we set our desired "target gateset" to be the standard I, X, Y gate set that we've been using throughout the tutorials, and pull in the fiducial and germ sequences needed to generate the GST gate sequences. We also specify a list of maximum lengths. We'll analyze the simulated data generated in the data sets tutorial.
from pygsti.construction import std1Q_XYI
gs_target = std1Q_XYI.gs_target
prep_fiducials, meas_fiducials = std1Q_XYI.prepStrs, std1Q_XYI.effectStrs
germs = std1Q_XYI.germs
maxLengths = [1,2,4,8,16,32]
ds = pygsti.io.load_dataset("tutorial_files/Example_Dataset.txt", cache=True)
Loading tutorial_files/Example_Dataset.txt: 100%
Writing cache file (to speed future loads): tutorial_files/Example_Dataset.txt.cache
do_long_sequence_gst

This driver function finds what is logically a single GST estimate given a DataSet, a target GateSet, and other parameters. We say "logically" because the returned Results object may actually contain multiple related estimates in certain cases. Most important among the other parameters are the fiducial and germ sequences and the list of maximum lengths needed to define a standard set of GST gate sequence lists.
results = pygsti.do_long_sequence_gst(ds, gs_target, prep_fiducials, meas_fiducials, germs, maxLengths)
--- Gate Sequence Creation ---
1702 sequences created
Dataset has 3382 entries: 1702 utilized, 0 requested sequences were missing
--- LGST ---
Singular values of I_tilde (truncating to first 4 of 6) =
4.245030583357433
1.1797105733752997
0.956497891831113
0.9423535266759971
0.04708902142849769
0.015314932955168444
Singular values of target I_tilde (truncating to first 4 of 6) =
4.242640687119285
1.4142135623730954
1.4142135623730947
1.4142135623730945
3.1723744950054595e-16
1.0852733691121267e-16
--- Iterative MLGST: Iter 1 of 6  92 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 40.9959 (92 data params - 44 model params = expected mean of 48; p-value = 0.752996)
  Completed in 0.1s
  2*Delta(log(L)) = 41.1735
  Iteration 1 took 0.2s
--- Iterative MLGST: Iter 2 of 6  168 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 119.003 (168 data params - 44 model params = expected mean of 124; p-value = 0.609957)
  Completed in 0.2s
  2*Delta(log(L)) = 119.399
  Iteration 2 took 0.4s
--- Iterative MLGST: Iter 3 of 6  450 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 415.116 (450 data params - 44 model params = expected mean of 406; p-value = 0.366587)
  Completed in 0.5s
  2*Delta(log(L)) = 415.822
  Iteration 3 took 0.8s
--- Iterative MLGST: Iter 4 of 6  862 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 813.352 (862 data params - 44 model params = expected mean of 818; p-value = 0.539288)
  Completed in 0.9s
  2*Delta(log(L)) = 815.138
  Iteration 4 took 1.6s
--- Iterative MLGST: Iter 5 of 6  1282 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 1251.63 (1282 data params - 44 model params = expected mean of 1238; p-value = 0.387312)
  Completed in 1.6s
  2*Delta(log(L)) = 1254.02
  Iteration 5 took 2.5s
--- Iterative MLGST: Iter 6 of 6  1702 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 1747.78 (1702 data params - 44 model params = expected mean of 1658; p-value = 0.0613829)
  Completed in 2.5s
  2*Delta(log(L)) = 1750.66
  Iteration 6 took 3.9s
Switching to ML objective (last iteration)
--- MLGST ---
  Maximum log(L) = 875.077 below upper bound of -2.84675e+06
    2*Delta(log(L)) = 1750.15 (1702 data params - 44 model params = expected mean of 1658; p-value = 0.0566961)
  Completed in 3.3s
  2*Delta(log(L)) = 1750.15
Final MLGST took 3.3s
Iterative MLGST Total Time: 12.7s
-- Adding Gauge Optimized (go0) --
# A summary of what's inside a Results object is obtained by printing it
# (for more examples of how to use a Results object, see the Results tutorial)
print(results)
----------------------------------------------------------
---------------- pyGSTi Results Object -------------------
----------------------------------------------------------

How to access my contents:

 .dataset   -- the DataSet used to generate these results

 .gatestring_lists   -- a dict of GateString lists w/keys:
 ---------------------------------------------------------
  iteration
  final
  all
  iteration delta
  prep fiducials
  effect fiducials
  germs

 .gatestring_structs   -- a dict of GatestringStructures w/keys:
 ---------------------------------------------------------
  iteration
  final

 .estimates   -- a dictionary of Estimate objects:
 ---------------------------------------------------------
  default
The above example supplies the minimal amount of information required to run the long-sequence GST algorithm. do_long_sequence_gst can be used in a variety of contexts and accepts additional (optional) arguments that affect the way the algorithm is run. Here we make several remarks regarding alternate or more advanced usage of do_long_sequence_gst.
For many of the arguments, you can supply either a filename or a Python object (e.g. a data set, target gate set, or gate string list), so if you find yourself loading things from files just to pass them in as arguments, you're probably working too hard.
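As a sketch of this filename-or-object convention (a generic pattern for illustration, not pyGSTi's actual loading code; the helper name and loader are hypothetical), a driver can simply branch on the argument's type:

```python
def as_dataset(arg, loader):
    """Accept either a filename (str) or an already-loaded object.

    loader is whatever function parses the file; a stand-in here.
    """
    if isinstance(arg, str):
        return loader(arg)   # strings are treated as filenames and loaded
    return arg               # anything else is assumed to be the object itself

# A pre-built object passes straight through unchanged:
ds_obj = {"counts": [10, 90]}
assert as_dataset(ds_obj, loader=str.upper) is ds_obj
# A string is handed to the loader:
assert as_dataset("abc", loader=str.upper) == "ABC"
```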
Typically we want to apply certain constraints to a GST optimization. As mentioned in the gate set tutorial, the space over which a gate-set estimation is carried out is dictated by the parameterization of the targetGateset argument. For example, to constrain a GST estimate to be trace-preserving, one should call set_all_parameterizations("TP") on the target GateSet before calling do_long_sequence_gst.
The gaugeOptParams argument specifies a dictionary of parameters ultimately to be passed to the gaugeopt_to_target function (which provides full documentation). By specifying an itemWeights argument we can set the ratio of the state preparation and measurement (SPAM) weighting to the gate weighting when performing a gauge optimization. In the example below, the gate parameters are weighted 1000 times more heavily than the SPAM parameters. Mathematically, this corresponds to a multiplicative factor of 0.001 preceding the sum-of-squared-difference terms corresponding to SPAM elements in the gate set. Typically it is good to weight the gate parameters more heavily, since GST amplifies gate parameter errors via long gate sequences but cannot amplify SPAM parameter errors. If unsure, 0.001 is a good value to start with. For more details on the arguments of gaugeopt_to_target, see the previous tutorial on low-level algorithms.
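To make the itemWeights arithmetic concrete, here is a minimal sketch of a weighted sum-of-squares objective (this is an illustration of the math, not pyGSTi's implementation; the element labels and helper function are hypothetical):

```python
def weighted_objective(sq_diffs, item_weights):
    """Sum squared differences from the target, scaling each term by its itemWeights entry.

    sq_diffs: dict mapping an element label to its sum-of-squared-differences
              from the target element (labels here are made-up examples).
    """
    spam_labels = {"rho0", "E0"}  # hypothetical SPAM element labels
    total = 0.0
    for label, sq in sq_diffs.items():
        weight = item_weights["spam"] if label in spam_labels else item_weights["gates"]
        total += weight * sq
    return total

# With {'gates': 1.0, 'spam': 0.001}, a SPAM term of 2.0 contributes only 0.002,
# while an equal gate term contributes the full 2.0:
obj = weighted_objective({"Gx": 2.0, "rho0": 2.0}, {"gates": 1.0, "spam": 0.001})
# obj == 2.002
```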
The call below illustrates all three of these remarks.
gs_target_TP = gs_target.copy() #make a copy so we don't change gs_target's parameterization,
# since this could be confusing later...
gs_target_TP.set_all_parameterizations("TP") #constrain GST estimate to TP
results_TP = pygsti.do_long_sequence_gst("tutorial_files/Example_Dataset.txt", gs_target_TP,
prep_fiducials, meas_fiducials, germs, maxLengths,
gaugeOptParams={'itemWeights': {'gates': 1.0, 'spam': 0.001}})
Loading from cache file: tutorial_files/Example_Dataset.txt.cache
--- Gate Sequence Creation ---
1702 sequences created
Dataset has 3382 entries: 1702 utilized, 0 requested sequences were missing
--- LGST ---
Singular values of I_tilde (truncating to first 4 of 6) =
4.245030583357433
1.1797105733752997
0.956497891831113
0.9423535266759971
0.04708902142849769
0.015314932955168444
Singular values of target I_tilde (truncating to first 4 of 6) =
4.242640687119285
1.4142135623730954
1.4142135623730947
1.4142135623730945
3.1723744950054595e-16
1.0852733691121267e-16
--- Iterative MLGST: Iter 1 of 6  92 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 41.0771 (92 data params - 31 model params = expected mean of 61; p-value = 0.976519)
  Completed in 0.1s
  2*Delta(log(L)) = 41.2329
  Iteration 1 took 0.2s
--- Iterative MLGST: Iter 2 of 6  168 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 119.288 (168 data params - 31 model params = expected mean of 137; p-value = 0.859758)
  Completed in 0.3s
  2*Delta(log(L)) = 119.601
  Iteration 2 took 0.4s
--- Iterative MLGST: Iter 3 of 6  450 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 415.46 (450 data params - 31 model params = expected mean of 419; p-value = 0.539658)
  Completed in 0.5s
  2*Delta(log(L)) = 415.96
  Iteration 3 took 0.9s
--- Iterative MLGST: Iter 4 of 6  862 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 814.34 (862 data params - 31 model params = expected mean of 831; p-value = 0.65359)
  Completed in 1.1s
  2*Delta(log(L)) = 815.742
  Iteration 4 took 1.9s
--- Iterative MLGST: Iter 5 of 6  1282 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 1252.69 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.481202)
  Completed in 1.3s
  2*Delta(log(L)) = 1254.49
  Iteration 5 took 2.6s
--- Iterative MLGST: Iter 6 of 6  1702 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 1748.76 (1702 data params - 31 model params = expected mean of 1671; p-value = 0.0907206)
  Completed in 2.1s
  2*Delta(log(L)) = 1750.76
  Iteration 6 took 3.8s
Switching to ML objective (last iteration)
--- MLGST ---
  Maximum log(L) = 875.357 below upper bound of -2.84675e+06
    2*Delta(log(L)) = 1750.71 (1702 data params - 31 model params = expected mean of 1671; p-value = 0.0854932)
  Completed in 3.9s
  2*Delta(log(L)) = 1750.71
Final MLGST took 3.9s
Iterative MLGST Total Time: 13.7s
-- Adding Gauge Optimized (go0) --
do_long_sequence_gst_base

This performs the same analysis as do_long_sequence_gst except that it allows the user to fully specify the list of gate sequences, as either a list of lists of GateString objects or a list of GateStringStructure objects (the latter allows the structured plotting of the sequences in report figures). In this example, we'll just generate a standard set of structures, but with some of the sequences randomly dropped (see later tutorials on gate string reduction). Note that like do_long_sequence_gst, do_long_sequence_gst_base is able to take filenames as arguments and accepts a gaugeOptParams argument for customizing the gauge optimization that is performed.
#Create the same sequences but drop 50% of them randomly for each repeated-germ block.
lsgst_structs = pygsti.construction.make_lsgst_structs(gs_target, prep_fiducials, meas_fiducials,
germs, maxLengths, keepFraction=0.5, keepSeed=2018)
results_reduced = pygsti.do_long_sequence_gst_base(ds, gs_target, lsgst_structs)
--- LGST ---
Singular values of I_tilde (truncating to first 4 of 6) =
4.245030583357433
1.1797105733752997
0.956497891831113
0.9423535266759971
0.04708902142849769
0.015314932955168444
Singular values of target I_tilde (truncating to first 4 of 6) =
4.242640687119285
1.4142135623730954
1.4142135623730947
1.4142135623730945
3.1723744950054595e-16
1.0852733691121267e-16
--- Iterative MLGST: Iter 1 of 6  92 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 40.9959 (92 data params - 44 model params = expected mean of 48; p-value = 0.752996)
  Completed in 0.1s
  2*Delta(log(L)) = 41.1735
  Iteration 1 took 0.2s
--- Iterative MLGST: Iter 2 of 6  132 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 85.9728 (132 data params - 44 model params = expected mean of 88; p-value = 0.541264)
  Completed in 0.2s
  2*Delta(log(L)) = 86.209
  Iteration 2 took 0.3s
--- Iterative MLGST: Iter 3 of 6  284 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 247.628 (284 data params - 44 model params = expected mean of 240; p-value = 0.353874)
  Completed in 0.4s
  2*Delta(log(L)) = 247.985
  Iteration 3 took 0.6s
--- Iterative MLGST: Iter 4 of 6  493 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 424.418 (493 data params - 44 model params = expected mean of 449; p-value = 0.792017)
  Completed in 0.6s
  2*Delta(log(L)) = 425.347
  Iteration 4 took 1.0s
--- Iterative MLGST: Iter 5 of 6  705 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 651.657 (705 data params - 44 model params = expected mean of 661; p-value = 0.594769)
  Completed in 0.9s
  2*Delta(log(L)) = 653
  Iteration 5 took 1.6s
--- Iterative MLGST: Iter 6 of 6  917 gate strings ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 903.446 (917 data params - 44 model params = expected mean of 873; p-value = 0.230793)
  Completed in 1.7s
  2*Delta(log(L)) = 905.041
  Iteration 6 took 3.0s
Switching to ML objective (last iteration)
--- MLGST ---
  Maximum log(L) = 452.388 below upper bound of -1.53074e+06
    2*Delta(log(L)) = 904.776 (917 data params - 44 model params = expected mean of 873; p-value = 0.22144)
  Completed in 2.4s
  2*Delta(log(L)) = 904.776
Final MLGST took 2.4s
Iterative MLGST Total Time: 9.2s
-- Adding Gauge Optimized (go0) --
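The keepFraction behavior used above can be sketched generically as seeded random subsampling (this illustrates the idea only; it is not pyGSTi's internal code, and the helper name is hypothetical):

```python
import random

def keep_fraction(sequences, keep_frac, seed):
    """Return a reproducible random subset keeping roughly keep_frac of the items."""
    rnd = random.Random(seed)  # a fixed seed yields the same subset on every run
    return [s for s in sequences if rnd.random() < keep_frac]

subset = keep_fraction(list(range(100)), 0.5, seed=2018)
# Reproducibility: the same seed always selects the same subset,
# which is why a keepSeed argument makes dropped-sequence experiments repeatable.
assert subset == keep_fraction(list(range(100)), 0.5, seed=2018)
```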
do_stdpractice_gst

This driver function calls do_long_sequence_gst multiple times using typical variations in gauge optimization parameters and GateSet parameterization. This function provides a clean and simple interface for performing a "usual" set of GST analyses on a set of data. As such, it takes a single DataSet, gate-sequence-specifying parameters similar to those of do_long_sequence_gst, and a new modes argument, which is a comma-separated list of "canned" GST analysis types (e.g. "TP,CPTP" will compute a Trace-Preserving estimate and a Completely-Positive & Trace-Preserving estimate). The modes "TP", "CPTP", and "Target" are used in the examples below.
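Since modes is just a comma-separated string, splitting it recovers the individual analysis types. This is a sketch of the convention (the parse_modes helper is hypothetical, not pyGSTi's parsing code):

```python
def parse_modes(modes):
    """Split a comma-separated modes string into individual analysis-type names."""
    return [m.strip() for m in modes.split(",") if m.strip()]

assert parse_modes("TP,CPTP,Target") == ["TP", "CPTP", "Target"]
assert parse_modes("TP, CPTP") == ["TP", "CPTP"]  # stray spaces stripped in this sketch
```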
The gauge optimization(s) do_stdpractice_gst performs are controlled by its gaugeOptSuite and gaugeOptTarget arguments. The former can be either a string, specifying a standard "suite" of gauge optimizations, or a dictionary of dictionaries similar to the gaugeOptParams argument of do_long_sequence_gst (see the docstring). The gaugeOptTarget argument may be set to a GateSet that is used as the target for gauge optimization, overriding the (typically ideal) target gates given by the targetGateFilenameOrSet argument.
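When gaugeOptSuite is a dictionary of dictionaries, each key names one gauge optimization to perform and each value holds its parameters. A sketch of how a driver might walk such a suite (hypothetical helper, not pyGSTi's code; the labels become the names of the resulting gauge-optimized gate sets):

```python
def run_gauge_opt_suite(suite, gaugeopt_fn):
    """Run one gauge optimization per suite entry; keys label the results."""
    return {label: gaugeopt_fn(**params) for label, params in suite.items()}

# A stand-in gaugeopt_fn that just records which parameters it received:
suite = {"myGO": {"itemWeights": {"gates": 1.0, "spam": 0.001}}}
results = run_gauge_opt_suite(suite, gaugeopt_fn=lambda **kw: sorted(kw))
assert list(results.keys()) == ["myGO"]          # suite keys label the outputs
assert results["myGO"] == ["itemWeights"]        # each entry's params were passed through
```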
results_stdprac = pygsti.do_stdpractice_gst(ds, gs_target, prep_fiducials, meas_fiducials, germs, maxLengths,
modes="TP,CPTP,Target") #uses the default suite of gauge-optimizations
-- Std Practice: Iter 1 of 3 (TP) --:
  --- Gate Sequence Creation ---
  --- LGST ---
  --- Iterative MLGST: [##################################################] 100.0% 1702 gate strings ---
  Iterative MLGST Total Time: 14.2s
  -- Performing 'single' gauge optimization on TP estimate --
-- Std Practice: Iter 2 of 3 (CPTP) --:
  --- Gate Sequence Creation ---
  --- Iterative MLGST: [##################################################] 100.0% 1702 gate strings ---
  Iterative MLGST Total Time: 21.9s
  -- Performing 'single' gauge optimization on CPTP estimate --
-- Std Practice: Iter 3 of 3 (Target) --:
  --- Gate Sequence Creation ---
  -- Performing 'single' gauge optimization on Target estimate --
print("Estimates: ", ", ".join(results_stdprac.estimates.keys()))
print("TP Estimate's gauge optimized gate sets: ", ", ".join(results_stdprac.estimates["TP"].goparameters.keys()))
Estimates: TP, CPTP, Target
TP Estimate's gauge optimized gate sets: single
Next, we'll perform the same analysis but with a non-default standard suite of gauge optimizations. This one toggles the SPAM penalty in addition to varying the SPAM weight (the default suite just varies the SPAM weight without any SPAM penalty).
results_stdprac_nondefaultgo = pygsti.do_stdpractice_gst(
ds, gs_target, prep_fiducials, meas_fiducials, germs, maxLengths,
modes="TP,CPTP,Target", gaugeOptSuite="varySpam")
-- Std Practice: Iter 1 of 3 (TP) --:
  --- Gate Sequence Creation ---
  --- LGST ---
  --- Iterative MLGST: [##################################################] 100.0% 1702 gate strings ---
  Iterative MLGST Total Time: 11.6s
  -- Performing 'Spam 0.0001' gauge optimization on TP estimate --
  -- Performing 'Spam 0.1' gauge optimization on TP estimate --
  -- Performing 'Spam 0.0001+v' gauge optimization on TP estimate --
  -- Performing 'Spam 0.1+v' gauge optimization on TP estimate --
-- Std Practice: Iter 2 of 3 (CPTP) --:
  --- Gate Sequence Creation ---
  --- Iterative MLGST: [##################################################] 100.0% 1702 gate strings ---
  Iterative MLGST Total Time: 17.3s
  -- Performing 'Spam 0.0001' gauge optimization on CPTP estimate --
  -- Performing 'Spam 0.1' gauge optimization on CPTP estimate --
  -- Performing 'Spam 0.0001+v' gauge optimization on CPTP estimate --
  -- Performing 'Spam 0.1+v' gauge optimization on CPTP estimate --
-- Std Practice: Iter 3 of 3 (Target) --:
  --- Gate Sequence Creation ---
  -- Performing 'Spam 0.0001' gauge optimization on Target estimate --
  -- Performing 'Spam 0.1' gauge optimization on Target estimate --
  -- Performing 'Spam 0.0001+v' gauge optimization on Target estimate --
  -- Performing 'Spam 0.1+v' gauge optimization on Target estimate --
print("Estimates: ", ", ".join(results_stdprac_nondefaultgo.estimates.keys()))
print("TP Estimate's gauge optimized gate sets: ", ", ".join(results_stdprac_nondefaultgo.estimates["TP"].goparameters.keys()))
Estimates: TP, CPTP, Target
TP Estimate's gauge optimized gate sets: Spam 0.0001, Spam 0.1, Spam 0.0001+v, Spam 0.1+v
Finally, we'll demonstrate how to specify a fully custom set of gauge optimization parameters and how to use a separately-specified target gate set for gauge optimization.
my_goparams = { 'itemWeights': {'gates': 1.0, 'spam': 0.001} }
my_gaugeOptTarget = gs_target.depolarize(gate_noise=0.005, spam_noise=0.01) # a guess at what the estimate should be
results_stdprac_customgo = pygsti.do_stdpractice_gst(
ds, gs_target, prep_fiducials, meas_fiducials, germs, maxLengths,
modes="TP,CPTP,Target", gaugeOptSuite={ 'myGO': my_goparams }, gaugeOptTarget=my_gaugeOptTarget)
-- Std Practice: Iter 1 of 3 (TP) --:
  --- Gate Sequence Creation ---
  --- LGST ---
  --- Iterative MLGST: [##################################################] 100.0% 1702 gate strings ---
  Iterative MLGST Total Time: 7.7s
  -- Performing 'myGO' gauge optimization on TP estimate --
-- Std Practice: Iter 2 of 3 (CPTP) --:
  --- Gate Sequence Creation ---
  --- Iterative MLGST: [##################################################] 100.0% 1702 gate strings ---
  Iterative MLGST Total Time: 14.1s
  -- Performing 'myGO' gauge optimization on CPTP estimate --
-- Std Practice: Iter 3 of 3 (Target) --:
  --- Gate Sequence Creation ---
  -- Performing 'myGO' gauge optimization on Target estimate --
print("Estimates: ", ", ".join(results_stdprac_customgo.estimates.keys()))
print("TP Estimate's gauge optimized gate sets: ", ", ".join(results_stdprac_customgo.estimates["TP"].goparameters.keys()))
Estimates: TP, CPTP, Target
TP Estimate's gauge optimized gate sets: myGO
To finish up, we'll pickle the results for processing in subsequent tutorials.
import pickle
pickle.dump(results, open('tutorial_files/exampleResults.pkl',"wb"))
pickle.dump(results_TP, open('tutorial_files/exampleResults_TP.pkl',"wb"))
pickle.dump(results_reduced, open('tutorial_files/exampleResults_reduced.pkl',"wb"))
pickle.dump(results_stdprac, open('tutorial_files/exampleResults_stdprac.pkl',"wb"))
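To load these results back in a later session, the standard pickle round trip applies. Shown here with a temporary file and a plain stand-in object, since the Results objects above exist only after running GST:

```python
import os
import pickle
import tempfile

payload = {"estimate": "TP"}  # stand-in for a Results object
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump(payload, f)   # same dump call used in the tutorial

with open(f.name, "rb") as f2:  # reload, as a "later session" would
    restored = pickle.load(f2)
os.remove(f.name)               # clean up the temporary file

assert restored == payload
```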