The pygsti package provides multiple levels of abstraction over the core Gate Set Tomography (GST) algorithms. This initial tutorial shows how to work with pygsti's highest level of abstraction to get you started using GST quickly. Subsequent tutorials delve into pygsti's objects and algorithms, and how to use them, in more detail.
The do_long_sequence_gst driver function

Let's first look at how to use do_long_sequence_gst, which combines all the steps of running typical GST algorithms into a single function.
#Make print statements compatible with Python 2 and 3
from __future__ import print_function
#Set matplotlib backend: the ipython "inline" backend cannot pickle
# figures, which is required for generating pyGSTi reports
import matplotlib
matplotlib.use('Agg')
#Import the pygsti module (always do this)
import pygsti
Gate sets and other pygsti objects are constructed using routines within pygsti.construction, and so we construct a gateset by calling pygsti.construction.build_gateset:
#Construct a target gateset
gs_target = pygsti.construction.build_gateset([2], [('Q0',)], ['Gi', 'Gx', 'Gy'],
                                              ["I(Q0)", "X(pi/2,Q0)", "Y(pi/2,Q0)"],
                                              prepLabels=['rho0'], prepExpressions=["0"],
                                              effectLabels=['E0'], effectExpressions=["1"],
                                              spamdefs={'plus': ('rho0', 'E0'),
                                                        'minus': ('rho0', 'remainder')})
The arguments to build_gateset specify that:

- the state space has dimension 2 (i.e. the density matrix is 2x2);
- this 2-dimensional space is interpreted as that of a single qubit labeled "Q0" (qubit labels must begin with 'Q');
- there are three gates: idle, $\pi/2$ x-rotation, and $\pi/2$ y-rotation, labeled Gi, Gx, and Gy;
- there is one state-preparation operation, labeled rho0, which prepares the 0-state (the first basis element of the 2D state space);
- there is one POVM effect (~ measurement outcome), labeled E0, which projects onto the 1-state (the second basis element of the 2D state space);
- the outcome "prepare rho0, then measure E0" is named plus;
- the outcome "prepare rho0, then measure anything other than E0" is named minus.
Reading from and writing to files is done mostly via routines in pygsti.io. To store this gateset in a file (for reference, or to load it somewhere else), you just call pygsti.io.write_gateset:
#Write it to a file
pygsti.io.write_gateset(gs_target, "tutorial_files/MyTargetGateset.txt")
#To load the gateset back into a python object, do:
# gs_target = pygsti.io.load_gateset("tutorial_files/MyTargetGateset.txt")
Next, we need three lists that specify which experiments GST will use in its estimation procedure; they depend on the target gateset as well as the expected quality of the qubit being measured. They are:

- fiducial gate strings (fiducials): gate sequences that immediately follow state preparation or immediately precede measurement.
- germ gate strings (germs): gate sequences that are repeated to produce a string as close to some "maximum length" as possible without exceeding it.
- maximum lengths (maxLengths): a list of maximum lengths used to specify the increasingly long gate sequences (via more germ repetitions) used by each iteration of the GST estimation procedure.
To make GST most effective, these gate string lists should be computed. Typically this computation is done by the Sandia GST folks and the gate string lists are sent to you, though there is preliminary support within pygsti for computing these string lists directly. Here, we'll assume we have been given the lists. The maximum-lengths list typically starts with [0,1] and then contains successive powers of two; the largest maximum length should roughly correspond to the number of gates one's qubit can perform before becoming depolarized beyond one's ability to measure anything other than the maximally mixed state. Since we're constructing gate string lists, the routines used are in pygsti.construction:
#Create fiducial gate string lists
fiducials = pygsti.construction.gatestring_list( [ (), ('Gx',), ('Gy',), ('Gx','Gx'), ('Gx','Gx','Gx'), ('Gy','Gy','Gy') ])
#Create germ gate string lists
germs = pygsti.construction.gatestring_list(
    [('Gx',), ('Gy',), ('Gi',), ('Gx', 'Gy'),
     ('Gx', 'Gy', 'Gi'), ('Gx', 'Gi', 'Gy'), ('Gx', 'Gi', 'Gi'), ('Gy', 'Gi', 'Gi'),
     ('Gx', 'Gx', 'Gi', 'Gy'), ('Gx', 'Gy', 'Gy', 'Gi'),
     ('Gx', 'Gx', 'Gy', 'Gx', 'Gy', 'Gy')])
#Create maximum lengths list
maxLengths = [0,1,2,4,8,16,32]
If we want to, we can save these lists in files (but this is not necessary):
pygsti.io.write_gatestring_list("tutorial_files/MyFiducials.txt", fiducials, "My fiducial gate strings")
pygsti.io.write_gatestring_list("tutorial_files/MyGerms.txt", germs, "My germ gate strings")
import pickle
pickle.dump( maxLengths, open("tutorial_files/MyMaxLengths.pkl", "wb"))
# To load these back into python lists, do:
#fiducials = pygsti.io.load_gatestring_list("tutorial_files/MyFiducials.txt")
#germs = pygsti.io.load_gatestring_list("tutorial_files/MyGerms.txt")
#maxLengths = pickle.load( open("tutorial_files/MyMaxLengths.pkl", "rb"))  #note: binary mode
Before experimental data is obtained, it is useful to create a "template" dataset file which specifies which gate sequences are required to run GST. Since we don't actually have an experiment for this example, we'll generate some "fake" experimental data from a set of gates that are just depolarized versions of the targets. First we construct the list of experiments used by GST using make_lsgst_experiment_list, and use the result to specify which experiments to simulate. The abbreviation "LSGST" (lowercase in function names to follow Python naming conventions) stands for "Long Sequence Gate Set Tomography", and refers to the more powerful flavor of GST that utilizes long sequences to find gate set estimates. LSGST can be compared to Linear GST, or "LGST", which uses only short sequences and as a result provides much less accurate estimates.
#Create a list of GST experiments for this gateset, with
#the specified fiducials, germs, and maximum lengths
listOfExperiments = pygsti.construction.make_lsgst_experiment_list(gs_target.gates.keys(),
fiducials, fiducials, germs, maxLengths)
#Create an empty dataset file, which stores the list of experiments
#plus extra columns where data can be inserted
pygsti.io.write_empty_dataset("tutorial_files/MyDataTemplate.txt", listOfExperiments,
"## Columns = plus count, count total")
Since we don't actually have an experiment to generate real data, let's now create and save a dataset using depolarized target gates and SPAM operations:
#Create a gateset of depolarized gates and SPAM relative to target, and generate fake data using this gateset.
gs_datagen = gs_target.depolarize(gate_noise=0.1, spam_noise=0.001)
ds = pygsti.construction.generate_fake_data(gs_datagen, listOfExperiments, nSamples=1000,
sampleError="binomial", seed=2015)
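With sampleError="binomial", each "plus" count is drawn from a binomial distribution whose success probability is the outcome probability predicted by the data-generating gateset. A rough standard-library stand-in for that sampling step (the probabilities below are made up; pygsti computes them from gs_datagen, and fake_counts is an invented name):

```python
import random

def fake_counts(prob_plus, n_samples, seed=2015):
    """Simulate binomially sampled 'plus' counts, one per experiment."""
    rng = random.Random(seed)  # seeded for reproducibility, as in the tutorial
    counts = []
    for p in prob_plus:
        # A sum of n_samples Bernoulli(p) trials is Binomial(n_samples, p).
        counts.append(sum(1 for _ in range(n_samples) if rng.random() < p))
    return counts

predicted = [0.5, 0.73, 0.91]  # hypothetical predicted probabilities
print(fake_counts(predicted, n_samples=1000))
```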
We could at this point just use the generated dataset directly, but let's save it as though it were a file filled with experimental results.
#Save our dataset
pygsti.io.write_dataset("tutorial_files/MyDataset.txt", ds)
#Note; to load the dataset back again, do:
#ds = pygsti.io.load_dataset("tutorial_files/MyDataset.txt")
Now we're all set to call the driver routine. All of the possible arguments to this function are detailed in its docstring, so here we just make a few remarks:

- For many of the arguments, you can supply either a filename or a python object (e.g. a dataset, target gateset, or gate string list).
- fiducials is supplied twice since the state-preparation fiducials (those sequences following a state prep) need not be the same as the measurement fiducials (those sequences preceding a measurement).
- Typically we want to constrain the resulting gates to be trace-preserving (TP). This is accomplished by the call to set_all_parameterizations("TP"), which restricts gs_target to describing only TP gates (see more on this in later tutorials).
- gaugeOptParams specifies a dictionary of parameters ultimately passed to the gaugeopt_to_target function (whose docstring provides full documentation). Here we set the ratio of the state-preparation-and-measurement (SPAM) weighting to the gate weighting used when performing gauge optimization. With the values below, the gate parameters are weighted 1000 times more heavily than the SPAM parameters; mathematically this corresponds to a multiplicative factor of 0.001 on the sum-of-squared-difference terms for SPAM elements in the gateset. It is typically good to weight the gate parameters more heavily, since GST amplifies gate-parameter errors via long gate sequences but cannot amplify SPAM-parameter errors. If unsure, 0.001 is a good value to start with.
gs_target.set_all_parameterizations("TP")
results = pygsti.do_long_sequence_gst("tutorial_files/MyDataset.txt", gs_target,
fiducials, fiducials, germs, maxLengths,
gaugeOptParams={'itemWeights': {'spam': 1e-3, 'gates': 1.0}})
Loading tutorial_files/MyDataset.txt: 100%
Writing cache file (to speed future loads): tutorial_files/MyDataset.txt.cache
--- LGST ---
 Singular values of I_tilde (truncating to first 4 of 6) =
  4.24630617076  1.15648754295  0.95999512855  0.941028780281  0.0452423314618  0.0262637861406
 Singular values of target I_tilde (truncating to first 4 of 6) =
  4.24264068712  1.41421356237  1.41421356237  1.41421356237  3.3483200348e-16  2.72548486209e-16
--- Iterative MLGST: Iter 1 of 7  92 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 61.3616 (92 data params - 31 model params = expected mean of 61; p-value = 0.462934)
 Completed in 0.1s
 2*Delta(log(L)) = 61.4796
 Iteration 1 took 0.1s
--- Iterative MLGST: Iter 2 of 7  92 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 61.3616 (92 data params - 31 model params = expected mean of 61; p-value = 0.462934)
 Completed in 0.0s
 2*Delta(log(L)) = 61.4796
 Iteration 2 took 0.0s
--- Iterative MLGST: Iter 3 of 7  168 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 127.73 (168 data params - 31 model params = expected mean of 137; p-value = 0.702867)
 Completed in 0.1s
 2*Delta(log(L)) = 127.973
 Iteration 3 took 0.1s
--- Iterative MLGST: Iter 4 of 7  441 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 387.654 (441 data params - 31 model params = expected mean of 410; p-value = 0.779828)
 Completed in 0.1s
 2*Delta(log(L)) = 387.414
 Iteration 4 took 0.1s
--- Iterative MLGST: Iter 5 of 7  817 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 778.59 (817 data params - 31 model params = expected mean of 786; p-value = 0.567748)
 Completed in 0.2s
 2*Delta(log(L)) = 777.903
 Iteration 5 took 0.2s
--- Iterative MLGST: Iter 6 of 7  1201 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 1204.48 (1201 data params - 31 model params = expected mean of 1170; p-value = 0.235874)
 Completed in 0.3s
 2*Delta(log(L)) = 1204.08
 Iteration 6 took 0.3s
--- Iterative MLGST: Iter 7 of 7  1585 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 1585.47 (1585 data params - 31 model params = expected mean of 1554; p-value = 0.283448)
 Completed in 0.4s
 2*Delta(log(L)) = 1585.17
 Iteration 7 took 0.5s
Switching to ML objective (last iteration)
--- MLGST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Maximum log(L) = 792.537 below upper bound of -2.65052e+06
 2*Delta(log(L)) = 1585.07 (1585 data params - 31 model params = expected mean of 1554; p-value = 0.28582)
 Completed in 0.3s
 2*Delta(log(L)) = 1585.07
 Final MLGST took 0.3s
Iterative MLGST Total Time: 1.6s
#A Results object can be pickled and restored like any other python object:
import pickle
s = pickle.dumps(results)
r2 = pickle.loads(s)
print(r2.gatesets['final estimate'])
rho0 =    0.7071
          0.0005
         -0.0043
          0.7049

E0 =    0.7073
       -0.0008
       -0.0052
       -0.7065

Gi =
   1.0000        0        0        0
  -0.0006   0.9009  -0.0040   0.0023
  -0.0004   0.0010   0.8982  -0.0007
  -0.0002  -0.0023   0.0002   0.8948

Gx =
   1.0000        0        0        0
  -0.0004   0.8986  -0.0012  -0.0013
  -0.0003   0.0020  -0.0039  -0.9021
  -0.0003  -0.0019   0.9021   0.0032

Gy =
   1.0000        0        0        0
  -0.0002   0.0015  -0.0049   0.9022
   0.0003  -0.0016   0.8976  -0.0056
   0.0004  -0.9022   0.0023   0.0014
The analysis routine returns a pygsti.report.Results object, which encapsulates intermediate and final GST estimates as well as quantities derived from these "raw" estimates. (The object also caches derived quantities so that repeated queries for the same quantities do not require recalculation.) Finally, a Results object can generate reports and presentations containing many of the raw and derived GST results. We give examples of these uses below.
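The caching behavior mentioned above can be pictured with a toy stand-in (plain Python; pygsti's Results class is far more elaborate, and the names here are invented):

```python
class CachedResults:
    """Toy illustration of compute-once caching of derived quantities."""
    def __init__(self):
        self._cache = {}
        self.computations = 0  # counts how often we actually recompute

    def get(self, key, compute_fn):
        # Compute and store on first request; serve from cache afterward.
        if key not in self._cache:
            self.computations += 1
            self._cache[key] = compute_fn()
        return self._cache[key]

r = CachedResults()
r.get('derivedTable', lambda: [1, 2, 3])
r.get('derivedTable', lambda: [1, 2, 3])  # second query hits the cache
print(r.computations)  # prints 1
```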
# Access to raw GST best gateset estimate
print(results.gatesets['final estimate'])
rho0 =    0.7071
          0.0005
         -0.0043
          0.7049

E0 =    0.7073
       -0.0008
       -0.0052
       -0.7065

Gi =
   1.0000        0        0        0
  -0.0006   0.9009  -0.0040   0.0023
  -0.0004   0.0010   0.8982  -0.0007
  -0.0002  -0.0023   0.0002   0.8948

Gx =
   1.0000        0        0        0
  -0.0004   0.8986  -0.0012  -0.0013
  -0.0003   0.0020  -0.0039  -0.9021
  -0.0003  -0.0019   0.9021   0.0032

Gy =
   1.0000        0        0        0
  -0.0002   0.0015  -0.0049   0.9022
   0.0003  -0.0016   0.8976  -0.0056
   0.0004  -0.9022   0.0023   0.0014
#create a full GST report (most detailed and pedagogical; best for those getting familiar with GST)
results.create_full_report_pdf(confidenceLevel=95, filename="tutorial_files/easy_full.pdf", verbosity=2)
--- Hessian Projector Optimization for gate CIs (L-BFGS-B) ---
   20s           0.0278965571
   25s           0.0276337997
   29s           0.0276079377
   34s           0.0275908344
   39s           0.0275853888
   43s           0.0275844496
  The resulting min sqrt(sum(gateCIs**2)): 0.0275844
*** Generating tables ***
 Iter 01 of 17 :  Generating table: targetSpamTable (w/95% CIs) [0.0s]
 Iter 02 of 17 :  Generating table: targetGatesTable (w/95% CIs) [0.0s]
 Iter 03 of 17 :  Generating table: datasetOverviewTable (w/95% CIs) [0.0s]
 Iter 04 of 17 :  Generating table: bestGatesetSpamTable (w/95% CIs) [0.0s]
 Iter 05 of 17 :  Generating table: bestGatesetSpamParametersTable (w/95% CIs) [0.0s]
 Iter 06 of 17 :  Generating table: bestGatesetGaugeOptParamsTable (w/95% CIs) [0.0s]
 Iter 07 of 17 :  Generating table: bestGatesetGatesTable (w/95% CIs) [0.0s]
 Iter 08 of 17 :  Generating table: bestGatesetChoiTable (w/95% CIs) [0.2s]
 Iter 09 of 17 :  Generating table: bestGatesetDecompTable (w/95% CIs) [0.2s]
 Iter 10 of 17 :  Generating table: bestGatesetRotnAxisTable (w/95% CIs) [0.3s]
 Iter 11 of 17 :  Generating table: bestGatesetVsTargetTable (w/95% CIs) [1.3s]
 Iter 12 of 17 :  Generating table: bestGatesetErrorGenTable (w/95% CIs) [0.0s]
 Iter 13 of 17 :  Generating table: fiducialListTable (w/95% CIs) [0.0s]
 Iter 14 of 17 :  Generating table: prepStrListTable (w/95% CIs) [0.0s]
 Iter 15 of 17 :  Generating table: effectStrListTable (w/95% CIs) [0.0s]
 Iter 16 of 17 :  Generating table: germListTable (w/95% CIs) [0.0s]
 Iter 17 of 17 :  Generating table: progressTable (w/95% CIs) [0.2s]
*** Generating plots ***
 LogL plots (2):
 Iter 1 of 3 :  Generating figure: colorBoxPlotKeyPlot (w/95% CIs)
                Generating figure: colorBoxPlotKeyPlot [0.9s]
 Iter 2 of 3 :  Generating figure: bestEstimateColorBoxPlot (w/95% CIs)
                Generating figure: bestEstimateColorBoxPlot [9.6s]
 Iter 3 of 3 :  Generating figure: invertedBestEstimateColorBoxPlot (w/95% CIs)
                Generating figure: invertedBestEstimateColorBoxPlot [9.4s]
*** Merging into template file ***
Latex file(s) successfully generated.  Attempting to compile with pdflatex...
Initial output PDF tutorial_files/easy_full.pdf successfully generated.
Final output PDF tutorial_files/easy_full.pdf successfully generated.
Cleaning up .aux and .log files.
#create a brief GST report (just highlights of full report but fast to generate; best for folks familiar with GST)
results.create_brief_report_pdf(confidenceLevel=95, filename="tutorial_files/easy_brief.pdf", verbosity=2)
*** Generating tables ***
 Retrieving cached table: bestGatesetSpamTable (w/95% CIs)
 Retrieving cached table: bestGatesetSpamParametersTable (w/95% CIs)
 Retrieving cached table: bestGatesetGatesTable (w/95% CIs)
 Retrieving cached table: bestGatesetDecompTable (w/95% CIs)
 Retrieving cached table: bestGatesetRotnAxisTable (w/95% CIs)
 Retrieving cached table: bestGatesetVsTargetTable (w/95% CIs)
 Retrieving cached table: bestGatesetErrorGenTable (w/95% CIs)
 Retrieving cached table: progressTable (w/95% CIs)
*** Generating plots ***
*** Merging into template file ***
Latex file(s) successfully generated.  Attempting to compile with pdflatex...
Initial output PDF tutorial_files/easy_brief.pdf successfully generated.
Final output PDF tutorial_files/easy_brief.pdf successfully generated.
Cleaning up .aux and .log files.
#create GST slides (tables and figures of full report in latex-generated slides; best for folks familiar with GST)
results.create_presentation_pdf(confidenceLevel=95, filename="tutorial_files/easy_slides.pdf", verbosity=2)
*** Generating tables ***
 Retrieving cached table: targetSpamTable (w/95% CIs)
 Retrieving cached table: targetGatesTable (w/95% CIs)
 Retrieving cached table: datasetOverviewTable (w/95% CIs)
 Retrieving cached table: bestGatesetSpamTable (w/95% CIs)
 Retrieving cached table: bestGatesetSpamParametersTable (w/95% CIs)
 Retrieving cached table: bestGatesetGatesTable (w/95% CIs)
 Retrieving cached table: bestGatesetChoiTable (w/95% CIs)
 Retrieving cached table: bestGatesetDecompTable (w/95% CIs)
 Retrieving cached table: bestGatesetRotnAxisTable (w/95% CIs)
 Retrieving cached table: bestGatesetVsTargetTable (w/95% CIs)
 Retrieving cached table: bestGatesetErrorGenTable (w/95% CIs)
 Retrieving cached table: fiducialListTable (w/95% CIs)
 Retrieving cached table: prepStrListTable (w/95% CIs)
 Retrieving cached table: effectStrListTable (w/95% CIs)
 Retrieving cached table: germListTable (w/95% CIs)
 Retrieving cached table: progressTable (w/95% CIs)
*** Generating plots ***
 -- LogL plots (1):
 Iter 1 of 1 :  Retrieving cached figure: bestEstimateColorBoxPlot (w/95% CIs)
*** Merging into template file ***
Latex file(s) successfully generated.  Attempting to compile with pdflatex...
Initial output PDF tutorial_files/easy_slides.pdf successfully generated.
Final output PDF tutorial_files/easy_slides.pdf successfully generated.
Cleaning up .aux and .log files.
#create GST slides (tables and figures of full report in Powerpoint slides; best for folks familiar with GST)
#Requires python-pptx and ImageMagick to be installed
results.create_presentation_ppt(confidenceLevel=95, filename="tutorial_files/easy_slides.pptx", verbosity=2)
*** Generating tables ***
 Iter 01 of 16 :  Retrieving cached table: targetSpamTable (w/95% CIs)
 Iter 02 of 16 :  Retrieving cached table: targetGatesTable (w/95% CIs)
 Iter 03 of 16 :  Retrieving cached table: datasetOverviewTable (w/95% CIs)
 Iter 04 of 16 :  Retrieving cached table: bestGatesetSpamTable (w/95% CIs)
 Iter 05 of 16 :  Retrieving cached table: bestGatesetSpamParametersTable (w/95% CIs)
 Iter 06 of 16 :  Retrieving cached table: bestGatesetGatesTable (w/95% CIs)
 Iter 07 of 16 :  Retrieving cached table: bestGatesetChoiTable (w/95% CIs)
 Iter 08 of 16 :  Retrieving cached table: bestGatesetDecompTable (w/95% CIs)
 Iter 09 of 16 :  Retrieving cached table: bestGatesetRotnAxisTable (w/95% CIs)
 Iter 10 of 16 :  Retrieving cached table: bestGatesetVsTargetTable (w/95% CIs)
 Iter 11 of 16 :  Retrieving cached table: bestGatesetErrorGenTable (w/95% CIs)
 Iter 12 of 16 :  Retrieving cached table: fiducialListTable (w/95% CIs)
 Iter 13 of 16 :  Retrieving cached table: prepStrListTable (w/95% CIs)
 Iter 14 of 16 :  Retrieving cached table: effectStrListTable (w/95% CIs)
 Iter 15 of 16 :  Retrieving cached table: germListTable (w/95% CIs)
 Iter 16 of 16 :  Retrieving cached table: progressTable (w/95% CIs)
*** Generating plots ***
 -- LogL plots (1):
 Iter 1 of 1 :  Retrieving cached figure: bestEstimateColorBoxPlot (w/95% CIs)
*** Assembling PPT file ***
 Latexing progressTable table...
 Latexing bestGatesetVsTargetTable table...
 Latexing bestGatesetErrorGenTable table...
 Latexing bestGatesetDecompTable table...
 Latexing bestGatesetRotnAxisTable table...
 Latexing bestGatesetGatesTable table...
 Latexing bestGatesetSpamTable table...
 Latexing bestGatesetSpamParametersTable table...
 Latexing bestGatesetChoiTable table...
 Latexing targetSpamTable table...
 Latexing targetGatesTable table...
 Latexing fiducialListTable table...
 Latexing germListTable table...
 Latexing datasetOverviewTable table...
Final output PPT tutorial_files/easy_slides.pptx successfully generated.
If all has gone well, the above lines have produced the four primary types of reports pygsti is capable of generating:
A "full" report, tutorial_files/easy_full.pdf. This is the most detailed and pedagogical of the reports, and is best for those getting familiar with GST.
A "brief" report, tutorial_files/easy_brief.pdf. This contains just the highlights of the full report but is much faster to generate, and is best for folks familiar with GST.
PDF slides, tutorial_files/easy_slides.pdf. These slides contain tables and figures of the full report in LaTeX-generated (via beamer) slides, and are best for folks familiar with GST who want to show other people their great results.
PPT slides, tutorial_files/easy_slides.pptx. These slides contain the same information as PDF slides, but in MS Powerpoint format. These slides won't look as nice as the PDF ones, but can be used for merciless copying and pasting into your other Powerpoint presentations... :)
A significant component of running GST as shown above is constructing things: the target gateset; the fiducial, germ, and maximum-length lists; etc. We've found that many people who use GST have one of only a few different target gatesets, and for these commonly used target gatesets we've created modules that perform most of the constructions for you. If your gateset isn't one of these standard ones then you'll have to follow the above approach for now, but please let us know and we'll try to add a module for your gateset in the future.
The standard construction modules are located under pygsti.construction (surprise, surprise) and are prefixed with "std". In the example above, our gateset (comprised of single-qubit $I$, $X(\pi/2)$, and $Y(\pi/2)$ gates) is one of the commonly used gatesets, and the relevant constructions are importable via:
#Import the "standard 1-qubit quantities for a gateset with X(pi/2), Y(pi/2), and idle gates"
from pygsti.construction import std1Q_XYI
We follow the same order of constructing things as above, but it's much easier since almost everything has been constructed already:
gs_target = std1Q_XYI.gs_target
fiducials = std1Q_XYI.fiducials
germs = std1Q_XYI.germs
maxLengths = [0,1,2,4,8,16,32] #still need to define this manually
We generate a fake dataset as before:
gs_datagen = gs_target.depolarize(gate_noise=0.1, spam_noise=0.001)
listOfExperiments = pygsti.construction.make_lsgst_experiment_list(gs_target.gates.keys(), fiducials, fiducials, germs, maxLengths)
ds = pygsti.construction.generate_fake_data(gs_datagen, listOfExperiments, nSamples=1000000,
sampleError="binomial", seed=1234)
Then we run the analysis function (this time passing the dataset object directly instead of loading it from a file) and create a report in the specified file:
gs_target.set_all_parameterizations("TP")
results = pygsti.do_long_sequence_gst(ds, gs_target, fiducials, fiducials, germs, maxLengths)
results.create_full_report_pdf(confidenceLevel=95,filename="tutorial_files/MyEvenEasierReport.pdf",verbosity=2)
--- LGST ---
 Singular values of I_tilde (truncating to first 4 of 6) =
  4.24409810599  1.16704609919  0.946915237  0.943164947695  0.00230163583615  0.000699324004315
 Singular values of target I_tilde (truncating to first 4 of 6) =
  4.24264068712  1.41421356237  1.41421356237  1.41421356237  3.3483200348e-16  2.72548486209e-16
--- Iterative MLGST: Iter 1 of 7  92 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 65.4588 (92 data params - 31 model params = expected mean of 61; p-value = 0.32481)
 Completed in 0.1s
 2*Delta(log(L)) = 65.4654
 Iteration 1 took 0.1s
--- Iterative MLGST: Iter 2 of 7  92 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 65.4588 (92 data params - 31 model params = expected mean of 61; p-value = 0.32481)
 Completed in 0.0s
 2*Delta(log(L)) = 65.4654
 Iteration 2 took 0.0s
--- Iterative MLGST: Iter 3 of 7  168 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 136.541 (168 data params - 31 model params = expected mean of 137; p-value = 0.495)
 Completed in 0.1s
 2*Delta(log(L)) = 136.546
 Iteration 3 took 0.1s
--- Iterative MLGST: Iter 4 of 7  441 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 409.627 (441 data params - 31 model params = expected mean of 410; p-value = 0.495914)
 Completed in 0.1s
 2*Delta(log(L)) = 409.632
 Iteration 4 took 0.1s
--- Iterative MLGST: Iter 5 of 7  817 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 753.34 (817 data params - 31 model params = expected mean of 786; p-value = 0.793481)
 Completed in 0.2s
 2*Delta(log(L)) = 753.354
 Iteration 5 took 0.2s
--- Iterative MLGST: Iter 6 of 7  1201 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 1165.78 (1201 data params - 31 model params = expected mean of 1170; p-value = 0.52932)
 Completed in 0.3s
 2*Delta(log(L)) = 1165.8
 Iteration 6 took 0.3s
--- Iterative MLGST: Iter 7 of 7  1585 gate strings ---:
 --- Minimum Chi^2 GST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Sum of Chi^2 = 1572.66 (1585 data params - 31 model params = expected mean of 1554; p-value = 0.364955)
 Completed in 0.3s
 2*Delta(log(L)) = 1572.68
 Iteration 7 took 0.3s
Switching to ML objective (last iteration)
--- MLGST ---
 Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing) groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).
 Maximum log(L) = 786.341 below upper bound of -2.65149e+09
 2*Delta(log(L)) = 1572.68 (1585 data params - 31 model params = expected mean of 1554; p-value = 0.364782)
 Completed in 0.2s
 2*Delta(log(L)) = 1572.68
 Final MLGST took 0.2s
Iterative MLGST Total Time: 1.4s
--- Hessian Projector Optimization for gate CIs (L-BFGS-B) ---
  117s           0.0008805702
  121s           0.0008720898
  126s           0.0008712749
 The resulting min sqrt(sum(gateCIs**2)): 0.000871275
*** Generating tables ***
 Iter 01 of 17 :  Generating table: targetSpamTable (w/95% CIs) [0.0s]
 Iter 02 of 17 :  Generating table: targetGatesTable (w/95% CIs) [0.0s]
 Iter 03 of 17 :  Generating table: datasetOverviewTable (w/95% CIs) [0.0s]
 Iter 04 of 17 :  Generating table: bestGatesetSpamTable (w/95% CIs) [0.0s]
 Iter 05 of 17 :  Generating table: bestGatesetSpamParametersTable (w/95% CIs) [0.0s]
 Iter 06 of 17 :  Generating table: bestGatesetGaugeOptParamsTable (w/95% CIs) [0.0s]
 Iter 07 of 17 :  Generating table: bestGatesetGatesTable (w/95% CIs) [0.0s]
 Iter 08 of 17 :  Generating table: bestGatesetChoiTable (w/95% CIs) [0.2s]
 Iter 09 of 17 :  Generating table: bestGatesetDecompTable (w/95% CIs) [0.2s]
 Iter 10 of 17 :  Generating table: bestGatesetRotnAxisTable (w/95% CIs) [0.3s]
 Iter 11 of 17 :  Generating table: bestGatesetVsTargetTable (w/95% CIs) [1.2s]
 Iter 12 of 17 :  Generating table: bestGatesetErrorGenTable (w/95% CIs) [0.0s]
 Iter 13 of 17 :  Generating table: fiducialListTable (w/95% CIs) [0.0s]
 Iter 14 of 17 :  Generating table: prepStrListTable (w/95% CIs) [0.0s]
 Iter 15 of 17 :  Generating table: effectStrListTable (w/95% CIs) [0.0s]
 Iter 16 of 17 :  Generating table: germListTable (w/95% CIs) [0.0s]
 Iter 17 of 17 :  Generating table: progressTable (w/95% CIs) [0.2s]
*** Generating plots ***
 LogL plots (2):
 Iter 1 of 3 :  Generating figure: colorBoxPlotKeyPlot (w/95% CIs)
                Generating figure: colorBoxPlotKeyPlot [0.7s]
 Iter 2 of 3 :  Generating figure: bestEstimateColorBoxPlot (w/95% CIs)
                Generating figure: bestEstimateColorBoxPlot [9.9s]
 Iter 3 of 3 :  Generating figure: invertedBestEstimateColorBoxPlot (w/95% CIs)
                Generating figure: invertedBestEstimateColorBoxPlot [9.2s]
*** Merging into template file ***
Latex file(s) successfully generated.  Attempting to compile with pdflatex...
Initial output PDF tutorial_files/MyEvenEasierReport.pdf successfully generated.
Final output PDF tutorial_files/MyEvenEasierReport.pdf successfully generated.
Cleaning up .aux and .log files.
Now open tutorial_files/MyEvenEasierReport.pdf to see the results. You've just run GST (again)!
# Printing a Results object gives you information about how to extract information from it
print(results)
----------------------------------------------------------
---------------- pyGSTi Results Object -------------------
----------------------------------------------------------

I can create reports for you directly, via my create_XXX functions,
or you can query me for result data via members:

 .dataset -- the DataSet used to generate these results

 .gatesets -- a dictionary of GateSet objects w/keys:
   ---------------------------------------------------------
   final estimate
   target
   iteration estimates
   pre gauge opt iteration estimates
   seed

 .gatestring_lists -- a dict of GateString lists w/keys:
   ---------------------------------------------------------
   final
   iteration
   iteration delta
   all
   germs
   prep fiducials
   effect fiducials

 .tables -- a dict of ReportTable objects w/keys:
   ---------------------------------------------------------
   blankTable
   targetSpamTable
   targetSpamBriefTable
   targetGatesTable
   datasetOverviewTable
   fiducialListTable
   prepStrListTable
   effectStrListTable
   germListTable
   germList2ColTable
   bestGatesetSpamTable
   bestGatesetSpamBriefTable
   bestGatesetSpamParametersTable
   bestGatesetGatesTable
   bestGatesetChoiTable
   bestGatesetDecompTable
   bestGatesetRotnAxisTable
   bestGatesetEvalTable
   bestGatesetClosestUnitaryTable
   bestGatesetVsTargetTable
   gaugeOptGatesetsVsTargetTable
   gaugeOptCPTPGatesetChoiTable
   bestGatesetSpamVsTargetTable
   bestGatesetErrorGenTable
   bestGatesetVsTargetAnglesTable
   bestGatesetGaugeOptParamsTable
   chi2ProgressTable
   logLProgressTable
   progressTable
   byGermTable
   logLErrgenProjectionTable
   targetGatesBoxTable
   bestGatesetGatesBoxTable
   bestGatesetErrGenBoxTable
   bestGatesetErrGenProjectionTargetMetricsTable
   bestGatesetErrGenProjectionSelfMetricsTable
   bestGatesetRelEvalTable
   bestGatesetChoiEvalTable
   hamiltonianProjectorTable
   stochasticProjectorTable

 .figures -- a dict of ReportFigure objects w/keys:
   ---------------------------------------------------------
   colorBoxPlotKeyPlot
   bestEstimateColorBoxPlot
   invertedBestEstimateColorBoxPlot
   bestEstimateSummedColorBoxPlot
   estimateForLIndex0ColorBoxPlot
   estimateForLIndex1ColorBoxPlot
   estimateForLIndex2ColorBoxPlot
   estimateForLIndex3ColorBoxPlot
   estimateForLIndex4ColorBoxPlot
   estimateForLIndex5ColorBoxPlot
   estimateForLIndex6ColorBoxPlot
   blankBoxPlot
   blankSummedBoxPlot
   directLGSTColorBoxPlot
   directLongSeqGSTColorBoxPlot
   directLGSTDeviationColorBoxPlot
   directLongSeqGSTDeviationColorBoxPlot
   smallEigvalErrRateColorBoxPlot
   whackGxMoleBoxes
   whackGyMoleBoxes
   whackGiMoleBoxes
   whackGxMoleBoxesSummed
   whackGyMoleBoxesSummed
   whackGiMoleBoxesSummed
   bestGateErrGenBoxesGi
   bestGateErrGenBoxesGx
   bestGateErrGenBoxesGy
   targetGateBoxesGi
   targetGateBoxesGx
   targetGateBoxesGy
   bestEstimatePolarGiEvalPlot
   bestEstimatePolarGxEvalPlot
   bestEstimatePolarGyEvalPlot
   pauliProdHamiltonianDecompBoxesGi
   pauliProdHamiltonianDecompBoxesGx
   pauliProdHamiltonianDecompBoxesGy

 .parameters -- a dict of simulation parameters:
   ---------------------------------------------------------
   minProbClipForWeighting
   defaultBasename
   memLimit
   distributeMethod
   cptpPenaltyFactor
   profiler
   L,germ tuple base string dict
   probClipInterval
   max length list
   linlogPercentile
   objective
   defaultDirectory
   gaugeOptParams
   radius
   fiducial pairs
   hessianProjection
   weights
   minProbClip

 .options -- a container of display options:
   ---------------------------------------------------------
   .long_tables    -- long latex tables?  False
   .table_class    -- HTML table class = pygstiTbl
   .precision      -- precision = 4
   .polar_precision -- precision for polar exponent = 3
   .sci_precision  -- precision for scientific notn = 0
   .errgen_type    -- type of error generator = logTiG
   .template_path  -- pyGSTi templates path = 'None'
   .latex_cmd      -- latex compiling command = 'pdflatex'
   .latex_postcmd  -- latex compiling command postfix = '-halt-on-error </dev/null >/dev/null'

NOTE: passing 'tips=True' to create_full_report_pdf or create_brief_report_pdf
 will add markup to the resulting PDF indicating how tables and figures in the
 PDF correspond to the values of .tables[ ] and .figures[ ] listed above.