Algorithms Tutorial

Once we have data for GST, there are several algorithms we can run on it to produce tomographic estimates. Depending on how much data you have and how much time is available for running Gate Set Tomography (GST), one algorithm may be preferable to the others.

Currently, pygsti provides support for the following GST algorithms:

  • Linear Gate Set Tomography (LGST): Uses short gate sequences to quickly compute a rough (low accuracy) estimate of a gate set by linear inversion.

  • Extended Linear Gate Set Tomography (eLGST or EXLGST): Minimizes the sum-of-squared errors between independent LGST estimates and the estimates obtained from a single gate set to find a best-estimate gate set. This is typically done in an iterative fashion, using LGST estimates for longer and longer sequences.

  • Minimum-$\chi^2$ Gate Set Tomography (MC2GST): Minimizes the $\chi^{2}$ statistic of the data frequencies and gate set probabilities to find a best-estimate gate set. Typically done in an iterative fashion, using successively larger sets of longer and longer gate sequences.

  • Maximum-Likelihood Gate Set Tomography (MLGST): Maximizes the log-likelihood of the data frequencies given the gate set probabilities to find a best-estimate gate set. Typically done in an iterative fashion similar to MC2GST. This maximum likelihood estimation (MLE) is very well motivated from a statistics standpoint and should be the most accurate of the algorithms.

If you're curious, the implementation of the algorithms for LGST, EXLGST, MC2GST, and MLGST may be found in the pygsti.algorithms.core module. In this tutorial, we'll show how to invoke each of these algorithms.
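To make the two statistics concrete, here is a minimal, self-contained sketch — not pygsti's implementation — of the per-experiment $\chi^2$ and $2\Delta\log\mathcal{L}$ terms for a single two-outcome experiment repeated $N$ times. The function names and the particular $\chi^2$ weighting are illustrative assumptions:

```python
import numpy as np

# Illustrative only -- NOT pygsti's implementation.  One common chi^2
# weighting and the binomial log-likelihood ratio for a two-outcome
# experiment with N repetitions, observed frequency f, model probability p.
def chi2_term(N, f, p):
    """N*(f-p)^2/p, summed over both outcomes."""
    return N * (f - p)**2 / p + N * ((1 - f) - (1 - p))**2 / (1 - p)

def two_delta_logl_term(N, f, p):
    """2*[logL(best fit f) - logL(model p)], with 0*log(0) := 0."""
    t = lambda a, b: 0.0 if a == 0 else N * a * np.log(a / b)
    return 2 * (t(f, p) + t(1 - f, 1 - p))

# For well-sampled data the two statistics nearly coincide:
print(chi2_term(1000, 0.52, 0.5), two_delta_logl_term(1000, 0.52, 0.5))
```

Both values come out close to 1.6 here, which previews why the MC2GST and MLGST estimates end up so similar later in this tutorial.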

Setup

The ingredients needed as input to the GST algorithms are:

  • a "target" GateSet which defines the desired gates. This gate set is used by LGST to specify the various gate, state preparation, POVM effect, and SPAM labels, as well as to provide an initial guess for the gauge degrees of freedom.
  • a DataSet containing the data that GST attempts to fit using the probabilities generated by a single GateSet. This data set must at least contain the data for the gate sequences required by the algorithm that is chosen.
  • for EXLGST, MC2GST, and MLGST, a list-of-lists of GateString objects, which specify which gate strings are used during each iteration of the algorithm (the length of the top-level list defines the number of iterations). Note that which gate strings are included in these lists is different for EXLGST than it is for MC2GST and MLGST.
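As a rough illustration of the list-of-lists structure (plain tuples of gate labels stand in for GateString objects, and the specific strings are made up):

```python
# Illustrative only: tuples of gate labels stand in for GateString objects.
iter1 = [('Gx',), ('Gy',)]
iter2 = iter1 + [('Gx', 'Gx'), ('Gy', 'Gy')]
iter3 = iter2 + [('Gx',) * 4, ('Gy',) * 4]
listOfLists = [iter1, iter2, iter3]   # one inner list per iteration

print(len(listOfLists))  # 3 -- the top-level length sets the iteration count
```

Each successive inner list typically contains all of the previous iteration's strings plus longer ones, as in this toy example.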
In [1]:
from __future__ import print_function
In [2]:
import pygsti
import json
In [3]:
#Load target gateset, dataset, and list of fiducial gate strings.
#In this case we load directly from files created in past tutorials

gs_target = pygsti.io.load_gateset("tutorial_files/Example_Gateset.txt")
ds = pygsti.io.load_dataset("tutorial_files/Example_Dataset.txt", cache=True)
dsLowCounts = pygsti.io.load_dataset("tutorial_files/Example_Dataset_LowCnts.txt", cache=True)
fiducialList = pygsti.io.load_gatestring_list("tutorial_files/Example_FiducialList.txt")

depol_gateset = gs_target.depolarize(gate_noise=0.1)

#Could also load a fiducial dictionary file like this:
#fiducialDict = GST.load_gatestring_dict("Example_FiducialList.txt")
#fiducialList = fiducialDict.values() #all we really need are the fiducial strings themselves

print("Loaded target gateset with gate labels: ", gs_target.gates.keys())
print("Loaded fiducial list of length: ", len(fiducialList))
print("Loaded dataset of length: ", len(ds))
Loading tutorial_files/Example_Dataset.txt: 100%
Writing cache file (to speed future loads): tutorial_files/Example_Dataset.txt.cache
Loading from cache file: tutorial_files/Example_Dataset_LowCnts.txt.cache
Loaded target gateset with gate labels:  odict_keys(['Gi', 'Gx', 'Gy'])
Loaded fiducial list of length:  6
Loaded dataset of length:  2737

Using LGST to get an initial estimate

An important and distinguishing property of the LGST algorithm is that it does not require an initial-guess GateSet as an input. It uses linear inversion and short sequences to obtain a rough gate set estimate. As such, it is very common to use the LGST estimate as the initial-guess starting point for more advanced forms of GST.
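The linear-inversion idea can be sketched with plain numpy, ignoring the SPAM and gauge handling that the real algorithm performs: if fiducial sequences yield the matrices P_g = A·G·B (gate sandwiched between preparation and measurement fiducials) and P_I = A·B (no gate), then inv(P_I)·P_g recovers the gate up to a similarity (gauge) transformation. All matrices below are made up for illustration:

```python
import numpy as np

# Illustrative only: the gauge and SPAM handling of the real algorithm
# are omitted, and A, B, G are random made-up matrices.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)   # invertible "preparation" side
B = rng.normal(size=(4, 4)) + 4 * np.eye(4)   # invertible "measurement" side
G = rng.normal(size=(4, 4))                   # the true gate (process matrix)

P_g = A @ G @ B      # data matrix with the gate in the middle
P_I = A @ B          # data matrix with no gate in the middle
G_est = np.linalg.inv(P_I) @ P_g              # equals inv(B) @ G @ B

# The estimate is a similarity transform of G, so its spectrum matches:
ev_true = np.sort_complex(np.linalg.eigvals(G))
ev_est = np.sort_complex(np.linalg.eigvals(G_est))
print(np.allclose(ev_true, ev_est, atol=1e-6))
```

This is why LGST output needs gauge optimization before it can be compared element-by-element to a target gate set, as done in the next cell.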

In [4]:
#Run LGST to get an initial estimate for the gates in gs_target based on the data in ds

#create "spam specs" from the list of fiducial gate strings.  See the gate strings tutorial for more info.
specs = pygsti.construction.build_spam_specs(fiducialGateStrings=fiducialList)

#run LGST
gs_lgst = pygsti.do_lgst(ds, specs, targetGateset=gs_target, svdTruncateTo=4, verbosity=1)

#Gauge optimize the result to match the target gateset
gs_lgst_after_gauge_opt = pygsti.gaugeopt_to_target(gs_lgst, gs_target)

#Contract the result to CPTP, guaranteeing that the gates are CPTP
gs_clgst = pygsti.contract(gs_lgst_after_gauge_opt, "CPTP")
--- LGST ---
In [5]:
print(gs_lgst)
rho0 =    0.7094  -0.0205   0.0230   0.7544


E0 =    0.6877   0.0052  -0.0021  -0.6498


Gi = 
   1.0065  -0.0005  -0.0018   0.0055
  -0.0059   0.9258   0.0528  -0.0169
   0.0422  -0.0172   0.9029   0.0212
  -0.0078   0.0118   0.0219   0.9076


Gx = 
   0.9970  -0.0073  -0.0088   0.0003
   0.0056   0.9077   0.0232  -0.0016
   0.0146   0.0163   0.0013  -1.0031
  -0.0695  -0.0060   0.8032   0.0079


Gy = 
   1.0040   0.0074  -0.0055  -0.0005
  -0.0377  -0.0225  -0.0012   0.9963
   0.0189   0.0094   0.8877  -0.0266
  -0.0679  -0.8058  -0.0250   0.0178



Extended LGST (eLGST or EXLGST)

EXLGST requires a list-of-lists of gate strings, one per iteration. The elements of these lists are typically repetitions of short "germ" strings such that the final string does not exceed some maximum length. We created such lists in the gate string tutorial. Now, we just load these lists from the text files they were saved in.
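A toy sketch of that repetition rule (not pygsti's actual string construction; the truncation convention here is an assumption, and tuples of gate labels stand in for GateString objects):

```python
# Illustrative only: the truncation convention is an assumption.
def repeat_germ(germ, max_length):
    """Repeat `germ` as many whole times as fit within `max_length`."""
    return germ * (max_length // len(germ))

germ = ('Gx', 'Gy')
for L in (2, 4, 5, 8):
    print(L, repeat_germ(germ, L))
```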

In [6]:
#Get rho and E specifiers, needed by LGST
specs  = pygsti.construction.build_spam_specs(fiducialGateStrings=fiducialList)
germList = pygsti.io.load_gatestring_list("tutorial_files/Example_GermsList.txt")
maxLengthList = json.load(open("tutorial_files/Example_maxLengths.json","r"))
elgstListOfLists = [ pygsti.io.load_gatestring_list("tutorial_files/Example_eLGSTlist%d.txt" % l) for l in maxLengthList]
           
#run EXLGST.  The result, gs_exlgst, is a GateSet containing the estimated quantities
gs_exlgst = pygsti.do_iterative_exlgst(ds, gs_clgst, specs, elgstListOfLists, targetGateset=gs_target,
                                  svdTruncateTo=4, verbosity=2)
--- Iterative eLGST:  Iter 10 of 10 ; 84 gate strings ---: 

Minimum-$\chi^2$ GST (MC2GST)

MC2GST and MLGST also require a list-of-lists of gate strings, one per iteration. However, the elements of these lists are typically repetitions of short "germ" strings sandwiched between fiducial strings such that the repeated-germ part of the string does not exceed some maximum length. We created such lists in the gate string tutorial. Now, we just load these lists from the text files they were saved in.
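A toy sketch of this sandwich structure (again not pygsti's actual construction; the helper and its truncation rule are assumptions, with tuples of gate labels standing in for GateString objects):

```python
# Illustrative only: the helper and its truncation rule are assumptions.
def lsgst_string(prep_fid, germ, meas_fid, max_length):
    """fiducial + repeated germ + fiducial; only the germ part is length-limited."""
    return prep_fid + germ * (max_length // len(germ)) + meas_fid

s = lsgst_string(('Gx',), ('Gy',), ('Gx', 'Gx'), 4)
print(s)  # ('Gx', 'Gy', 'Gy', 'Gy', 'Gy', 'Gx', 'Gx')
```

The fiducials on either side are what make the repeated-germ behavior visible to state preparation and measurement, which is why they are excluded from the length budget.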

In [7]:
#Get lists of gate strings for successive iterations of LSGST to use
specs  = pygsti.construction.build_spam_specs(fiducialGateStrings=fiducialList)
germList = pygsti.io.load_gatestring_list("tutorial_files/Example_GermsList.txt")
maxLengthList = json.load(open("tutorial_files/Example_maxLengths.json","r"))
lsgstListOfLists = [ pygsti.io.load_gatestring_list("tutorial_files/Example_LSGSTlist%d.txt" % l) for l in maxLengthList]
  
#run MC2GST.  The result is a GateSet containing the estimated quantities
gs_mc2 = pygsti.do_iterative_mc2gst(ds, gs_clgst, lsgstListOfLists, verbosity=2)
--- Iterative MC2GST: Iter 01 of 10  92 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 40.9237 (92 data params - 40 model params = expected mean of 52; p-value = 0.866036)
  Completed in 0.2s
      Iteration 1 took 0.2s
  
--- Iterative MC2GST: Iter 02 of 10  92 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 40.9237 (92 data params - 40 model params = expected mean of 52; p-value = 0.866036)
  Completed in 0.1s
      Iteration 2 took 0.1s
  
--- Iterative MC2GST: Iter 03 of 10  168 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 124.56 (168 data params - 40 model params = expected mean of 128; p-value = 0.569542)
  Completed in 0.3s
      Iteration 3 took 0.3s
  
--- Iterative MC2GST: Iter 04 of 10  441 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 424.143 (441 data params - 40 model params = expected mean of 401; p-value = 0.204567)
  Completed in 0.6s
      Iteration 4 took 0.6s
  
--- Iterative MC2GST: Iter 05 of 10  817 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 772.113 (817 data params - 40 model params = expected mean of 777; p-value = 0.542726)
  Completed in 1.1s
      Iteration 5 took 1.1s
  
--- Iterative MC2GST: Iter 06 of 10  1201 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 1154.51 (1201 data params - 40 model params = expected mean of 1161; p-value = 0.548163)
  Completed in 1.6s
      Iteration 6 took 1.6s
  
--- Iterative MC2GST: Iter 07 of 10  1585 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 1600.17 (1585 data params - 40 model params = expected mean of 1545; p-value = 0.160387)
  Completed in 2.5s
      Iteration 7 took 2.5s
  
--- Iterative MC2GST: Iter 08 of 10  1969 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 2031.01 (1969 data params - 40 model params = expected mean of 1929; p-value = 0.0520838)
  Completed in 3.7s
      Iteration 8 took 3.7s
  
--- Iterative MC2GST: Iter 09 of 10  2353 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 2422.67 (2353 data params - 40 model params = expected mean of 2313; p-value = 0.055082)
  Completed in 5.9s
      Iteration 9 took 5.9s
  
--- Iterative MC2GST: Iter 10 of 10  2737 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 2795.59 (2737 data params - 40 model params = expected mean of 2697; p-value = 0.0908715)
  Completed in 8.1s
      Iteration 10 took 8.2s
  
Iterative MC2GST Total Time: 24.2s
In [8]:
#Write the resulting EXLGST and MC2GST results to gate set text files for later reference.
pygsti.io.write_gateset(gs_exlgst, "tutorial_files/Example_eLGST_Gateset.txt","# Example result from running eLGST")
pygsti.io.write_gateset(gs_mc2,  "tutorial_files/Example_MC2GST_Gateset.txt","# Example result from running MC2GST")
In [9]:
#Run MC2GST again but use a DataSet with a lower number of counts 
gs_mc2_lowcnts = pygsti.do_iterative_mc2gst(dsLowCounts, gs_clgst, lsgstListOfLists, verbosity=2)
--- Iterative MC2GST: Iter 01 of 10  92 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 64.0312 (92 data params - 40 model params = expected mean of 52; p-value = 0.122289)
  Completed in 0.2s
      Iteration 1 took 0.2s
  
--- Iterative MC2GST: Iter 02 of 10  92 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 64.0312 (92 data params - 40 model params = expected mean of 52; p-value = 0.122289)
  Completed in 0.0s
      Iteration 2 took 0.1s
  
--- Iterative MC2GST: Iter 03 of 10  168 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 161.612 (168 data params - 40 model params = expected mean of 128; p-value = 0.0237625)
  Completed in 0.3s
      Iteration 3 took 0.3s
  
--- Iterative MC2GST: Iter 04 of 10  441 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 427.539 (441 data params - 40 model params = expected mean of 401; p-value = 0.173449)
  Completed in 0.5s
      Iteration 4 took 0.5s
  
--- Iterative MC2GST: Iter 05 of 10  817 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 770.758 (817 data params - 40 model params = expected mean of 777; p-value = 0.556396)
  Completed in 0.8s
      Iteration 5 took 0.8s
  
--- Iterative MC2GST: Iter 06 of 10  1201 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 1181.51 (1201 data params - 40 model params = expected mean of 1161; p-value = 0.331064)
  Completed in 1.6s
      Iteration 6 took 1.6s
  
--- Iterative MC2GST: Iter 07 of 10  1585 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 1551.99 (1585 data params - 40 model params = expected mean of 1545; p-value = 0.445324)
  Completed in 2.2s
      Iteration 7 took 2.2s
  
--- Iterative MC2GST: Iter 08 of 10  1969 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 1904.4 (1969 data params - 40 model params = expected mean of 1929; p-value = 0.650628)
  Completed in 3.2s
      Iteration 8 took 3.2s
  
--- Iterative MC2GST: Iter 09 of 10  2353 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 2302.72 (2353 data params - 40 model params = expected mean of 2313; p-value = 0.556292)
  Completed in 5.1s
      Iteration 9 took 5.1s
  
--- Iterative MC2GST: Iter 10 of 10  2737 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 2677.85 (2737 data params - 40 model params = expected mean of 2697; p-value = 0.599592)
  Completed in 4.3s
      Iteration 10 took 4.3s
  
Iterative MC2GST Total Time: 18.2s

Maximum Likelihood GST (MLGST)

Executing MLGST is very similar to MC2GST: the same gate string lists can be used, and the calling syntax is nearly identical.

In [10]:
maxLengthList = json.load(open("tutorial_files/Example_maxLengths.json","r"))
lsgstListOfLists = [ pygsti.io.load_gatestring_list("tutorial_files/Example_LSGSTlist%d.txt" % l) for l in maxLengthList] 

#run MLGST.  The result is a GateSet containing the estimated quantities
gs_mle = pygsti.do_iterative_mlgst(ds, gs_clgst, lsgstListOfLists, verbosity=2)
--- Iterative MLGST: Iter 01 of 10  92 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 40.9237 (92 data params - 40 model params = expected mean of 52; p-value = 0.866036)
  Completed in 0.2s
  2*Delta(log(L)) = 41.1104
  Iteration 1 took 0.2s
  
--- Iterative MLGST: Iter 02 of 10  92 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 40.9237 (92 data params - 40 model params = expected mean of 52; p-value = 0.866036)
  Completed in 0.1s
  2*Delta(log(L)) = 41.1104
  Iteration 2 took 0.1s
  
--- Iterative MLGST: Iter 03 of 10  168 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 124.56 (168 data params - 40 model params = expected mean of 128; p-value = 0.569542)
  Completed in 0.2s
  2*Delta(log(L)) = 124.957
  Iteration 3 took 0.3s
  
--- Iterative MLGST: Iter 04 of 10  441 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 424.143 (441 data params - 40 model params = expected mean of 401; p-value = 0.204567)
  Completed in 0.4s
  2*Delta(log(L)) = 425.013
  Iteration 4 took 0.4s
  
--- Iterative MLGST: Iter 05 of 10  817 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 772.113 (817 data params - 40 model params = expected mean of 777; p-value = 0.542726)
  Completed in 0.7s
  2*Delta(log(L)) = 773.808
  Iteration 5 took 0.8s
  
--- Iterative MLGST: Iter 06 of 10  1201 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 1154.51 (1201 data params - 40 model params = expected mean of 1161; p-value = 0.548163)
  Completed in 1.2s
  2*Delta(log(L)) = 1156.61
  Iteration 6 took 1.4s
  
--- Iterative MLGST: Iter 07 of 10  1585 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 1600.17 (1585 data params - 40 model params = expected mean of 1545; p-value = 0.160387)
  Completed in 1.7s
  2*Delta(log(L)) = 1602.65
  Iteration 7 took 1.8s
  
--- Iterative MLGST: Iter 08 of 10  1969 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 2031.01 (1969 data params - 40 model params = expected mean of 1929; p-value = 0.0520838)
  Completed in 2.8s
  2*Delta(log(L)) = 2033.99
  Iteration 8 took 3.0s
  
--- Iterative MLGST: Iter 09 of 10  2353 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 2422.67 (2353 data params - 40 model params = expected mean of 2313; p-value = 0.055082)
  Completed in 4.2s
  2*Delta(log(L)) = 2426.07
  Iteration 9 took 4.6s
  
--- Iterative MLGST: Iter 10 of 10  2737 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 2795.59 (2737 data params - 40 model params = expected mean of 2697; p-value = 0.0908715)
  Completed in 7.0s
  2*Delta(log(L)) = 2799.4
  Iteration 10 took 7.7s
  
  Switching to ML objective (last iteration)
  --- MLGST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
    Maximum log(L) = 1399.31 below upper bound of -4.60013e+06
      2*Delta(log(L)) = 2798.62 (2737 data params - 40 model params = expected mean of 2697; p-value = 0.0844567)
    Completed in 6.1s
  2*Delta(log(L)) = 2798.62
  Final MLGST took 6.1s
  
Iterative MLGST Total Time: 26.4s
In [11]:
#Run MLGST again but use a DataSet with a lower number of counts 
gs_mle_lowcnts = pygsti.do_iterative_mlgst(dsLowCounts, gs_clgst, lsgstListOfLists, verbosity=2)
--- Iterative MLGST: Iter 01 of 10  92 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 64.0312 (92 data params - 40 model params = expected mean of 52; p-value = 0.122289)
  Completed in 0.2s
  2*Delta(log(L)) = 65.3199
  Iteration 1 took 0.2s
  
--- Iterative MLGST: Iter 02 of 10  92 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 64.0312 (92 data params - 40 model params = expected mean of 52; p-value = 0.122289)
  Completed in 0.0s
  2*Delta(log(L)) = 65.3199
  Iteration 2 took 0.1s
  
--- Iterative MLGST: Iter 03 of 10  168 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 161.612 (168 data params - 40 model params = expected mean of 128; p-value = 0.0237625)
  Completed in 0.3s
  2*Delta(log(L)) = 164.828
  Iteration 3 took 0.3s
  
--- Iterative MLGST: Iter 04 of 10  441 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 427.539 (441 data params - 40 model params = expected mean of 401; p-value = 0.173449)
  Completed in 0.5s
  2*Delta(log(L)) = 439.032
  Iteration 4 took 0.6s
  
--- Iterative MLGST: Iter 05 of 10  817 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 770.758 (817 data params - 40 model params = expected mean of 777; p-value = 0.556396)
  Completed in 0.8s
  2*Delta(log(L)) = 790.062
  Iteration 5 took 0.9s
  
--- Iterative MLGST: Iter 06 of 10  1201 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 1181.51 (1201 data params - 40 model params = expected mean of 1161; p-value = 0.331064)
  Completed in 1.5s
  2*Delta(log(L)) = 1209.23
  Iteration 6 took 1.7s
  
--- Iterative MLGST: Iter 07 of 10  1585 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 1551.99 (1585 data params - 40 model params = expected mean of 1545; p-value = 0.445324)
  Completed in 2.0s
  2*Delta(log(L)) = 1586.54
  Iteration 7 took 2.1s
  
--- Iterative MLGST: Iter 08 of 10  1969 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 1904.4 (1969 data params - 40 model params = expected mean of 1929; p-value = 0.650628)
  Completed in 2.6s
  2*Delta(log(L)) = 1945.36
  Iteration 8 took 2.8s
  
--- Iterative MLGST: Iter 09 of 10  2353 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 2302.72 (2353 data params - 40 model params = expected mean of 2313; p-value = 0.556292)
  Completed in 4.3s
  2*Delta(log(L)) = 2352.79
  Iteration 9 took 4.7s
  
--- Iterative MLGST: Iter 10 of 10  2737 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 2677.85 (2737 data params - 40 model params = expected mean of 2697; p-value = 0.599592)
  Completed in 4.2s
  2*Delta(log(L)) = 2735.38
  Iteration 10 took 4.9s
  
  Switching to ML objective (last iteration)
  --- MLGST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
    Maximum log(L) = 1360.55 below upper bound of -228670
      2*Delta(log(L)) = 2721.09 (2737 data params - 40 model params = expected mean of 2697; p-value = 0.368373)
    Completed in 7.9s
  2*Delta(log(L)) = 2721.09
  Final MLGST took 7.9s
  
Iterative MLGST Total Time: 26.1s

Compare MLGST with MC2GST

Both MLGST and MC2GST use a $\chi^{2}$ optimization procedure for all but the final iteration. For the final set of gate strings (the last iteration), MLGST performs true maximum likelihood estimation. Below, we show how close the two estimates are to one another. First, we optimize the gauge so that the estimated gates are as close to the target gates as the gauge degrees of freedom allow.

In [12]:
# We optimize over the gate set gauge
gs_mle         = pygsti.gaugeopt_to_target(gs_mle,depol_gateset)
gs_mle_lowcnts = pygsti.gaugeopt_to_target(gs_mle_lowcnts,depol_gateset)
gs_mc2         = pygsti.gaugeopt_to_target(gs_mc2,depol_gateset)
gs_mc2_lowcnts = pygsti.gaugeopt_to_target(gs_mc2_lowcnts,depol_gateset)
In [13]:
print("Frobenius diff btwn MLGST  and datagen = {0}".format(round(gs_mle.frobeniusdist(depol_gateset), 6)))
print("Frobenius diff btwn MC2GST and datagen = {0}".format(round(gs_mc2.frobeniusdist(depol_gateset), 6)))
print("Frobenius diff btwn MLGST  and LGST    = {0}".format(round(gs_mle.frobeniusdist(gs_clgst), 6)))
print("Frobenius diff btwn MLGST  and MC2GST  = {0}".format(round(gs_mle.frobeniusdist(gs_mc2), 6)))
print("Chi^2 ( MC2GST ) = {0}".format(round(pygsti.chi2(ds, gs_mc2, lsgstListOfLists[-1]), 4)))
print("Chi^2 ( MLGST )  = {0}".format(round(pygsti.chi2(ds, gs_mle, lsgstListOfLists[-1] ), 4)))
print("LogL  ( MC2GST ) = {0}".format(round(pygsti.logl(gs_mc2, ds, lsgstListOfLists[-1]), 4)))
print("LogL  ( MLGST )  = {0}".format(round(pygsti.logl(gs_mle, ds, lsgstListOfLists[-1]), 4)))
Frobenius diff btwn MLGST  and datagen = 0.020418
Frobenius diff btwn MC2GST and datagen = 0.020418
Frobenius diff btwn MLGST  and LGST    = 0.014499
Frobenius diff btwn MLGST  and MC2GST  = 5.3e-05
Chi^2 ( MC2GST ) = 2795.5891
Chi^2 ( MLGST )  = 2796.2956
LogL  ( MC2GST ) = -4601529.7298
LogL  ( MLGST )  = -4601529.3414

Notice that, as expected, the MC2GST estimate has a slightly lower $\chi^{2}$ score than the MLGST estimate, and the MLGST estimate has a slightly higher log-likelihood than the MC2GST estimate. In addition, both are close (in terms of the Frobenius difference) to the depolarized gate set, which is good: it means GST is giving us estimates that are close to the true gate set used to generate the data. Performing the same analysis with the low-count data shows larger differences between the two estimates; this is expected, since the $\chi^2$ and log-likelihood statistics become more similar at large $N$, that is, for large numbers of samples.
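The large-$N$ agreement can be checked numerically with a standalone sketch (illustrative only, not pygsti code; `chi2_stat` and `dlogl_stat` are hypothetical helpers). For a typical $\sim 1\sigma$ fluctuation, $f = p + 1/\sqrt{N}$, the gap between the two objectives shrinks as $N$ grows:

```python
import numpy as np

# Illustrative only (not pygsti code): evaluate both objectives for a
# typical ~1-sigma fluctuation and watch the gap between them shrink.
def chi2_stat(N, f, p):
    return N * (f - p)**2 / p + N * ((1 - f) - (1 - p))**2 / (1 - p)

def dlogl_stat(N, f, p):
    t = lambda a, b: 0.0 if a == 0 else N * a * np.log(a / b)
    return 2 * (t(f, p) + t(1 - f, 1 - p))

p = 0.5
for N in (100, 10000, 1000000):
    f = p + 1.0 / np.sqrt(N)
    print(N, abs(chi2_stat(N, f, p) - dlogl_stat(N, f, p)))
```

The $\chi^2$ term stays fixed at 4 for these points while the log-likelihood term converges to it, mirroring the smaller MLGST/MC2GST discrepancy seen with the full-count data above.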

In [14]:
print("LOW COUNT DATA:")
print("Frobenius diff btwn MLGST  and datagen = {0}".format(round(gs_mle_lowcnts.frobeniusdist(depol_gateset), 6)))
print("Frobenius diff btwn MC2GST and datagen = {0}".format(round(gs_mc2_lowcnts.frobeniusdist(depol_gateset), 6)))
print("Frobenius diff btwn MLGST  and LGST    = {0}".format(round(gs_mle_lowcnts.frobeniusdist(gs_clgst), 6)))
print("Frobenius diff btwn MLGST  and MC2GST  = {0}".format(round(gs_mle_lowcnts.frobeniusdist(gs_mc2), 6)))
print("Chi^2 ( MC2GST )  = {0}".format(round(pygsti.chi2(dsLowCounts, gs_mc2_lowcnts, lsgstListOfLists[-1]), 4)))
print("Chi^2 ( MLGST )   = {0}".format(round(pygsti.chi2(dsLowCounts, gs_mle_lowcnts, lsgstListOfLists[-1] ), 4)))
print("LogL  ( MC2GST )  = {0}".format(round(pygsti.logl(gs_mc2_lowcnts, dsLowCounts, lsgstListOfLists[-1]), 4)))
print("LogL  ( MLGST )   = {0}".format(round(pygsti.logl(gs_mle_lowcnts, dsLowCounts, lsgstListOfLists[-1]), 4)))
LOW COUNT DATA:
Frobenius diff btwn MLGST  and datagen = 0.0241
Frobenius diff btwn MC2GST and datagen = 0.024054
Frobenius diff btwn MLGST  and LGST    = 0.018807
Frobenius diff btwn MLGST  and MC2GST  = 0.010513
Chi^2 ( MC2GST )  = 2677.8492
Chi^2 ( MLGST )   = 2692.2308
LogL  ( MC2GST )  = -230037.9222
LogL  ( MLGST )   = -230030.7786
In [ ]: