The purpose of GST's fiducials is to generate an informationally complete set of states and measurements. However, the gateset parameters that GST amplifies can often be amplified using only a subset of all (preparation, measurement) fiducial pairs, which reduces the number of gate sequences that must be run.

In this tutorial, we use the pygsti.alg.find_sufficient_fiducial_pairs function to find a reduced set of fiducial pairs that is still sufficient for GST.
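Before touching the real data, a toy example helps fix the idea: each fiducial pair (i, j) picks one preparation fiducial and one measurement fiducial to sandwich a repeated germ, and pair reduction keeps only a small subset of all such pairs. The sketch below uses made-up string "fiducials" rather than pyGSTi objects, purely to illustrate the bookkeeping:

# Illustrative sketch only -- hypothetical lists, not pyGSTi objects.
prepFids = ["Gx", "Gy", "GxGx"]        # hypothetical preparation fiducials
measFids = ["Gx", "Gy", "GxGx"]        # hypothetical measurement fiducials
germPower = "GiGiGiGi"                 # a germ repeated out to some maximum length

allPairs  = [(i, j) for i in range(len(prepFids)) for j in range(len(measFids))]
keptPairs = [(0, 0), (0, 1), (1, 0)]   # the kind of index subset that pair reduction returns

fullExperiments    = [prepFids[i] + germPower + measFids[j] for (i, j) in allPairs]
reducedExperiments = [prepFids[i] + germPower + measFids[j] for (i, j) in keptPairs]
print(len(fullExperiments), "->", len(reducedExperiments))   # 9 -> 3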

In [1]:
from __future__ import print_function
import matplotlib
matplotlib.use('Agg')
In [2]:
import pygsti
import json
In [3]:
# Follow Algorithm tutorial to generate LGST gatesets
gs_target = pygsti.io.load_gateset("tutorial_files/Example_Gateset.txt")
ds = pygsti.io.load_dataset("tutorial_files/Example_Dataset.txt", cache=True)
fiducialList = pygsti.io.load_gatestring_list("tutorial_files/Example_FiducialList.txt")

#Run LGST to get an initial estimate for the gates in gs_target based on the data in ds
specs = pygsti.construction.build_spam_specs(fiducialGateStrings=fiducialList)
gs_lgst = pygsti.do_lgst(ds, specs, targetGateset=gs_target, svdTruncateTo=4, verbosity=1)

#Gauge optimize the result to match the target gateset
gs_lgst_after_gauge_opt = pygsti.gaugeopt_to_target(gs_lgst, gs_target)

#Contract the result to CPTP
gs_clgst = pygsti.contract(gs_lgst_after_gauge_opt, "CPTP")

#Get lists of gate strings for successive iterations of LGST to use
specs  = pygsti.construction.build_spam_specs(fiducialGateStrings=fiducialList)
germList = pygsti.io.load_gatestring_list("tutorial_files/Example_GermsList.txt")
maxLengthList = json.load(open("tutorial_files/Example_maxLengths.json","r"))
Loading tutorial_files/Example_Dataset.txt: 100%
Writing cache file (to speed future loads): tutorial_files/Example_Dataset.txt.cache
--- LGST ---
In [4]:
#Get a sufficient set of fiducial pairs, meaning that with these fiducial pairs and the given set of germs,
# the gateset parameters which are amplified when all pairs are used are also amplified when using
# only the returned subset of pairs.
fidPairs = pygsti.alg.find_sufficient_fiducial_pairs(gs_target, fiducialList, fiducialList, germList, verbosity=1)
print(fidPairs)
------  Fiducial Pair Reduction --------
maximum number of amplified parameters = 34
Beginning search for a good set of 1 pairs (36 pair lists to test)
Beginning search for a good set of 2 pairs (630 pair lists to test)
Beginning search for a good set of 3 pairs (7140 pair lists to test)
[(0, 0), (0, 1), (1, 0)]
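The returned pairs are (preparation-fiducial index, measurement-fiducial index) tuples into the two fiducial lists passed to the function (here the same list is used for both). A quick way to see which sequences each pair selects, assuming fidPairs and fiducialList are as defined above:

#Map each (prep index, effect index) pair back to the fiducial sequences it selects
for (iPrep, iMeas) in fidPairs:
    print("prep fiducial:", fiducialList[iPrep], "  measurement fiducial:", fiducialList[iMeas])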
In [5]:
#Test a specific set of fiducial pairs: see how many gateset parameters are amplified
#pygsti.alg.find_sufficient_fiducial_pairs(gs_target, fiducialList, fiducialList, germList,
#                                          testPairList=[(0,0),(0,1),(1,0)], verbosity=4)
In [6]:
#Generate the gate string lists for each LSGST iteration, restricted to the reduced set of fiducial pairs
lsgstListOfLists = pygsti.construction.make_lsgst_lists(gs_target.gates.keys(), fiducialList, fiducialList,
                                                        germList, maxLengthList, fidPairs)

gs_lsgst_list = pygsti.do_iterative_mc2gst(ds, gs_clgst, lsgstListOfLists, verbosity=2,
                                         minProbClipForWeighting=1e-6, probClipInterval=(-1e6,1e6),
                                         returnAll=True )
--- Iterative MC2GST: Iter 01 of 10  92 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 40.9238 (92 data params - 40 model params = expected mean of 52; p-value = 0.866034)
  Completed in 0.2s
      Iteration 1 took 0.2s
  
--- Iterative MC2GST: Iter 02 of 10  92 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 40.9238 (92 data params - 40 model params = expected mean of 52; p-value = 0.866034)
  Completed in 0.0s
      Iteration 2 took 0.0s
  
--- Iterative MC2GST: Iter 03 of 10  95 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 45.7751 (95 data params - 40 model params = expected mean of 55; p-value = 0.80775)
  Completed in 0.2s
      Iteration 3 took 0.2s
  
--- Iterative MC2GST: Iter 04 of 10  114 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 71.8674 (114 data params - 40 model params = expected mean of 74; p-value = 0.548568)
  Completed in 0.3s
      Iteration 4 took 0.3s
  
--- Iterative MC2GST: Iter 05 of 10  146 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 115.418 (146 data params - 40 model params = expected mean of 106; p-value = 0.250138)
  Completed in 0.3s
      Iteration 5 took 0.3s
  
--- Iterative MC2GST: Iter 06 of 10  178 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 149.112 (178 data params - 40 model params = expected mean of 138; p-value = 0.244594)
  Completed in 0.4s
      Iteration 6 took 0.5s
  
--- Iterative MC2GST: Iter 07 of 10  210 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 189.135 (210 data params - 40 model params = expected mean of 170; p-value = 0.149864)
  Completed in 0.5s
      Iteration 7 took 0.5s
  
--- Iterative MC2GST: Iter 08 of 10  242 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 238.689 (242 data params - 40 model params = expected mean of 202; p-value = 0.0393795)
  Completed in 0.7s
      Iteration 8 took 0.7s
  
--- Iterative MC2GST: Iter 09 of 10  274 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 272.022 (274 data params - 40 model params = expected mean of 234; p-value = 0.0444789)
  Completed in 0.8s
      Iteration 9 took 0.8s
  
--- Iterative MC2GST: Iter 10 of 10  306 gate strings ---: 
  --- Minimum Chi^2 GST ---
  Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
   groups of ~1 procs each, to distribute over 56 params (taken as 1 param groups of ~56 params).
  Sum of Chi^2 = 295.644 (306 data params - 40 model params = expected mean of 266; p-value = 0.102215)
  Completed in 1.1s
      Iteration 10 took 1.1s
  
Iterative MC2GST Total Time: 4.6s
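To quantify the savings, one can also build the per-iteration string lists without the fiducial-pair restriction and compare lengths. A minimal sketch, assuming make_lsgst_lists treats an omitted fidPairs argument as "use all pairs" (its default):

#Hedged sketch: compare the final-iteration experiment count with and without pair reduction
fullListOfLists = pygsti.construction.make_lsgst_lists(gs_target.gates.keys(), fiducialList, fiducialList,
                                                       germList, maxLengthList)
print("Final iteration: %d gate strings with all pairs vs. %d with the reduced pairs"
      % (len(fullListOfLists[-1]), len(lsgstListOfLists[-1])))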
In [7]:
# Compute a few additional quantities needed to generate the report
Ls = maxLengthList
gateStrDict = { (L,germ):pygsti.construction.repeat_with_max_length(germ,L,False) for L in Ls for germ in germList }

#remove duplicates by replacing duplicate strings with None
runningList = []
for L in Ls:
    for germ in germList:
        if gateStrDict[(L,germ)] in runningList:
            gateStrDict[(L,germ)] = None
        else: runningList.append( gateStrDict[(L,germ)] )
In [8]:
#Gauge-optimize each iteration's gateset estimate to the target
gs_lsgst_list = [ pygsti.gaugeopt_to_target(gs, gs_target) for gs in gs_lsgst_list ]
In [9]:
#Collect the estimates into a Results object and generate the full PDF report
res = pygsti.report.Results()
res.init_Ls_and_germs("chi2", gs_target, ds, gs_clgst, maxLengthList, germList,
                    gs_lsgst_list, lsgstListOfLists, fiducialList, fiducialList, 
                    pygsti.construction.repeat_with_max_length, False, fidPairs)

res.create_full_report_pdf(filename="tutorial_files/Example_report_FR.pdf", verbosity=2)
*** Generating tables ***
 Iter 01 of 19 :   Generating table: targetSpamTable  [0.0s]
 Iter 02 of 19 :   Generating table: targetGatesTable  [0.0s]
 Iter 03 of 19 :   Generating table: datasetOverviewTable  [0.1s]
 Iter 04 of 19 :   Generating table: bestGatesetSpamTable  [0.0s]
 Iter 05 of 19 :   Generating table: bestGatesetSpamParametersTable  [0.0s]
 Iter 06 of 19 :   Generating table: bestGatesetGaugeOptParamsTable  [0.0s]
 Iter 07 of 19 :   Generating table: bestGatesetGatesTable  [0.0s]
 Iter 08 of 19 :   Generating table: bestGatesetChoiTable  [0.0s]
 Iter 09 of 19 :   Generating table: bestGatesetDecompTable  [0.0s]
 Iter 10 of 19 :   Generating table: bestGatesetRotnAxisTable  [0.0s]
 Iter 11 of 19 :   Generating table: bestGatesetVsTargetTable  [0.3s]
 Iter 12 of 19 :   Generating table: bestGatesetErrorGenTable  [0.0s]
 Iter 13 of 19 :   Generating table: metadataTable  [0.0s]
 Iter 14 of 19 :   Generating table: softwareEnvTable  [0.3s]
 Iter 15 of 19 :   Generating table: fiducialListTable  [0.0s]
 Iter 16 of 19 :   Generating table: prepStrListTable  [0.0s]
 Iter 17 of 19 :   Generating table: effectStrListTable  [0.0s]
 Iter 18 of 19 :   Generating table: germListTable  [0.0s]
 Iter 19 of 19 :   Generating table: progressTable  [0.3s]
*** Generating plots ***
Chi2 plots (2): 
 Iter 1 of 3 :   Generating figure: colorBoxPlotKeyPlot  [2.1s]
 Iter 2 of 3 :   Generating figure: bestEstimateColorBoxPlot  [19.4s]
 Iter 3 of 3 :   Generating figure: invertedBestEstimateColorBoxPlot  [25.6s]

*** Merging into template file ***
Latex file(s) successfully generated.  Attempting to compile with pdflatex...
Initial output PDF tutorial_files/Example_report_FR.pdf successfully generated.
Final output PDF tutorial_files/Example_report_FR.pdf successfully generated. Cleaning up .aux and .log files.