The gate sequences used in standard Long Sequence GST are more numerous than needed to amplify every possible gate error. (Technically, this is because the informationally complete fiducial sub-sequences allow extraction of each germ's entire process matrix, when all that is needed is the part describing the amplified directions in gate set space.) Because of this over-completeness, fewer sequences, i.e. experiments, may be used while retaining the desired Heisenberg-like scaling ($\sim 1/L$, where $L$ is the maximum sequence length). The over-completeness can still be desirable, however: it makes the GST optimization more robust to model violation, and so can stabilize the GST parameter optimization in the presence of significant non-Markovian noise. Recall that the form of a GST gate sequence is
$$S = F_i (g_k)^n F_j$$

where $F_i$ is a "preparation fiducial" sequence, $F_j$ is a "measurement fiducial" sequence, and $g_k$ is a "germ" sequence. We refer to the repeated germ sequence $(g_k)^n$ as a "germ-power". There are currently three different ways to reduce a standard set of GST gate sequences within pyGSTi, each of which removes certain $(F_i,F_j)$ fiducial pairs for certain germ-powers.
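The composition above can be sketched in plain Python. This is purely illustrative — `make_gst_sequence` is a hypothetical helper written for this sketch, not part of the pyGSTi API, which represents sequences with its own `GateString` objects:

```python
# Illustrative sketch: compose S = F_i (g_k)^n F_j from gate-label tuples.
# (Hypothetical helper; pyGSTi uses its own GateString objects internally.)
def make_gst_sequence(prep_fiducial, germ, power, meas_fiducial):
    """Return the gate-label tuple for F_i (g_k)^power F_j."""
    return tuple(prep_fiducial) + tuple(germ) * power + tuple(meas_fiducial)

seq = make_gst_sequence(('Gx',), ('Gx', 'Gy'), 2, ('Gy',))
print(seq)  # ('Gx', 'Gx', 'Gy', 'Gx', 'Gy', 'Gy')
```

Each fiducial pair $(F_i, F_j)$ combined with each germ-power yields one such sequence; fiducial pair reduction works by dropping some of these $(F_i, F_j)$ combinations.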
We now demonstrate how to invoke each of these methods within pyGSTi for the case of a single qubit, using our standard $X(\pi/2)$, $Y(\pi/2)$, $I$ gate set. First, we retrieve a target GateSet as usual, along with corresponding sets of fiducial and germ sequences. We set the maximum length to 32, roughly consistent with our data-generating gate set having gates depolarized by 10%.
#Import pyGSTi and the "standard 1-qubit quantities for a gateset with X(pi/2), Y(pi/2), and idle gates"
import pygsti
import pygsti.construction as pc
from pygsti.construction import std1Q_XYI
#Collect a target gate set, germ and fiducial strings, and set
# a list of maximum lengths.
gs_target = std1Q_XYI.gs_target
prep_fiducials = std1Q_XYI.fiducials
meas_fiducials = std1Q_XYI.fiducials
germs = std1Q_XYI.germs
maxLengths = [1,2,4,8,16,32]
gateLabels = list(gs_target.gates.keys())
print("Gate labels = ", gateLabels)
Gate labels = ['Gi', 'Gx', 'Gy']
Now let's generate a list of all the gate sequences for each maximum length - so a list of lists. We'll generate the full lists (without any reduction) and the lists for each of the three reduction types listed above. In the random reduction case, we'll keep 30% of the fiducial pairs, removing 70% of them.
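The random-reduction idea can be sketched in plain Python before running the real pyGSTi calls below. This is only a sketch of the concept, not pyGSTi's implementation: with 6 preparation and 6 measurement fiducials there are 36 possible $(F_i,F_j)$ pairs, and keeping 30% of them means sampling 11 at random:

```python
# Sketch of random fiducial-pair reduction (concept only, not pyGSTi's code):
# keep a random 30% of the (prepIndex, measIndex) pairs.
import random

n_prep, n_meas = 6, 6          # 6 fiducials each in the std1Q_XYI set
all_pairs = [(i, j) for i in range(n_prep) for j in range(n_meas)]
rng = random.Random(1234)      # fixed seed for reproducibility
n_keep = int(round(0.30 * len(all_pairs)))
kept = rng.sample(all_pairs, n_keep)
print(len(all_pairs), n_keep)  # 36 11
```

Because the selection is random rather than optimized, more pairs must typically be kept than with the structured methods to retain sensitivity to all amplified directions.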
#Make list-of-lists of GST gate sequences
fullStructs = pc.make_lsgst_structs(
gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths)
#Print the number of gate sequences for each maximum length
print("** Without any reduction ** ")
for L,strct in zip(maxLengths,fullStructs):
print("L=%d: %d gate sequences" % (L,len(strct.allstrs)))
#Make a (single) list of all the GST sequences ever needed,
# that is, the list of all the experiments needed to perform GST.
fullExperiments = pc.make_lsgst_experiment_list(
gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths)
print("\n%d experiments to run GST." % len(fullExperiments))
** Without any reduction **
L=1: 92 gate sequences
L=2: 168 gate sequences
L=4: 450 gate sequences
L=8: 862 gate sequences
L=16: 1282 gate sequences
L=32: 1702 gate sequences

1702 experiments to run GST.
fidPairs = pygsti.alg.find_sufficient_fiducial_pairs(
gs_target, prep_fiducials, meas_fiducials, germs,
searchMode="random", nRandom=100, seed=1234,
verbosity=1, memLimit=int(2*(1024)**3), minimumPairs=2)
# fidPairs is a list of (prepIndex,measIndex) 2-tuples, where
# prepIndex indexes prep_fiducials and measIndex indexes meas_fiducials
print("Global FPR says we only need to keep the %d pairs:\n %s\n"
% (len(fidPairs),fidPairs))
gfprStructs = pc.make_lsgst_structs(
gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths,
fidPairs=fidPairs)
print("Global FPR reduction")
for L,strct in zip(maxLengths,gfprStructs):
print("L=%d: %d gate sequences" % (L,len(strct.allstrs)))
gfprExperiments = pc.make_lsgst_experiment_list(
gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths,
fidPairs=fidPairs)
print("\n%d experiments to run GST." % len(gfprExperiments))
------ Fiducial Pair Reduction --------
maximum number of amplified parameters = 34
Beginning search for a good set of 2 pairs (630 pair lists to test)
Beginning search for a good set of 3 pairs (7140 pair lists to test)
Global FPR says we only need to keep the 3 pairs:
 [(0, 4), (0, 5), (5, 2)]

Global FPR reduction
L=1: 92 gate sequences
L=2: 97 gate sequences
L=4: 123 gate sequences
L=8: 159 gate sequences
L=16: 195 gate sequences
L=32: 231 gate sequences

231 experiments to run GST.
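A quick sanity check on the counts above (plain Python, using only the numbers printed in the output): once all germ-powers are in play, each new maximum-length stage should add one sequence per (germ, kept-pair) combination — 12 germs times 3 kept pairs, i.e. 36. The first couple of stages grow by less, presumably because short sequences duplicated across stages are removed:

```python
# Verify the per-stage growth of the global-FPR sequence counts printed above.
n_germs, n_kept_pairs = 12, 3
counts = [92, 97, 123, 159, 195, 231]            # L = 1, 2, 4, 8, 16, 32
increments = [b - a for a, b in zip(counts, counts[1:])]
print(increments)                                # [5, 26, 36, 36, 36]
assert all(inc == n_germs * n_kept_pairs for inc in increments[2:])
```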
fidPairsDict = pygsti.alg.find_sufficient_fiducial_pairs_per_germ(
gs_target, prep_fiducials, meas_fiducials, germs,
searchMode="random", constrainToTP=True,
nRandom=100, seed=1234, verbosity=1,
memLimit=int(2*(1024)**3))
print("\nPer-germ FPR to keep the pairs:")
for germ,pairsToKeep in fidPairsDict.items():
print("%s: %s" % (str(germ),pairsToKeep))
pfprStructs = pc.make_lsgst_structs(
gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths,
fidPairs=fidPairsDict) #note: fidPairs arg can be a dict too!
print("\nPer-germ FPR reduction")
for L,strct in zip(maxLengths,pfprStructs):
print("L=%d: %d gate sequences" % (L,len(strct.allstrs)))
pfprExperiments = pc.make_lsgst_experiment_list(
gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths,
fidPairs=fidPairsDict)
print("\n%d experiments to run GST." % len(pfprExperiments))
------ Individual Fiducial Pair Reduction --------
Progress: [##################################################] 100.0% -- GxGxGyGxGyGy germ (4 params)

Per-germ FPR to keep the pairs:
Gi: [(0, 0), (1, 1), (5, 1), (5, 2)]
GxGiGy: [(1, 3), (1, 4), (4, 0), (5, 0)]
GxGxGy: [(0, 2), (0, 4), (1, 3), (2, 5), (3, 2), (4, 4)]
Gy: [(0, 0), (0, 5), (1, 1), (4, 4)]
GxGyGyGi: [(0, 2), (1, 3), (1, 4), (4, 4), (5, 0), (5, 2)]
GxGxGyGxGyGy: [(1, 3), (1, 4), (4, 0), (5, 0)]
GxGyGi: [(1, 3), (1, 4), (4, 0), (5, 0)]
GxGyGy: [(0, 2), (1, 3), (1, 4), (4, 4), (5, 0), (5, 2)]
GyGiGi: [(0, 0), (0, 5), (1, 1), (4, 4)]
Gx: [(0, 0), (0, 4), (3, 3), (5, 2)]
GxGiGi: [(0, 0), (0, 4), (3, 3), (5, 2)]
GxGy: [(1, 3), (1, 4), (4, 0), (5, 0)]

Per-germ FPR reduction
L=1: 92 gate sequences
L=2: 99 gate sequences
L=4: 140 gate sequences
L=8: 193 gate sequences
L=16: 247 gate sequences
L=32: 301 gate sequences

301 experiments to run GST.
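The per-germ counts can be checked the same way (plain Python, tallying the output above): summing the number of kept pairs over all twelve germs gives the number of sequences each late stage adds — here 54, matching the growth from L=8 to L=16 (247 − 193) and from L=16 to L=32 (301 − 247):

```python
# Tally the kept pairs per germ, copied from the per-germ FPR output above.
pairs_per_germ = {
    'Gi': 4, 'GxGiGy': 4, 'GxGxGy': 6, 'Gy': 4, 'GxGyGyGi': 6,
    'GxGxGyGxGyGy': 4, 'GxGyGi': 4, 'GxGyGy': 6, 'GyGiGi': 4,
    'Gx': 4, 'GxGiGi': 4, 'GxGy': 4,
}
total = sum(pairs_per_germ.values())
print(total)                       # 54
assert total == 247 - 193 == 301 - 247
```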
#keep only 30% of the pairs
rfprStructs = pc.make_lsgst_structs(
gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths,
keepFraction=0.30, keepSeed=1234)
print("Random FPR reduction")
for L,strct in zip(maxLengths,rfprStructs):
print("L=%d: %d gate sequences" % (L,len(strct.allstrs)))
rfprExperiments = pc.make_lsgst_experiment_list(
gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths,
keepFraction=0.30, keepSeed=1234)
print("\n%d experiments to run GST." % len(rfprExperiments))
Random FPR reduction
L=1: 92 gate sequences
L=2: 112 gate sequences
L=4: 209 gate sequences
L=8: 339 gate sequences
L=16: 467 gate sequences
L=32: 597 gate sequences

597 experiments to run GST.
In each case above, we constructed (1) a list-of-lists giving the GST gate sequences for each maximum-length stage, and (2) a list of the experiments. In what follows, we'll use the experiment list to generate some simulated ("fake") data for each case, and then run GST on it. Since this is done in exactly the same way for all three cases, we'll put all of the logic in a function. Note that the use of fiducial pair reduction requires `do_long_sequence_gst_base`, since `do_long_sequence_gst` internally builds a complete list of gate sequences.
#use a depolarized version of the target gates to generate the data
gs_datagen = gs_target.depolarize(gate_noise=0.1, spam_noise=0.001)
def runGST(gstStructs, exptList):
#Use list of experiments, expList, to generate some data
ds = pc.generate_fake_data(gs_datagen, exptList,
nSamples=1000,sampleError="binomial", seed=1234)
#Use "base" driver to directly pass list of gatestring structures
return pygsti.do_long_sequence_gst_base(
ds, gs_target, gstStructs, verbosity=1)
print("\n------ GST with standard (full) sequences ------")
full_results = runGST(fullStructs, fullExperiments)
print("\n------ GST with GFPR sequences ------")
gfpr_results = runGST(gfprStructs, gfprExperiments)
print("\n------ GST with PFPR sequences ------")
pfpr_results = runGST(pfprStructs, pfprExperiments)
print("\n------ GST with RFPR sequences ------")
rfpr_results = runGST(rfprStructs, rfprExperiments)
------ GST with standard (full) sequences ------
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0%  1702 gate strings ---
Iterative MLGST Total Time: 12.4s
--- Re-optimizing logl after robust data scaling ---

------ GST with GFPR sequences ------
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0%  231 gate strings ---
Iterative MLGST Total Time: 5.6s
--- Re-optimizing logl after robust data scaling ---

------ GST with PFPR sequences ------
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0%  301 gate strings ---
Iterative MLGST Total Time: 4.9s
--- Re-optimizing logl after robust data scaling ---

------ GST with RFPR sequences ------
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0%  597 gate strings ---
Iterative MLGST Total Time: 8.0s
--- Re-optimizing logl after robust data scaling ---
Finally, one can generate reports from the GST results obtained with reduced sequence sets:
pygsti.report.create_standard_report(full_results,
filename="tutorial_files/example_stdstrs_report", title="Standard GST Strings Example")
pygsti.report.create_standard_report(gfpr_results,
filename="tutorial_files/example_gfpr_report", title="Global FPR Report Example")
pygsti.report.create_standard_report(pfpr_results,
filename="tutorial_files/example_pfpr_report", title="Per-germ FPR Report Example")
pygsti.report.create_standard_report(rfpr_results,
filename="tutorial_files/example_rfpr_report", title="Random FPR Report Example")
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
/Users/enielse/research/pyGSTi/packages/pygsti/extras/rb/rbutils.py:382: UserWarning: Predicted RB decay parameter / error rate may be unreliable: Gateset is not (approximately) trace-preserving.
*** Generating plots ***
*** Merging into template file ***
Output written to tutorial_files/example_stdstrs_report directory
*** Report Generation Complete!  Total time 148.394s ***

*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
/Users/enielse/research/pyGSTi/packages/pygsti/extras/rb/rbutils.py:382: UserWarning: Predicted RB decay parameter / error rate may be unreliable: Gateset is not (approximately) trace-preserving.
*** Generating plots ***
*** Merging into template file ***
Output written to tutorial_files/example_gfpr_report directory
*** Report Generation Complete!  Total time 17.9813s ***

*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
*** Generating plots ***
*** Merging into template file ***
Output written to tutorial_files/example_pfpr_report directory
*** Report Generation Complete!  Total time 21.3s ***

*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
*** Generating plots ***
*** Merging into template file ***
Output written to tutorial_files/example_rfpr_report directory
*** Report Generation Complete!  Total time 38.1271s ***
<pygsti.report.workspace.Workspace at 0x10dd32518>
If all has gone well, the standard GST, GFPR, PFPR, and RFPR reports may now be viewed. The only notable difference in the output is the presence of "gaps" in the color box plots, which plot quantities such as the log-likelihood across all gate sequences, organized by germ and fiducials.