This example shows how to introduce new gate labels into a GST analysis in order to test for context dependence. In particular, we'll look at the 1-qubit X, Y, I gate set. Suppose a usual GST analysis cannot fit the model well, and that we suspect this is because a "Gi" gate which immediately follows a "Gx" gate is affected by some residual noise that isn't otherwise present. In this case, we can model the system as having two different idle gates, "Gi" and "Gi2", and treat the idle as "Gi2" whenever it immediately follows a "Gx" gate.
from __future__ import print_function
import pygsti
from pygsti.construction import std1Q_XYI
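Before generating any data, it may help to see how such a replacement rule acts on a single sequence. The following is a minimal sketch (the sequence shown is just a made-up illustration) using the same manipulate_gatestring function employed later in this example:
# A made-up example sequence; only the Gi immediately following Gx should be relabeled.
example_seq = pygsti.objects.GateString(('Gx','Gi','Gy','Gi'))
example_rules = [ (("Gx","Gi") , ("Gx","Gi2")) ]  # GxGi => GxGi2
print(pygsti.construction.manipulate_gatestring(example_seq, example_rules))
# expected (assuming the rule semantics described above): GxGi2GyGi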
First we'll create a mock data set that exhibits this context dependence. To do this, we add an additional "Gi2" gate to the data-generating gate set, generate some data using "Gi2"-containing gate sequences, and finally replace all instances of "Gi2" with "Gi" so that it looks like data that was supposed to have just a single "Gi" gate.
# The usual setup: identify the target gateset, fiducials, germs, and max-lengths
gs_target = std1Q_XYI.gs_target
fiducials = std1Q_XYI.fiducials
germs = std1Q_XYI.germs
maxLengths = [1,2,4,8,16,32]
# Create a GateSet to generate the data - one that has two identity gates: Gi and Gi2
gs_datagen = gs_target.depolarize(gate_noise=0.1, spam_noise=0.001)
gs_datagen["Gi2"] = gs_datagen["Gi"].copy()
gs_datagen["Gi2"].depolarize(0.1) # depolarize Gi2 even further
gs_datagen["Gi2"].rotate( (0,0,0.1), gs_datagen.basis) # and rotate it slightly about the Z-axis
# Create a set of gate sequences by constructing the usual set of experiments and using
# "manipulate_gatestring_list" to replace Gi with Gi2 whenever it follows Gx. Create a
# DataSet using the resulting Gi2-containing list of sequences.
listOfExperiments = pygsti.construction.make_lsgst_experiment_list(gs_target, fiducials, fiducials, germs, maxLengths)
rules = [ (("Gx","Gi") , ("Gx","Gi2")) ] # a single replacement rule: GxGi => GxGi2
listOfExperiments = pygsti.construction.manipulate_gatestring_list(listOfExperiments, rules)
ds = pygsti.construction.generate_fake_data(gs_datagen, listOfExperiments, nSamples=10000,
sampleError="binomial", seed=1234)
# Revert all the Gi2 labels back to Gi, so that the DataSet doesn't contain any Gi2 labels.
rev_rules = [ (("Gi2",) , ("Gi",)) ] # returns all Gi2's to Gi's
ds.process_gate_strings(lambda gstr: pygsti.construction.manipulate_gatestring(gstr,rev_rules))
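As a quick sanity check (a sketch, assuming DataSet.keys() yields the gate sequences as in pyGSTi 0.9.x), we can verify that no "Gi2" labels remain in the reverted data set:
# Verify the reverted DataSet contains only Gi (no Gi2) labels.
assert not any( ("Gi2" in gstr.tup) for gstr in ds.keys() )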
Running "standard" GST on this DataSet
resulst in a bad fit:
gs_target.set_all_parameterizations("TP")
results = pygsti.do_long_sequence_gst(ds, gs_target, fiducials, fiducials,
germs, maxLengths, verbosity=2)
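To quantify how bad the fit is, one can compare the maximum achievable log-likelihood with that of the final estimate. Here is a rough sketch using pyGSTi's top-level logl and logl_max functions; it recomputes the same kind of 2*Delta(log(L)) figure that the verbose output above reports:
# A rough check of fit quality (assumes the standard top-level logl/logl_max functions).
gs_fit = results.estimates['default'].gatesets['final iteration estimate']
two_dlogl = 2*(pygsti.logl_max(ds) - pygsti.logl(gs_fit, ds))
print("2*Delta(log(L)) for standard GST fit:", two_dlogl)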
Since we have a hunch that the bad fit arises because "Gi" behaves differently when it follows "Gx", we can fit the data to a model that has two idle gates, again called "Gi" and "Gi2", and tell GST to apply the "GxGi => GxGi2" manipulation rule before computing the probability of each gate sequence:
#Create a target gate set which includes a duplicate Gi called Gi2
gs_targetB = gs_target.copy()
gs_targetB['Gi2'] = gs_target['Gi'].copy() # Gi2 should just be another Gi
#Run GST with:
# 1) replacement rules giving instructions how to process all of the gate sequences
# 2) data set aliases which replace labels in the *processed* strings before querying the DataSet.
rules = [ (("Gx","Gi") , ("Gx","Gi2")) ] # a single replacement rule: GxGi => GxGi2
resultsB = pygsti.do_long_sequence_gst(ds, gs_targetB, fiducials, fiducials,
germs, maxLengths,
advancedOptions={"gateLabelAliases": {'Gi2': ('Gi',)},
"stringManipRules": rules},
verbosity=2)
This gives a better fit, but not as good as it should be (given that we know the data was generated from exactly the model being used). This is because the (default) LGST seed is a poor starting point, which can happen, particularly when looking for context dependence. (The LGST seed - which you can print using print(resultsB.estimates['default'].gatesets['seed']) - contains the same estimate for Gi and Gi2, roughly midway between the true values of the two gates, which can be a poor estimate for both.) To instead use our own custom guess as the starting point, we do the following:
#Create a guess, which we'll use instead of LGST - here we just
# take a slightly depolarized target.
gs_start = gs_targetB.depolarize(gate_noise=0.01, spam_noise=0.01)
#Run GST with the replacement rules as before.
resultsC = pygsti.do_long_sequence_gst(ds, gs_targetB, fiducials, fiducials,
germs, maxLengths,
advancedOptions={"gateLabelAliases": {'Gi2': ('Gi',)},
"stringManipRules": rules,
"starting point": gs_start},
verbosity=2)
This results in a much better fit and estimate, as seen from the final 2*Delta(log(L)) number.
gsA = pygsti.gaugeopt_to_target(results.estimates['default'].gatesets['final iteration estimate'], gs_datagen)
gsB = pygsti.gaugeopt_to_target(resultsB.estimates['default'].gatesets['final iteration estimate'], gs_datagen)
gsC = pygsti.gaugeopt_to_target(resultsC.estimates['default'].gatesets['final iteration estimate'], gs_datagen)
gsA['Gi2'] = gsA['Gi'] #so gsA is comparable with gs_datagen
print("Diff between truth and standard GST: ", gs_datagen.frobeniusdist(gsA))
print("Diff between truth and context-dep GST w/LGST starting pt: ", gs_datagen.frobeniusdist(gsB))
print("Diff between truth and context-dep GST w/custom starting pt: ", gs_datagen.frobeniusdist(gsC))