This example demonstrates how to take a previously computed Results object and add a new gauge-optimized version to one of its estimates. First, let's "pre-compute" a Results object using do_long_sequence_gst, which contains a single Estimate called "default":
from __future__ import print_function
import pygsti, pickle
from pygsti.construction import std1Q_XYI
#Generate some fake data and run GST on it.
gs_target = std1Q_XYI.gs_target.copy()
gs_datagen = std1Q_XYI.gs_target.depolarize(gate_noise=0.1, spam_noise=0.001)
listOfExperiments = pygsti.construction.make_lsgst_experiment_list(
gs_target, std1Q_XYI.fiducials, std1Q_XYI.fiducials, std1Q_XYI.germs, [1,2,4])
ds = pygsti.construction.generate_fake_data(gs_datagen, listOfExperiments, nSamples=1000,
sampleError="binomial", seed=1234)
gs_target.set_all_parameterizations("TP")
results = pygsti.do_long_sequence_gst(
ds, gs_target, std1Q_XYI.fiducials, std1Q_XYI.fiducials, std1Q_XYI.germs, [1,2,4],
gaugeOptParams={'itemWeights': {'gates': 1, 'spam': 1}}, verbosity=1)
with open("example_files/regaugeopt_result.pkl","wb") as f:
pickle.dump(results, f) # pickle the results, to mimic typical workflow
--- Gate Sequence Creation ---
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0%  450 gate strings ---
Iterative MLGST Total Time: 2.0s
--- Re-optimizing logl after robust data scaling ---
Next, let's load the pre-computed results and use the add_gaugeoptimized method of the pygsti.objects.Estimate object to add a new gauge-optimized version of the (gauge-unfixed) gate set estimate stored in my_results.estimates['default']. The first argument of add_gaugeoptimized is a dictionary of arguments for pygsti.gaugeopt_to_target, except that you don't need to specify the GateSet to gauge-optimize or the target GateSet (just like the gaugeOptParams argument of do_long_sequence_gst). The optional label argument defines the key name under which the gauge-optimized GateSet and the corresponding parameter dictionary are stored within the Estimate's .gatesets and .goparameters dictionaries, respectively.
my_results = pickle.load(open("example_files/regaugeopt_result.pkl","rb"))
estimate = my_results.estimates['default']
estimate.add_gaugeoptimized( {'itemWeights': {'gates': 1, 'spam': 0.001}}, label="Spam 1e-3" )
gs_gaugeopt = estimate.gatesets['Spam 1e-3']
print(list(estimate.goparameters.keys())) # 'go0' is the default gauge-optimization label
print(gs_gaugeopt.frobeniusdist(estimate.gatesets['target']))
['go0', 'Spam 1e-3']
0.03870984683871326
One can also perform the gauge optimization separately and specify it using the gateset
argument (this is useful when you want or need to compute the gauge optimization elsewhere):
gs_unfixed = estimate.gatesets['final iteration estimate']
gs_gaugefixed = pygsti.gaugeopt_to_target(gs_unfixed, estimate.gatesets['target'], {'gates': 1, 'spam': 0.001})
estimate.add_gaugeoptimized( {'any': "dictionary",
                              "doesn't really": "matter",
                              "but could be useful if you put gaugeopt params": 'here'},
                             gateset=gs_gaugefixed, label="Spam 1e-3 custom" )
print(list(estimate.goparameters.keys()))
print(estimate.gatesets['Spam 1e-3 custom'].frobeniusdist(estimate.gatesets['Spam 1e-3']))
['go0', 'Spam 1e-3', 'Spam 1e-3 custom']
0.0
You can inspect the parameters of a stored gauge optimization via the .goparameters dictionary:
import pprint
pp = pprint.PrettyPrinter()
pp.pprint(dict(estimate.goparameters['Spam 1e-3']))
{'_gaugeGroupEl': <pygsti.objects.gaugegroup.TPGaugeGroupElement object at 0x10c6435c0>,
 'gateset': <pygsti.objects.gateset.GateSet object at 0x10c6b6da0>,
 'itemWeights': {'gates': 1, 'spam': 0.001},
 'returnAll': True,
 'targetGateset': <pygsti.objects.gateset.GateSet object at 0x10bc6f748>}
Finally, note that if, in the original call to do_long_sequence_gst, you set gaugeOptParams=False, then no gauge optimization is performed at all (there will be no "go0" entry), and you start with a blank slate, free to perform whatever gauge optimizations you want on your own.