How to gauge-optimize to a model other than the ideal targets

Typically, gauge optimization is performed with respect to the set of ideal target gates and SPAM operations. This is convenient, since you need to specify the ideal targets as points of comparison anyway, but it is not always the best approach. In particular, when you expect some or all of the gate estimates to differ from the ideal operations, either substantially or in small but specific ways, it can hugely aid later interpretation to specify a non-ideal Model as the target for gauge optimization. By separating the "ideal targets" from the "gauge optimization targets", you're able to tell the gauge optimizer what gates you think you have, including any known errors. This can result in a gauge-optimized estimate that is much more sensible and straightforward to interpret.

For example, gauge transformations can slosh error between the SPAM operations and the non-unital parts of gates. If you know your gates are slightly non-unital, you can include this information in the gauge-optimization target (by specifying a Model which is slightly non-unital) and obtain an estimate with low SPAM error and slightly non-unital gates. If you just used the ideal (unital) target gates, the gauge optimizer, which is often set up to care more about matching the gates than the SPAM operations, could slosh all the error into the SPAM operations, resulting in a confusing estimate that indicates perfectly unital gates and horrible SPAM operations.
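To make this concrete, here is a minimal sketch of how one might construct such a slightly non-unital gauge-optimization target. In the Pauli-product basis used for pyGSTi's process matrices, nonzero entries in the first column below the top row form a gate's non-unital (affine) part. The 0.01 magnitude below is a hypothetical stand-in for whatever non-unitality you actually know about, and the numpy round-trip assumes gate objects convert to dense arrays as they do elsewhere in pyGSTi's examples:

import numpy as np
from pygsti.construction import std1Q_XYI

mdl_nonunital_guess = std1Q_XYI.target_model()
Gx_mx = np.array(mdl_nonunital_guess['Gx'])  # dense 4x4 process matrix (pp basis)
Gx_mx[3, 0] = 0.01  # small affine shift along Z => slightly non-unital Gx
mdl_nonunital_guess['Gx'] = Gx_mx  # assign the modified matrix back to the model
# mdl_nonunital_guess can now serve as the gauge-optimization target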

This example demonstrates how to separately specify the gauge-optimization target Model. There are two places where you might want to do this: 1) when calling pygsti.do_long_sequence_gst, to direct the gauge optimization it performs, or 2) when calling estimate.add_gaugeoptimized, to add a gauge-optimized version of an estimate after the main GST algorithms have been run.

In both cases, a dictionary of gauge-optimization "parameters" (really just a dictionary of arguments for pygsti.gaugeopt_to_target) is required, and one simply includes a targetModel key in this dictionary to set the corresponding argument of pygsti.gaugeopt_to_target. We demonstrate this below.
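For reference, here is a sketch of the equivalent direct call to pygsti.gaugeopt_to_target; mdl_estimate below is just a placeholder for whatever un-gauge-fixed model estimate you have in hand:

import pygsti
from pygsti.construction import std1Q_XYI

mdl_guess = std1Q_XYI.target_model()  # an educated guess (built for real below)
mdl_estimate = std1Q_XYI.target_model().depolarize(op_noise=0.05)  # placeholder estimate
mdl_go = pygsti.gaugeopt_to_target(mdl_estimate, mdl_guess,
                                   itemWeights={'gates': 1, 'spam': 1})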

First, we'll set up a standard GST analysis as usual, except that we'll create a mdl_guess model meant to be an educated guess at what we expect the estimate to be. We'll gauge-optimize to mdl_guess instead of the usual target_model:

In [1]:
from __future__ import print_function
import pygsti
from pygsti.construction import std1Q_XYI
In [2]:
#Generate some fake data (all usual stuff here)
target_model = std1Q_XYI.target_model()
mdl_datagen  = std1Q_XYI.target_model().depolarize(op_noise=0.01, spam_noise=0.001).rotate( (0,0,0.05) )
mdl_datagen['Gx'].depolarize(0.1) #depolarize Gx even further
listOfExperiments = pygsti.construction.make_lsgst_experiment_list(
    target_model, std1Q_XYI.fiducials, std1Q_XYI.fiducials, std1Q_XYI.germs, [1,2,4])
ds = pygsti.construction.generate_fake_data(mdl_datagen, listOfExperiments, nSamples=1000,
                                            sampleError="binomial", seed=1234)
target_model.set_all_parameterizations("TP") #we'll do TP-constrained GST below

Create a "guess" model that anticipates a more-depolarized Gx gate

In [3]:
mdl_guess = std1Q_XYI.target_model()
mdl_guess['Gx'].depolarize(0.1)

Run GST with and without the guess model

In [4]:
# GST with standard "ideal target" gauge optimization
results1 = pygsti.do_long_sequence_gst(
    ds, target_model, std1Q_XYI.fiducials, std1Q_XYI.fiducials, std1Q_XYI.germs, [1,2,4],
    gaugeOptParams={'itemWeights': {'gates': 1, 'spam': 1}}, verbosity=1)
--- Circuit Creation ---
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0%  450 operation sequences ---
Iterative MLGST Total Time: 1.2s
In [5]:
# GST with our guess as the gauge optimization target (just add "targetModel" to gaugeOptParams)
results2 = pygsti.do_long_sequence_gst(
    ds, target_model, std1Q_XYI.fiducials, std1Q_XYI.fiducials, std1Q_XYI.germs, [1,2,4],
    gaugeOptParams={'targetModel': mdl_guess, 
                    'itemWeights': {'gates': 1, 'spam': 1}}, verbosity=1)
--- Circuit Creation ---
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0%  450 operation sequences ---
Iterative MLGST Total Time: 1.2s
In [6]:
#Note: you can also pass the gaugeOptTarget argument to do_stdpractice_gst
# to gauge-optimize all of its estimates to mdl_guess
results3 = pygsti.do_stdpractice_gst(
    ds, target_model, std1Q_XYI.fiducials, std1Q_XYI.fiducials, std1Q_XYI.germs, [1,2,4],
    gaugeOptTarget=mdl_guess, verbosity=1)
-- Std Practice:  [##################################################] 100.0%  (Target) --

Comparisons

After running both the "ideal-target" and "mdl_guess-target" gauge optimizations, we can compare the results with the ideal targets and with the data-generating gates themselves. We see that using mdl_guess gives a similar Frobenius distance to the ideal targets, a slightly smaller distance to the data-generating model, and reflects our expectation that the Gx gate is slightly worse than the other gates.

In [7]:
mdl_1 = results1.estimates['default'].models['go0']
mdl_2 = results2.estimates['default'].models['go0']
print("Diff between ideal and ideal-target-gauge-opt = ", mdl_1.frobeniusdist(target_model))
print("Diff between ideal and mdl_guess-gauge-opt = ", mdl_2.frobeniusdist(target_model))
print("Diff between data-gen and ideal-target-gauge-opt = ", mdl_1.frobeniusdist(mdl_datagen))
print("Diff between data-gen and mdl_guess-gauge-opt = ", mdl_2.frobeniusdist(mdl_datagen))
print("Diff between ideal-target-GO and mdl_guess-GO = ", mdl_1.frobeniusdist(mdl_2))

print("\nPer-op difference between ideal and ideal-target-GO")
print(mdl_1.strdiff(target_model))

print("\nPer-op difference between ideal and mdl_guess-GO")
print(mdl_2.strdiff(target_model))
Diff between ideal and ideal-target-gauge-opt =  0.026385852729678447
Diff between ideal and mdl_guess-gauge-opt =  0.026386339218052658
Diff between data-gen and ideal-target-gauge-opt =  0.011688530015221483
Diff between data-gen and mdl_guess-gauge-opt =  0.011622694618020846
Diff between ideal-target-GO and mdl_guess-GO =  0.00015586066892434803

Per-op difference between ideal and ideal-target-GO
Model Difference:
 Preps:
  rho0 = 0.0180446
 POVMs:
  Mdefault:     0 = 0.0147282
    1 = 0.0147282
 Gates:
  Gi = 0.0795213
  Gx = 0.184807
  Gy = 0.0231531


Per-op difference between ideal and mdl_guess-GO
Model Difference:
 Preps:
  rho0 = 0.0178975
 POVMs:
  Mdefault:     0 = 0.0144092
    1 = 0.0144092
 Gates:
  Gi = 0.0795216
  Gx = 0.184867
  Gy = 0.0232215

Adding a gauge optimization to existing Results

We can also include our mdl_guess as the targetModel when adding a new gauge-optimized result to an existing Estimate. See other examples for more information on using add_gaugeoptimized.

In [8]:
results1.estimates['default'].add_gaugeoptimized( {'targetModel': mdl_guess, 
                                                   'itemWeights': {'gates': 1, 'spam': 1}},
                                                label="using mdl_guess")
In [9]:
mdl_1b = results1.estimates['default'].models['using mdl_guess']
print(mdl_1b.frobeniusdist(mdl_2)) # mdl_1b is the same as mdl_2
0.0
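As a final check, you can list which gauge optimizations an estimate holds. A small sketch, assuming the Estimate object exposes its gauge-optimization parameter dictionaries via the goparameters attribute (as the 0.9.x API does):

est = results1.estimates['default']
print(list(est.goparameters.keys()))  # gauge-opt labels, e.g. 'go0', 'using mdl_guess'
print(list(est.models.keys()))        # all stored models, including one per gauge-opt label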