Results Object Tutorial

This tutorial explains the structure and usage of the "results" objects (derived from pygsti.protocols.ProtocolResults). These objects are returned from a Protocol object's .run(...) method and thus constitute the main way results are communicated back to the user in pyGSTi. A ProtocolResults object is used to store results corresponding to a given experiment design and its accompanying data (packaged together into a ProtocolData object). A result object's .data attribute holds the ProtocolData associated with it, and its .protocol attribute holds the Protocol that created the results.

A ModelEstimateResults object stores estimated Models for an experiment design and its accompanying data. As a concrete example, we'll explore one of the ModelEstimateResults objects generated by the GST protocols tutorial (so if you haven't run this tutorial, go do it now).

In [ ]:
import pygsti
In [ ]:
results ="../../tutorial_files/Example_GST_Data","GateSetTomography")

As you can see, printing a ModelEstimateResults object gives you a summary of its structure and what you can do with it. The single DataSet can be accessed via the .dataset member, and the estimated Model objects can be found within the Estimate objects contained in the .estimates member. As the summary states, .estimates is a dictionary of Estimate objects and can contain multiple estimates of the data, with the caveat that all of these estimates must use the same experiment design (roughly, the same Circuit lists and/or CircuitStructures for each algorithm iteration, and the same number of iterations).

The Estimate objects represent different gauge-unfixed or "up-to-gauge" estimates. Each holds one or more Model and associated ConfidenceRegion objects, along with dictionaries containing the parameters used to generate the estimate. They may also contain multiple gauge-optimized "versions" of their single gauge-unfixed estimate. They can be printed to display a summary of their contents:

In [ ]:
print(results.estimates['GateSetTomography'])

Estimate objects do not store the operation sequences; rather, since these must be the same for all estimates of a Results object, the Results object holds them separately in its .circuit_lists and .circuit_structs members (both of which are dictionaries like .estimates). Furthermore, since varying the gauge optimization parameters is such a common variation, a single Estimate may hold multiple dictionaries of gauge-optimization parameters as the elements of its .goparameters dictionary. The keys of .goparameters must correspond to keys within the .models member (the Model estimate produced by that gauge optimization). The .confidence_regions dictionary (empty in the above example and so not included in the summary) holds ConfidenceRegion objects, each associated with 1) one of the Models in .models, 2) one of the Circuit lists in the parent Results object's .circuit_lists, and 3) a confidence level. The keys of .confidence_regions must encode all three of these associations and are therefore complicated, so the .get_confidence_region(...) method is preferred to accessing .confidence_regions directly. Different Estimate objects typically hold estimates for different model parameterizations and/or algorithm parameters.
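The relationships among these dictionaries can be sketched in plain Python (no pyGSTi required). The labels below are illustrative placeholders mirroring those used elsewhere in this tutorial, not real pyGSTi internals:

```python
# Plain-Python sketch of how an Estimate's dictionaries relate, per the text
# above.  Values are placeholder strings, not real pyGSTi objects.
estimate = {
    "models": {
        "target": "<target Model>",
        "final iteration estimate": "<gauge-unfixed Model>",
        "stdgaugeopt": "<gauge-optimized Model>",
    },
    "goparameters": {
        "stdgaugeopt": {"item_weights": {"gates": 1.0, "spam": 0.001}},
    },
}

# Each gauge-optimization key must also appear in .models (its optimized Model):
assert set(estimate["goparameters"]).issubset(set(estimate["models"]))

# A confidence-region key combines a model key, a circuit-list key, and a
# confidence level -- which is why get_confidence_region(...) is preferred
# over indexing .confidence_regions directly:
confidence_regions = {("stdgaugeopt", "final", 95): "<ConfidenceRegion>"}
```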

In our example, results contains only a single Estimate, labeled "GateSetTomography" after the protocol that created it. This Estimate contains the raw un-gauge-optimized Model labeled "final iteration estimate", as well as a single gauge-optimized Model labeled "stdgaugeopt" (the default created by the GateSetTomography protocol).

Rolling your own...

Creating your own ModelEstimateResults object is typically done within a custom Protocol object's .run(...) method. In this context, you create and populate the results object as follows:

  1. call pygsti.protocols.ModelEstimateResults(data, my_protocol) to create the empty object. Here data is almost always the data object passed to the protocol's .run method, and my_protocol is the protocol object itself (i.e., self).
  2. create contained Estimate objects by calling add_estimate with the essential components of a new gauge-unfixed estimate (the target model, the starting model, the estimated models by iteration, and the parameter dictionary).

This is demonstrated with dummy parameters below.

In [ ]:
class MyProtocol(pygsti.protocols.Protocol):
    def __init__(self, depol_amt, name=None):
        super().__init__(name)  # initialize the base Protocol class
        self.depol_amt = depol_amt
    def run(self, data):
        assert(isinstance(data.edesign, pygsti.protocols.GateSetTomographyDesign)) # a GST-like protocol
        edesign = data.edesign
        target_model = edesign.target_model  # all GST exp designs have target models
        nIters = len(edesign.maxlengths)  # number of GST iterations
        my_estimate = target_model.depolarize(op_noise=self.depol_amt)  # for example...
        my_estimate_by_iter = [my_estimate]*nIters  # same estimate for each iteration
        res = pygsti.protocols.ModelEstimateResults(data, self)
        my_parameters = {'depol_amt': self.depol_amt }
        res.add_estimate(target_model, my_estimate, my_estimate_by_iter,
                         my_parameters, estimate_key="myTestEstimate")
        return res

data = results.data  # use the same data as the results loaded above
my_results = MyProtocol(0.01).run(data)

Creating a new gauge-optimized estimate

In many circumstances, one may want to perform a new gauge optimization on an existing gauge-unfixed Estimate, est, creating a new gauge-optimized Model to be stored in est. This is accomplished using est.add_gaugeoptimized, which lightly wraps a call to pygsti.gaugeopt_to_target. You specify the arguments to gaugeopt_to_target as a dictionary passed to add_gaugeoptimized, but you're allowed to leave out the first two: the Model to be optimized (model, taken to be est.models['final iteration estimate']) and the model to optimize toward (target_model, taken to be est.models['target']). These arguments can still be specified to override their defaults. In particular, setting target_model in the dictionary of parameters allows one to independently specify the model to optimize toward (and this need not be a perfect, ideal model!).
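The defaulting behavior just described can be sketched as follows. resolve_gaugeopt_args is a hypothetical helper written only to illustrate how the first two arguments are filled in; it is not pyGSTi source code:

```python
# Hypothetical sketch (NOT pyGSTi source) of how add_gaugeoptimized fills in
# the 'model' and 'target_model' arguments when they are omitted.
def resolve_gaugeopt_args(goparams, est_models):
    args = dict(goparams)  # copy so the caller's dict is untouched
    # 'model' defaults to the gauge-unfixed estimate...
    args.setdefault("model", est_models["final iteration estimate"])
    # ...and 'target_model' defaults to the stored target model.
    args.setdefault("target_model", est_models["target"])
    return args

models = {"final iteration estimate": "M_final", "target": "M_target"}
args = resolve_gaugeopt_args({"item_weights": {"spam": 0.01}}, models)
assert args["model"] == "M_final" and args["target_model"] == "M_target"

# Explicitly supplying target_model overrides the default:
args2 = resolve_gaugeopt_args({"target_model": "M_guess"}, models)
assert args2["target_model"] == "M_guess"
```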

The optional label argument of add_gaugeoptimized specifies the key within est.goparameters and est.models where the gauge optimization argument dictionary and resulting gauge-optimized Model will be stored. If the label given already exists, that gauge-optimized estimate is replaced with the new one. If label is left as None, then "goX" is used as the label, where X is the next available integer.
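The "goX" rule can be sketched as a small helper; next_go_label is hypothetical, written only to illustrate the labeling behavior described above:

```python
# Hypothetical sketch of the default-label rule: with no label given,
# "goX" is used where X is the next available integer.
def next_go_label(existing_labels):
    i = 0
    while "go%d" % i in existing_labels:
        i += 1
    return "go%d" % i

assert next_go_label([]) == "go0"
assert next_go_label(["go0", "go1"]) == "go2"
```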

If the model argument of add_gaugeoptimized is supplied, then it is taken to be the result of the described gauge optimization and no call to gaugeopt_to_target is made. (In this case, one could simply pass an empty dictionary as goparams.)

Below we demonstrate how to add gauge-optimized models to an Estimate in several ways. Please refer to the previous tutorial on low-level algorithms for an explanation of the various arguments to gaugeopt_to_target.

In [ ]:
est = results.estimates['GateSetTomography']
est.add_gaugeoptimized({'item_weights': {'gates': 1.0, 'spam': 1.0}}, label="equal_footing")
est.add_gaugeoptimized({'item_weights': {'gates': 1.0, 'spam': 1.0, 'Gx': 10.0}}, label="Gx_heavy")

mdl_guess = est.models['target'].depolarize(op_noise=0.05, spam_noise=0.02) # a guess at what gates should be...
est.add_gaugeoptimized({'target_model': mdl_guess, 'item_weights': {'spam': 0.01}}, label="imperfect gopt target")


Confidence Region Factories

In order to compute confidence regions and intervals within reports (see later tutorials), an Estimate object must be equipped with one or more "confidence region factory" objects. These factories are instances of pygsti.objects.ConfidenceRegionFactory (surprise, surprise). Their purpose is to generate confidence regions and intervals (for any confidence level) for quantities computed from a particular Model that in turn resulted from optimizing the likelihood function corresponding to a particular set of circuits. Thus, a confidence region factory has three things associated with it: 1) a Model, 2) a list of Circuits, and 3) a DataSet. A dictionary of factories is held as the .confidence_region_factories member of an Estimate object. Each factory within this dictionary is associated with the one-and-only DataSet of the Estimate's parent ModelEstimateResults object, and the associated Model and Circuit list are given by the keys of .confidence_region_factories (model-key, circuit-list-key tuples). Here model-key is the key of a Model within the Estimate's .models member, and circuit-list-key is the key of a list within the parent ModelEstimateResults object's .circuit_lists member.
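A plain-Python sketch of this keying convention may help; the labels are illustrative, matching those used later in this tutorial, and the values are placeholders rather than real factory objects:

```python
# Sketch of .confidence_region_factories keying, per the text above:
# each key is a (model-key, circuit-list-key) tuple, where model-key indexes
# the Estimate's .models and circuit-list-key indexes the parent results
# object's .circuit_lists.
models = {"final iteration estimate": "...", "stdgaugeopt": "..."}
circuit_lists = {"final": "..."}
confidence_region_factories = {
    ("final iteration estimate", "final"): "<ConfidenceRegionFactory>",
    ("stdgaugeopt", "final"): "<ConfidenceRegionFactory>",
}
# Every key pair must resolve to an existing model and circuit list:
for model_key, clist_key in confidence_region_factories:
    assert model_key in models and clist_key in circuit_lists
```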

Thankfully, you won't usually need to deal with the .confidence_region_factories member directly. To create a new factory for a given Model, Circuit-list pair you can simply call the add_confidence_region_factory method with the appropriate key labels. Once a factory is created, it must be initialized for computing confidence regions. The only non-experimental way to do this currently is to compute the Hessian of the log-likelihood (often computationally intensive) and then to project the inverse of this Hessian onto the non-gauge space of the model. These two steps are performed via the compute_hessian and project_hessian member functions of a ConfidenceRegionFactory object.

In [ ]:
model_label = "stdgaugeopt"
clist_label = "final"
crfactory = results.estimates['GateSetTomography'].add_confidence_region_factory(model_label, clist_label)

Note that there are different ways of projecting the Hessian, each with its own strengths and weaknesses. The "optimal gate CIs" method is the most robust method for giving the smallest error bars possible, but it takes significant computation time. The "intrinsic error" method is fast and usually reliable, but may not always give the smallest possible error bars.

In [ ]:
crfactory.compute_hessian(comm=None) #could use lots of processors here...
inv_proj_H = crfactory.project_hessian('intrinsic error')

Alternate way: In the special case of constructing factories for Models which are gauge-equivalent to one another, one can skip the compute_hessian step for all but the first Model, so long as the gauge optimization parameters and the final gauge-transformation element are stored in the Estimate's .goparameters dictionary (automatically populated when adding a gauge optimization via add_gaugeoptimized). Instead, one gauge-propagates the Hessian from the first Model to the others using the gauge_propagate_confidence_region_factory method of the Estimate object.

Below, we show how this might usually be done: first a confidence region factory for the "final iteration estimate" Model and 'final' operation sequence list (the defaults) is created and a Hessian is computed. Then, when a factory is needed for the gauge-equivalent Model "go0", the Hessian is propagated from the "final iteration estimate" Model. Note that the propagated Hessian must still be projected for the "go0" model.

In [ ]:
crfact_final = results.estimates['GateSetTomography'].add_confidence_region_factory() #default: 'final iteration estimate'
crfact_final.compute_hessian(comm=None)  # compute the Hessian that will be propagated below

results.estimates['GateSetTomography'].gauge_propagate_confidence_region_factory('stdgaugeopt', verbosity=1) #instead of computing one
crfact_stdgo = results.estimates['GateSetTomography'].get_confidence_region_factory('stdgaugeopt')
inv_proj_H = crfact_stdgo.project_hessian('intrinsic error')


In summary, when thinking about ProtocolResults, ModelEstimateResults, and Estimate objects, remember:

  • each ProtocolResults object represents the results for a single set of data (experiment design).
  • each contained Estimate object represents a single gauge-unfixed estimate based on the data. An Estimate may also contain one or more gauge-optimized versions of the gauge-invariant estimate.
  • an Estimate can construct confidence intervals only after a ConfidenceRegionFactory object is created and initialized using a multi-step process. Because it may be computationally expensive, these steps are not performed automatically when reports are generated.