# Model Testing¶

This tutorial covers different methods of comparing data to given gate-set models. This is distinct from gate set tomography (GST), which finds the best-fitting model for a data set within a space of considered gate-set models; you might use model testing alongside or separately from GST. Perhaps you suspect that a given noisy gate-set model is compatible with your data: model testing is the way to find out. Because there is no optimization involved, model testing requires much less time than GST does.
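Before diving into pyGSTi's tools, it may help to see the statistical idea behind model testing in miniature. The sketch below is plain NumPy/SciPy with made-up probabilities, not pyGSTi's actual implementation: it computes the $2\Delta\log\mathcal{L}$ statistic comparing a fixed candidate model's predictions against binomial count data, then converts it to a p-value using the $\chi^2$ distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(100)
N = 1000                                             # clicks per sequence
p_model = np.array([0.50, 0.55, 0.60, 0.70, 0.85])   # candidate model's predictions
p_true = p_model + np.array([0.00, 0.02, -0.01, 0.03, 0.00])  # "true" probabilities
counts = rng.binomial(N, p_true)                     # simulated data

f = counts / N  # observed frequencies: these maximize the log-likelihood

def logl(p):
    """Binomial log-likelihood of the counts under outcome probabilities p."""
    return np.sum(counts * np.log(p) + (N - counts) * np.log(1 - p))

# 2*Delta(logL): twice the gap between the best achievable fit and the model's fit.
two_delta_logl = 2 * (logl(f) - logl(p_model))
# If the model were correct, this statistic would be roughly chi^2-distributed
# with one degree of freedom per sequence; a tiny p-value means "model rejected".
p_value = stats.chi2.sf(two_delta_logl, df=len(p_model))
```

No fitting happens anywhere above, which is why model testing is so much cheaper than GST: the candidate model's probabilities are simply plugged in and scored.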

## Setup¶

First, after some usual imports, we'll create some test data based on a depolarized and rotated version of a standard 1-qubit gate set consisting of $I$ (the identity), $X(\pi/2)$ and $Y(\pi/2)$ gates.

In [1]:
from __future__ import division, print_function

import pygsti
import numpy as np
import scipy
from scipy import stats
from pygsti.construction import std1Q_XYI

/usr/local/lib/python3.6/site-packages/pyGSTi-0.9.4.4-py3.6.egg/pygsti/tools/matrixtools.py:23: UserWarning: Could not import Cython extension - falling back to slower pure-python routines
_warnings.warn("Could not import Cython extension - falling back to slower pure-python routines")

In [2]:
datagen_gateset = std1Q_XYI.gs_target.depolarize(gate_noise=0.05, spam_noise=0.1).rotate((0.05,0,0.03))
exp_list = pygsti.construction.make_lsgst_experiment_list(
    std1Q_XYI.gs_target, std1Q_XYI.prepStrs, std1Q_XYI.effectStrs,
    std1Q_XYI.germs, [1,2,4,8,16,32,64])
ds = pygsti.construction.generate_fake_data(datagen_gateset, exp_list, nSamples=1000,
                                            sampleError='binomial', seed=100)


## Step 1: Construct a test model¶

Once we have some data, the first step is creating the model or models we want to test. This just means creating a GateSet object containing the gates and SPAM labels found in the data set. We'll create several gate sets meant to look like guesses at the true underlying gate set (some including more types of noise than others).

In [3]:
gs_target = std1Q_XYI.gs_target
test_model1 = gs_target.copy()
test_model2 = gs_target.depolarize(gate_noise=0.07, spam_noise=0.07)
test_model3 = gs_target.depolarize(gate_noise=0.07, spam_noise=0.07).rotate( (0.02,0.02,0.02) )


## Step 2: Test it!¶

There are three different ways to test a model. Note that in each case the default behavior (and the only behavior demonstrated here) is to never gauge-optimize the test GateSet. Whenever gauge-optimized versions of an Estimate are useful for comparisons with other estimates, copies of the test GateSet are used, without any true modification of the GateSet being performed.
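The reason this is harmless is that gauge-equivalent gate sets predict identical probabilities, so they fit any data set equally well. A minimal NumPy illustration of this invariance (random matrices standing in for a real gate set; not pyGSTi code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                              # superoperator dimension for one qubit
rho = rng.normal(size=(d, 1))      # state-preparation vector
E = rng.normal(size=(d, 1))        # measurement-effect vector
G = rng.normal(size=(d, d))        # a gate's process matrix
M = rng.normal(size=(d, d))        # an (almost surely) invertible gauge matrix
Minv = np.linalg.inv(M)

# Gauge transformation: rho -> M rho,  E^T -> E^T M^-1,  G -> M G M^-1
p_before = float(E.T @ G @ rho)
p_after = float((E.T @ Minv) @ (M @ G @ Minv) @ (M @ rho))
# The predicted probability is unchanged: the M's cancel in pairs, so the
# gauge choice can never affect how well a model fits the data.
```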

### Method 1: do_model_test¶

First, you can do it "from scratch" by calling do_model_test, which has a signature similar to do_long_sequence_gst and follows its pattern of returning a Results object. The "estimateLabel" advanced option, which names the Estimate within the returned Results object, can be particularly useful.

In [4]:
# creates a Results object with a "default" estimate
results = pygsti.do_model_test(test_model1, ds, gs_target,
                               std1Q_XYI.prepStrs, std1Q_XYI.effectStrs, std1Q_XYI.germs,
                               [1,2,4,8,16,32,64])

# creates a Results object with a "default2" estimate
results2 = pygsti.do_model_test(test_model2, ds, gs_target,
                                std1Q_XYI.prepStrs, std1Q_XYI.effectStrs, std1Q_XYI.germs,
                                [1,2,4,8,16,32,64],
                                advancedOptions={'estimateLabel': 'default2'})

# creates a Results object with a "default3" estimate
results3 = pygsti.do_model_test(test_model3, ds, gs_target,
                                std1Q_XYI.prepStrs, std1Q_XYI.effectStrs, std1Q_XYI.germs,
                                [1,2,4,8,16,32,64],
                                advancedOptions={'estimateLabel': 'default3'})

--- Gate Sequence Creation ---
2122 sequences created
Dataset has 2122 entries: 2122 utilized, 0 requested sequences were missing
-- Adding Gauge Optimized (go0) --
--- Gate Sequence Creation ---
2122 sequences created
Dataset has 2122 entries: 2122 utilized, 0 requested sequences were missing
-- Adding Gauge Optimized (go0) --
--- Gate Sequence Creation ---
2122 sequences created
Dataset has 2122 entries: 2122 utilized, 0 requested sequences were missing
-- Adding Gauge Optimized (go0) --


As with any set of Results objects that share the same DataSet and gate sequences, we can collect all of these estimates into a single Results object and easily make a report containing all three.

In [5]:
results.add_estimates(results2)
results.add_estimates(results3)

pygsti.report.create_standard_report(results, "tutorial_files/modeltest_report",
                                     title="Model Test Example Report",
                                     auto_open=True, verbosity=1)

*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
*** Generating plots ***
*** Merging into template file ***
Output written to tutorial_files/modeltest_report directory
Opening tutorial_files/modeltest_report/main.html...
*** Report Generation Complete!  Total time 245.253s ***

Out[5]:
<pygsti.report.workspace.Workspace at 0x109360eb8>

### Method 2: add_model_test¶

Alternatively, you can add a model-to-test to an existing Results object. This is convenient when running GST via do_long_sequence_gst or do_stdpractice_gst has left you with a Results object and you also want to see how well a hand-picked model fares. Since the Results object already contains a DataSet and list of sequences, all you need to do is provide a GateSet. This is accomplished using the add_model_test method of a Results object.

In [6]:
#Create some GST results using do_stdpractice_gst
gst_results = pygsti.do_stdpractice_gst(ds, gs_target,
                                        std1Q_XYI.prepStrs, std1Q_XYI.effectStrs,
                                        std1Q_XYI.germs, [1,2,4,8,16,32,64])

#Add a hand-picked model to test, creating an estimate labeled "MyModel3"
gst_results.add_model_test(gs_target, test_model3, estimate_key='MyModel3')

#Create a report to see that we've added an estimate labeled "MyModel3"
pygsti.report.create_standard_report(gst_results, "tutorial_files/gstwithtest_report1",
                                     title="GST with Model Test Example Report 1",
                                     auto_open=True, verbosity=1)

-- Std Practice:  Iter 1 of 3  (TP) --:
--- Gate Sequence Creation ---
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0%  2122 gate strings ---
Iterative MLGST Total Time: 13.1s
-- Performing 'single' gauge optimization on TP estimate --
-- Std Practice:  Iter 2 of 3  (CPTP) --:
--- Gate Sequence Creation ---
--- Iterative MLGST: [##################################################] 100.0%  2122 gate strings ---
Iterative MLGST Total Time: 14.1s

WARNING: MLGST failed to improve logl: retaining chi2-objective estimate

  -- Performing 'single' gauge optimization on CPTP estimate --
-- Std Practice:  Iter 3 of 3  (Target) --:
--- Gate Sequence Creation ---
-- Performing 'single' gauge optimization on Target estimate --
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
*** Generating plots ***
*** Merging into template file ***
Output written to tutorial_files/gstwithtest_report1 directory
Opening tutorial_files/gstwithtest_report1/main.html...
*** Report Generation Complete!  Total time 269.538s ***

Out[6]:
<pygsti.report.workspace.Workspace at 0x11236cfd0>

### Method 3: modelsToTest argument¶

Finally, yet another way to perform model testing alongside GST is by using the modelsToTest argument of do_stdpractice_gst. This essentially combines calls to do_stdpractice_gst and Results.add_model_test (demonstrated above) with the added control of being able to specify the ordering of the estimates via the modes argument. Two important remarks are in order:

1. You must specify the names (keys of the modelsToTest argument) of your test models in the comma-delimited string that is the modes argument. Just giving a dictionary of GateSets as modelsToTest will not automatically test those models in the returned Results object.

2. You don't actually need to run any GST modes: you can use do_stdpractice_gst in this way to create, in one call, a single Results object containing multiple model tests with estimate names that you specify. Thus do_stdpractice_gst can replace the multiple do_model_test calls (with "estimateLabel" advanced options) followed by collecting the estimates with Results.add_estimates, as demonstrated under "Method 1" above.
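One simple way to respect remark 1 is to derive the modes string from the modelsToTest keys themselves, so no test model is silently skipped. This is a plain-Python sketch (placeholder strings stand in for the GateSet values); the names 'Test2' and 'Test3' match the call below:

```python
models_to_test = {'Test2': 'test_model2', 'Test3': 'test_model3'}  # values would be GateSets
gst_modes = ['TP', 'Target']
# Splice every test-model name into the comma-delimited modes string;
# keys absent from `modes` would simply not be tested (remark 1).
modes = ','.join(gst_modes[:1] + list(models_to_test) + gst_modes[1:])
# modes is now "TP,Test2,Test3,Target"
```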

In [7]:
gst_results = pygsti.do_stdpractice_gst(ds, gs_target, std1Q_XYI.prepStrs, std1Q_XYI.effectStrs, std1Q_XYI.germs,
                                        [1,2,4,8,16,32,64],
                                        modes="TP,Test2,Test3,Target", # You MUST include the test-model names here
                                        modelsToTest={'Test2': test_model2, 'Test3': test_model3})

pygsti.report.create_standard_report(gst_results, "tutorial_files/gstwithtest_report2",
                                     title="GST with Model Test Example Report 2",
                                     auto_open=True, verbosity=1)

-- Std Practice:  Iter 1 of 4  (TP) --:
--- Gate Sequence Creation ---
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0%  2122 gate strings ---
Iterative MLGST Total Time: 7.4s
-- Performing 'single' gauge optimization on TP estimate --
-- Std Practice:  Iter 2 of 4  (Test2) --:
--- Gate Sequence Creation ---
-- Performing 'single' gauge optimization on Test2 estimate --
-- Std Practice:  Iter 3 of 4  (Test3) --:
--- Gate Sequence Creation ---
-- Performing 'single' gauge optimization on Test3 estimate --
-- Std Practice:  Iter 4 of 4  (Target) --:
--- Gate Sequence Creation ---
-- Performing 'single' gauge optimization on Target estimate --
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
*** Generating plots ***
*** Merging into template file ***
Output written to tutorial_files/gstwithtest_report2 directory
Opening tutorial_files/gstwithtest_report2/main.html...
*** Report Generation Complete!  Total time 244.747s ***

Out[7]:
<pygsti.report.workspace.Workspace at 0x11234d518>