This tutorial covers different methods of comparing data to given (fixed) QIP models. This is distinct from model-based tomography, which finds the best-fitting model for a data set within a space of models set by a Model object's parameterization. You might use model testing as a tool alongside or separate from GST. Perhaps you suspect that a given noisy QIP model is compatible with your data - model testing is the way to find out. Because there is no optimization involved, model testing requires much less time than GST does, and doesn't place any requirements on which circuits are used in performing the test (though some circuits will give a more precise result).
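To see the core computation that model testing performs, the sketch below (plain numpy/scipy with made-up counts, not pyGSTi's internal code) evaluates the 2ΔlogL statistic for a single two-outcome circuit: twice the log-likelihood gap between the best possible fit to the observed frequencies and a fixed model's predicted probabilities. When the model is correct, this statistic is approximately χ²-distributed, which is what lets one turn a fit into a verdict without any optimization.

```python
import numpy as np
from scipy import stats

# Hypothetical observed counts for one two-outcome circuit
n_samples = 1000
counts = np.array([530, 470])       # observed outcome counts
p_model = np.array([0.5, 0.5])      # probabilities predicted by the fixed model

# Maximum-likelihood (frequency) probabilities: the best any model could do
p_ml = counts / n_samples

# 2*Delta(logL): twice the log-likelihood difference between the
# best possible fit and the fixed model's prediction
two_dlogl = 2 * np.sum(counts * np.log(p_ml / p_model))

# Under the null hypothesis (model is correct), 2*Delta(logL) is
# approximately chi^2-distributed with (num_outcomes - 1) degrees of freedom
p_value = stats.chi2.sf(two_dlogl, df=len(counts) - 1)
print(two_dlogl, p_value)
```

Here a small p-value would indicate the fixed model is inconsistent with the data; pyGSTi aggregates this kind of comparison over all circuits in the data set.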
First, after some usual imports, we'll create some test data based on a depolarized and rotated version of a standard 1-qubit model consisting of $I$ (the identity), $X(\pi/2)$ and $Y(\pi/2)$ gates.
from __future__ import division, print_function
import pygsti
import numpy as np
import scipy
from scipy import stats
from pygsti.construction import std1Q_XYI
datagen_model = std1Q_XYI.target_model().depolarize(op_noise=0.05, spam_noise=0.1).rotate((0.05,0,0.03))
exp_list = pygsti.construction.make_lsgst_experiment_list(
    std1Q_XYI.target_model(), std1Q_XYI.prepStrs, std1Q_XYI.effectStrs,
    std1Q_XYI.germs, [1,2,4,8,16,32,64])
ds = pygsti.construction.generate_fake_data(datagen_model, exp_list, nSamples=1000,
                                            sampleError='binomial', seed=100)
After we have some data, the first step is creating a model or models that we want to test. This just means creating a Model object containing the operations (including SPAM) found in the data set. We'll create several models that are meant to look like guesses (some including more types of noise) of the true underlying model.
target_model = std1Q_XYI.target_model()
test_model1 = target_model.copy()
test_model2 = target_model.depolarize(op_noise=0.07, spam_noise=0.07)
test_model3 = target_model.depolarize(op_noise=0.07, spam_noise=0.07).rotate( (0.02,0.02,0.02) )
There are three different ways to test a model. Note that in each case the default behavior (and the only behavior demonstrated here) is to never gauge-optimize the test Model. (Whenever gauge-optimized versions of an Estimate are useful for comparisons with other estimates, copies of the test Model are used without actually performing any modification of the original Model.)
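Gauge optimization is unnecessary for the test itself because a gauge transformation M changes a model's matrices (G → M G M⁻¹, ρ → Mρ, E → EM⁻¹) without changing any predicted probability E G ρ, so the quality of the fit to data is gauge-invariant. A toy numpy sketch (random matrices, not a valid quantum model) illustrates this invariance:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # dimension of a 1-qubit Pauli-transfer-matrix representation

# Toy "model": state prep (column vector), gate (matrix), POVM effect (row vector)
rho = rng.normal(size=(d, 1))
G = rng.normal(size=(d, d))
E = rng.normal(size=(1, d))

p = (E @ G @ rho).item()  # model-predicted "probability" (toy numbers)

# Any invertible M defines a gauge transformation
M = rng.normal(size=(d, d)) + d * np.eye(d)  # shifted to be well-conditioned
Minv = np.linalg.inv(M)
rho_g, G_g, E_g = M @ rho, M @ G @ Minv, E @ Minv

p_gauge = (E_g @ G_g @ rho_g).item()
print(p, p_gauge)  # identical up to round-off
```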
do_model_test

First, you can do it "from scratch" by calling do_model_test, which has a similar signature to do_long_sequence_gst and follows its pattern of returning a Results object. The "estimateLabel" advanced option, which names the Estimate within the returned Results object, can be particularly useful.
# creates a Results object with a "default" estimate
results = pygsti.do_model_test(test_model1, ds, target_model,
                               std1Q_XYI.prepStrs, std1Q_XYI.effectStrs, std1Q_XYI.germs,
                               [1,2,4,8,16,32,64])

# creates a Results object with a "default2" estimate
results2 = pygsti.do_model_test(test_model2, ds, target_model,
                                std1Q_XYI.prepStrs, std1Q_XYI.effectStrs, std1Q_XYI.germs,
                                [1,2,4,8,16,32,64], advancedOptions={'estimateLabel': 'default2'})

# creates a Results object with a "default3" estimate
results3 = pygsti.do_model_test(test_model3, ds, target_model,
                                std1Q_XYI.prepStrs, std1Q_XYI.effectStrs, std1Q_XYI.germs,
                                [1,2,4,8,16,32,64], advancedOptions={'estimateLabel': 'default3'})
--- Circuit Creation ---
2122 sequences created
Dataset has 2122 entries: 2122 utilized, 0 requested sequences were missing
-- Adding Gauge Optimized (go0) --
--- Circuit Creation ---
2122 sequences created
Dataset has 2122 entries: 2122 utilized, 0 requested sequences were missing
-- Adding Gauge Optimized (go0) --
--- Circuit Creation ---
2122 sequences created
Dataset has 2122 entries: 2122 utilized, 0 requested sequences were missing
-- Adding Gauge Optimized (go0) --
Like any other set of Results objects which share the same DataSet and operation sequences, we can collect all of these estimates into a single Results object and easily make a report containing all three.
results.add_estimates(results2)
results.add_estimates(results3)
pygsti.report.create_standard_report(results, "../tutorial_files/modeltest_report",
                                     title="Model Test Example Report",
                                     auto_open=True, verbosity=1)
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning: Idle tomography failed: Label{layers}
*** Generating plots ***
*** Merging into template file ***
Output written to ../tutorial_files/modeltest_report directory
Opening ../tutorial_files/modeltest_report/main.html...
*** Report Generation Complete! Total time 72.3319s ***
<pygsti.report.workspace.Workspace at 0x1246f8eb8>
add_model_test

Alternatively, you can add a model-to-test to an existing Results object. This is convenient when running GST via do_long_sequence_gst or do_stdpractice_gst has left you with a Results object and you also want to see how well a hand-picked model fares. Since the Results object already contains a DataSet and list of sequences, all you need to do is provide a Model. This is accomplished using the add_model_test method of a Results object.
#Create some GST results using do_stdpractice_gst
gst_results = pygsti.do_stdpractice_gst(ds, target_model,
                                        std1Q_XYI.prepStrs, std1Q_XYI.effectStrs,
                                        std1Q_XYI.germs, [1,2,4,8,16,32,64])

#Add a model to test
gst_results.add_model_test(target_model, test_model3, estimate_key='MyModel3')

#Create a report to see that we've added an estimate labeled "MyModel3"
pygsti.report.create_standard_report(gst_results, "../tutorial_files/gstwithtest_report1",
                                     title="GST with Model Test Example Report 1",
                                     auto_open=True, verbosity=1)
-- Std Practice: Iter 1 of 3 (TP) --:
--- Circuit Creation ---
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0% 2122 operation sequences
--- Iterative MLGST Total Time: 5.4s
-- Performing 'single' gauge optimization on TP estimate --
-- Std Practice: Iter 2 of 3 (CPTP) --:
--- Circuit Creation ---
--- Iterative MLGST: [##################################################] 100.0% 2122 operation sequences
--- Iterative MLGST Total Time: 49.8s
--- Re-optimizing logl after robust data scaling ---
-- Performing 'single' gauge optimization on CPTP estimate --
-- Conveying 'single' gauge optimization to CPTP.Robust+ estimate --
-- Std Practice: Iter 3 of 3 (Target) --:
--- Circuit Creation ---
-- Performing 'single' gauge optimization on Target estimate --
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning: Idle tomography failed: Label{layers}
*** Generating plots ***
*** Merging into template file ***
Output written to ../tutorial_files/gstwithtest_report1 directory
Opening ../tutorial_files/gstwithtest_report1/main.html...
*** Report Generation Complete! Total time 196.674s ***
<pygsti.report.workspace.Workspace at 0x1294522b0>
The modelsToTest argument

Finally, yet another way to perform model testing alongside GST is by using the modelsToTest argument of do_stdpractice_gst. This essentially combines calls to do_stdpractice_gst and Results.add_model_test (demonstrated above) with the added control of being able to specify the ordering of the estimates via the modes argument. Two important remarks are in order:

1. You must specify the names (keys of the modelsToTest argument) of your test models in the comma-delimited string that is the modes argument. Just giving a dictionary of Models as modelsToTest will not automatically test those models in the returned Results object.

2. You don't actually need to run any GST modes, and can use do_stdpractice_gst in this way to create, in one call, a single Results object containing multiple model tests with estimate names that you specify. Thus do_stdpractice_gst can replace the multiple do_model_test calls (with "estimateLabel" advanced options) followed by collecting the estimates using Results.add_estimates demonstrated under "Method 1" above.
gst_results = pygsti.do_stdpractice_gst(ds, target_model, std1Q_XYI.prepStrs, std1Q_XYI.effectStrs, std1Q_XYI.germs,
                                        [1,2,4,8,16,32,64],
                                        modes="TP,Test2,Test3,Target",  # You MUST list the test-model names here
                                        modelsToTest={'Test2': test_model2, 'Test3': test_model3})
pygsti.report.create_standard_report(gst_results, "../tutorial_files/gstwithtest_report2",
                                     title="GST with Model Test Example Report 2",
                                     auto_open=True, verbosity=1)
-- Std Practice: Iter 1 of 4 (TP) --:
--- Circuit Creation ---
--- LGST ---
--- Iterative MLGST: [##################################################] 100.0% 2122 operation sequences
--- Iterative MLGST Total Time: 5.3s
-- Performing 'single' gauge optimization on TP estimate --
-- Std Practice: Iter 2 of 4 (Test2) --:
--- Circuit Creation ---
-- Performing 'single' gauge optimization on Test2 estimate --
-- Std Practice: Iter 3 of 4 (Test3) --:
--- Circuit Creation ---
-- Performing 'single' gauge optimization on Test3 estimate --
-- Std Practice: Iter 4 of 4 (Target) --:
--- Circuit Creation ---
-- Performing 'single' gauge optimization on Target estimate --
*** Creating workspace ***
*** Generating switchboard ***
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
Found standard clifford compilation from std1Q_XYI
*** Generating tables ***
/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning: Idle tomography failed: Label{layers}
*** Generating plots ***
*** Merging into template file ***
Output written to ../tutorial_files/gstwithtest_report2 directory
Opening ../tutorial_files/gstwithtest_report2/main.html...
*** Report Generation Complete! Total time 141.304s ***
<pygsti.report.workspace.Workspace at 0x133484e80>
That's it! Now that you know more about model testing, you may want to go back to the overview of pyGSTi applications.