An example of how to run GST on a 2-qubit system

This example gives an overview of the typical steps used to perform an end-to-end (i.e. experimental-data-to-report) Gate Set Tomography analysis on a 2-qubit system. The steps are very similar to the single-qubit case described in the tutorials, but we think 2Q-GST is an important enough topic to deserve its own example.

In [1]:
from __future__ import print_function
import pygsti

Step 1: Construct the desired 2-qubit model

Since the purpose of this example is to show how to run 2Q-GST, we'll just use a built-in "standard" 2-qubit model. (Another example covers how to create a custom 2-qubit model.)

In [2]:
from pygsti.construction import std2Q_XYICNOT
target_model = std2Q_XYICNOT.target_model()
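
If you'd like to check what this built-in model contains, a quick sanity check (ordinary attribute access, nothing 2Q-specific) is to print its operation labels and dimension:

# Quick sanity check of the built-in model: list its operation (gate) labels
# and confirm it acts on the 16-dimensional 2-qubit Hilbert-Schmidt space.
print(list(target_model.operations.keys()))  # Gii, Gix, Giy, Gxi, Gyi, Gcnot
print(target_model.dim)                      # 16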

Step 2: Obtain lists of fiducial and germ operation sequences

These are the building blocks of the operation sequences performed in the experiment. Typically, these lists are either provided by pyGSTi because you're using a "standard" model (as we are here), or computed using the "fiducial selection" and "germ selection" algorithms, which are part of pyGSTi and covered in the tutorials. Since running 2Q-GST with all 71 germs of the complete set would take a while, we'll also create a couple of smaller germ sets to demonstrate 2Q-GST more quickly (because we know you have important stuff to do).

In [3]:
prep_fiducials = std2Q_XYICNOT.prepStrs
effect_fiducials = std2Q_XYICNOT.effectStrs
In [4]:
germs4 = pygsti.construction.circuit_list(
    [ ('Gix',), ('Giy',), ('Gxi',), ('Gyi',) ] )

germs11 = pygsti.construction.circuit_list(
    [ ('Gix',), ('Giy',), ('Gxi',), ('Gyi',), ('Gcnot',), ('Gxi','Gyi'), ('Gix','Giy'),
      ('Gix','Gcnot'), ('Gxi','Gcnot'), ('Giy','Gcnot'), ('Gyi','Gcnot') ] )

germs71 = std2Q_XYICNOT.germs
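
The three germ sets trade completeness for run time. A quick way to see the trade-off is simply to compare their sizes:

# Compare the germ-set sizes: larger sets amplify more error directions
# but generate many more experiments.
print("germs4:  %d germs" % len(germs4))
print("germs11: %d germs" % len(germs11))
print("germs71: %d germs" % len(germs71))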

Step 3: Data generation

Now that fiducial and germ strings have been found, we can generate the list of experiments needed to run GST, just as in the 1-qubit case. As an additional input we'll need a list of maximum lengths specifying the longest sequences to use at each successive GST iteration.

In [5]:
#A list of maximum lengths for each GST iteration - typically powers of 2 up to
# the longest experiment you can glean information from.  Here we stop at 2 so things run quickly.
maxLengths = [1,2] # for a real run, continue with 4, 8, 16, ...

#Create a list of GST experiments for this model, with
#the specified fiducials, germs, and maximum lengths.  We use
#"germs4" here so that the tutorial runs quickly; really, you'd
#want to use germs71!
listOfExperiments = pygsti.construction.make_lsgst_experiment_list(target_model.operations.keys(), prep_fiducials,
                                                                   effect_fiducials, germs4, maxLengths)

#Create an empty dataset file, which stores the list of experiments
# and zeroed-out columns where data should be inserted.  Note the use of the
# outcome labels in the "Columns" header line.
pygsti.io.write_empty_dataset("example_files/My2QDataTemplate.txt", listOfExperiments,
                              "## Columns = 00 count, 01 count, 10 count, 11 count")
In [6]:
#Generate some "fake" (simulated) data based on a depolarized version of the target model
mdl_datagen = target_model.depolarize(op_noise=0.1, spam_noise=0.001)
ds = pygsti.construction.generate_fake_data(mdl_datagen, listOfExperiments, nSamples=1000,
                                            sampleError="multinomial", seed=2016)

#if you have a dataset file with real data in it, load it using something like:
#ds = pygsti.io.load_dataset("mydir/My2QDataset.txt")
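
If you want to spot-check the simulated data, you can index the DataSet with one of the circuits. The sketch below assumes the returned row object exposes a counts dictionary (as it does in recent pyGSTi versions):

# Spot-check the simulated data for the first experiment in the list.
# (Assumes the DataSet row object has a .counts dictionary.)
first_circuit = listOfExperiments[0]
print(first_circuit)
print(ds[first_circuit].counts)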

Step 4: Run GST using do_long_sequence_gst

Just as for 1-qubit GST, we call the driver routine do_long_sequence_gst to compute the GST estimates. For two qubits this can take a long time (hours on a single CPU), depending on the number of operation sequences used, so running on multiple processors is a good idea (see the MPI example and the sketch below). However, since we chose an incomplete set of only 4 germs and set our largest maximum length to 2, this run finishes fairly quickly (~10 minutes).
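
For a full 2-qubit run you'll almost certainly want to parallelize. Here is a minimal sketch, assuming mpi4py is installed; see the MPI example for the complete recipe:

# Minimal MPI sketch -- launch with e.g. "mpiexec -n 4 python my_2qgst_script.py"
# (script name is just an example).  Note that memLimit is the memory
# available *per processor*.
from mpi4py import MPI
comm = MPI.COMM_WORLD
results = pygsti.do_long_sequence_gst(ds, target_model, prep_fiducials, effect_fiducials,
                                      germs4, maxLengths, comm=comm,
                                      memLimit=3*(1024)**3, verbosity=3)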

Some notes about the options/arguments to do_long_sequence_gst that are particularly relevant to 2-qubit GST:

  • memLimit gives an estimate of how much memory is available to use on your system (in bytes). This is currently not a hard limit, and pyGSTi may require slightly more memory than this "limit", so you'll need to be conservative with the value you set here: if your machine has 10GB of RAM, start with 6 or 8 GB and increase it once you've seen, via a separate OS performance-monitoring tool, how much memory is actually used. If you're running on multiple processors, this should be the memory available per processor.
  • verbosity tells the routine how much detail to print to stdout. If you don't mind waiting a while without getting any output, you can leave this at its default value (2). If you can't stand wondering whether GST is still running or has locked up, set this to 3.
  • advancedOptions is a dictionary that accepts various "advanced" settings that aren't typically needed. While we don't use it below, the depolarizeStart key of this dictionary can be useful: it gives an amount (in [0,1]) by which to depolarize the LGST estimate that is used as the initial guess for long-sequence GST. In practice, we sometimes find that in the larger 2-qubit Hilbert space the LGST estimate is poor enough to adversely affect the subsequent long-sequence GST (e.g. very slow convergence); depolarizing the LGST estimate can remedy this. If you're unsure what to put here, either don't specify depolarizeStart at all (the same as using 0.0) or just use 0.1, i.e. advancedOptions={ 'depolarizeStart' : 0.1 } (see the sketch below).
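
For example, a variant of the call in the next cell that uses a depolarized LGST starting point would look like this (not executed here; shown only to illustrate advancedOptions):

# Variant of the call below, depolarizing the LGST starting point by 0.1.
results_depol = pygsti.do_long_sequence_gst(
    ds, target_model, prep_fiducials, effect_fiducials, germs4, maxLengths,
    gaugeOptParams={'itemWeights': {'spam': 0.1, 'gates': 1.0}},
    advancedOptions={'depolarizeStart': 0.1},
    memLimit=3*(1024)**3, verbosity=3)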
In [7]:
import time
start = time.time()
results = pygsti.do_long_sequence_gst(ds, target_model, prep_fiducials, effect_fiducials, germs4,
                                    maxLengths, gaugeOptParams={'itemWeights': {'spam':0.1,'gates': 1.0}},
                                    memLimit=3*(1024)**3, verbosity=3 )
end = time.time()
print("Total time=%f hours" % ((end - start) / 3600.0))
--- Circuit Creation ---
   1317 sequences created
   Dataset has 1317 entries: 1317 utilized, 0 requested sequences were missing
--- LGST ---
  Singular values of I_tilde (truncating to first 16 of 16) = 
  6.7502828285173155
  2.3518957405180436
  2.318639069417392
  1.2302334842041527
  1.2117463743117198
  1.1873969447789428
  0.8897042679854841
  0.8169918501235112
  0.5305300820082018
  0.5155269364101752
  0.3676760156532707
  0.3517041657230517
  0.30932109560940213
  0.2334365962706313
  0.22377281697587817
  0.14850701015514287
  
  Singular values of target I_tilde (truncating to first 16 of 16) = 
  6.868027641505519
  3.202537446873216
  3.202537446873215
  1.7692369322250323
  1.7692369322250308
  1.7320508075688799
  1.2340048586337
  1.2247448713915883
  0.7071067811865485
  0.7071067811865481
  0.5000000000000001
  0.49371439251332727
  0.49371439251332666
  0.3461223449171741
  0.34612234491717386
  0.2396420755723003
  
    Resulting model:
    
    rho0 = FullSPAMVec with dimension 16
     0.50   0   0 0.50   0   0   0   0   0   0   0   0 0.50   0   0 0.50
    
    
    Mdefault = UnconstrainedPOVM with effect vectors:
    00: FullSPAMVec with dimension 16
     0.60-0.06 0.07 0.45-0.04 0.03-0.02-0.04 0.07   0 0.02 0.05 0.45-0.07 0.08 0.49
    
    01: FullSPAMVec with dimension 16
     0.50 0.06-0.06-0.45-0.03-0.06-0.01 0.01   0 0.02-0.09-0.02 0.36 0.05-0.07-0.41
    
    10: FullSPAMVec with dimension 16
     0.49-0.02 0.05 0.35 0.05   0 0.04 0.06-0.05-0.02 0.02-0.07-0.45 0.03-0.06-0.40
    
    11: FullSPAMVec with dimension 16
     0.41 0.02-0.06-0.36 0.03 0.03-0.01-0.04-0.03   0 0.05 0.04-0.37   0 0.06 0.31
    
    
    
    Gii = 
    FullDenseOp with shape (16, 16)
     1.00   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
     0.02 0.87 0.05   0-0.04 0.05 0.02-0.01-0.02 0.05-0.07   0   0 0.02-0.03 0.02
     0.02 0.04 0.90   0 0.04-0.12 0.05-0.04   0 0.06 0.02 0.01 0.02-0.04 0.07 0.02
       0-0.02-0.02 0.91-0.02 0.04 0.03-0.02 0.02-0.04-0.03   0   0-0.02-0.03-0.01
    -0.02   0-0.01   0 0.92 0.02 0.03-0.02-0.05 0.05-0.11   0 0.03-0.05 0.03-0.02
    -0.02-0.10 0.05   0 0.04 0.91 0.13   0-0.03   0 0.08-0.04-0.02   0-0.03-0.04
    -0.04-0.04 0.03 0.03 0.04 0.07 0.88-0.04-0.01-0.16 0.07 0.05   0 0.09 0.04   0
       0 0.05-0.03-0.02-0.03-0.08 0.05 0.96-0.05 0.12   0-0.11-0.08 0.10-0.10 0.03
    -0.05 0.07-0.08 0.02 0.05-0.05 0.09-0.05 0.83 0.13-0.07 0.04   0 0.07 0.01 0.02
    -0.07 0.10-0.07-0.07 0.02-0.06   0 0.12 0.02 0.78 0.13-0.11-0.03 0.12-0.02-0.03
    -0.01 0.10 0.06-0.04-0.02-0.08-0.13 0.04   0 0.06 0.85   0   0-0.12 0.05   0
     0.04-0.04 0.06-0.05-0.05 0.04-0.02 0.05 0.07-0.22 0.05 0.83   0-0.04-0.01 0.06
       0   0   0   0-0.02-0.01-0.05   0 0.01-0.04 0.04-0.06 0.90   0   0   0
    -0.01 0.02-0.04   0 0.06-0.12 0.06 0.04 0.05-0.13   0 0.02 0.05 0.83 0.11-0.04
     0.04-0.06 0.05-0.02-0.02 0.08-0.04 0.03 0.08-0.09 0.10-0.04   0   0 0.89   0
       0-0.04-0.03   0 0.03   0   0 0.01-0.07   0-0.12 0.03   0   0-0.01 0.90
    
    
    Gix = 
    FullDenseOp with shape (16, 16)
     1.00   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
     0.02 0.89 0.02-0.02-0.02-0.01-0.02 0.02-0.02 0.08-0.02 0.02   0 0.02   0   0
    -0.08 0.10-0.08-0.92 0.04-0.08 0.04-0.04   0-0.02   0   0   0-0.02   0   0
    -0.10 0.10 0.90 0.10-0.03 0.05-0.03 0.03 0.01-0.01 0.01-0.01 0.02-0.04 0.02-0.02
    -0.01 0.03-0.01 0.01 0.92-0.04-0.08 0.08-0.02 0.08-0.02 0.02   0   0   0   0
       0   0   0   0 0.01 0.78 0.01-0.01 0.03-0.01 0.03-0.03   0   0   0   0
     0.01-0.05 0.01-0.01-0.13 0.18-0.13-0.87   0   0   0   0 0.02 0.01 0.02-0.02
       0   0   0   0-0.10 0.06 0.90 0.10   0-0.04   0   0-0.02 0.02-0.02 0.02
       0 0.01   0   0 0.02-0.01 0.02-0.02 0.89 0.05-0.11 0.11 0.02   0 0.02-0.02
    -0.06 0.02-0.06 0.06 0.06 0.11 0.06-0.06 0.03 0.81 0.03-0.03-0.01 0.06-0.01 0.01
     0.02-0.04 0.02-0.02-0.09 0.04-0.09 0.09-0.11 0.08-0.11-0.89 0.02-0.04 0.02-0.02
    -0.01 0.05-0.01 0.01 0.04 0.03 0.04-0.04-0.09 0.15 0.91 0.09   0-0.06   0   0
    -0.02   0-0.02 0.02-0.01 0.01-0.01 0.01   0   0   0   0 0.91   0-0.09 0.09
    -0.03 0.03-0.03 0.03 0.04-0.12 0.04-0.04-0.06 0.10-0.06 0.06 0.04 0.85 0.04-0.04
     0.03-0.04 0.03-0.03-0.13 0.18-0.13 0.13 0.05   0 0.05-0.05-0.09 0.14-0.09-0.91
    -0.02 0.02-0.02 0.02 0.05-0.13 0.05-0.05-0.04 0.01-0.04 0.04-0.09 0.12 0.91 0.09
    
    
    Giy = 
    FullDenseOp with shape (16, 16)
     1.00   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
     0.12-0.13 0.08 0.88-0.03 0.05 0.04 0.03-0.04   0-0.12 0.04-0.01   0   0 0.01
     0.03   0 0.94-0.03 0.05 0.08 0.03-0.05   0-0.06 0.06   0-0.02   0   0 0.02
    -0.09-0.93-0.10 0.09-0.05 0.04-0.02 0.05   0 0.01-0.05   0 0.01 0.01   0-0.01
    -0.02 0.02-0.06 0.02 0.93 0.06 0.04 0.07-0.08 0.07-0.15 0.08 0.04   0 0.04-0.04
     0.03-0.04 0.07-0.03 0.07-0.06 0.11 0.93 0.09-0.09 0.16-0.09-0.01 0.02   0 0.01
       0   0 0.09   0   0 0.08 0.80   0   0-0.04 0.17   0   0-0.03 0.02   0
       0-0.05-0.04   0-0.12-0.83-0.09 0.12-0.03-0.07-0.02 0.03   0-0.08-0.06   0
       0 0.01-0.05   0   0-0.02 0.05   0 0.87 0.07-0.07 0.13 0.02-0.02 0.01-0.02
    -0.05   0-0.08 0.05   0   0 0.07   0 0.10-0.09 0.14 0.90-0.03 0.05-0.03 0.03
     0.02-0.04 0.09-0.02-0.01-0.02-0.08 0.01 0.05 0.03 1.01-0.05 0.02 0.04 0.01-0.02
    -0.04   0   0 0.04-0.02 0.09 0.02 0.02-0.05-0.92-0.08 0.05 0.02-0.03 0.03-0.02
       0-0.01   0   0-0.01 0.02   0 0.01 0.02   0 0.02-0.02 0.90 0.11   0 0.10
    -0.03-0.02   0 0.03 0.06 0.03 0.04-0.06 0.03   0 0.09-0.03 0.15-0.11 0.11 0.85
     0.02-0.02   0-0.02-0.06-0.01-0.12 0.06 0.10 0.04 0.11-0.10 0.03-0.09 0.93-0.03
    -0.01   0   0 0.01 0.04-0.02 0.03-0.04-0.03-0.09-0.02 0.03-0.08-0.92-0.10 0.08
    
    
    Gxi = 
    FullDenseOp with shape (16, 16)
     1.00   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
     0.02 0.86 0.02   0-0.04 0.05   0 0.02 0.02-0.12 0.11   0   0 0.09 0.02-0.02
       0-0.05 0.93   0 0.06-0.06 0.04-0.03   0-0.04-0.11 0.02 0.02-0.02 0.07-0.03
    -0.01 0.03-0.03 0.92-0.03 0.01   0   0   0-0.02 0.03-0.10   0 0.04 0.03 0.10
       0   0   0   0 0.87-0.01   0   0 0.03-0.05 0.06-0.02   0-0.03 0.05   0
    -0.01-0.06-0.03   0-0.06 0.94   0 0.02 0.04-0.11 0.08-0.06   0   0-0.03 0.02
       0-0.05 0.02   0 0.03   0 0.94-0.03 0.03-0.08   0-0.03 0.02   0-0.04-0.02
     0.02   0-0.04   0-0.04   0-0.03 0.90-0.02 0.04-0.11 0.05   0-0.02-0.03-0.03
    -0.10 0.01-0.01   0 0.10 0.02 0.04 0.01-0.06-0.07 0.03-0.04-0.88-0.03 0.05-0.02
     0.01-0.12-0.04   0-0.02 0.19 0.07 0.07 0.04-0.14-0.07-0.02-0.02-0.83-0.02   0
    -0.02 0.02-0.09 0.06 0.01-0.06-0.09-0.08   0-0.07-0.14 0.03-0.08 0.09-0.97 0.05
       0-0.03   0-0.09 0.09 0.02 0.16 0.08-0.06-0.01 0.01-0.03   0 0.01 0.04-0.91
    -0.09-0.01   0   0 0.07-0.03 0.01   0 0.90   0   0   0 0.07 0.04-0.02 0.03
    -0.01-0.08 0.05 0.02 0.05-0.03-0.02-0.08 0.01 0.86   0   0-0.03 0.08-0.03 0.02
       0   0-0.14   0   0 0.06 0.07 0.02-0.04 0.02 0.85 0.05-0.06 0.09-0.02 0.06
     0.02-0.03   0-0.10-0.01-0.05-0.01 0.07 0.03 0.02   0 0.90 0.03-0.02 0.01 0.04
    
    
    Gyi = 
    FullDenseOp with shape (16, 16)
     1.00   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
     0.02 0.88-0.01-0.01 0.04 0.06-0.01   0   0 0.04-0.02-0.03-0.04 0.12 0.02 0.03
     0.02   0 0.93 0.01-0.02 0.02 0.07-0.01   0 0.07 0.02 0.02 0.03-0.06 0.06-0.06
       0-0.04   0 0.91   0 0.04   0 0.13 0.02-0.03   0   0-0.02 0.01 0.01 0.10
     0.08 0.03   0   0-0.15 0.05-0.08   0 0.05 0.07-0.12   0 0.92-0.03   0   0
       0 0.05 0.04   0 0.03-0.14 0.05 0.04   0-0.02 0.09 0.04 0.02 0.83 0.13-0.02
       0 0.06 0.15 0.03-0.02-0.03-0.09-0.08-0.04 0.05 0.14 0.03 0.02-0.06 0.88-0.04
    -0.01-0.02-0.03 0.08 0.06-0.02 0.02-0.12-0.06 0.03 0.05 0.02 0.03   0 0.11 0.91
    -0.01-0.02   0 0.02 0.01   0   0   0 0.86 0.04-0.07 0.03   0 0.02   0-0.01
    -0.04-0.01   0-0.02-0.05 0.10-0.19 0.04   0 0.75 0.03-0.03   0   0   0 0.05
     0.01 0.03 0.09 0.03-0.03 0.04-0.04-0.02 0.08-0.08 0.96   0-0.04-0.05-0.06   0
    -0.01   0   0   0   0 0.08-0.01 0.02 0.02-0.09 0.02 0.91 0.01 0.06 0.04   0
    -0.09-0.02   0-0.01-0.91   0   0   0-0.08   0-0.03-0.04 0.09-0.06   0 0.02
    -0.04-0.10   0   0-0.06-0.81 0.05 0.04 0.01-0.13 0.04-0.03 0.05-0.02-0.01-0.01
       0 0.05-0.05 0.01 0.04-0.03-0.86-0.11 0.03-0.03 0.06-0.01-0.03 0.10 0.09 0.02
    -0.02 0.04-0.04-0.09   0-0.08 0.03-0.93-0.05 0.10 0.03-0.08 0.01-0.04   0 0.09
    
    
    Gcnot = 
    FullDenseOp with shape (16, 16)
     1.00   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
     0.02 0.87   0 0.02-0.01 0.02-0.03 0.02 0.03-0.02-0.03 0.07   0 0.01 0.04   0
     0.04-0.05 0.06 0.02-0.06 0.05-0.15 0.03 0.07-0.05 0.19   0   0 0.03 0.94-0.02
    -0.11 0.09   0-0.02 0.01   0-0.01-0.06   0-0.01   0 0.10 0.11-0.12-0.04 0.92
    -0.01 0.03 0.06   0-0.03 0.91-0.06 0.10 0.03 0.03 0.06-0.03 0.01-0.03-0.04   0
    -0.04 0.01-0.04   0 0.86-0.07-0.06 0.06-0.01-0.06 0.05 0.10   0-0.06-0.01-0.03
       0   0 0.10-0.04   0   0-0.01-0.01 0.10-0.07 0.13 0.79   0   0 0.07 0.02
     0.02-0.04-0.06 0.01 0.05-0.15 0.05-0.02 0.05-0.13-0.81-0.06-0.08 0.03-0.02 0.02
    -0.03 0.04-0.01   0 0.04 0.04-0.01 0.05 0.02 0.94 0.03-0.03 0.03-0.03   0 0.02
    -0.04 0.03-0.02-0.01 0.10 0.02 0.04-0.10 0.86 0.06-0.04 0.12-0.03 0.03-0.06-0.07
       0 0.05-0.02-0.03-0.12-0.02-0.07-0.87-0.03-0.02   0-0.02   0-0.08 0.06-0.02
       0 0.02 0.05 0.05 0.10-0.10 0.88 0.12 0.06 0.04 0.04 0.08-0.02-0.05 0.03   0
       0   0   0-0.07-0.12 0.10-0.04-0.04 0.07-0.09 0.02   0 0.91-0.02-0.01 0.09
       0-0.02 0.10-0.06 0.02-0.05-0.03 0.06-0.10 0.04 0.02-0.06 0.01 0.84-0.04 0.03
       0 0.06 0.90 0.03 0.04-0.03-0.09-0.06-0.02 0.12 0.01 0.09 0.06-0.10 0.04   0
     0.10-0.13   0 0.92-0.04 0.03-0.11 0.02   0-0.06-0.13   0-0.10 0.13   0-0.01
    
    
    
    
--- Iterative MLGST: Iter 1 of 2  907 operation sequences ---: 
  --- Minimum Chi^2 GST ---
  Memory limit = 3.00GB
  Cur, Persist, Gather = 0.14, 0.04, 0.30 GB
    Evaltree generation (default) w/mem limit = 2.52GB
     mem(1 subtrees, 1,1 param-grps, 1 proc-grps) in 0s = 2.80GB (2.80GB fc)
    Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
     groups of ~1 procs each, to distribute over 1616 params (taken as 2 param groups of ~808 params).
     Memory estimate = 1.40GB (cache=907, wrtLen1=808, wrtLen2=1616, subsPerProc=1).
    --- Outer Iter 0: norm_f = 5833.47, mu=0, |J|=9795.49
    --- Outer Iter 1: norm_f = 1952.43, mu=1933.11, |J|=9726.98
    --- Outer Iter 2: norm_f = 1642.96, mu=644.37, |J|=9666.38
    --- Outer Iter 3: norm_f = 1555.63, mu=214.79, |J|=9660.03
    --- Outer Iter 4: norm_f = 1517.27, mu=71.5967, |J|=9672.62
    --- Outer Iter 5: norm_f = 1501.95, mu=23.8656, |J|=9693.31
    --- Outer Iter 6: norm_f = 1497.5, mu=7.95519, |J|=9715.91
    --- Outer Iter 7: norm_f = 1496.41, mu=2.65173, |J|=9730.51
    --- Outer Iter 8: norm_f = 1496.21, mu=0.88391, |J|=9737.72
    --- Outer Iter 9: norm_f = 1496.19, mu=0.294637, |J|=9740.22
    --- Outer Iter 10: norm_f = 1496.19, mu=0.0982122, |J|=9740.87
    Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
  Finding num_nongauge_params is too expensive: using total params.
  Sum of Chi^2 = 1496.19 (2721 data params - 1616 model params = expected mean of 1105; p-value = 2.81997e-14)
  Completed in 339.6s
  2*Delta(log(L)) = 1501.17
  Iteration 1 took 339.7s
  
--- Iterative MLGST: Iter 2 of 2  1317 operation sequences ---: 
  --- Minimum Chi^2 GST ---
  Memory limit = 3.00GB
  Cur, Persist, Gather = 0.34, 0.06, 0.29 GB
    Evaltree generation (default) w/mem limit = 2.30GB
     mem(1 subtrees, 1,1 param-grps, 1 proc-grps) in 0s = 4.06GB (4.06GB fc)
    Created evaluation tree with 1 subtrees.  Will divide 1 procs into 1 (subtree-processing)
     groups of ~1 procs each, to distribute over 1616 params (taken as 2 param groups of ~808 params).
     Memory estimate = 2.03GB (cache=1317, wrtLen1=808, wrtLen2=1616, subsPerProc=1).
    --- Outer Iter 0: norm_f = 4476.66, mu=0, |J|=11844
    --- Outer Iter 1: norm_f = 3365.67, mu=2583.99, |J|=11739.7
    --- Outer Iter 2: norm_f = 3021.89, mu=861.328, |J|=11712.6
    --- Outer Iter 3: norm_f = 2855.59, mu=287.109, |J|=11699.6
    --- Outer Iter 4: norm_f = 2784.85, mu=95.7032, |J|=11689
    --- Outer Iter 5: norm_f = 2765.13, mu=31.9011, |J|=11686.2
    --- Outer Iter 6: norm_f = 2761.34, mu=10.6337, |J|=11686.7
    --- Outer Iter 7: norm_f = 2760.97, mu=3.54456, |J|=11686.7
    --- Outer Iter 8: norm_f = 2760.95, mu=1.18152, |J|=11686.5
    Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
  Finding num_nongauge_params is too expensive: using total params.
  Sum of Chi^2 = 2760.95 (3951 data params - 1616 model params = expected mean of 2335; p-value = 1.8225e-09)
  Completed in 408.5s
  2*Delta(log(L)) = 2771.33
  Iteration 2 took 408.5s
  
  Switching to ML objective (last iteration)
  --- MLGST ---
  Memory: limit = 3.00GB(cur, persist, gthr = 0.31, 0.06, 0.29 GB)
    --- Outer Iter 0: norm_f = 1385.67, mu=0, |J|=8271.12
    --- Outer Iter 1: norm_f = 1384.07, mu=1284.17, |J|=8283.99
    --- Outer Iter 2: norm_f = 1383.96, mu=428.057, |J|=8282.83
    --- Outer Iter 3: norm_f = 1383.91, mu=142.686, |J|=8282.29
    --- Outer Iter 4: norm_f = 1383.88, mu=47.5619, |J|=8282.08
    --- Outer Iter 5: norm_f = 1383.88, mu=15.854, |J|=8282.01
    Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
  Finding num_nongauge_params is too expensive: using total params.
    Maximum log(L) = 1383.88 below upper bound of -2.95403e+06
      2*Delta(log(L)) = 2767.75 (3951 data params - 1616 model params = expected mean of 2335; p-value = 1.0588e-09)
    Completed in 261.1s
  2*Delta(log(L)) = 2767.75
  Final MLGST took 261.1s
  
Iterative MLGST Total Time: 1009.3s
  -- Adding Gauge Optimized (go0) --
--- Re-optimizing logl after robust data scaling ---
  --- MLGST ---
  Memory: limit = 3.00GB(cur, persist, gthr = 0.29, 0.06, 0.29 GB)
    --- Outer Iter 0: norm_f = 1383.88, mu=0, |J|=8282.01
    Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06
  Finding num_nongauge_params is too expensive: using total params.
    Maximum log(L) = 1383.88 below upper bound of -2.95403e+06
      2*Delta(log(L)) = 2767.75 (3951 data params - 1616 model params = expected mean of 2335; p-value = 1.0588e-09)
    Completed in 43.1s
  -- Adding Gauge Optimized (go0) --
Total time=0.293146 hours

Step 5: Create report(s) using the returned Results object

The Results object returned from do_long_sequence_gst can be used to generate a "standard" HTML report, just as in the 1-qubit case:

In [8]:
pygsti.report.create_standard_report(results, filename="example_files/easy_2q_report",
                                    title="Example 2Q-GST Report", verbosity=2)
*** Creating workspace ***
*** Generating switchboard ***
/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning:

Idle tomography failed:
Label{layers}

*** Generating tables ***
  targetSpamBriefTable                          took 0.558514 seconds
  targetGatesBoxTable                           took 0.393092 seconds
  datasetOverviewTable                          took 0.041697 seconds
  bestGatesetSpamParametersTable                took 0.000538 seconds
  bestGatesetSpamBriefTable                     took 0.372994 seconds
  bestGatesetSpamVsTargetTable                  took 1.529932 seconds
  bestGatesetGaugeOptParamsTable                took 0.000415 seconds
  bestGatesetGatesBoxTable                      took 0.431361 seconds
  bestGatesetChoiEvalTable                      took 0.901477 seconds
  bestGatesetDecompTable                        took 7.244969 seconds
  bestGatesetEvalTable                          took 0.04129 seconds
  bestGermsEvalTable                            took 0.012821 seconds
  bestGatesetVsTargetTable                      took 0.051373 seconds
/Users/enielse/research/pyGSTi/packages/pygsti/extras/rb/theory.py:200: UserWarning:

Output may be unreliable because the model is not approximately trace-preserving.

  bestGatesVsTargetTable_gv                     took 9.331426 seconds
  bestGatesVsTargetTable_gvgerms                took 0.223209 seconds
  bestGatesVsTargetTable_gi                     took 0.09484 seconds
  bestGatesVsTargetTable_gigerms                took 0.01241 seconds
  bestGatesVsTargetTable_sum                    took 9.432998 seconds
  bestGatesetErrGenBoxTable                     took 1.9915 seconds
  metadataTable                                 took 0.000748 seconds
  stdoutBlock                                   took 0.000933 seconds
  profilerTable                                 took 0.000542 seconds
  softwareEnvTable                              took 0.032556 seconds
  exampleTable                                  took 0.045136 seconds
  singleMetricTable_gv                          took 9.97891 seconds
  singleMetricTable_gi                          took 0.165092 seconds
  fiducialListTable                             took 0.000747 seconds
  prepStrListTable                              took 0.000383 seconds
  effectStrListTable                            took 0.000164 seconds
  colorBoxPlotKeyPlot                           took 0.060831 seconds
  germList2ColTable                             took 0.000195 seconds
  progressTable                                 took 3.552062 seconds
*** Generating plots ***
  gramBarPlot                                   took 0.17467 seconds
  progressBarPlot                               took 2.135457 seconds
  progressBarPlot_sum                           took 0.001909 seconds
  finalFitComparePlot                           took 1.073815 seconds
  bestEstimateColorBoxPlot                      took 4.376099 seconds
  bestEstimateTVDColorBoxPlot                   took 3.977058 seconds
  bestEstimateColorScatterPlot                  took 5.122052 seconds
  bestEstimateColorHistogram                    took 4.155411 seconds
  progressTable_scl                             took 7.1e-05 seconds
  progressBarPlot_scl                           took 5.6e-05 seconds
  bestEstimateColorBoxPlot_scl                  took 7.8e-05 seconds
  bestEstimateColorScatterPlot_scl              took 6.6e-05 seconds
  bestEstimateColorHistogram_scl                took 7.1e-05 seconds
  dataScalingColorBoxPlot                       took 5.1e-05 seconds
*** Merging into template file ***
  Rendering topSwitchboard                      took 0.000108 seconds
  Rendering maxLSwitchboard1                    took 7.7e-05 seconds
  Rendering targetSpamBriefTable                took 0.16939 seconds
  Rendering targetGatesBoxTable                 took 0.188537 seconds
  Rendering datasetOverviewTable                took 0.000937 seconds
  Rendering bestGatesetSpamParametersTable      took 0.001861 seconds
  Rendering bestGatesetSpamBriefTable           took 0.51621 seconds
  Rendering bestGatesetSpamVsTargetTable        took 0.001745 seconds
  Rendering bestGatesetGaugeOptParamsTable      took 0.000908 seconds
  Rendering bestGatesetGatesBoxTable            took 0.398657 seconds
  Rendering bestGatesetChoiEvalTable            took 0.311584 seconds
  Rendering bestGatesetDecompTable              took 0.206528 seconds
  Rendering bestGatesetEvalTable                took 0.043702 seconds
  Rendering bestGermsEvalTable                  took 0.042149 seconds
  Rendering bestGatesetVsTargetTable            took 0.000978 seconds
  Rendering bestGatesVsTargetTable_gv           took 0.003662 seconds
  Rendering bestGatesVsTargetTable_gvgerms      took 0.002338 seconds
  Rendering bestGatesVsTargetTable_gi           took 0.003787 seconds
  Rendering bestGatesVsTargetTable_gigerms      took 0.001739 seconds
  Rendering bestGatesVsTargetTable_sum          took 0.00384 seconds
  Rendering bestGatesetErrGenBoxTable           took 0.940568 seconds
  Rendering metadataTable                       took 0.004467 seconds
  Rendering stdoutBlock                         took 0.001252 seconds
  Rendering profilerTable                       took 0.001671 seconds
  Rendering softwareEnvTable                    took 0.001998 seconds
  Rendering exampleTable                        took 0.020202 seconds
  Rendering metricSwitchboard_gv                took 4.1e-05 seconds
  Rendering metricSwitchboard_gi                took 3.1e-05 seconds
  Rendering singleMetricTable_gv                took 0.006546 seconds
  Rendering singleMetricTable_gi                took 0.005427 seconds
  Rendering fiducialListTable                   took 0.005624 seconds
  Rendering prepStrListTable                    took 0.00406 seconds
  Rendering effectStrListTable                  took 0.002822 seconds
  Rendering colorBoxPlotKeyPlot                 took 0.029401 seconds
  Rendering germList2ColTable                   took 0.00213 seconds
  Rendering progressTable                       took 0.001933 seconds
  Rendering gramBarPlot                         took 0.022023 seconds
  Rendering progressBarPlot                     took 0.019769 seconds
  Rendering progressBarPlot_sum                 took 0.01847 seconds
  Rendering finalFitComparePlot                 took 0.020392 seconds
  Rendering bestEstimateColorBoxPlot            took 0.07229 seconds
  Rendering bestEstimateTVDColorBoxPlot         took 0.059786 seconds
  Rendering bestEstimateColorScatterPlot        took 0.120078 seconds
  Rendering bestEstimateColorHistogram          took 0.068549 seconds
  Rendering progressTable_scl                   took 0.000931 seconds
  Rendering progressBarPlot_scl                 took 0.000874 seconds
  Rendering bestEstimateColorBoxPlot_scl        took 0.000825 seconds
  Rendering bestEstimateColorScatterPlot_scl    took 0.000721 seconds
  Rendering bestEstimateColorHistogram_scl      took 0.000796 seconds
  Rendering dataScalingColorBoxPlot             took 0.000599 seconds
Output written to example_files/easy_2q_report directory
*** Report Generation Complete!  Total time 71.5234s ***
Out[8]:
<pygsti.report.workspace.Workspace at 0x1235010f0>

Now open example_files/easy_2q_report/main.html to see the results. You've run 2-qubit GST!

You can save the Results object for later by just pickling it:

In [9]:
import pickle
with open("example_files/easy_2q_results.pkl","wb") as pklfile:
        pickle.dump(results, pklfile)
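
And to load it back in a later session (plain pickle; nothing pyGSTi-specific):

# Reload the saved Results object in a later session
import pickle
with open("example_files/easy_2q_results.pkl", "rb") as pklfile:
    loaded_results = pickle.load(pklfile)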
In [ ]: