Once we have data for GST, there are several algorithms we can run on it to produce tomographic estimates. Depending on the amount of data you have and the time available for running Gate Set Tomography, one algorithm may be preferable over the others. What is typically thought of as "standard GST" is the iterative maximum-likelihood optimization implemented by do_iterative_mlgst, which uses a combination of minimum-$\chi^2$ GST and maximum-likelihood GST.
pygsti provides support for the following "primary" GST algorithms:
Linear Gate Set Tomography (LGST): Uses short operation sequences to quickly compute a rough (low accuracy) estimate of a model by linear inversion.
Extended Linear Gate Set Tomography (eLGST or EXLGST): Minimizes the sum-of-squared errors between independent LGST estimates and the estimates obtained from a single model to find a best-estimate model. This is typically done in an iterative fashion, using LGST estimates for longer and longer sequences.
Minimum-$\chi^2$ Gate Set Tomography (MC2GST): Minimizes the $\chi^{2}$ statistic of the data frequencies and model probabilities to find a best-estimate model. Typically done in an iterative fashion, using successively larger sets of longer and longer operation sequences.
Maximum-Likelihood Gate Set Tomography (MLGST): Maximizes the log-likelihood statistic of the data frequencies and model probabilities to find a best-estimate model. Typically done in an iterative fashion similar to MC2GST. This maximum likelihood estimation (MLE) is very well-motivated from a statistics standpoint and should be the most accurate among the algorithms.
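For reference, here is a sketch of the two objective functions in standard GST notation (a paraphrase, not pygsti's exact implementation, which adds weighting and regularization details): for each operation sequence $s$ with total counts $N_s$, observed outcome frequencies $f_{s,\beta}$, and model probabilities $p_{s,\beta}$,

$$\chi^2 = \sum_{s,\beta} N_s \frac{\left(p_{s,\beta} - f_{s,\beta}\right)^2}{p_{s,\beta}}, \qquad \log L = \sum_{s,\beta} N_s\, f_{s,\beta} \log p_{s,\beta}.$$

MC2GST minimizes the former; MLGST maximizes the latter.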
If you're curious, the implementation of the algorithms for LGST, EXLGST, MC2GST, and MLGST may be found in the pygsti.algorithms.core
module. In this tutorial, we'll show how to invoke each of these algorithms.
Additionally, pygsti
contains gauge-optimization algorithms. Because the outcome data (the input to the GST algorithms above) only determines a model up to some number of un-physical "gauge" degrees of freedom, it is often desirable to optimize the Model
estimate obtained from a GST algorithm within the space of its gauge freedoms. This process is called "gauge-optimization" and the final part of this tutorial demonstrates how to gauge-optimize a model using various criteria.
The ingredients needed as input to the "primary" GST algorithms are:
Model: defines the desired gates. This model is used by LGST to specify the various gate, state preparation, POVM effect, and SPAM labels, as well as to provide an initial guess for the gauge degrees of freedom.
DataSet: contains the data that GST attempts to fit using the probabilities generated by a single Model. This data set must at least contain the data for the operation sequences required by the chosen algorithm.
Lists of Circuit objects: specify which operation sequences are used during each iteration of the algorithm (the length of the top-level list defines the number of iterations). Note that which operation sequences are included in these lists is different for EXLGST than it is for MC2GST and MLGST.
from __future__ import print_function
import pygsti
import json
#We'll use the standard I, X(pi/2), Y(pi/2) model that we generated data for in the DataSet tutorial
from pygsti.construction import std1Q_XYI
target_model = std1Q_XYI.target_model()
prep_fiducials = std1Q_XYI.prepStrs
meas_fiducials = std1Q_XYI.effectStrs
germs = std1Q_XYI.germs
maxLengthList = [1,2,4,8,16] #for use in iterative algorithms
ds = pygsti.io.load_dataset("../../tutorial_files/Example_Dataset.txt", cache=True)
dsLowCounts = pygsti.io.load_dataset("../../tutorial_files/Example_Dataset_LowCnts.txt", cache=True)
depol_model = target_model.depolarize(op_noise=0.1)
print("Loaded target model with operation labels: ", target_model.operations.keys())
print("Loaded fiducial lists of lengths: ", (len(prep_fiducials),len(meas_fiducials)))
print("Loaded dataset of length: ", len(ds))
Loading ../../tutorial_files/Example_Dataset.txt: 100%
Writing cache file (to speed future loads): ../../tutorial_files/Example_Dataset.txt.cache
Loading from cache file: ../../tutorial_files/Example_Dataset_LowCnts.txt.cache
Loaded target model with operation labels:  odict_keys([Label{Gi}, Label{Gx}, Label{Gy}])
Loaded fiducial lists of lengths:  (6, 6)
Loaded dataset of length:  3382
An important and distinguishing property of the LGST algorithm is that it does not require an initial-guess Model as an input. It uses linear inversion and short sequences to obtain a rough model estimate. As such, it is very common to use the LGST estimate as the initial-guess starting point for more advanced forms of GST.
#Run LGST to get an initial estimate for the gates in target_model based on the data in ds
mdl_lgst = pygsti.do_lgst(ds, prep_fiducials, meas_fiducials, targetModel=target_model, verbosity=1)
#Gauge optimize the result to match the target model
mdl_lgst_after_gauge_opt = pygsti.gaugeopt_to_target(mdl_lgst, target_model, verbosity=1)
#Contract the result to CPTP, guaranteeing that the gates are CPTP
mdl_clgst = pygsti.contract(mdl_lgst_after_gauge_opt, "CPTP", verbosity=1)
--- LGST ---
Gauge optimization completed in 0.0483651s.
--- Contract to CPTP (direct) ---
The closest legal point found was distance: 0.00016805024023937382
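As a quick sanity check on the contraction step, one can total up the negative Choi-matrix eigenvalue mass before and after contracting; near zero means the gates are (nearly) CP. This is a hedged sketch, assuming pygsti.tools.sum_of_negative_choi_evals is available as in this version of pygsti:
#Sum of negative Choi eigenvalues over all gates (should shrink toward zero after contraction)
print("Before contraction:", pygsti.tools.sum_of_negative_choi_evals(mdl_lgst_after_gauge_opt))
print("After contraction: ", pygsti.tools.sum_of_negative_choi_evals(mdl_clgst))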
print(mdl_lgst)
rho0 = FullSPAMVec with dimension 4
 0.71 -0.02  0.03  0.75

Mdefault = UnconstrainedPOVM with effect vectors:
0: FullSPAMVec with dimension 4
 0.73  0     0     0.65
1: FullSPAMVec with dimension 4
 0.69  0     0    -0.65

Gi = FullDenseOp with shape (4, 4)
 1.00  0     0     0
 0.01  0.92 -0.03  0.02
 0.01 -0.01  0.90  0.02
-0.01  0     0     0.91

Gx = FullDenseOp with shape (4, 4)
 1.00  0     0     0
 0     0.91 -0.01  0
-0.02 -0.02 -0.04 -0.99
-0.05  0.03  0.81  0

Gy = FullDenseOp with shape (4, 4)
 1.00  0     0     0
 0.05  0     0     0.98
 0.01  0     0.89 -0.03
-0.06 -0.82  0     0
EXLGST requires a list-of-lists of operation sequences, one per iteration. The elements of these lists are typically repetitions of short "germ" strings such that the resulting strings do not exceed some maximum length. We created such lists in the operation sequence tutorial; here, we simply rebuild them using pygsti.construction.make_elgst_lists.
#Get lists of operation sequences for successive iterations of eLGST to use
elgstListOfLists = pygsti.construction.make_elgst_lists(target_model, germs, maxLengthList)
#run EXLGST. The result, mdl_exlgst, is a Model containing the estimated quantities
mdl_exlgst = pygsti.do_iterative_exlgst(ds, mdl_clgst, prep_fiducials, meas_fiducials, elgstListOfLists,
targetModel=target_model, verbosity=2)
--- Iterative eLGST: Iter 5 of 5 ; 43 operation sequences ---:
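To see how the iterations grow, a small inspection loop over the nested lists built above suffices (plain Python; nothing pygsti-specific):
#Print how many germ-power sequences each eLGST iteration uses
for i, seq_list in enumerate(elgstListOfLists):
    print("eLGST iteration %d: %d operation sequences" % (i + 1, len(seq_list)))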
MC2GST and MLGST also require a list-of-lists of operation sequences, one per iteration. However, the elements of these lists are typically repetitions of short "germ" strings sandwiched between fiducial strings, such that the repeated-germ part of the string does not exceed some maximum length. We created such lists in the operation sequence tutorial; here, we simply rebuild them using pygsti.construction.make_lsgst_lists.
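To make the "sandwich" structure concrete, here is a hypothetical illustration built from the fiducials and germs loaded earlier; it assumes, as in this version of pygsti, that circuit objects support concatenation (+) and repetition (*):
#One LSGST-style circuit: prep fiducial + repeated germ + measurement fiducial
germ = germs[0]             # a short germ
power = maxLengthList[-1]   # repeat up to the maximum length (16 here)
example_circuit = prep_fiducials[0] + germ * power + meas_fiducials[0]
print(example_circuit)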
#Get lists of operation sequences for successive iterations of LSGST to use
lsgstListOfLists = pygsti.construction.make_lsgst_lists(target_model, prep_fiducials, meas_fiducials, germs, maxLengthList)
#run MC2GST. The result is a Model containing the estimated quantities
mdl_mc2 = pygsti.do_iterative_mc2gst(ds, mdl_clgst, lsgstListOfLists, verbosity=2)
--- Iterative MC2GST: Iter 1 of 5  92 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 49.5246 (92 data params - 44 model params = expected mean of 48; p-value = 0.412189)
  Completed in 0.2s
  Iteration 1 took 0.2s

--- Iterative MC2GST: Iter 2 of 5  168 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 111.174 (168 data params - 44 model params = expected mean of 124; p-value = 0.788613)
  Completed in 0.3s
  Iteration 2 took 0.3s

--- Iterative MC2GST: Iter 3 of 5  450 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 421.338 (450 data params - 44 model params = expected mean of 406; p-value = 0.289462)
  Completed in 0.7s
  Iteration 3 took 0.7s

--- Iterative MC2GST: Iter 4 of 5  862 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 812.173 (862 data params - 44 model params = expected mean of 818; p-value = 0.550887)
  Completed in 1.2s
  Iteration 4 took 1.2s

--- Iterative MC2GST: Iter 5 of 5  1282 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 1249.42 (1282 data params - 44 model params = expected mean of 1238; p-value = 0.404349)
  Completed in 2.0s
  Iteration 5 took 2.0s

Iterative MC2GST Total Time: 4.4s
#Write the resulting EXLGST and MC2GST results to model text files for later reference.
pygsti.io.write_model(mdl_exlgst, "../../tutorial_files/Example_eLGST_Gateset.txt","# Example result from running eLGST")
pygsti.io.write_model(mdl_mc2, "../../tutorial_files/Example_MC2GST_Gateset.txt","# Example result from running MC2GST")
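These files can be read back in later sessions. Assuming pygsti.io.load_model is the read counterpart of write_model in this pygsti version (hedged: check your version's pygsti.io namespace), a round-trip looks like:
#Re-load a saved model; the Frobenius distance to the original should be ~0
mdl_exlgst_reloaded = pygsti.io.load_model("../../tutorial_files/Example_eLGST_Gateset.txt")
print(mdl_exlgst_reloaded.frobeniusdist(mdl_exlgst))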
#Run MC2GST again but use a DataSet with a lower number of counts
mdl_mc2_lowcnts = pygsti.do_iterative_mc2gst(dsLowCounts, mdl_clgst, lsgstListOfLists, verbosity=2)
--- Iterative MC2GST: Iter 1 of 5  92 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 67.4997 (92 data params - 44 model params = expected mean of 48; p-value = 0.0331213)
  Completed in 0.3s
  Iteration 1 took 0.3s

--- Iterative MC2GST: Iter 2 of 5  168 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 146.092 (168 data params - 44 model params = expected mean of 124; p-value = 0.0855004)
  Completed in 0.4s
  Iteration 2 took 0.5s

--- Iterative MC2GST: Iter 3 of 5  450 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 433.435 (450 data params - 44 model params = expected mean of 406; p-value = 0.167224)
  Completed in 0.8s
  Iteration 3 took 0.8s

--- Iterative MC2GST: Iter 4 of 5  862 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 833.955 (862 data params - 44 model params = expected mean of 818; p-value = 0.341496)
  Completed in 2.7s
  Iteration 4 took 2.7s

--- Iterative MC2GST: Iter 5 of 5  1282 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 1257.07 (1282 data params - 44 model params = expected mean of 1238; p-value = 0.346556)
  Completed in 2.3s
  Iteration 5 took 2.3s

Iterative MC2GST Total Time: 6.5s
Executing MLGST is very similar to MC2GST: the same operation sequence lists can be used, and the calling syntax is nearly identical.
#run MLGST. The result is a Model containing the estimated quantities
mdl_mle = pygsti.do_iterative_mlgst(ds, mdl_clgst, lsgstListOfLists, verbosity=2)
--- Iterative MLGST: Iter 1 of 5  92 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 49.5246 (92 data params - 44 model params = expected mean of 48; p-value = 0.412189)
  Completed in 0.2s
  2*Delta(log(L)) = 49.7443
  Iteration 1 took 0.2s

--- Iterative MLGST: Iter 2 of 5  168 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 111.174 (168 data params - 44 model params = expected mean of 124; p-value = 0.788613)
  Completed in 0.3s
  2*Delta(log(L)) = 111.58
  Iteration 2 took 0.4s

--- Iterative MLGST: Iter 3 of 5  450 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 421.338 (450 data params - 44 model params = expected mean of 406; p-value = 0.289462)
  Completed in 0.7s
  2*Delta(log(L)) = 422.019
  Iteration 3 took 0.8s

--- Iterative MLGST: Iter 4 of 5  862 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 812.173 (862 data params - 44 model params = expected mean of 818; p-value = 0.550887)
  Completed in 1.2s
  2*Delta(log(L)) = 813.941
  Iteration 4 took 1.3s

--- Iterative MLGST: Iter 5 of 5  1282 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 1249.42 (1282 data params - 44 model params = expected mean of 1238; p-value = 0.404349)
  Completed in 2.0s
  2*Delta(log(L)) = 1251.79
  Iteration 5 took 2.3s

Switching to ML objective (last iteration)
--- MLGST ---
  Maximum log(L) = 625.713 below upper bound of -2.13594e+06
  2*Delta(log(L)) = 1251.43 (1282 data params - 44 model params = expected mean of 1238; p-value = 0.388875)
  Completed in 1.8s
2*Delta(log(L)) = 1251.43
Final MLGST took 1.8s

Iterative MLGST Total Time: 6.8s
#Run MLGST again but use a DataSet with a lower number of counts
mdl_mle_lowcnts = pygsti.do_iterative_mlgst(dsLowCounts, mdl_clgst, lsgstListOfLists, verbosity=2)
--- Iterative MLGST: Iter 1 of 5  92 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 67.4997 (92 data params - 44 model params = expected mean of 48; p-value = 0.0331213)
  Completed in 0.3s
  2*Delta(log(L)) = 68.9835
  Iteration 1 took 0.3s

--- Iterative MLGST: Iter 2 of 5  168 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 146.092 (168 data params - 44 model params = expected mean of 124; p-value = 0.0855004)
  Completed in 0.4s
  2*Delta(log(L)) = 150.151
  Iteration 2 took 0.5s

--- Iterative MLGST: Iter 3 of 5  450 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 433.435 (450 data params - 44 model params = expected mean of 406; p-value = 0.167224)
  Completed in 0.7s
  2*Delta(log(L)) = 444.862
  Iteration 3 took 0.8s

--- Iterative MLGST: Iter 4 of 5  862 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 833.955 (862 data params - 44 model params = expected mean of 818; p-value = 0.341496)
  Completed in 2.6s
  2*Delta(log(L)) = 854.287
  Iteration 4 took 2.7s

--- Iterative MLGST: Iter 5 of 5  1282 operation sequences ---:
  --- Minimum Chi^2 GST ---
  Sum of Chi^2 = 1257.07 (1282 data params - 44 model params = expected mean of 1238; p-value = 0.346556)
  Completed in 2.1s
  2*Delta(log(L)) = 1285.45
  Iteration 5 took 2.3s

Switching to ML objective (last iteration)
--- MLGST ---
  Maximum log(L) = 638.847 below upper bound of -106241
  2*Delta(log(L)) = 1277.69 (1282 data params - 44 model params = expected mean of 1238; p-value = 0.211066)
  Completed in 2.7s
2*Delta(log(L)) = 1277.69
Final MLGST took 2.7s

Iterative MLGST Total Time: 9.3s
All gauge optimization algorithms perform essentially the same task: find the model which optimizes some objective function from within the set (or space) of models that are gauge-equivalent to some starting model. This is accomplished in pygsti using the following mechanism:

One begins with an ExplicitOpModel, call it mdl, to be gauge-optimized.

A pygsti.objects.GaugeGroup instance defines a parameterized group of allowable gauge transformations. This gauge group must be compatible with mdl's parameterization, so that mdl.transform (which calls LinearOperator.transform and SPAMVec.transform) is able to process elements of the GaugeGroup (obtained via a call to GaugeGroup.get_element(params)). That is, the gauge transformation must map between models with the same parameterization (that given by mdl). Because of the close interplay between a model's parameterization and its allowed gauge transformations, Model objects can contain a GaugeGroup instance as their default_gauge_group member. In many circumstances, mdl.default_gauge_group is set to the correct gauge group to use for a given Model.

pygsti.gaugeopt_custom(...) takes an initial ExplicitOpModel, an objective function, and a GaugeGroup (along with other optimization parameters) and returns a gauge-optimized ExplicitOpModel. Note that if its gauge_group argument is left as None, then the model's default gauge group is used. An objective function which takes a single ExplicitOpModel argument and returns a float can be supplied, giving the user a fair amount of flexibility (see the sketch after the demonstration cell below).

pygsti.gaugeopt_to_target(...) is a routine able to perform the most common types of gauge optimization. Instead of an objective function, gaugeopt_to_target takes a target ExplicitOpModel and additional arguments (see below) from which it constructs an objective function and then calls gaugeopt_custom. It is essentially a convenience routine for constructing common gauge optimization objective functions. Relevant arguments which affect what objective function is used are:

targetModel: the ExplicitOpModel to compare against - i.e., the one you want to gauge optimize toward. Note that this doesn't have to be a set of ideal gates - it can be any (imperfect) model that reflects your expectations about what the estimates should look like.
itemWeights: a dictionary of weights allowing different gates and/or SPAM operations to be weighted differently when computing the objective function's value.
CPpenalty: a prefactor multiplying the sum of all the negative Choi-matrix eigenvalues corresponding to each of the gates.
TPpenalty: a prefactor multiplying the sum of absolute-value differences between the first row of each operation matrix and [1 0 ... 0 ], and the discrepancy between the first element of each state preparation vector and its expected value.
validSpamPenalty: a prefactor multiplying penalty terms enforcing the non-negativity of state preparation eigenvalues and that POVM effect eigenvalues lie between 0 and 1.
gatesMetric: how to compare corresponding gates in the gauge-optimized and target sets. "frobenius" uses the Frobenius norm (weighted before taking the final sqrt), "fidelity" uses the squared process infidelity (squared to avoid negative-infidelity issues in non-TP models), and "tracedist" uses the trace distance (weighted after computing the trace distance between corresponding gates).
spamMetric: how to compare corresponding SPAM vectors. "frobenius" (the default) should be used here, as "fidelity" and "tracedist" compare the "SPAM gates" - the outer products of state prep and POVM effect vectors - which isn't a meaningful metric.

The cell below demonstrates some common usages of gaugeopt_to_target.
mdl = mdl_mle.copy() #we'll use the MLGST result from above as an example
mdl_go1 = pygsti.gaugeopt_to_target(mdl, target_model) # optimization to the perfect target gates
mdl_go2 = pygsti.gaugeopt_to_target(mdl, depol_model) # optimization to a "guess" at what the estimate should be
mdl_go3 = pygsti.gaugeopt_to_target(mdl, target_model, {'gates': 1.0, 'spam': 0.01})
# weight the gates differently from the SPAM operations
mdl_go4 = pygsti.gaugeopt_to_target(mdl, target_model, {'gates': 1.0, 'spam': 0.01, 'Gx': 10.0, 'E0': 0.001})
# weight an individual gate/SPAM separately (note the specific gate/SPAM labels always override
# the more general 'gates' and 'spam' weight values).
mdl_go5 = pygsti.gaugeopt_to_target(mdl, target_model, gatesMetric="tracedist") #use trace distance instead of frobenius
print("Default gauge group = ",type(mdl.default_gauge_group)) # default is FullGaugeGroup
mdl_go6 = pygsti.gaugeopt_to_target(mdl, target_model, gauge_group=pygsti.objects.UnitaryGaugeGroup(mdl.dim, 'pp'))
#gauge optimize only over unitary gauge transformations
print("\ngaugeopt_to_target output:")
mdl_go7 = pygsti.gaugeopt_to_target(mdl, target_model, verbosity=3) # show output
print("Final frobenius distance between mdl_go7 and target_model = ", mdl_go7.frobeniusdist(target_model))
Default gauge group =  <class 'pygsti.objects.gaugegroup.FullGaugeGroup'>

gaugeopt_to_target output:
--- Gauge Optimization (ls method) ---
--- Outer Iter 0: norm_f = 0.0915364, mu=0, |J|=7.25392
--- Outer Iter 1: norm_f = 0.0915349, mu=0.00171472, |J|=7.25435
--- Outer Iter 2: norm_f = 0.0915349, mu=0.000571574, |J|=7.25438
Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-08
Gauge optimization completed in 0.0394931s.
Final frobenius distance between mdl_go7 and target_model =  0.039058692291428755
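As promised above, here is a minimal, hedged sketch of calling gaugeopt_custom with a user-supplied objective function. The function gx_only_objective is purely illustrative (any callable mapping an ExplicitOpModel to a float works); only gaugeopt_custom itself is pygsti API, and we assume, as in this pygsti version, that dense operators expose .todense():
import numpy as np

#Illustrative objective: Frobenius distance of Gx alone to its target
def gx_only_objective(model):
    return float(np.linalg.norm(model.operations['Gx'].todense()
                                - target_model.operations['Gx'].todense()))

#gauge_group is left as None, so mdl.default_gauge_group is used
mdl_go_custom = pygsti.gaugeopt_custom(mdl, gx_only_objective, verbosity=1)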
Both MLGST and MC2GST use a $\chi^{2}$ optimization procedure for all but the final iteration; on the last set of circuits (the final iteration), MLGST switches to true maximum-likelihood estimation. Below, we show how close the two estimates are to one another. Before making the comparison, however, we gauge-optimize each estimate toward the depolarized (data-generating) model, so that the estimated gates are as close to it as the gauge degrees of freedom allow.
# We optimize over the model gauge
mdl_mle = pygsti.gaugeopt_to_target(mdl_mle,depol_model)
mdl_mle_lowcnts = pygsti.gaugeopt_to_target(mdl_mle_lowcnts,depol_model)
mdl_mc2 = pygsti.gaugeopt_to_target(mdl_mc2,depol_model)
mdl_mc2_lowcnts = pygsti.gaugeopt_to_target(mdl_mc2_lowcnts,depol_model)
print("Frobenius diff btwn MLGST and datagen = {0}".format(round(mdl_mle.frobeniusdist(depol_model), 6)))
print("Frobenius diff btwn MC2GST and datagen = {0}".format(round(mdl_mc2.frobeniusdist(depol_model), 6)))
print("Frobenius diff btwn MLGST and LGST = {0}".format(round(mdl_mle.frobeniusdist(mdl_clgst), 6)))
print("Frobenius diff btwn MLGST and MC2GST = {0}".format(round(mdl_mle.frobeniusdist(mdl_mc2), 6)))
print("Chi^2 ( MC2GST ) = {0}".format(round(pygsti.chi2(mdl_mc2, ds, lsgstListOfLists[-1]), 4)))
print("Chi^2 ( MLGST ) = {0}".format(round(pygsti.chi2(mdl_mle, ds, lsgstListOfLists[-1] ), 4)))
print("LogL ( MC2GST ) = {0}".format(round(pygsti.logl(mdl_mc2, ds, lsgstListOfLists[-1]), 4)))
print("LogL ( MLGST ) = {0}".format(round(pygsti.logl(mdl_mle, ds, lsgstListOfLists[-1]), 4)))
Frobenius diff btwn MLGST  and datagen = 0.002268
Frobenius diff btwn MC2GST and datagen = 0.002266
Frobenius diff btwn MLGST  and LGST    = 0.00886
Frobenius diff btwn MLGST  and MC2GST  = 5.4e-05
Chi^2 ( MC2GST ) = 1249.4157
Chi^2 ( MLGST )  = 1249.7777
LogL  ( MC2GST ) = -2136569.4054
LogL  ( MLGST )  = -2136569.2228
Notice that, as expected, the MC2GST estimate has a slightly lower $\chi^{2}$ score than the MLGST estimate, and the MLGST estimate has a slightly higher log-likelihood than the MC2GST estimate. In addition, both are close (in terms of the Frobenius difference) to the depolarized model. This is good - it means GST is giving us estimates which are close to the true model used to generate the data. Performing the same analysis with the low-count data shows larger differences between the two estimates. This is expected, since the $\chi^2$ and log-likelihood statistics agree more closely at large $N$, i.e., for large numbers of samples.
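To see why, note the standard second-order expansion (sketched here from first principles, not quoted from pygsti): writing $p_{s,\beta} = f_{s,\beta} + \delta$ and using $\sum_\beta (p_{s,\beta} - f_{s,\beta}) = 0$,

$$2\Delta\log L = 2\sum_{s,\beta} N_s\, f_{s,\beta}\,\log\frac{f_{s,\beta}}{p_{s,\beta}} \;\approx\; \sum_{s,\beta} N_s\,\frac{(p_{s,\beta}-f_{s,\beta})^2}{f_{s,\beta}},$$

which matches the $\chi^2$ statistic up to the choice of denominator. As $N$ grows, the fitted probabilities approach the observed frequencies, so the two objectives (and hence their optima) converge.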
print("LOW COUNT DATA:")
print("Frobenius diff btwn MLGST and datagen = {0}".format(round(mdl_mle_lowcnts.frobeniusdist(depol_model), 6)))
print("Frobenius diff btwn MC2GST and datagen = {0}".format(round(mdl_mc2_lowcnts.frobeniusdist(depol_model), 6)))
print("Frobenius diff btwn MLGST and LGST = {0}".format(round(mdl_mle_lowcnts.frobeniusdist(mdl_clgst), 6)))
print("Frobenius diff btwn MLGST and MC2GST = {0}".format(round(mdl_mle_lowcnts.frobeniusdist(mdl_mc2), 6)))
print("Chi^2 ( MC2GST ) = {0}".format(round(pygsti.chi2(mdl_mc2_lowcnts, dsLowCounts, lsgstListOfLists[-1]), 4)))
print("Chi^2 ( MLGST ) = {0}".format(round(pygsti.chi2(mdl_mle_lowcnts, dsLowCounts, lsgstListOfLists[-1] ), 4)))
print("LogL ( MC2GST ) = {0}".format(round(pygsti.logl(mdl_mc2_lowcnts, dsLowCounts, lsgstListOfLists[-1]), 4)))
print("LogL ( MLGST ) = {0}".format(round(pygsti.logl(mdl_mle_lowcnts, dsLowCounts, lsgstListOfLists[-1]), 4)))
LOW COUNT DATA:
Frobenius diff btwn MLGST  and datagen = 0.009468
Frobenius diff btwn MC2GST and datagen = 0.009511
Frobenius diff btwn MLGST  and LGST    = 0.013287
Frobenius diff btwn MLGST  and MC2GST  = 0.009869
Chi^2 ( MC2GST ) = 1257.068
Chi^2 ( MLGST )  = 1264.5518
LogL  ( MC2GST ) = -106884.0807
LogL  ( MLGST )  = -106880.2052