Institute for Quantum Computing

**Joint Work with** Christopher Ferrie and D. G. Cory

Producing useful quantum information devices requires efficiently
assessing control of quantum systems, so that
we can determine whether we have implemented a desired *gate*,
and refine accordingly.
*Randomized benchmarking* uses symmetry to reduce the difficulty of this task.

We bound the resources required for benchmarking and show that,
with prior information, accuracy can be improved by *orders of magnitude*.
We reach these accuracies with near-optimal resources, improving dramatically
on conventional curve fitting.
Finally, we show that our approach is useful for physical devices
by comparing to simulations.

Slides, references and source code are available at https://www.cgranade.com/research/arb/. $\renewcommand{\vec}[1]{\boldsymbol{#1}}$ $\newcommand{\ket}[1]{\left|#1\right\rangle}$ $\newcommand{\dd}{\mathrm{d}}$ $\newcommand{\expect}{\mathbb{E}}$ $\newcommand{\matr}[1]{\mathbf{#1}}$ $\newcommand{\T}{\mathrm{T}}$

To compile these slides, we use **nbconvert**.

In [3]:

```
!ipython nbconvert --to slides --template slides.tpl slides.ipynb
!mv slides.slides.html slides.html
```

If you want to view them in your browser complete with speaker notes, remote control support, etc., then you need to host the slides. The instructions for Reveal.js include directions for hosting via a library called Grunt. Unfortunately, this doesn't work well with remot.io, as that tool requires that you serve from port 80.

Since we're going to display some `<iframe>`s in this talk, we'll need to import the display functionality from IPython and write a small function. These have no part in the talk itself, so we mark these cells as *Skip* in the Cell Toolbar.

In [2]:

```
from IPython.display import HTML
```

In [3]:

```
def iframe(src):
    return HTML('<iframe src="{}" width=1000 height=400></iframe>'.format(src))
```

Fully characterizing large quantum systems is very difficult.

- Exponentially many parameters.
- Classical simulation of large quantum systems is intractable.
- State preparation and measurement (SPAM) errors.

For some applications, fidelity alone can be useful. Ex:

- Comparison to adversarial thresholds.
- Feedback for refining control.

Fidelity isn't the full story, though (Puzzuoli et al, PRA **89** 022306),
so some care is needed.

- The Clifford group is a unitary 2-design, so conjugating a map $\Lambda$ by random Cliffords samples from the Haar averaged channel $$ W[\Lambda](\rho) = \int U \Lambda(U^\dagger \rho U) U^\dagger\ \dd U. $$
- $W$ is a *superchannel* that maps quantum channels to *depolarizing* channels that preserve their inputs with probability $$ p = \frac{d F - 1}{d - 1}, $$ where $F$ is the fidelity of $\Lambda$.
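As a quick numerical illustration (not part of the original slides), the relation between fidelity and depolarizing parameter can be checked directly:

```python
# Quick numerical check (illustrative, not from the slides) of the
# depolarizing relation p = (d * F - 1) / (d - 1).
def depolarizing_parameter(F, d):
    """Depolarizing parameter for a dimension-d channel with fidelity F."""
    return (d * F - 1) / (d - 1)

# A single-qubit (d = 2) gate with average fidelity F = 0.995:
print(depolarizing_parameter(0.995, 2))  # 0.99 (up to float rounding)
```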

Knill et al., PRA **77** 012307 (2008). Magesan, Gambetta and Emerson, PRA **85** 042311 (2012). Wood, in preparation.

- Can't tell with one instance how much of the error is due to SPAM, and how much is due to imperfect twirling.
- The *decay* of fidelity with sequence length gives information, even with SPAM and imperfect gates.
- Choose each sequence such that the ideal action is the identity.

- Used in a wide range of experiments to characterize gatesets.

$$ F_g(m) = A p^m + B. $$

- $F_g(m)$ is the average fidelity of a sequence of length $m$.
- $A$ and $B$ are constants describing SPAM errors.
- $1 - p$ is then the depolarizing strength of $$ W[\Lambda] = W\left[\expect_{C \sim \mathcal{C}} [\Lambda_C] \right], $$ taken over the implementation of each Clifford gate $C$.
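The decay model above is simple enough to transcribe directly; a small sketch with illustrative parameter values:

```python
# A direct transcription of the decay model (illustrative values only).
def rb_decay(m, p, A, B):
    """Average sequence fidelity F_g(m) = A * p**m + B."""
    return A * p**m + B

# With a perfect twirled gateset (p = 1), fidelity stays flat at A + B:
print(rb_decay(100, 1.0, 0.3, 0.5))  # 0.8
```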

Knill et al., PRA **77** 012307 (2008). Magesan, Gambetta and Emerson, PRA **85** 042311 (2012).

- Can probe the performance of a desired gate by *interleaving* it into the twirling sequences.
- Effective depolarizing parameter $p_{\bar{\mathcal{C}}} := \tilde{p} p_{\text{ref}}$.
- $\tilde{p}$: depolarizing parameter for the gate of interest.
- $p_{\text{ref}}$: expected depolarizing parameter, taken over gateset.

For example, to measure the fidelity of $S_C$:

Magesan et al., PRL **109** 080505 (2012).

$\tilde{p} = 0.99994$, $p_{\text{ref}} = 0.99999$
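Using the quoted value of $\tilde{p}$, the average gate fidelity follows by inverting the depolarizing relation, $F = ((d - 1) p + 1)/d$; a quick single-qubit sketch:

```python
# Illustrative check using the quoted values: F = ((d - 1) * p + 1) / d
# inverts the depolarizing-parameter relation for dimension d.
d = 2
p_tilde = 0.99994
F_gate = ((d - 1) * p_tilde + 1) / d
print(F_gate)  # approximately 0.99997
```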

- Randomized benchmarking data is 0/1 data: was the initial state preserved by a random sequence of length $m$ or not?
- Conventional approach: estimate $\hat{F}_g(m)$ using $K$ sequences for each of many $m$.
- Epstein et al. (arXiv:1308.2928) investigate error bounds due to finite sampling.
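The 0/1 character of the data is easy to simulate; a sketch with hypothetical parameter values (not from the talk), drawing Bernoulli outcomes from the decay model:

```python
import random

# Illustrative simulation (hypothetical parameters) of the 0/1 outcomes that
# randomized benchmarking records: did a random sequence of length m
# preserve the initial state?
def simulate_rb_data(m, p, A, B, shots, rng):
    survival = A * p**m + B  # Pr(initial state preserved)
    return [1 if rng.random() < survival else 0 for _ in range(shots)]

rng = random.Random(0)
data = simulate_rb_data(m=50, p=0.99, A=0.3, B=0.5, shots=10000, rng=rng)
print(sum(data) / len(data))  # close to 0.3 * 0.99**50 + 0.5, about 0.68
```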

- It is known from frequency-estimation examples that the 1 bit/experiment limit can be useful.

- We extend this by using the Fisher information to investigate the limit as $K \to 1$.
The mean (taken over data) error matrix $\matr{E}(\vec{x}) := \expect_D[(\hat{\vec{x}} - \vec{x}) (\hat{\vec{x}} - \vec{x})^\T]$ for *unbiased* estimators is bounded by the Cramer-Rao bound:
$$ \matr{E}(\vec{x}) \ge \matr{I}^{-1}(\vec{x}), $$
where
$$ \matr{I}(\vec{x}) := \expect_{D | \vec{x}} [\nabla_{\vec{x}} \log \Pr(D | \vec{x}) \cdot \nabla_{\vec{x}}^\T \log \Pr(D | \vec{x})] $$
is the Fisher information at $\vec{x}$. Here, $\vec{x} = (p, A, B)$ or $(\tilde{p}, p_{\text{ref}}, A, B)$ are the unknown parameters being estimated.

Ferrie and Granade, QIP **12** 611 (2012).

- Using Fisher information, we can choose most informative $m$ for hypotheses about $A$ and $B$.
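As a toy sketch of this idea (hypothetical parameter values, reducing to the scalar case of estimating $p$ with $A$ and $B$ fixed): for binary survival data with $q(m) = A p^m + B$, the Fisher information about $p$ is $I_p(m) = (\partial q/\partial p)^2 / (q(1-q))$, which can be scanned over $m$:

```python
# Toy sketch (hypothetical values): Fisher information about p alone for
# binary survival data with q(m) = A * p**m + B, at sequence length m:
#   I_p(m) = (dq/dp)**2 / (q * (1 - q)).
def fisher_info_p(m, p, A, B):
    q = A * p**m + B
    dq_dp = A * m * p**(m - 1)
    return dq_dp**2 / (q * (1 - q))

p, A, B = 0.99, 0.5, 0.45
# Scan sequence lengths to find the most informative m for this hypothesis:
best_m = max(range(1, 1000), key=lambda m: fisher_info_p(m, p, A, B))
print(best_m)
```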

In practice, we often have prior information. Demanding *unbiased* estimators is too strong.

Let's take a Bayesian approach instead. After observing a datum $d$ taken from a sequence of length $m$: $$ \Pr(\vec{x} | d; m) = \frac{\Pr(d | \vec{x}; m)}{\Pr(d | m)} \Pr(\vec{x}). $$

We can implement this on a computer using sequential Monte Carlo (SMC). For example, to incorporate a uniform prior:

In [9]:

```
from qinfer.smc import SMCUpdater
from qinfer.rb import RandomizedBenchmarkingModel
from qinfer.distributions import UniformDistribution
prior = UniformDistribution([[0.9, 1], [0.4, 0.5], [0.5, 0.6]])
updater = SMCUpdater(RandomizedBenchmarkingModel(), 10000, prior)
# As data arrives:
# updater.update(datum, experiment)
```
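Schematically, a single SMC update reweights a cloud of hypothesis "particles" by the likelihood of the observed datum. The following toy sketch illustrates the mechanics only; it is not QInfer's actual implementation:

```python
import random

# Toy sketch of one SMC (particle filter) Bayes update -- illustrative only,
# not QInfer's implementation.  Each particle is a hypothesis (p, A, B) with
# a weight; observing datum d at sequence length m reweights each particle
# by its likelihood under the decay model.
def smc_update(particles, weights, d, m):
    new_weights = []
    for (p, A, B), w in zip(particles, weights):
        # Survival probability under this hypothesis, clamped to [0, 1]
        # since the toy prior allows A + B slightly above 1.
        survival = min(max(A * p**m + B, 0.0), 1.0)
        likelihood = survival if d == 1 else 1.0 - survival
        new_weights.append(w * likelihood)
    total = sum(new_weights)
    return [w / total for w in new_weights]

rng = random.Random(1)
particles = [(rng.uniform(0.9, 1.0), rng.uniform(0.4, 0.5), rng.uniform(0.5, 0.6))
             for _ in range(1000)]
weights = [1.0 / len(particles)] * len(particles)
weights = smc_update(particles, weights, d=1, m=100)
print(abs(sum(weights) - 1) < 1e-9)  # True: weights stay normalized
```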

Granade et al., NJP **14** 103013 (2012).

With prior information, we need the *Bayesian* Cramer-Rao Bound,
$$
\expect_{\vec{x}} [\matr{E}(\vec{x})] \ge \matr{J}^{-1},
$$
where
$$
\matr{J} := \expect_{\vec{x}} [\matr{I}(\vec{x})]
$$
is the Bayesian information matrix.
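In the scalar case, $\matr{J}$ reduces to averaging the Fisher information over prior draws; a toy Monte Carlo sketch with hypothetical values (not how QInfer computes the full BIM):

```python
import random

# Toy Monte Carlo sketch (hypothetical values) of the scalar, p-only Bayesian
# information J = E_x[I(x)]: average the Fisher information over prior draws.
def fisher_info_p(m, p, A, B):
    q = A * p**m + B
    return (A * m * p**(m - 1))**2 / (q * (1 - q))

rng = random.Random(2)
m = 50
prior_draws = [(rng.uniform(0.9, 0.99), 0.3, 0.5) for _ in range(5000)]
J = sum(fisher_info_p(m, p, A, B) for p, A, B in prior_draws) / len(prior_draws)
print(J > 0)  # True
```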

This, too, can be computed using SMC.

In [5]:

```
from qinfer.smc import SMCUpdaterBCRB
# Uses the model and prior defined in the previous cell.
updater = SMCUpdaterBCRB(RandomizedBenchmarkingModel(), 10000, prior)
# As data arrives, the BCRB is given by:
# updater.current_bim
```

Ferrie and Granade, QIP **12** 611 (2012). Granade et al., NJP **14** 103013 (2012).

The SMC-accelerated algorithm outperforms least-squares fitting, especially with small amounts of data.

This advantage persists as the maximum sequence length is varied.

To show that SMC acceleration is experimentally useful, we use a prior that is approximately 7 standard deviations away from the correct values for a cumulant-simulated gateset.

The data was simulated using the methods of Puzzuoli et al, PRA **89** 022306.

Even with a significantly bad prior, SMC does quite well.

$$\begin{array}{l|cccc} & \tilde{p} & p_{\text{ref}} & A_0 & B_0 \\ \hline \text{True} & 0.9983 & 0.9957 & 0.3185 & 0.5012 \\ \text{SMC Estimate} & 0.9940 & 0.9968 & 0.3071 & 0.5134 \\ \text{LSF Estimate} & 0.9947 & 0.9972 & 0.3369 & 0.4820 \\ \hline \text{SMC Error} & 0.0043 & 0.0011 & 0.0113 & 0.0122 \\ \text{LSF Error} & 0.0036 & 0.0015 & 0.0184 & 0.0192 \end{array}$$

Due to the bad prior, it doesn't outperform least-squares fitting in this case for $\tilde{p}$, but it does very well at $p_{\text{ref}}$, $A$ and $B$, lending credibility to the estimate.

We have developed a flexible and easy-to-use Python library, **QInfer**, for implementing SMC-based applications.

In [11]:

```
iframe("http://python-qinfer.readthedocs.org/en/latest/")
```


- Randomized benchmarking allows for efficiently extracting useful information about a channel.
- Bayesian inference dramatically reduces the data required for randomized benchmarking.
- SMC produces nearly optimal estimates.
- Our method is robust to bad priors, and works on physically-reasonable models.

Full reference information is available on Zotero.

In [7]:

```
iframe('https://www.zotero.org/cgranade/items/collectionKey/2NQVPRK9')
```
