Exporting a model from PyTorch to ONNX

In this tutorial, we describe how to convert a model defined in PyTorch into the ONNX format.

The ONNX exporter is part of the PyTorch repository.

To work through this tutorial, you will need to install onnx. You can get binary builds of onnx with `conda install -c conda-forge onnx`.

NOTE: ONNX is under active development, so for the best support consider building PyTorch from the master branch, which can be installed by following the instructions here

Invoking the exporter

For the most part, exporting is a matter of replacing `my_model(input)` with `torch.onnx.export(my_model, input, "my_model.onnx")` in your script.
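
Concretely, here is a minimal sketch of that replacement, assuming `my_model` is a `torch.nn.Module` and `input` is a `Variable` of a shape the model accepts:

# Before: run the model as usual.
output = my_model(input)

# After: trace the same call and write the resulting graph to a file.
torch.onnx.export(my_model, input, "my_model.onnx")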

Limitations

The ONNX exporter is a trace-based exporter, which means that it operates by executing your model once and exporting the operators that were actually run during that execution. This means that if your model is dynamic, e.g., changes behavior depending on input data, the export won't be accurate.
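
For example, a model with data-dependent control flow will have only the branch taken during tracing baked into the exported graph. A minimal sketch (the `DynamicModel` class here is made up for illustration, written in the same Variable-era style as the rest of this tutorial):

import torch
import torch.nn as nn
from torch.autograd import Variable

class DynamicModel(nn.Module):
    def forward(self, x):
        # Data-dependent branch: the trace records only the branch
        # taken for the example input.
        if x.sum().data[0] > 0:
            return x * 2
        return -x

# Because tracing saw a positive-sum input, the exported graph will
# always compute `x * 2`, even for inputs where the Python code
# would have returned `-x`.
torch.onnx.export(DynamicModel(), Variable(torch.ones(2, 3)), "dynamic.onnx")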

Similarly, a trace might be valid only for a specific input size (which is one reason why we require explicit inputs on tracing). Most operators export size-agnostic versions and should work on different batch sizes or input sizes. We recommend examining the model trace and making sure the traced operators look reasonable.
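
One way to examine the trace is to pass the `verbose` flag (documented in the help text below), which prints a debug description of the trace as it is exported:

import torch.onnx

# `my_model` and `input` are placeholders, as in the snippet above.
torch.onnx.export(my_model, input, "my_model.onnx", verbose=True)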

In [1]:
import torch.onnx
help(torch.onnx.export)
Help on function export in module torch.onnx:

export(model, args, f, export_params=True, verbose=False, training=False)
    Export a model into ONNX format.  This exporter runs your model
    once in order to get a trace of its execution to be exported; at the
    moment, it does not support dynamic models (e.g., RNNs.)
    
    See also: :ref:`onnx-export`
    
    Arguments:
        model (torch.nn.Module): the model to be exported.
        args (tuple of arguments): the inputs to
            the model, e.g., such that ``model(*args)`` is a valid
            invocation of the model.  Any non-Variable arguments will
            be hard-coded into the exported model; any Variable arguments
            will become inputs of the exported model, in the order they
            occur in args.  If args is a Variable, this is equivalent
            to having called it with a 1-ary tuple of that Variable.
            (Note: passing keyword arguments to the model is not currently
            supported.  Give us a shout if you need it.)
        f: a file-like object (has to implement fileno that returns a file descriptor)
            or a string containing a file name.  A binary Protobuf will be written
            to this file.
        export_params (bool, default True): if specified, all parameters will
            be exported.  Set this to False if you want to export an untrained model.
            In this case, the exported model will first take all of its parameters
            as arguments, the ordering as specified by ``model.state_dict().values()``
        verbose (bool, default False): if specified, we will print out a debug
            description of the trace being exported.
        training (bool, default False): export the model in training mode.  At
            the moment, ONNX is oriented towards exporting models for inference
            only, so you will generally not need to set this to True.
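
As a quick illustration of the `export_params` flag documented above, you can export a model without baking in its weights; the parameters then become additional inputs of the exported graph, ordered as in `model.state_dict().values()`. A sketch, using an untrained AlexNet:

import torch
import torchvision
from torch.autograd import Variable

# Export without stored parameters: each weight becomes an extra
# graph input instead of a stored value.
untrained = torchvision.models.alexnet()
dummy = Variable(torch.randn(1, 3, 224, 224))
torch.onnx.export(untrained, dummy, "alexnet_untrained.onnx",
                  export_params=False)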

Trying it out on AlexNet

If you already have your model built, it's just a few lines:

In [2]:
from torch.autograd import Variable
import torch.onnx
import torchvision

# Standard ImageNet input: 3 channels, 224x224. The values don't
# matter, since we only care about the network structure, but they
# can also be real inputs.
dummy_input = Variable(torch.randn(1, 3, 224, 224))
# Obtain your model; it can also be constructed explicitly in your script.
model = torchvision.models.alexnet(pretrained=True)
# Invoke export
torch.onnx.export(model, dummy_input, "alexnet.onnx")

That's it!

Inspecting the model

You can also use ONNX tooling to check the validity of the resulting model or to inspect its details.

In [3]:
import onnx

# Load the ONNX model
model = onnx.load("alexnet.onnx")

# Check that the IR is well formed
onnx.checker.check_model(model)

# Print a human readable representation of the graph
print(onnx.helper.printable_graph(model.graph))
graph torch-jit-export (
  %0[FLOAT, 1x3x224x224]
) initializers (
  %1[FLOAT, 64x3x11x11]
  %2[FLOAT, 64]
  %3[FLOAT, 192x64x5x5]
  %4[FLOAT, 192]
  %5[FLOAT, 384x192x3x3]
  %6[FLOAT, 384]
  %7[FLOAT, 256x384x3x3]
  %8[FLOAT, 256]
  %9[FLOAT, 256x256x3x3]
  %10[FLOAT, 256]
  %11[FLOAT, 4096x9216]
  %12[FLOAT, 4096]
  %13[FLOAT, 4096x4096]
  %14[FLOAT, 4096]
  %15[FLOAT, 1000x4096]
  %16[FLOAT, 1000]
) {
  %17 = Conv[dilations = [1, 1], group = 1, kernel_shape = [11, 11], pads = [2, 2, 2, 2], strides = [4, 4]](%0, %1)
  %18 = Add[axis = 1, broadcast = 1](%17, %2)
  %19 = Relu(%18)
  %20 = MaxPool[kernel_shape = [3, 3], pads = [0, 0], strides = [2, 2]](%19)
  %21 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%20, %3)
  %22 = Add[axis = 1, broadcast = 1](%21, %4)
  %23 = Relu(%22)
  %24 = MaxPool[kernel_shape = [3, 3], pads = [0, 0], strides = [2, 2]](%23)
  %25 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%24, %5)
  %26 = Add[axis = 1, broadcast = 1](%25, %6)
  %27 = Relu(%26)
  %28 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%27, %7)
  %29 = Add[axis = 1, broadcast = 1](%28, %8)
  %30 = Relu(%29)
  %31 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%30, %9)
  %32 = Add[axis = 1, broadcast = 1](%31, %10)
  %33 = Relu(%32)
  %34 = MaxPool[kernel_shape = [3, 3], pads = [0, 0], strides = [2, 2]](%33)
  %35 = Reshape[shape = [1, 9216]](%34)
  %36, %37 = Dropout[is_test = 1, ratio = 0.5](%35)
  %38 = Transpose[perm = [1, 0]](%11)
  %40 = Gemm[alpha = 1, beta = 1, broadcast = 1](%36, %38, %12)
  %41 = Relu(%40)
  %42, %43 = Dropout[is_test = 1, ratio = 0.5](%41)
  %44 = Transpose[perm = [1, 0]](%13)
  %46 = Gemm[alpha = 1, beta = 1, broadcast = 1](%42, %44, %14)
  %47 = Relu(%46)
  %48 = Transpose[perm = [1, 0]](%15)
  %50 = Gemm[alpha = 1, beta = 1, broadcast = 1](%47, %48, %16)
  return %50
}

Notice that all parameters are listed as the graph's inputs, but they also have stored values, kept in `model.graph.initializer`.
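
For instance, you can list the stored parameter tensors and their shapes via the standard ONNX protobuf fields:

# Each initializer is a TensorProto with a name and dims.
for init in model.graph.initializer:
    print(init.name, list(init.dims))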

What's next

Check out the [PyTorch documentation on ONNX](http://pytorch.org/docs/master/onnx.html).
Take a look at [other tutorials, including importing ONNX models into other frameworks](https://github.com/onnx/tutorials/tree/master/tutorials).