Compile ONNX Models

Author: Joshua Z. Zhang

This article is an introductory tutorial to deploy ONNX models with Relay.

To begin, the ONNX package must be installed.

A quick solution is to install the protobuf compiler, and then run

pip install onnx --user

or refer to the official site: https://github.com/onnx/onnx
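
If you want to confirm that the package is importable before continuing, a quick check such as the following prints the installed version (the exact version depends on your environment):

import onnx

# Verify that ONNX is importable and print its version.
print(onnx.__version__)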

import onnx
import numpy as np
import tvm
from tvm import te
import tvm.relay as relay
from tvm.contrib.download import download_testdata

Load pretrained ONNX model

The example super-resolution model used here is exactly the same model as in the ONNX tutorial at http://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html. We skip the PyTorch model construction part and instead download the saved ONNX model.

model_url = "".join(
    [
        "https://gist.github.com/zhreshold/",
        "bcda4716699ac97ea44f791c24310193/raw/",
        "93672b029103648953c4e5ad3ac3aadf346a4cdc/",
        "super_resolution_0.2.onnx",
    ]
)
model_path = download_testdata(model_url, "super_resolution.onnx", module="onnx")
# now you have super_resolution.onnx on disk
onnx_model = onnx.load(model_path)

Out:

File /workspace/.tvm_test_data/onnx/super_resolution.onnx exists, skip.
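
Before importing, it can help to look up the graph's declared input names, since the Relay importer below needs a shape dictionary keyed by input name. A minimal sketch using the standard ONNX protobuf API (for this model the single input is named "1"):

# Print the names of the graph's declared inputs; the Relay importer
# below expects a shape dictionary keyed by these names.
for graph_input in onnx_model.graph.input:
    print(graph_input.name)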

Load a test image

A single cat dominates the examples!

from PIL import Image

img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
img_path = download_testdata(img_url, "cat.png", module="data")
img = Image.open(img_path).resize((224, 224))
img_ycbcr = img.convert("YCbCr")  # convert to YCbCr
img_y, img_cb, img_cr = img_ycbcr.split()
x = np.array(img_y)[np.newaxis, np.newaxis, :, :]

Out:

File /workspace/.tvm_test_data/data/cat.png exists, skip.
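
Only the luma (Y) channel is fed to the network, in NCHW layout. For the 224x224 input the preprocessed tensor therefore has shape (1, 1, 224, 224), which is the shape passed to the importer below:

# Batch, channel, height, width: the Y channel in NCHW layout.
print(x.shape)  # (1, 1, 224, 224)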

Compile the model with relay

target = "llvm"

# The input name may vary across models; this graph's single input is named "1".
input_name = "1"
shape_dict = {input_name: x.shape}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

with tvm.transform.PassContext(opt_level=1):
    intrp = relay.build_module.create_executor("graph", mod, tvm.cpu(0), target)

Out:

/workspace/docs/../python/tvm/relay/frontend/onnx.py:2736: UserWarning: Mismatched attribute type in ' : kernel_shape'

==> Context: Bad node spec: input: "1" input: "2" output: "11" op_type: "Conv" attribute { name: "kernel_shape" ints: 5 ints: 5 } attribute { name: "strides" ints: 1 ints: 1 } attribute { name: "pads" ints: 2 ints: 2 ints: 2 ints: 2 } attribute { name: "dilations" ints: 1 ints: 1 } attribute { name: "group" i: 1 }
  warnings.warn(str(e))
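
The warning above comes from the ONNX checker and does not prevent compilation for this model. As an alternative to the interpreter-style executor, the module can also be built ahead of time and run through a graph executor. This is only a sketch, and it assumes a TVM build that provides tvm.contrib.graph_executor (older releases expose the same interface under tvm.contrib.graph_runtime):

from tvm.contrib import graph_executor

# Build the Relay module into a deployable library and load it into a
# graph executor on the local CPU.
with tvm.transform.PassContext(opt_level=1):
    lib = relay.build(mod, target=target, params=params)
module = graph_executor.GraphModule(lib["default"](tvm.cpu(0)))
module.set_input(input_name, tvm.nd.array(x.astype("float32")))
module.run()
out = module.get_output(0).asnumpy()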

Execute on TVM

dtype = "float32"
# Run the compiled module on the preprocessed Y channel and convert the
# result back to a NumPy array.
tvm_output = intrp.evaluate()(tvm.nd.array(x.astype(dtype)), **params).asnumpy()
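
The network upscales the Y channel by a factor of 3, so for the 224x224 input the output tensor has shape (1, 1, 672, 672); the canvas in the next step is sized accordingly. A quick check:

# The output is a 3x upscaled Y channel: (1, 1, 672, 672).
print(tvm_output.shape)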

Display results

We put the input and output images side by side.

from matplotlib import pyplot as plt

# Upscaled luma channel from the network output.
out_y = Image.fromarray(np.uint8((tvm_output[0, 0]).clip(0, 255)), mode="L")
# Upsample the chroma channels to match, then merge back to RGB.
out_cb = img_cb.resize(out_y.size, Image.BICUBIC)
out_cr = img_cr.resize(out_y.size, Image.BICUBIC)
result = Image.merge("YCbCr", [out_y, out_cb, out_cr]).convert("RGB")
# Place the original (top-left) and the super-resolved result side by side.
canvas = np.full((672, 672 * 2, 3), 255)
canvas[0:224, 0:224, :] = np.asarray(img)
canvas[:, 672:, :] = np.asarray(result)
plt.imshow(canvas.astype(np.uint8))
plt.show()
[Figure: the original input image (left) and the super-resolved output (right)]

Notes

By default, ONNX defines models in terms of dynamic shapes. The ONNX importer retains that dynamism upon import, and the compiler attempts to convert the model to static shapes at compile time. If this fails, there may still be dynamic operations in the model. Not all TVM kernels currently support dynamic shapes; please file an issue on discuss.tvm.apache.org if you hit an error with dynamic kernels.
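
If you want to trigger that conversion explicitly, recent TVM versions ship a DynamicToStatic pass that can be applied to the imported module before building; a minimal sketch (whether it removes all dynamism depends on the model):

# Attempt to replace dynamic operations with static equivalents
# before compiling the module.
mod = relay.transform.DynamicToStatic()(mod)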
