tvm.relay.testing

Utilities for testing and benchmarks

Functions

add_nat_definitions(prelude)

Given a Relay prelude, adds a Peano nat ADT, as well as functions for adding nats and doubling nats.

check_grad(func[, inputs, eps, atol, rtol, …])

Perform numerical gradient checking given a relay function.

count(prelude, n)

Takes a ConstructorValue corresponding to a nat ADT and converts it into a Python integer.

create_workload(net[, initializer, seed])

Helper function to create benchmark image classification workload.

ctx_list()

Get the context list for test cases

gradient(expr[, mod, mode])

Transform the input function, returning a function that calculates the original result, paired with the gradient of the input.

make_nat_expr(prelude, n)

Given a non-negative Python integer, constructs a Python expression representing that integer’s value as a nat.

make_nat_value(prelude, n)

The inverse of count(): Given a non-negative Python integer, constructs a ConstructorValue representing that value as a nat.

run_as_python(expr[, mod, target])

Converts the given Relay expression into a Python script and executes it.

to_python(expr[, mod, target])

Converts the given Relay expression into a Python script (as a Python AST object).

tvm.relay.testing.ctx_list()

Get the context list for test cases

tvm.relay.testing.create_workload(net, initializer=None, seed=0)

Helper function to create benchmark image classification workload.

Parameters
  • net (tvm.relay.Function) – The selected function of the network.

  • initializer (Initializer) – The initializer used

  • seed (int) – The seed used in initialization.

Returns

  • mod (tvm.IRModule) – The created relay module.

  • params (dict of str to NDArray) – The parameters.
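As a usage sketch (assuming a TVM build with Relay available; not runnable without it), create_workload is typically paired with one of the bundled network constructors:

```python
# Usage sketch only: requires a TVM installation with Relay.
from tvm.relay.testing import mlp, create_workload

# Build the Relay function for a small MLP, then turn it into a
# benchmark workload with deterministically initialized parameters.
net = mlp.get_net(batch_size=1, num_classes=10,
                  image_shape=(1, 28, 28), dtype="float32")
mod, params = create_workload(net, seed=0)

print(mod)           # the created Relay module
print(list(params))  # parameter names, each mapped to an NDArray
```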

tvm.relay.testing.add_nat_definitions(prelude)

Given a Relay prelude, adds a Peano nat ADT, as well as functions for adding nats and doubling nats. It also adds versions of update, nth, and iterate that take nats instead of scalars (the names are prefixed with nat_).

tvm.relay.testing.count(prelude, n)

Takes a ConstructorValue corresponding to a nat ADT and converts it into a Python integer. This is an example of using an ADT value in Python.

tvm.relay.testing.make_nat_value(prelude, n)

The inverse of count(): Given a non-negative Python integer, constructs a ConstructorValue representing that value as a nat.

tvm.relay.testing.make_nat_expr(prelude, n)

Given a non-negative Python integer, constructs a Python expression representing that integer’s value as a nat.
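To make the nat encoding concrete, here is a plain-Python analogue (not the Relay ADT itself) of what count() and make_nat_value() do: a Peano nat is either zero or the successor of another nat.

```python
# Plain-Python analogue of the Peano nat ADT; the real functions
# operate on Relay ConstructorValues, not on these tuples.
Z = ("z",)                      # zero

def s(n):                       # successor constructor
    return ("s", n)

def make_nat(n):
    """Analogue of make_nat_value: non-negative int -> nat."""
    nat = Z
    for _ in range(n):
        nat = s(nat)
    return nat

def count(nat):
    """Analogue of count(): nat -> non-negative int."""
    total = 0
    while nat[0] == "s":
        total += 1
        nat = nat[1]
    return total

print(count(make_nat(5)))  # 5
```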

tvm.relay.testing.to_python(expr: tvm.ir.expr.RelayExpr, mod=None, target='llvm')

Converts the given Relay expression into a Python script (as a Python AST object). For easiest debugging, import the astor package and use to_source().

tvm.relay.testing.run_as_python(expr: tvm.ir.expr.RelayExpr, mod=None, target='llvm')

Converts the given Relay expression into a Python script and executes it.
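run_as_python is essentially to_python followed by compiling and executing the generated AST. The compile-and-run step can be sketched with only the standard library (the AST below is hand-built for illustration, not produced from a Relay expression):

```python
import ast

# Hand-built Python AST for `result = 2 + 3` -- a stand-in for the
# AST that to_python would generate from a Relay expression.
tree = ast.Module(
    body=[ast.Assign(
        targets=[ast.Name(id="result", ctx=ast.Store())],
        value=ast.BinOp(left=ast.Constant(2), op=ast.Add(),
                        right=ast.Constant(3)))],
    type_ignores=[])
ast.fix_missing_locations(tree)  # fill in required lineno/col_offset

namespace = {}
exec(compile(tree, filename="<relay>", mode="exec"), namespace)
print(namespace["result"])  # 5
```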

tvm.relay.testing.gradient(expr, mod=None, mode='higher_order')

Transform the input function, returning a function that calculates the original result, paired with the gradient of the input.

Parameters
  • expr (tvm.relay.Expr) – The input expression, which is a Function or a GlobalVar.

  • mod (Optional[tvm.IRModule]) –

  • mode (Optional[String]) – The mode of the automatic differentiation algorithm. ‘first_order’ only works on first order code, but will not produce reference nor closure. ‘higher_order’ works on all code using reference and closure.

Returns

expr – The transformed expression.

Return type

tvm.relay.Expr
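For intuition about what the 'first_order' mode computes, here is a minimal dual-number forward-mode AD sketch in plain Python (Relay's implementation is a source-to-source transform on the IR, not arithmetic on Python values):

```python
class Dual:
    """A value paired with its derivative, for forward-mode AD."""
    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def __add__(self, other):
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.grad + other.grad)

    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.grad * other.val + self.val * other.grad)

def f(x):
    return x * x + x        # f(x) = x^2 + x, so f'(x) = 2x + 1

x = Dual(3.0, 1.0)          # seed the input's derivative with 1
y = f(x)
print(y.val, y.grad)        # 12.0 7.0
```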

tvm.relay.testing.check_grad(func, inputs=None, eps=1e-06, atol=1e-05, rtol=0.001, scale=None, mean=0)

Perform numerical gradient checking given a relay function.

Compare analytical gradients to numerical gradients derived from two-sided approximation. Note that this test may fail if your function input types are not of high enough precision.

Parameters
  • func (tvm.relay.Function) – The relay function to test.

  • inputs (List[np.array]) – Optional user-provided input parameters to use. If not given, will generate random normal inputs scaled to be close to the chosen epsilon value to avoid numerical precision loss.

  • eps (float) – The epsilon value to use for computing numerical gradient approximation.

  • atol (float) – The absolute tolerance on difference between numerical and analytical gradients. Note that this needs to be scaled appropriately relative to the chosen eps and inputs.

  • rtol (float) – The relative tolerance on difference between numerical and analytical gradients. Note that this needs to be scaled appropriately relative to the chosen eps.

  • scale (float) – The standard deviation of the inputs.

  • mean (float) – The mean of the inputs.
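The two-sided approximation behind check_grad can be sketched in plain Python; here the "analytical" gradient is supplied by hand rather than by Relay's automatic differentiation:

```python
def numerical_grad(f, xs, eps=1e-6):
    """Two-sided (central) difference:
    df/dx_i ~= (f(x + eps*e_i) - f(x - eps*e_i)) / (2*eps)."""
    grads = []
    for i in range(len(xs)):
        hi = list(xs); hi[i] += eps
        lo = list(xs); lo[i] -= eps
        grads.append((f(hi) - f(lo)) / (2 * eps))
    return grads

def check_grad_sketch(f, analytical_grad, xs, atol=1e-5, rtol=1e-3):
    """Compare numerical against analytical gradients, elementwise."""
    for num, ana in zip(numerical_grad(f, xs), analytical_grad(xs)):
        assert abs(num - ana) <= atol + rtol * abs(ana), (num, ana)

# f(x, y) = x*y + x  ->  df/dx = y + 1, df/dy = x
f = lambda v: v[0] * v[1] + v[0]
grad = lambda v: [v[1] + 1, v[0]]
check_grad_sketch(f, grad, [2.0, 3.0])  # passes silently
```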

a simple multilayer perceptron

tvm.relay.testing.mlp.get_net(batch_size, num_classes=10, image_shape=(1, 28, 28), dtype='float32')

Get the network of a simple multilayer perceptron.

Parameters
  • batch_size (int) – The batch size used in the model

  • num_classes (int, optional) – Number of classes

  • image_shape (tuple, optional) – The input image shape

  • dtype (str, optional) – The data type

Returns

net – The dataflow.

Return type

relay.Function

tvm.relay.testing.mlp.get_workload(batch_size, num_classes=10, image_shape=(1, 28, 28), dtype='float32')

Get benchmark workload for a simple multilayer perceptron.

Parameters
  • batch_size (int) – The batch size used in the model

  • num_classes (int, optional) – Number of classes

  • image_shape (tuple, optional) – The input image shape

  • dtype (str, optional) – The data type

Returns

  • mod (tvm.IRModule) – The relay module that contains an MLP network.

  • params (dict of str to NDArray) – The parameters.

Adapted from https://github.com/tornadomeet/ResNet/blob/master/symbol_resnet.py (original author: Wei Wu).

Implemented the following paper:

Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. “Identity Mappings in Deep Residual Networks”

tvm.relay.testing.resnet.residual_unit(data, num_filter, stride, dim_match, name, bottle_neck=True, data_layout='NCHW', kernel_layout='IOHW')

Return a ResNet unit symbol for building ResNet.

Parameters
  • data (str) – Input data

  • num_filter (int) – Number of output channels

  • bnf (int) – Bottleneck channel factor relative to num_filter

  • stride (tuple) – Stride used in convolution

  • dim_match (bool) – True if the input and output have the same number of channels; False otherwise

  • name (str) – Base name of the operators

tvm.relay.testing.resnet.resnet(units, num_stages, filter_list, num_classes, data_shape, bottle_neck=True, layout='NCHW', dtype='float32')

Return ResNet Program.

Parameters
  • units (list) – Number of units in each stage

  • num_stages (int) – Number of stages

  • filter_list (list) – Channel size of each stage

  • num_classes (int) – Output size of the symbol

  • data_shape (tuple of int.) – The shape of input data.

  • bottle_neck (bool) – Whether to apply the bottleneck transformation.

  • layout (str) – The data layout for conv2d

  • dtype (str) – The global data type.

tvm.relay.testing.resnet.get_net(batch_size, num_classes, num_layers=50, image_shape=(3, 224, 224), layout='NCHW', dtype='float32', **kwargs)

Adapted from https://github.com/tornadomeet/ResNet/blob/master/train_resnet.py (original author: Wei Wu).

tvm.relay.testing.resnet.get_workload(batch_size=1, num_classes=1000, num_layers=18, image_shape=(3, 224, 224), layout='NCHW', dtype='float32', **kwargs)

Get benchmark workload for ResNet

Parameters
  • batch_size (int) – The batch size used in the model

  • num_classes (int, optional) – Number of classes

  • num_layers (int, optional) – Number of layers

  • image_shape (tuple, optional) – The input image shape

  • layout (str) – The data layout for conv2d

  • dtype (str, optional) – The data type

  • kwargs (dict) – Extra arguments

Returns

  • mod (tvm.IRModule) – The relay module that contains a ResNet network.

  • params (dict of str to NDArray) – The parameters.

Net of the generator of DCGAN

Adopted from: https://github.com/tqchen/mxnet-gan/blob/master/mxgan/generator.py

Reference: Radford, Alec, Luke Metz, and Soumith Chintala. “Unsupervised representation learning with deep convolutional generative adversarial networks.” arXiv preprint arXiv:1511.06434 (2015).

tvm.relay.testing.dcgan.deconv2d(data, ishape, oshape, kshape, name, stride=(2, 2))

A deconv layer that enlarges the feature map.

tvm.relay.testing.dcgan.deconv2d_bn_relu(data, prefix, **kwargs)

a block of deconv + batch norm + relu

tvm.relay.testing.dcgan.get_net(batch_size, random_len=100, oshape=(3, 64, 64), ngf=128, code=None, dtype='float32')

Get the network of a DCGAN generator.

tvm.relay.testing.dcgan.get_workload(batch_size, oshape=(3, 64, 64), ngf=128, random_len=100, dtype='float32')

Get benchmark workload for a DCGAN generator

Parameters
  • batch_size (int) – The batch size used in the model

  • oshape (tuple, optional) – The shape of output image, layout=”CHW”

  • ngf (int, optional) – The number of final feature maps in the generator

  • random_len (int, optional) – The length of random input

  • dtype (str, optional) – The data type

Returns

  • mod (tvm.IRModule) – The relay module that contains a DCGAN network.

  • params (dict of str to NDArray) – The parameters.

Port of NNVM version of MobileNet to Relay.

tvm.relay.testing.mobilenet.conv_block(data, name, channels, kernel_size=(3, 3), strides=(1, 1), padding=(1, 1), epsilon=1e-05, layout='NCHW')

Helper function to construct a conv-bn-relu block.

tvm.relay.testing.mobilenet.separable_conv_block(data, name, depthwise_channels, pointwise_channels, kernel_size=(3, 3), downsample=False, padding=(1, 1), epsilon=1e-05, layout='NCHW', dtype='float32')

Helper function to get a separable conv block

tvm.relay.testing.mobilenet.mobile_net(num_classes=1000, data_shape=(1, 3, 224, 224), dtype='float32', alpha=1.0, is_shallow=False, layout='NCHW')

Function to construct a MobileNet

tvm.relay.testing.mobilenet.get_workload(batch_size=1, num_classes=1000, image_shape=(3, 224, 224), dtype='float32', layout='NCHW')

Get benchmark workload for MobileNet

Parameters
  • batch_size (int, optional) – The batch size used in the model

  • num_classes (int, optional) – Number of classes

  • image_shape (tuple, optional) – The input image shape, consistent with layout

  • dtype (str, optional) – The data type

  • layout (str, optional) – The data layout of image_shape and of the network’s operators

Returns

  • mod (tvm.IRModule) – The relay module that contains a MobileNet network.

  • params (dict of str to NDArray) – The parameters.

Implementation of a Long Short-Term Memory (LSTM) cell.

Adapted from: https://gist.github.com/merrymercy/5eb24e3b019f84200645bd001e9caae9

tvm.relay.testing.lstm.lstm_cell(num_hidden, batch_size=1, dtype='float32', name='')

Long Short-Term Memory (LSTM) network cell.

Parameters
  • num_hidden (int) – Number of units in output symbol.

  • batch_size (int) – Batch size (length of states).

Returns

result – A Relay function that evaluates an LSTM cell. The function takes in a tensor of input data, a tuple of two states, and weights and biases for dense operations on the inputs and on the state. It returns a tuple with two members, an output tensor and a tuple of two new states.

Return type

tvm.relay.Function
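The gate computations that lstm_cell assembles in Relay follow the standard LSTM equations. As a scalar plain-Python sketch (weights as simple floats standing in for the Relay dense operations):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One scalar LSTM step. w maps gate name -> (w_x, w_h, b)."""
    def gate(name, act):
        wx, wh, b = w[name]
        return act(wx * x + wh * h + b)

    i = gate("i", sigmoid)          # input gate
    f = gate("f", sigmoid)          # forget gate
    o = gate("o", sigmoid)          # output gate
    g = gate("g", math.tanh)        # candidate cell update
    c_new = f * c + i * g           # new cell state
    h_new = o * math.tanh(c_new)    # new hidden state (the output)
    return h_new, c_new

# Toy weights shared across all four gates, purely for illustration.
w = {k: (0.5, 0.5, 0.0) for k in "ifog"}
h, c = lstm_step(1.0, 0.0, 0.0, w)
print(h, c)
```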

tvm.relay.testing.lstm.get_net(iterations, num_hidden, batch_size=1, dtype='float32')

Constructs an unrolled RNN with LSTM cells

tvm.relay.testing.lstm.get_workload(iterations, num_hidden, batch_size=1, dtype='float32')

Get benchmark workload for an LSTM RNN.

Parameters
  • iterations (int) – The number of iterations in the desired LSTM RNN.

  • num_hidden (int) – The size of the hidden state

  • batch_size (int, optional (default 1)) – The batch size used in the model

  • dtype (str, optional (default "float32")) – The data type

Returns

  • mod (tvm.IRModule) – The relay module that contains an LSTM network.

  • params (dict of str to NDArray) – The parameters.

Inception V3, suitable for images of around 299 x 299 pixels

Reference: Szegedy, Christian, et al. “Rethinking the Inception Architecture for Computer Vision.” arXiv preprint arXiv:1512.00567 (2015).

Adopted from https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/symbols/inception-v3.py

tvm.relay.testing.inception_v3.get_net(batch_size, num_classes, image_shape, dtype)

Get an Inception v3 network.

Parameters
  • batch_size (int) – The batch size used in the model

  • num_classes (int, optional) – Number of classes

  • image_shape (tuple, optional) – The input image shape

  • dtype (str, optional) – The data type

Returns

net – The dataflow.

Return type

relay.Function

tvm.relay.testing.inception_v3.get_workload(batch_size=1, num_classes=1000, image_shape=(3, 299, 299), dtype='float32')

Get benchmark workload for InceptionV3

Parameters
  • batch_size (int) – The batch size used in the model

  • num_classes (int, optional) – Number of classes

  • image_shape (tuple, optional) – The input image shape

  • dtype (str, optional) – The data type

Returns

  • mod (tvm.IRModule) – The relay module that contains an Inception V3 network.

  • params (dict of str to NDArray) – The parameters.

Symbol of SqueezeNet

Reference: Iandola, Forrest N., et al. “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size.” (2016).

tvm.relay.testing.squeezenet.get_net(batch_size, image_shape, num_classes, version, dtype)

Get symbol of SqueezeNet

Parameters
  • batch_size (int) – The batch size used in the model

  • image_shape (tuple, optional) – The input image shape

  • num_classes (int) – The number of classification results

  • version (str, optional) – “1.0” or “1.1” of SqueezeNet

tvm.relay.testing.squeezenet.get_workload(batch_size=1, num_classes=1000, version='1.0', image_shape=(3, 224, 224), dtype='float32')

Get benchmark workload for SqueezeNet

Parameters
  • batch_size (int) – The batch size used in the model

  • num_classes (int, optional) – Number of classes

  • version (str, optional) – “1.0” or “1.1” of SqueezeNet

  • image_shape (tuple, optional) – The input image shape

  • dtype (str, optional) – The data type

Returns

  • mod (tvm.IRModule) – The relay module that contains a SqueezeNet network.

  • params (dict of str to NDArray) – The parameters.

References:

Simonyan, Karen, and Andrew Zisserman. “Very deep convolutional networks for large-scale image recognition.” arXiv preprint arXiv:1409.1556 (2014).

tvm.relay.testing.vgg.get_feature(internal_layer, layers, filters, batch_norm=False)

Get the VGG feature body as stacks of convolutions.

tvm.relay.testing.vgg.get_classifier(input_data, num_classes)

Get VGG classifier layers as fc layers.

tvm.relay.testing.vgg.get_net(batch_size, image_shape, num_classes, dtype, num_layers=11, batch_norm=False)
Parameters
  • batch_size (int) – The batch size used in the model

  • image_shape (tuple, optional) – The input image shape

  • num_classes (int, optional) – Number of classes

  • dtype (str, optional) – The data type

  • num_layers (int) – Number of layers for the VGG variant. Options are 11, 13, 16, 19.

  • batch_norm (bool, default False) – Use batch normalization.

tvm.relay.testing.vgg.get_workload(batch_size, num_classes=1000, image_shape=(3, 224, 224), dtype='float32', num_layers=11, batch_norm=False)

Get benchmark workload for VGG nets.

Parameters
  • batch_size (int) – The batch size used in the model

  • num_classes (int, optional) – Number of classes

  • image_shape (tuple, optional) – The input image shape

  • dtype (str, optional) – The data type

  • num_layers (int) – Number of layers for the VGG variant. Options are 11, 13, 16, 19.

  • batch_norm (bool) – Use batch normalization.

Returns

  • mod (tvm.IRModule) – The relay module that contains a VGG network.

  • params (dict of str to NDArray) – The parameters.

Port of MxNet version of Densenet to Relay. https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/model_zoo/vision/densenet.py

tvm.relay.testing.densenet.get_workload(densenet_size=121, classes=1000, batch_size=4, image_shape=(3, 224, 224), dtype='float32')

Gets benchmark workload for densenet.

Parameters
  • densenet_size (int, optional (default 121)) – Parameter for the network size. The supported sizes are 121, 161, 169, and 201.

  • classes (int, optional (default 1000)) – The number of classes.

  • batch_size (int, optional (default 4)) – The batch size for the network.

  • image_shape (shape, optional (default (3, 224, 224))) – The shape of the input data.

  • dtype (data type, optional (default 'float32')) – The data type of the input data.

Returns

  • mod (tvm.IRModule) – The relay module that contains a DenseNet network.

  • params (dict of str to NDArray) – The benchmark parameters.