tvm.relay.testing¶
Utilities for testing and benchmarks
Classes:
Prelude([mod]) – Contains standard definitions.
Functions:
enabled_targets() – Get all enabled targets with associated devices.
create_workload(net[, initializer, seed]) – Helper function to create benchmark image classification workload.
count(prelude, n) – Takes a ConstructorValue corresponding to a nat ADT and converts it into a Python integer.
make_nat_value(prelude, n) – The inverse of count(): Given a non-negative Python integer, constructs a ConstructorValue representing that value as a nat.
make_nat_expr(prelude, n) – Given a non-negative Python integer, constructs a Relay expression representing that integer's value as a nat.
to_python(expr[, mod, target]) – Converts the given Relay expression into a Python script (as a Python AST object).
run_as_python(expr[, mod, target]) – Converts the given Relay expression into a Python script and executes it.
gradient(expr[, mod, mode]) – Transform the input function, returning a function that calculates the original result, paired with the gradients of the inputs.
check_grad(func[, inputs, test_inputs, eps, ...]) – Perform numerical gradient checking given a relay function.
count_ops(expr) – Count the number of times each op is called in the graph.
- class tvm.relay.testing.Prelude(mod=None)¶
Contains standard definitions.
Methods:
get_name(canonical, dtype) – Get name corresponding to the canonical name
get_global_var(canonical, dtype) – Get global var corresponding to the canonical name
get_type(canonical, dtype) – Get type corresponding to the canonical name
get_ctor(ty_name, canonical, dtype) – Get constructor corresponding to the canonical name
get_name_static(canonical, dtype, shape[, ...]) – Get name corresponding to the canonical name
get_global_var_static(canonical, dtype, shape) – Get var corresponding to the canonical name
get_type_static(canonical, dtype, shape) – Get type corresponding to the canonical name
get_ctor_static(ty_name, name, dtype, shape) – Get constructor corresponding to the canonical name
get_tensor_ctor_static(name, dtype, shape) – Get constructor corresponding to the canonical name
load_prelude() – Parses the Prelude from Relay's text format into a module.
- get_name(canonical, dtype)¶
Get name corresponding to the canonical name
- get_global_var(canonical, dtype)¶
Get global var corresponding to the canonical name
- get_type(canonical, dtype)¶
Get type corresponding to the canonical name
- get_ctor(ty_name, canonical, dtype)¶
Get constructor corresponding to the canonical name
- get_name_static(canonical, dtype, shape, batch_dim=None)¶
Get name corresponding to the canonical name
- get_global_var_static(canonical, dtype, shape, batch_dim=None)¶
Get var corresponding to the canonical name
- get_type_static(canonical, dtype, shape)¶
Get type corresponding to the canonical name
- get_ctor_static(ty_name, name, dtype, shape)¶
Get constructor corresponding to the canonical name
- get_tensor_ctor_static(name, dtype, shape)¶
Get constructor corresponding to the canonical name
- load_prelude()¶
Parses the Prelude from Relay’s text format into a module.
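A minimal usage sketch of the lookup helpers above; the canonical name "tensor_array_read" is an illustrative assumption, not a guaranteed definition name:

    import tvm
    from tvm.relay.testing import Prelude

    # Load the Prelude into a fresh module; its definitions become
    # available through the module and the lookup helpers above.
    mod = tvm.IRModule()
    p = Prelude(mod)

    # Look up a dtype-specialized definition by its canonical name.
    # "tensor_array_read" is assumed here purely for illustration.
    read_var = p.get_global_var("tensor_array_read", "float32")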
- tvm.relay.testing.enabled_targets()¶
Get all enabled targets with associated devices.
In most cases, you should use tvm.testing.parametrize_targets() instead of this function.
In this context, "enabled" means that TVM was built with support for the target, the target name appears in the TVM_TEST_TARGETS environment variable, and a suitable device for running the target exists. If TVM_TEST_TARGETS is not set, it defaults to the variable DEFAULT_TEST_TARGETS in this module.
If you use this function in a test, you must decorate the test with tvm.testing.uses_gpu() (otherwise it will never be run on the GPU).
- Returns
targets – A list of pairs of all enabled devices and the associated context
- Return type
list
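A minimal sketch of the pattern described above, including the required decorator:

    import tvm.testing
    from tvm.relay.testing import enabled_targets

    @tvm.testing.uses_gpu
    def test_my_op():
        # Iterate over every enabled (target, device) pair.
        for target, dev in enabled_targets():
            ...  # build and check the op for this target/device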
- tvm.relay.testing.create_workload(net, initializer=None, seed=0)¶
Helper function to create benchmark image classification workload.
- Parameters
net (tvm.relay.Function) – The selected function of the network.
initializer (Initializer) – The initializer used
seed (int) – The seed used in initialization.
- Returns
mod (tvm.IRModule) – The created relay module.
params (dict of str to NDArray) – The parameters.
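For example, a network function from this module can be wrapped into a module plus randomly initialized parameters (a minimal sketch):

    from tvm.relay.testing import create_workload, mlp

    # Build the network function first, then create the workload.
    net = mlp.get_net(batch_size=1, num_classes=10, image_shape=(1, 28, 28))
    mod, params = create_workload(net)
    print(list(params))  # parameter names mapped to NDArray values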
- tvm.relay.testing.count(prelude, n)¶
Takes a ConstructorValue corresponding to a nat ADT and converts it into a Python integer. This is an example of using an ADT value in Python.
- tvm.relay.testing.make_nat_value(prelude, n)¶
The inverse of count(): Given a non-negative Python integer, constructs a ConstructorValue representing that value as a nat.
- tvm.relay.testing.make_nat_expr(prelude, n)¶
Given a non-negative Python integer, constructs a Relay expression representing that integer's value as a nat.
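A minimal round-trip sketch of the three nat helpers, assuming the Prelude in use exposes the nat ADT (some TVM versions require importing the nat definitions explicitly):

    import tvm
    from tvm.relay.testing import Prelude, count, make_nat_value, make_nat_expr

    mod = tvm.IRModule()
    p = Prelude(mod)

    expr = make_nat_expr(p, 3)  # Relay expression: s(s(s(z)))
    val = make_nat_value(p, 3)  # runtime ConstructorValue for 3
    assert count(p, val) == 3   # count is the inverse of make_nat_value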
- tvm.relay.testing.to_python(expr: tvm.ir.expr.RelayExpr, mod=None, target=Target("llvm"))¶
Converts the given Relay expression into a Python script (as a Python AST object). For easiest debugging, import the astor package and use to_source().
- tvm.relay.testing.run_as_python(expr: tvm.ir.expr.RelayExpr, mod=None, target=Target("llvm"))¶
Converts the given Relay expression into a Python script and executes it.
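A minimal sketch of both functions; the astor call is optional and only pretty-prints the generated AST:

    from tvm import relay
    from tvm.relay.testing import to_python, run_as_python

    expr = relay.const(1) + relay.const(2)
    ast_obj = to_python(expr)     # Python AST for the expression
    # import astor; print(astor.to_source(ast_obj))
    result = run_as_python(expr)  # runtime value containing 3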
- tvm.relay.testing.gradient(expr, mod=None, mode='higher_order')¶
Transform the input function, returning a function that calculates the original result, paired with the gradients of the inputs.
- Parameters
expr (tvm.relay.Expr) – The input expression, which is a Function or a GlobalVar.
mod (Optional[tvm.IRModule]) –
mode (Optional[String]) – The mode of the automatic differentiation algorithm. 'first_order' only works on first-order code and will not produce references or closures. 'higher_order' works on all code, using references and closures.
- Returns
expr – The transformed expression.
- Return type
tvm.relay.Expr
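A minimal sketch differentiating f(x) = x * x; type inference is run first, as TVM's own tests do before calling gradient:

    import tvm
    from tvm import relay
    from tvm.relay.testing import gradient

    x = relay.var("x", shape=(1, 3))
    fwd = relay.Function([x], x * x)
    # Type-infer the function before differentiating it.
    fwd = relay.transform.InferType()(tvm.IRModule.from_expr(fwd))["main"]
    bwd = gradient(fwd)  # computes the original result paired with grad_x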
- tvm.relay.testing.check_grad(func, inputs=None, test_inputs=None, eps=1e-06, atol=1e-05, rtol=0.001, scale=None, mean=0, mode='higher_order', target_devices=None, executor_kind='debug')¶
Perform numerical gradient checking given a relay function.
Compare analytical gradients to numerical gradients derived from two-sided approximation. Note that this test may fail if your function input types are not of high enough precision.
- Parameters
func (tvm.relay.Function) – The relay function to test.
inputs (List[np.array]) – Optional user-provided input parameters to use. If not given, will generate random normal inputs scaled to be close to the chosen epsilon value to avoid numerical precision loss.
test_inputs (List[np.array]) – The inputs to test for gradient matching. Useful in cases where some inputs are not differentiable, such as symbolic inputs to dynamic ops. If not given, all inputs are tested.
eps (float) – The epsilon value to use for computing numerical gradient approximation.
atol (float) – The absolute tolerance on difference between numerical and analytical gradients. Note that this needs to be scaled appropriately relative to the chosen eps and inputs.
rtol (float) – The relative tolerance on difference between numerical and analytical gradients. Note that this needs to be scaled appropriately relative to the chosen eps.
scale (float) – The standard deviation of the inputs.
mean (float) – The mean of the inputs.
target_devices (Optional[List[Tuple[tvm.target.Target, tvm.runtime.Device]]]) – A list of targets/devices on which the gradient should be tested. If not specified, will default to tvm.testing.enabled_targets().
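For instance, a minimal check of sigmoid's analytical gradient against the two-sided numerical approximation; float64 inputs help avoid the precision failures noted above:

    from tvm import relay
    from tvm.relay.testing import check_grad

    x = relay.var("x", shape=(3, 4), dtype="float64")
    func = relay.Function([x], relay.sigmoid(x))
    check_grad(func)  # raises if gradients do not match within tolerances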
- tvm.relay.testing.count_ops(expr)¶
Count the number of times each op is called in the graph.
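A minimal sketch, assuming count_ops returns a mapping from operator names to call counts:

    from tvm import relay
    from tvm.relay.testing import count_ops

    x = relay.var("x", shape=(1, 3, 224, 224))
    w = relay.var("w", shape=(8, 3, 3, 3))
    y = relay.nn.relu(relay.nn.conv2d(x, w))
    print(count_ops(y))  # e.g. {'nn.conv2d': 1, 'nn.relu': 1}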
A simple multilayer perceptron.
- tvm.relay.testing.mlp.get_net(batch_size, num_classes=10, image_shape=(1, 28, 28), dtype='float32')¶
Get a simple multilayer perceptron network.
- Parameters
batch_size (int) – The batch size used in the model
num_classes (int, optional) – Number of classes
image_shape (tuple, optional) – The input image shape
dtype (str, optional) – The data type
- Returns
net – The dataflow.
- Return type
relay.Function
- tvm.relay.testing.mlp.get_workload(batch_size, num_classes=10, image_shape=(1, 28, 28), dtype='float32')¶
Get benchmark workload for a simple multilayer perceptron.
- Parameters
batch_size (int) – The batch size used in the model
num_classes (int, optional) – Number of classes
image_shape (tuple, optional) – The input image shape
dtype (str, optional) – The data type
- Returns
mod (tvm.IRModule) – The relay module that contains an MLP network.
params (dict of str to NDArray) – The parameters.
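A minimal end-to-end sketch: build the workload, compile it for CPU, and run one inference (the input name "data" follows the convention used by the networks in this module):

    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor
    from tvm.relay.testing import mlp

    mod, params = mlp.get_workload(batch_size=1)
    lib = relay.build(mod, target="llvm", params=params)
    m = graph_executor.GraphModule(lib["default"](tvm.cpu()))
    m.set_input("data", np.random.uniform(size=(1, 1, 28, 28)).astype("float32"))
    m.run()
    scores = m.get_output(0).numpy()  # class scores of shape (1, 10)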
Adapted from https://github.com/tornadomeet/ResNet/blob/master/symbol_resnet.py. Original author: Wei Wu.
Implements the following paper:
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. “Identity Mappings in Deep Residual Networks”
- tvm.relay.testing.resnet.residual_unit(data, num_filter, stride, dim_match, name, bottle_neck=True, data_layout='NCHW', kernel_layout='IOHW')¶
Return a ResNet unit symbol for building ResNet.
- Parameters
data (str) – Input data
num_filter (int) – Number of output channels
bnf (int) – Bottle neck channels factor with regard to num_filter
stride (tuple) – Stride used in convolution
dim_match (bool) – True if the input and output have the same number of channels, False otherwise
name (str) – Base name of the operators
- tvm.relay.testing.resnet.resnet(units, num_stages, filter_list, num_classes, data_shape, bottle_neck=True, layout='NCHW', dtype='float32')¶
Return ResNet Program.
- Parameters
units (list) – Number of units in each stage
num_stages (int) – Number of stages
filter_list (list) – Channel size of each stage
num_classes (int) – Output size of symbol
data_shape (tuple of int.) – The shape of input data.
bottle_neck (bool) – Whether apply bottleneck transformation.
layout (str) – The data layout for conv2d
dtype (str) – The global data type.
- tvm.relay.testing.resnet.get_net(batch_size, num_classes, num_layers=50, image_shape=(3, 224, 224), layout='NCHW', dtype='float32', **kwargs)¶
Adapted from https://github.com/tornadomeet/ResNet/blob/master/train_resnet.py. Original author: Wei Wu.
- tvm.relay.testing.resnet.get_workload(batch_size=1, num_classes=1000, num_layers=18, image_shape=(3, 224, 224), layout='NCHW', dtype='float32', **kwargs)¶
Get benchmark workload for ResNet.
- Parameters
batch_size (int) – The batch size used in the model
num_classes (int, optional) – Number of classes
num_layers (int, optional) – Number of layers
image_shape (tuple, optional) – The input image shape
layout (str) – The data layout for conv2d
dtype (str, optional) – The data type
kwargs (dict) – Extra arguments
- Returns
mod (tvm.IRModule) – The relay module that contains a ResNet network.
params (dict of str to NDArray) – The parameters.
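For example, selecting the 50-layer variant (a minimal sketch; create_workload runs type inference, so the main function's signature can be inspected directly):

    from tvm.relay.testing import resnet

    mod, params = resnet.get_workload(
        num_layers=50, batch_size=1, image_shape=(3, 224, 224), layout="NCHW"
    )
    print(mod["main"].checked_type)  # input/output tensor types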
Net of the DCGAN generator.
Adapted from: https://github.com/tqchen/mxnet-gan/blob/main/mxgan/generator.py
Reference: Radford, Alec, Luke Metz, and Soumith Chintala. “Unsupervised representation learning with deep convolutional generative adversarial networks.” arXiv preprint arXiv:1511.06434 (2015).
- tvm.relay.testing.dcgan.deconv2d(data, ishape, oshape, kshape, layout, name, stride=(2, 2))¶
A deconv layer that enlarges the feature map.
- tvm.relay.testing.dcgan.deconv2d_bn_relu(data, prefix, **kwargs)¶
A block of deconv + batch norm + relu.
- tvm.relay.testing.dcgan.get_net(batch_size, random_len=100, oshape=(3, 64, 64), ngf=128, code=None, layout='NCHW', dtype='float32')¶
Get the net of the DCGAN generator.
- tvm.relay.testing.dcgan.get_workload(batch_size, oshape=(3, 64, 64), ngf=128, random_len=100, layout='NCHW', dtype='float32')¶
Get benchmark workload for a DCGAN generator
- Parameters
batch_size (int) – The batch size used in the model
oshape (tuple, optional) – The shape of the output image, in CHW layout
ngf (int, optional) – The number of final feature maps in the generator
random_len (int, optional) – The length of random input
layout (str, optional) – The layout of conv2d transpose
dtype (str, optional) – The data type
- Returns
mod (tvm.IRModule) – The relay module that contains a DCGAN network.
params (dict of str to NDArray) – The parameters.
Port of NNVM version of MobileNet to Relay.
- tvm.relay.testing.mobilenet.conv_block(data, name, channels, kernel_size=(3, 3), strides=(1, 1), padding=(1, 1), epsilon=1e-05, layout='NCHW')¶
Helper function to construct a conv-bn-relu block.
- tvm.relay.testing.mobilenet.separable_conv_block(data, name, depthwise_channels, pointwise_channels, kernel_size=(3, 3), downsample=False, padding=(1, 1), epsilon=1e-05, layout='NCHW', dtype='float32')¶
Helper function to get a separable conv block
- tvm.relay.testing.mobilenet.mobile_net(num_classes=1000, data_shape=(1, 3, 224, 224), dtype='float32', alpha=1.0, is_shallow=False, layout='NCHW')¶
Function to construct a MobileNet
- tvm.relay.testing.mobilenet.get_workload(batch_size=1, num_classes=1000, image_shape=(3, 224, 224), dtype='float32', layout='NCHW')¶
Get benchmark workload for mobilenet
- Parameters
batch_size (int, optional) – The batch size used in the model
num_classes (int, optional) – Number of classes
image_shape (tuple, optional) – The input image shape; must be consistent with layout
dtype (str, optional) – The data type
layout (str, optional) – The data layout of image_shape and of the operators in the network
- Returns
mod (tvm.IRModule) – The relay module that contains a MobileNet network.
params (dict of str to NDArray) – The parameters.
Implementation of a Long Short-Term Memory (LSTM) cell.
Adapted from: https://gist.github.com/merrymercy/5eb24e3b019f84200645bd001e9caae9
- tvm.relay.testing.lstm.lstm_cell(num_hidden, batch_size=1, dtype='float32', name='')¶
Long Short-Term Memory (LSTM) network cell.
- Parameters
- Returns
result – A Relay function that evaluates an LSTM cell. The function takes in a tensor of input data, a tuple of two states, and weights and biases for dense operations on the inputs and on the state. It returns a tuple with two members, an output tensor and a tuple of two new states.
- Return type
tvm.relay.Function
- tvm.relay.testing.lstm.get_net(iterations, num_hidden, batch_size=1, dtype='float32')¶
Constructs an unrolled RNN with LSTM cells
- tvm.relay.testing.lstm.get_workload(iterations, num_hidden, batch_size=1, dtype='float32')¶
Get benchmark workload for an LSTM RNN.
- Parameters
- Returns
mod (tvm.IRModule) – The relay module that contains an LSTM network.
params (dict of str to NDArray) – The parameters.
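A minimal sketch of both entry points: a single cell as a Relay function, and a workload that unrolls the RNN for a fixed number of iterations:

    from tvm.relay.testing import lstm

    # A single LSTM cell as a reusable Relay function.
    cell = lstm.lstm_cell(num_hidden=128, batch_size=1, dtype="float32")
    # An unrolled 5-step LSTM network wrapped as a benchmark workload.
    mod, params = lstm.get_workload(iterations=5, num_hidden=128)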
Inception V3, suitable for images of around 299 x 299 pixels.
Reference: Szegedy, Christian, et al. “Rethinking the Inception Architecture for Computer Vision.” arXiv preprint arXiv:1512.00567 (2015).
Adapted from https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/symbols/inception-v3.py
- tvm.relay.testing.inception_v3.get_net(batch_size, num_classes, image_shape, dtype)¶
Get an Inception V3 network.
- Parameters
batch_size (int) – The batch size used in the model
num_classes (int, optional) – Number of classes
image_shape (tuple, optional) – The input image shape
dtype (str, optional) – The data type
- Returns
net – The dataflow.
- Return type
relay.Function
- tvm.relay.testing.inception_v3.get_workload(batch_size=1, num_classes=1000, image_shape=(3, 299, 299), dtype='float32')¶
Get benchmark workload for Inception V3.
- Parameters
- Returns
mod (tvm.IRModule) – The relay module that contains an Inception V3 network.
params (dict of str to NDArray) – The parameters.
Symbol of SqueezeNet
Reference: Iandola, Forrest N., et al. "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size." (2016).
- tvm.relay.testing.squeezenet.get_net(batch_size, image_shape, num_classes, version, dtype)¶
Get symbol of SqueezeNet
- tvm.relay.testing.squeezenet.get_workload(batch_size=1, num_classes=1000, version='1.0', image_shape=(3, 224, 224), dtype='float32')¶
Get benchmark workload for SqueezeNet
- Parameters
- Returns
mod (tvm.IRModule) – The relay module that contains a SqueezeNet network.
params (dict of str to NDArray) – The parameters.
References:
Simonyan, Karen, and Andrew Zisserman. “Very deep convolutional networks for large-scale image recognition.” arXiv preprint arXiv:1409.1556 (2014).
- tvm.relay.testing.vgg.get_feature(internal_layer, layers, filters, batch_norm=False)¶
Get VGG feature body as stacks of convolutions.
- tvm.relay.testing.vgg.get_classifier(input_data, num_classes)¶
Get VGG classifier layers as fc layers.
- tvm.relay.testing.vgg.get_net(batch_size, image_shape, num_classes, dtype, num_layers=11, batch_norm=False)¶
- Parameters
batch_size (int) – The batch size used in the model
image_shape (tuple, optional) – The input image shape
num_classes (int, optional) – Number of classes
dtype (str, optional) – The data type
num_layers (int) – Number of layers for the VGG variant. Options are 11, 13, 16, 19.
batch_norm (bool, default False) – Use batch normalization.
- tvm.relay.testing.vgg.get_workload(batch_size, num_classes=1000, image_shape=(3, 224, 224), dtype='float32', num_layers=11, batch_norm=False)¶
Get benchmark workload for VGG nets.
- Parameters
batch_size (int) – The batch size used in the model
num_classes (int, optional) – Number of classes
image_shape (tuple, optional) – The input image shape
dtype (str, optional) – The data type
num_layers (int) – Number of layers for the VGG variant. Options are 11, 13, 16, 19.
batch_norm (bool) – Use batch normalization.
- Returns
mod (tvm.IRModule) – The relay module that contains a VGG network.
params (dict of str to NDArray) – The parameters.
Port of the MXNet version of DenseNet to Relay. https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/model_zoo/vision/densenet.py
- tvm.relay.testing.densenet.get_workload(densenet_size=121, classes=1000, batch_size=4, image_shape=(3, 224, 224), dtype='float32')¶
Get benchmark workload for DenseNet.
- Parameters
densenet_size (int, optional (default 121)) – Parameter for the network size. The supported sizes are 121, 161, 169, and 201.
classes (int, optional (default 1000)) – The number of classes.
batch_size (int, optional (default 4)) – The batch size for the network.
image_shape (shape, optional (default (3, 224, 224))) – The shape of the input data.
dtype (data type, optional (default 'float32')) – The data type of the input data.
- Returns
mod (tvm.IRModule) – The relay module that contains a DenseNet network.
params (dict of str to NDArray) – The benchmark parameters.