tvm.topi

TVM Operator Inventory.

TOPI is the operator collection library for TVM, providing syntactic sugar for constructing compute declarations as well as optimized schedules.

Some of the schedule functions may be specially optimized for a specific workload.

Exceptions:

InvalidShapeError

Invalid shape for a topi function.

Functions:

abs(x)

Take absolute value of input x, element-wise.

acos(x)

Take arc cos of input x.

acosh(x)

Take arc cosh of input x.

add(lhs, rhs)

Addition with auto-broadcasting

adv_index(data, indices)

Numpy style indexing with tensors.

all(data[, axis, keepdims])

Logical AND of array elements over a given axis or a list of axes

any(data[, axis, keepdims])

Logical OR of array elements over a given axis or a list of axes

arange(start[, stop, step, dtype])

Creates a tensor with evenly spaced values within a given interval.

argmax(data[, axis, keepdims])

Returns the indices of the maximum values along an axis.

argmin(data[, axis, keepdims])

Returns the indices of the minimum values along an axis.

argsort(data[, valid_count, axis, …])

Performs sorting along the given axis and returns an array of indices having the same shape as an input array that index data in sorted order.

argwhere(output_shape, condition)

Find the indices of elements of a tensor that are non-zero.

asin(x)

Take arc sin of input x.

asinh(x)

Take arc sinh of input x.

atan(x)

Take atan of input x.

atanh(x)

Take atanh of input x.

bitwise_and(lhs, rhs)

Compute element-wise bitwise and of data.

bitwise_not(data)

Compute element-wise bitwise not of data.

bitwise_or(lhs, rhs)

Compute element-wise bitwise or of data.

bitwise_xor(lhs, rhs)

Compute element-wise bitwise xor of data.

broadcast_to(data, shape)

Broadcast the src to the target shape

cast(x, dtype)

Cast input to specified data type.

ceil(x)

Take ceil of input x.

clip(x, a_min, a_max)

Clip (limit) the values in an array.

concatenate(a_tuple[, axis])

Join a sequence of arrays along an existing axis.

cos(x)

Take cos of input x.

cosh(x)

Take cosh of input x.

divide(lhs, rhs)

Division with auto-broadcasting

elemwise_sum(xs)

Perform element-wise sum on inputs

equal(lhs, rhs)

Compute (lhs==rhs) with auto-broadcasting

erf(x)

Take gauss error function of input x.

exp(x)

Take exponential of input x.

expand_dims(a, axis[, num_newaxis])

Expand the shape of an array.

expand_like(a, shape_like, axis)

Expand an input array with the shape of second array.

fast_erf(x)

Take gauss error function of input x using fast_erf implementation.

fast_exp(x)

Take exponential of input x using fast_exp implementation

fast_tanh(x)

Take tanh of input x using fast_tanh implementation

fixed_point_multiply(x, multiplier, shift)

Fixed point multiplication between data and a fixed point constant expressed as multiplier * 2^(-shift), where multiplier is a Q-number with 31 fractional bits

flip(a[, axis])

Flip/reverse elements of an array in a particular axis.

floor(x)

Take floor of input x.

floor_divide(lhs, rhs)

Floor division with auto-broadcasting

floor_mod(lhs, rhs)

Floor modulus with auto-broadcasting

full(shape, dtype, fill_value)

Fill tensor with fill_value

full_like(x, fill_value)

Construct a tensor with same shape as input tensor, then fill tensor with fill_value.

gather(data, axis, indices)

Gather values along given axis from given indices.

gather_nd(a, indices)

Gather elements from an n-dimensional array.

get_const_tuple(in_tuple)

Verifies input tuple is IntImm or Var, returns tuple of int or Var.

greater(lhs, rhs)

Compute (lhs>rhs) with auto-broadcasting

greater_equal(lhs, rhs)

Compute (lhs>=rhs) with auto-broadcasting

hybrid_argwhere_1d(output_shape, condition)

Find the indices of elements of a 1-D tensor that are non-zero.

hybrid_argwhere_2d(output_shape, condition)

Find the indices of elements of a 2-D tensor that are non-zero.

hybrid_argwhere_3d(output_shape, condition)

Find the indices of elements of a 3-D tensor that are non-zero.

hybrid_argwhere_4d(output_shape, condition)

Find the indices of elements of a 4-D tensor that are non-zero.

hybrid_argwhere_5d(output_shape, condition)

Find the indices of elements of a 5-D tensor that are non-zero.

identity(x)

Take identity of input x.

isfinite(x)

Check if value of x is finite, element-wise.

isinf(x)

Check if value of x is infinite, element-wise.

isnan(x)

Check if value of x is NaN, element-wise.

layout_transform(array, src_layout, dst_layout)

Transform the layout according to src_layout and dst_layout

left_shift(lhs, rhs)

Left shift with auto-broadcasting

less(lhs, rhs)

Compute (lhs<rhs) with auto-broadcasting

less_equal(lhs, rhs)

Compute (lhs<=rhs) with auto-broadcasting

log(x)

Take logarithm of input x.

log10(x)

Take logarithm to the base 10 of input x.

log2(x)

Take logarithm to the base 2 of input x.

logical_and(lhs, rhs)

Compute element-wise logical and of data.

logical_not(data)

Compute element-wise logical not of data.

logical_or(lhs, rhs)

Compute element-wise logical or of data.

logical_xor(lhs, rhs)

Compute element-wise logical xor of data.

make_idx(b, e, s, z, i)

Return the array position in the selection that corresponds to an array position in the full array.

matmul(a, b[, transp_a, transp_b])

Creates an operation that calculates a matrix multiplication (row-major notation): A(i, k) * B(k, j) if transp_a == transp_b; the usual transposed combinations otherwise

matrix_set_diag(data, diagonal[, k, align])

Returns a tensor with the diagonals of input tensor replaced with the provided diagonal values.

max(data[, axis, keepdims])

Maximum of array elements over a given axis or a list of axes

maximum(lhs, rhs)

Take element-wise maximum of two tensors with auto-broadcasting

meshgrid(a_tuple, indexing)

Create coordinate matrices from coordinate vectors.

min(data[, axis, keepdims])

Minimum of array elements over a given axis or a list of axes

minimum(lhs, rhs)

Take element-wise minimum of two tensors with auto-broadcasting

mod(lhs, rhs)

Modulus with auto-broadcasting

multiply(lhs, rhs)

Multiplication with auto-broadcasting

ndarray_size(array[, dtype])

Get the number of elements of input array

negative(x)

Take negation of input x.

not_equal(lhs, rhs)

Compute (lhs!=rhs) with auto-broadcasting

one_hot(indices, on_value, off_value, depth, …)

Returns a one-hot tensor where the locations represented by indices take value on_value, other locations take value off_value.

power(lhs, rhs)

Power with auto-broadcasting

prod(data[, axis, keepdims])

Product of array elements over a given axis or a list of axes

reinterpret(x, dtype)

Reinterpret input to specified data type.

repeat(a, repeats, axis)

Repeats elements of an array.

reshape(a, newshape)

Reshape the array

reverse_sequence(a, seq_lengths[, seq_axis, …])

Reverse the tensor for variable length slices.

right_shift(lhs, rhs)

Right shift with auto-broadcasting

round(x)

Round elements of x to nearest integer.

rsqrt(x)

Take inverse square root of input x.

scatter(data, indices, updates[, axis])

Update data at positions defined by indices with values in updates

scatter_add(data, indices, updates[, axis])

Update data by adding values in updates at positions defined by indices

sequence_mask(data, valid_length[, …])

Sets all elements outside the expected length of the sequence to a constant value.

shape(array[, dtype])

Get the shape of input array

sigmoid(x)

Take sigmoid of input x.

sign(x)

Returns -1, 0, 1 based on sign of x.

sin(x)

Take sin of input x.

sinh(x)

Take sinh of input x.

sparse_to_dense(sparse_indices, …[, …])

Converts a sparse representation into a dense tensor.

split(ary, indices_or_sections[, axis])

Split an array into multiple sub-arrays.

sqrt(x)

Take square root of input x.

squeeze(a[, axis])

Remove single-dimensional entries from the shape of an array.

stack(a, axis)

Join a sequence of arrays along a new axis.

strided_set(a, v, begin, end[, strides])

Set slice of an array.

strided_slice(a, begin, end[, strides, …])

Slice of an array.

subtract(lhs, rhs)

Subtraction with auto-broadcasting

sum(data[, axis, keepdims])

Sum of array elements over a given axis or a list of axes

take(a, indices[, axis, mode])

Take elements from an array along an axis.

tan(x)

Take tan of input x.

tanh(x)

Take hyperbolic tangent of input x.

tensordot(a, b, axes)

A generalization of matrix multiplication to tensors.

tile(a, reps)

Repeats the whole array multiple times.

topk(data[, k, axis, ret_type, is_ascend, dtype])

Get the top k elements in an input tensor along the given axis.

transpose(a[, axes])

Permute the dimensions of an array.

trunc(x)

Take truncated value of input x, element-wise.

unravel_index(indices, shape)

Convert a flat index or array of flat indices into a tuple of coordinate arrays.

where(condition, x, y)

Get the elements, either from x or y, depending on the condition.

within_index(b, e, s, i)

Return a boolean value that indicates if i is within the given index.

tvm.topi.abs(x)

Take absolute value of input x, element-wise.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.acos(x)

Take arc cos of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.acosh(x)

Take arc cosh of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.add(lhs, rhs)

Addition with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr
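
As a minimal sketch (not part of the original reference; it assumes an llvm-enabled TVM build and uses arbitrary shapes), broadcasting follows the NumPy rules: adding a (2, 3) tensor and a (3,) tensor yields a (2, 3) result.

# Hedged example: add with auto-broadcasting
import numpy as np
import tvm
from tvm import te, topi

A = te.placeholder((2, 3), name="A")
B = te.placeholder((3,), name="B")
C = topi.add(A, B)  # B is broadcast along the first axis

s = te.create_schedule(C.op)
f = tvm.build(s, [A, B, C], "llvm")
ctx = tvm.cpu()
a = tvm.nd.array(np.ones((2, 3), dtype="float32"), ctx)
b = tvm.nd.array(np.arange(3, dtype="float32"), ctx)
c = tvm.nd.array(np.zeros((2, 3), dtype="float32"), ctx)
f(a, b, c)  # c = a + b with broadcasting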

tvm.topi.adv_index(data, indices)

Numpy style indexing with tensors.

Parameters
  • data (tvm.te.Tensor) – Input data.

  • indices (A list of tvm.te.Tensor) – Tensor index.

Returns

result – Output tensor

Return type

tvm.te.Tensor

tvm.topi.all(data, axis=None, keepdims=False)

Logical AND of array elements over a given axis or a list of axes

Parameters
  • data (tvm.te.Tensor) – The input tvm boolean tensor

  • axis (None or int or tuple of int) – Axis or axes along which a logical AND is performed. The default, axis=None, will perform logical AND over all elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.any(data, axis=None, keepdims=False)

Logical OR of array elements over a given axis or a list of axes

Parameters
  • data (tvm.te.Tensor) – The input tvm boolean tensor

  • axis (None or int or tuple of int) – Axis or axes along which a logical OR is performed. The default, axis=None, will perform logical OR over all elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.arange(start, stop=None, step=1, dtype='float32')

Creates a tensor with evenly spaced values within a given interval.

Parameters
  • start (tvm.Expr, optional) – Start of interval. The interval includes this value. The default start value is 0.

  • stop (tvm.Expr) – Stop of interval. The interval does not include this value.

  • step (tvm.Expr, optional) – Spacing between values. The default step size is 1.

  • dtype (str, optional) – The target data type.

Returns

result – The resulting tensor.

Return type

tvm.te.Tensor

tvm.topi.argmax(data, axis=None, keepdims=False)

Returns the indices of the maximum values along an axis.

Parameters
  • data (tvm.te.Tensor) – The input tvm tensor

  • axis (None or int or tuple of int) – Axis or axes along which an argmax operation is performed. The default, axis=None, will find the indices of the maximum element of all of the elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.argmin(data, axis=None, keepdims=False)

Returns the indices of the minimum values along an axis.

Parameters
  • data (tvm.te.Tensor) – The input tvm tensor

  • axis (None or int or tuple of int) – Axis or axes along which an argmin operation is performed. The default, axis=None, will find the indices of the minimum element of all of the elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.argsort(data, valid_count=None, axis=-1, is_ascend=1, dtype='float32')

Performs sorting along the given axis and returns an array of indices having the same shape as an input array that index data in sorted order.

Parameters
  • data (tvm.te.Tensor) – The input tensor.

  • valid_count (tvm.te.Tensor, optional) – 1-D tensor for valid number of boxes.

  • axis (int, optional) – Axis along which to sort the input tensor. By default the flattened array is used.

  • is_ascend (boolean, optional) – Whether to sort in ascending or descending order.

  • dtype (string, optional) – DType of the output indices.

Returns

out – Sorted index tensor.

Return type

tvm.te.Tensor

Example

# An example to use argsort
import numpy as np
import tvm
from tvm import te, topi

dshape = (1, 5, 6)
data = te.placeholder(dshape, name="data")
axis = 0
is_ascend = False
out = topi.argsort(data, axis=axis, is_ascend=is_ascend)
np_data = np.random.uniform(size=dshape).astype(data.dtype)
s = topi.generic.schedule_argsort(out)
f = tvm.build(s, [data, out], "llvm")
ctx = tvm.cpu()
tvm_data = tvm.nd.array(np_data, ctx)
tvm_out = tvm.nd.array(np.zeros(dshape, dtype=data.dtype), ctx)
f(tvm_data, tvm_out)

tvm.topi.argwhere(output_shape, condition)

Find the indices of elements of a tensor that are non-zero.

Parameters

condition (tvm.te.Tensor) – Tensor with boolean values.

Returns

out – Indices of non-zero elements.

Return type

tvm.te.Tensor

tvm.topi.asin(x)

Take arc sin of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.asinh(x)

Take arc sinh of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.atan(x)

Take atan of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.atanh(x)

Take atanh of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.bitwise_and(lhs, rhs)

Compute element-wise bitwise and of data.

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.bitwise_not(data)

Compute element-wise bitwise not of data.

Parameters

data (tvm.te.Tensor or Expr) –

Returns

ret – Returns Expr if the operand is Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.bitwise_or(lhs, rhs)

Compute element-wise bitwise or of data.

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.bitwise_xor(lhs, rhs)

Compute element-wise bitwise xor of data.

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.broadcast_to(data, shape)

Broadcast the src to the target shape

We follow the numpy broadcasting rule. See also https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html

Parameters
Returns

ret

Return type

tvm.te.Tensor

tvm.topi.cast(x, dtype)

Cast input to specified data type.

Parameters
Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.ceil(x)

Take ceil of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.clip(x, a_min, a_max)

Clip (limit) the values in an array. Given an interval, values outside the interval are clipped to the interval edges.

Parameters
Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.concatenate(a_tuple, axis=0)

Join a sequence of arrays along an existing axis.

Parameters
  • a_tuple (tuple of tvm.te.Tensor) – The arrays to concatenate

  • axis (int, optional) – The axis along which the arrays will be joined. Default is 0.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.cos(x)

Take cos of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.cosh(x)

Take cosh of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.divide(lhs, rhs)

Division with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.elemwise_sum(xs)

Perform element-wise sum on inputs

Parameters

xs (list of tvm.te.Tensor) – Input arguments.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.equal(lhs, rhs)

Compute (lhs==rhs) with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.erf(x)

Take gauss error function of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.exp(x)

Take exponential of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.expand_dims(a, axis, num_newaxis=1)

Expand the shape of an array.

Parameters
  • a (tvm.te.Tensor) – The tensor to be expanded.

  • num_newaxis (int, optional) – Number of newaxis to be inserted on axis

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.expand_like(a, shape_like, axis)

Expand an input array with the shape of second array. This operation can always be composed of unsqueezing and expanding dims on those unsqueezed axes.

Examples

input = [ 12.  19.  27.]
input.shape = (3,)

new_shape_array = [[[1,2],[2,3],[1,3]],
                [[1,4],[4,3],[5,2]],
                [[7,1],[7,2],[7,3]]]
new_shape_array.shape = (3, 3, 2)

expand_like(input, new_shape_array, [1,2]) =
                [[[12,12],[12,12],[12,12]],
                [[19,19],[19,19],[19,19]],
                [[27,27],[27,27],[27,27]]]
Parameters
  • a (tvm.te.Tensor) – The tensor to be expanded.

  • shape_like (tvm.te.Tensor) – The tensor with the target shape.

  • axis (list of int) – axis to be expanded on

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.fast_erf(x)

Take gauss error function of input x using fast_erf implementation.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.fast_exp(x)

Take exponential of input x using fast_exp implementation

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.fast_tanh(x)

Take tanh of input x using fast_tanh implementation

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.fixed_point_multiply(x, multiplier, shift)

Fixed point multiplication between data and a fixed point constant expressed as multiplier * 2^(-shift), where multiplier is a Q-number with 31 fractional bits

Parameters
  • x (tvm.te.Tensor or Expr) – Input argument.

  • multiplier (int) – Multiplier of a fixed point number described as multiplier*2^(-shift).

  • shift (int) – Shift of a fixed point number described as multiplier*2^(-shift).

Returns

y – The result.

Return type

tvm.te.Tensor
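
As a hedged, plain-Python illustration (not part of the original reference; the scale value is hypothetical), the (multiplier, shift) pair is typically derived from a floating point scale via frexp, and the operator then approximates x * scale with integer arithmetic only.

# Hypothetical example: deriving (multiplier, shift) from a float scale
import math

scale = 0.25                                # hypothetical requantization scale, assumed < 1 so shift >= 0
mantissa, exponent = math.frexp(scale)      # scale = mantissa * 2**exponent, mantissa in [0.5, 1)
multiplier = round(mantissa * (1 << 31))    # Q0.31 fixed point representation of the mantissa
shift = -exponent                           # so that scale == (multiplier / 2**31) * 2**(-shift)

x = 1000
approx = (x * multiplier) >> (31 + shift)   # integer-only approximation of x * scale (truncating, no rounding)
print(multiplier, shift, approx)            # 1073741824 1 250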

tvm.topi.flip(a, axis=0)

Flip/reverse elements of an array in a particular axis.

Parameters
  • a (tvm.te.Tensor) – The tensor to be flipped.

  • axis (int, optional) – The axis along which the tensor will be reversed.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.floor(x)

Take floor of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.floor_divide(lhs, rhs)

Floor division with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.floor_mod(lhs, rhs)

Floor modulus with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.full(shape, dtype, fill_value)

Fill tensor with fill_value

Parameters
  • shape (tuple) – Input tensor shape.

  • dtype (str) – Data type

  • fill_value (float) – Value to be filled

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.full_like(x, fill_value)

Construct a tensor with same shape as input tensor, then fill tensor with fill_value.

Parameters
Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.gather(data, axis, indices)

Gather values along given axis from given indices.

E.g. for a 3D tensor, output is computed as:

out[i][j][k] = data[indices[i][j][k]][j][k]  # if axis == 0
out[i][j][k] = data[i][indices[i][j][k]][k]  # if axis == 1
out[i][j][k] = data[i][j][indices[i][j][k]]  # if axis == 2

indices must have the same shape as data, except at the dimension axis, which must simply be non-empty. The output will have the same shape as indices.

Parameters
  • data (tvm.te.Tensor) – The input data to the operator.

  • axis (int) – The axis along which to index.

  • indices (tvm.te.Tensor) – The indices of the values to extract.

Returns

ret

Return type

tvm.te.Tensor
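
A small NumPy illustration (an assumption for clarity, not taken from the reference) of the same indexing rule in 2-D with axis=1, i.e. out[i][j] = data[i][indices[i][j]]:

# NumPy illustration of the gather indexing rule (2-D, axis=1)
import numpy as np

data = np.array([[1, 2],
                 [3, 4]])
indices = np.array([[0, 0],
                    [1, 0]])
out = np.take_along_axis(data, indices, axis=1)   # out[i][j] = data[i][indices[i][j]]
print(out)   # [[1 1]
             #  [4 3]]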

tvm.topi.gather_nd(a, indices)

Gather elements from an n-dimensional array.

Parameters
Returns

ret

Return type

tvm.te.Tensor

tvm.topi.get_const_tuple(in_tuple)

Verifies input tuple is IntImm or Var, returns tuple of int or Var.

Parameters

in_tuple (tuple of Expr) – The input.

Returns

out_tuple – The output.

Return type

tuple of int

tvm.topi.greater(lhs, rhs)

Compute (lhs>rhs) with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.greater_equal(lhs, rhs)

Compute (lhs>=rhs) with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.hybrid_argwhere_1d(output_shape, condition)

Find the indices of elements of a 1-D tensor that are non-zero.

Parameters

condition (tvm.te.Tensor) – 1-D tensor with boolean values.

Returns

out – Indices of non-zero elements.

Return type

tvm.te.Tensor

tvm.topi.hybrid_argwhere_2d(output_shape, condition)

Find the indices of elements of a 2-D tensor that are non-zero.

Parameters

condition (tvm.te.Tensor) – 2-D tensor with boolean values.

Returns

out – Indices of non-zero elements.

Return type

tvm.te.Tensor

tvm.topi.hybrid_argwhere_3d(output_shape, condition)

Find the indices of elements of a 3-D tensor that are non-zero.

Parameters

condition (tvm.te.Tensor) – 3-D tensor with boolean values.

Returns

out – Indices of non-zero elements.

Return type

tvm.te.Tensor

tvm.topi.hybrid_argwhere_4d(output_shape, condition)

Find the indices of elements of a 4-D tensor that are non-zero.

Parameters

condition (tvm.te.Tensor) – 4-D tensor with boolean values.

Returns

out – Indices of non-zero elements.

Return type

tvm.te.Tensor

tvm.topi.hybrid_argwhere_5d(output_shape, condition)

Find the indices of elements of a 5-D tensor that are non-zero.

Parameters

condition (tvm.te.Tensor) – 5-D tensor with boolean values.

Returns

out – Indices of non-zero elements.

Return type

tvm.te.Tensor

tvm.topi.identity(x)

Take identity of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.isfinite(x)

Check if value of x is finite, element-wise.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.isinf(x)

Check if value of x is infinite, element-wise.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.isnan(x)

Check if value of x is NaN, element-wise.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.layout_transform(array, src_layout, dst_layout)

Transform the layout according to src_layout and dst_layout

Parameters
  • array (tvm.te.Tensor) – The source array.

  • src_layout (str) – the source layout.

  • dst_layout (str) – the destination layout.

tvm.topi.left_shift(lhs, rhs)

Left shift with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.less(lhs, rhs)

Compute (lhs<rhs) with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.less_equal(lhs, rhs)

Compute (lhs<=rhs) with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.log(x)

Take logarithm of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.log10(x)

Take logarithm to the base 10 of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.log2(x)

Take logarithm to the base 2 of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.logical_and(lhs, rhs)

Compute element-wise logical and of data.

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.logical_not(data)

Compute element-wise logical not of data.

Parameters

data (tvm.te.Tensor or Expr) –

Returns

ret – Returns Expr if the operand is Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.logical_or(lhs, rhs)

Compute element-wise logical or of data.

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.logical_xor(lhs, rhs)

Compute element-wise logical xor of data.

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.make_idx(b, e, s, z, i)

Return the array position in the selection that corresponds to an array position in the full array.

The returned value is only meaningful if within_index() returns True for the same set of parameters.

Parameters
  • b (Expr) – beginning of the index

  • e (Expr) – end of the index

  • s (Expr) – strides of index

  • z (Expr) – size of the indexed dimension

  • i (Expr) – array position

Returns

position – int expression that corresponds to an array position in the selection.

Return type

Expr
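
A plain-Python sketch (assumed semantics for a positive stride, mirroring Python slicing; not taken from the reference) of how within_index and make_idx relate for a selection b:e:s over a dimension of size z:

# Sketch of within_index / make_idx semantics for a positive stride
def within_index(b, e, s, i):
    return b <= i < e and (i - b) % s == 0   # is full-array position i selected?

def make_idx(b, e, s, z, i):
    return (i - b) // s                      # position of i inside the selection

b, e, s, z = 1, 7, 2, 8                      # selects full-array positions 1, 3, 5
for i in range(z):
    if within_index(b, e, s, i):
        print(i, "->", make_idx(b, e, s, z, i))   # 1 -> 0, 3 -> 1, 5 -> 2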

tvm.topi.matmul(a, b, transp_a=False, transp_b=False)

Creates an operation that calculates a matrix multiplication (row-major notation): A(i, k) * B(k, j) if transp_a == transp_b; the usual transposed combinations otherwise.

Parameters
  • a (The matrix A) –

  • b (The matrix B) –

  • transp_a (Is A's layout transposed?) –

  • transp_b (Is B's layout transposed?) –

Returns

Return type

A Tensor whose op member is the matmul operation
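
A minimal sketch (not part of the original reference; it assumes an llvm-enabled TVM build and arbitrary small shapes) declaring, building, and checking a matmul against NumPy:

# Hedged example: declare and run a small matmul
import numpy as np
import tvm
from tvm import te, topi

A = te.placeholder((4, 8), name="A")
B = te.placeholder((8, 5), name="B")
C = topi.matmul(A, B)                      # shape (4, 5)

s = te.create_schedule(C.op)
f = tvm.build(s, [A, B, C], "llvm")
ctx = tvm.cpu()
a = np.random.uniform(size=(4, 8)).astype("float32")
b = np.random.uniform(size=(8, 5)).astype("float32")
c = tvm.nd.array(np.zeros((4, 5), dtype="float32"), ctx)
f(tvm.nd.array(a, ctx), tvm.nd.array(b, ctx), c)
np.testing.assert_allclose(c.asnumpy(), a @ b, rtol=1e-5)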

tvm.topi.matrix_set_diag(data, diagonal, k=0, align='RIGHT_LEFT')

Returns a tensor with the diagonals of input tensor replaced with the provided diagonal values.

Parameters
  • data (relay.Expr) – Input Tensor.

  • diagonal (relay.Expr) – Values to be filled in the diagonal.

  • k (int or tuple of int, optional) – Diagonal Offset(s). The diagonal or range of diagonals to set. (0 by default) Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. k can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. k[0] must not be larger than k[1].

  • align (string, optional) – Some diagonals are shorter than max_diag_len and need to be padded. align is a string specifying how superdiagonals and subdiagonals should be aligned, respectively. There are four possible alignments: “RIGHT_LEFT” (default), “LEFT_RIGHT”, “LEFT_LEFT”, and “RIGHT_RIGHT”. “RIGHT_LEFT” aligns superdiagonals to the right (left-pads the row) and subdiagonals to the left (right-pads the row). It is the packing format LAPACK uses. cuSPARSE uses “LEFT_RIGHT”, which is the opposite alignment.

Returns

result – New tensor with given diagonal values.

Return type

relay.Expr

Examples

data = [[[7, 7, 7, 7],
         [7, 7, 7, 7],
         [7, 7, 7, 7]],
        [[7, 7, 7, 7],
         [7, 7, 7, 7],
         [7, 7, 7, 7]]]

diagonal = [[1, 2, 3],
            [4, 5, 6]]

topi.matrix_set_diag(data, diagonal) =
    [[[1, 7, 7, 7],
      [7, 2, 7, 7],
      [7, 7, 3, 7]],
     [[4, 7, 7, 7],
      [7, 5, 7, 7],
      [7, 7, 6, 7]]]
tvm.topi.max(data, axis=None, keepdims=False)

Maximum of array elements over a given axis or a list of axes

Parameters
  • data (tvm.te.Tensor) – The input tvm tensor

  • axis (None or int or tuple of int) – Axis or axes along which the max operation is performed. The default, axis=None, will find the max element from all of the elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.maximum(lhs, rhs)

Take element-wise maximum of two tensors with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.meshgrid(a_tuple, indexing)

Create coordinate matrices from coordinate vectors.

Parameters
  • a_tuple (tuple of tvm.te.Tensor) – The coordinate vectors or scalars.

  • indexing (str) – Indexing mode, either “ij” or “xy”.

Returns

result – The resulting grids for each axis.

Return type

tuple of tvm.te.Tensor

tvm.topi.min(data, axis=None, keepdims=False)

Minimum of array elements over a given axis or a list of axes

Parameters
  • data (tvm.te.Tensor) – The input tvm tensor

  • axis (None or int or tuple of int) – Axis or axes along which a minimum operation is performed. The default, axis=None, will find the minimum element from all of the elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.minimum(lhs, rhs)

Take element-wise minimum of two tensors with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.mod(lhs, rhs)

Modulus with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.multiply(lhs, rhs)

Multiplication with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.ndarray_size(array, dtype='int32')

Get the number of elements of input array

Parameters
  • array (tvm.te.Tensor) – The source tensor.

  • dtype (str, optional) – The target data type.

Returns

result – The resulting tensor.

Return type

tvm.te.Tensor

tvm.topi.negative(x)

Take negation of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.not_equal(lhs, rhs)

Compute (lhs!=rhs) with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.one_hot(indices, on_value, off_value, depth, axis, dtype)

Returns a one-hot tensor where the locations represented by indices take value on_value, other locations take value off_value. Final dimension is <indices outer dimensions> x depth x <indices inner dimensions>.

Parameters
  • indices (tvm.te.Tensor) – Locations to set to on_value.

  • on_value (tvm.te.Tensor) – Value to fill at indices.

  • off_value (tvm.te.Tensor) – Value to fill at all other positions besides indices.

  • depth (int) – Depth of the one-hot dimension.

  • axis (int) – Axis to fill.

  • dtype (relay.DataType) – Data type of the output tensor.

Returns

ret – The one-hot tensor.

Return type

relay.Expr

Examples

indices = [0, 1, 2]

relay.one_hot(indices, 3) =
    [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
tvm.topi.power(lhs, rhs)

Power with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.prod(data, axis=None, keepdims=False)

Product of array elements over a given axis or a list of axes

Parameters
  • data (tvm.te.Tensor) – The input tvm tensor

  • axis (None or int or tuple of int) – Axis or axes along which a prod operation is performed. The default, axis=None, will get the prod element over all of the elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.reinterpret(x, dtype)

Reinterpret input to specified data type.

Parameters
Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.repeat(a, repeats, axis)

Repeats elements of an array.

Parameters
  • a (tvm.te.Tensor) – The tensor to be repeated.

  • repeats (int, required) – Number of repetitions for each element

  • axis (int, optional) – The axis along which to repeat values

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.reshape(a, newshape)

Reshape the array

Parameters
  • a (tvm.te.Tensor) – The tensor to be reshaped

  • newshape (tuple of ints) – The new shape

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.reverse_sequence(a, seq_lengths, seq_axis=1, batch_axis=0)

Reverse the tensor for variable length slices. Input is first sliced along batch axis and then elements are reversed along seq axis.

Parameters
  • a (tvm.te.Tensor) – The tensor to be reversed.

  • seq_lengths (tvm.te.Tensor) – A 1-D Tensor with length a.dims[batch_axis]. Must be one of the following types: int32, int64. If seq_lengths[i] > a.dims[seq_axis], it is rounded to a.dims[seq_axis]; if seq_lengths[i] < 1, it is rounded to 1.

  • seq_axis (int, optional) – The axis along which the elements will be reversed. Default is 1.

  • batch_axis (int, optional) – The axis along which the tensor will be sliced. Default is 0.

Returns

ret – The computed result of same shape and type as of input.

Return type

tvm.te.Tensor
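
A NumPy illustration (an assumption for clarity, not taken from the reference) of the slicing-and-reversing behaviour with batch_axis=0 and seq_axis=1:

# NumPy illustration of reverse_sequence semantics
import numpy as np

a = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]])
seq_lengths = np.array([2, 3])
out = a.copy()
for i, length in enumerate(seq_lengths):     # slice along batch_axis=0
    out[i, :length] = out[i, :length][::-1]  # reverse along seq_axis=1
print(out)   # [[2 1 3 4]
             #  [7 6 5 8]]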

tvm.topi.right_shift(lhs, rhs)

Right shift with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.round(x)

Round elements of x to nearest integer.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.rsqrt(x)

Take inverse square root of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.scatter(data, indices, updates, axis=0)

Update data at positions defined by indices with values in updates

Parameters
  • data (relay.Expr) – The input data to the operator.

  • indices (relay.Expr) – The index locations to update.

  • updates (relay.Expr) – The values to update.

  • axis (int) – The axis to scatter on

Returns

ret – The computed result.

Return type

relay.Expr

tvm.topi.scatter_add(data, indices, updates, axis=0)

Update data by adding values in updates at positions defined by indices

Parameters
  • data (relay.Expr) – The input data to the operator.

  • indices (relay.Expr) – The index locations to update.

  • updates (relay.Expr) – The values to update.

  • axis (int) – The axis to scatter_add on

Returns

ret – The computed result.

Return type

relay.Expr

tvm.topi.sequence_mask(data, valid_length, mask_value=0, axis=0)

Sets all elements outside the expected length of the sequence to a constant value.

This function takes an n-dimensional input array of the form [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] and returns an array of the same shape.

axis means the axis of the length dimension and can only be 0 or 1. If axis is 0, the data must have shape [MAX_LENGTH, batch_size, …]. Otherwise (axis=1), the data must have shape [batch_size, MAX_LENGTH, …].

valid_length gives the length of each sequence. valid_length should be a 1D int array with positive ints and has dimension [batch_size,].

Parameters
  • data (tvm.te.Tensor) – N-D with shape [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] depending on the value of axis.

  • valid_length (tvm.te.Tensor) – 1-D with shape [batch_size,]

  • mask_value (float, optional) – The masking value, default 0

  • axis (int, optional) – axis of the length dimension, must be 0 or 1, default 0

Returns

output – N-D with shape [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] depending on the value of axis.

Return type

tvm.te.Tensor
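
A NumPy sketch (an assumption for clarity, not taken from the reference) of the masking behaviour for axis=1, i.e. data of shape [batch_size, MAX_LENGTH]:

# NumPy sketch of sequence_mask semantics (axis=1)
import numpy as np

data = np.arange(1, 9, dtype="float32").reshape(2, 4)   # [[1 2 3 4], [5 6 7 8]]
valid_length = np.array([2, 3])
mask_value = 0.0

steps = np.arange(data.shape[1])
mask = steps[None, :] < valid_length[:, None]           # True inside the valid length
out = np.where(mask, data, mask_value)
print(out)   # [[1. 2. 0. 0.]
             #  [5. 6. 7. 0.]]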

tvm.topi.shape(array, dtype='int32')

Get the shape of input array

Parameters
  • array (tvm.te.Tensor) – The source tensor.

  • dtype (str, optional) – The target data type.

Returns

result – The resulting tensor.

Return type

tvm.te.Tensor

tvm.topi.sigmoid(x)

Take sigmoid of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.sign(x)

Returns -1, 0, 1 based on sign of x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.sin(x)

Take sin of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.sinh(x)

Take sinh of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0)

Converts a sparse representation into a dense tensor.

Example: sparse_to_dense([[0, 0], [1, 1]], [2, 2], [3, 3], 0) = [[3, 0], [0, 3]]

Parameters
  • sparse_indices (tvm.te.Tensor) – A 0-D, 1-D, or 2-D tensor of integers containing location of sparse values.

  • output_shape (A list of integers) – Shape of the dense output tensor.

  • sparse_values (tvm.te.Tensor) – A 0-D or 1-D tensor containing the sparse values for the sparse indices.

  • default_value (tvm.te.Tensor) – A 0-D tensor containing the default value for the remaining locations. Defaults to 0.

Returns

result – Dense tensor of shape output_shape. Has the same type as sparse_values.

Return type

tvm.te.Tensor

tvm.topi.split(ary, indices_or_sections, axis=0)

Split an array into multiple sub-arrays.

Parameters
Returns

ret

Return type

tuple of tvm.te.Tensor

tvm.topi.sqrt(x)

Take square root of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.squeeze(a, axis=None)

Remove single-dimensional entries from the shape of an array.

Parameters
  • a (tvm.te.Tensor) –

  • axis (None or int or tuple of ints, optional) – Selects a subset of the single-dimensional entries in the shape. If an axis is selected with shape entry greater than one, an error is raised.

Returns

squeezed

Return type

tvm.te.Tensor

tvm.topi.stack(a, axis)

Join a sequence of arrays along a new axis.

Parameters
  • a (tvm.te.Tensor) – The tensor to be stacked.

  • axis (int, optional) – The axis in the result array along which the input arrays are stacked.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.strided_set(a, v, begin, end, strides=None)

Set slice of an array.

Parameters
  • a (tvm.te.Tensor) – The tensor to be sliced.

  • v (tvm.te.Tensor) – The values to set

  • begin (tvm.te.Tensor) – The indices to begin with in the slicing.

  • end (tvm.te.Tensor) – Indices indicating end of the slice.

  • strides (tvm.te.Tensor, optional) – Specifies the stride values; they can be negative, in which case the input tensor will be reversed in that particular axis.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.strided_slice(a, begin, end, strides=None, slice_mode='end')

Slice of an array.

Parameters
  • a (tvm.te.Tensor) – The tensor to be sliced.

  • begin (list of int) – The indices to begin with in the slicing.

  • end (list of int) – Indices indicating end of the slice.

  • strides (list of int, optional) – Specifies the stride values; they can be negative, in which case the input tensor will be reversed in that particular axis.

  • slice_mode (str, optional) – The slice mode [end, size]. end - The ending indices for the slice [default]. size - The input strides will be ignored; in this mode end indicates the size of a slice starting at the location specified by begin. If end[i] is -1, all remaining elements in that dimension are included in the slice.

Returns

ret

Return type

tvm.te.Tensor
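
A minimal sketch (not part of the original reference; shapes and slice bounds are arbitrary) contrasting the two slice modes by their inferred output shapes:

# Hedged example: slice_mode "end" vs "size"
import tvm
from tvm import te, topi

A = te.placeholder((10, 10), name="A")

# "end": rows 2..7 (exclusive), every second column -> shape (5, 5)
B = topi.strided_slice(A, begin=[2, 0], end=[7, 10], strides=[1, 2], slice_mode="end")

# "size": end gives sizes, strides are ignored; -1 keeps the rest -> shape (3, 10)
C = topi.strided_slice(A, begin=[2, 0], end=[3, -1], slice_mode="size")

print(B.shape, C.shape)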

tvm.topi.subtract(lhs, rhs)

Subtraction with auto-broadcasting

Parameters
Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.te.Tensor or Expr

tvm.topi.sum(data, axis=None, keepdims=False)

Sum of array elements over a given axis or a list of axes

Parameters
  • data (tvm.te.Tensor) – The input tvm tensor

  • axis (None or int or tuple of int) – Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.take(a, indices, axis=None, mode='clip')

Take elements from an array along an axis.

Parameters
  • a (tvm.te.Tensor) – The source array.

  • indices (tvm.te.Tensor) – The indices of the values to extract.

  • axis (int, optional) – The axis over which to select values. By default, the flattened input array is used.

  • mode (str, optional) – Specifies how out-of-bound indices will behave. clip - clip to the range (default). wrap - wrap around the indices. fast - no clip or wrap around (user must make sure indices are in-bound).

Returns

ret

Return type

tvm.te.Tensor
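
A NumPy illustration (an assumption for clarity, not taken from the reference; NumPy's take has analogous clip and wrap modes, though edge cases such as negative indices may differ) of how out-of-bound indices are handled:

# NumPy illustration of the clip / wrap modes
import numpy as np

a = np.array([10, 20, 30, 40])
indices = np.array([1, 5, -1])

print(np.take(a, indices, mode="clip"))   # [20 40 10]  (5 clips to 3, -1 clips to 0)
print(np.take(a, indices, mode="wrap"))   # [20 20 40]  (indices taken modulo 4)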

tvm.topi.tan(x)

Take tan of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.tanh(x)

Take hyperbolic tanh of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.tensordot(a, b, axes)

A generalization of matrix multiplication to tensor.

Parameters
  • a (The tensor A) –

  • b (The tensor B) –

  • axes (The number of dimensions to reduce over) –

Returns

Return type

A Tensor computing the result

tvm.topi.tile(a, reps)

Repeats the whole array multiple times.

Parameters
  • a (tvm.te.Tensor) – The tensor to be tiled.

  • reps (tuple of ints, required) – The number of times for repeating the tensor

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.topk(data, k=1, axis=-1, ret_type='both', is_ascend=False, dtype='int64')

Get the top k elements in an input tensor along the given axis.

Parameters
  • data (tvm.te.Tensor) – The input tensor.

  • k (int or tvm.te.Tensor, optional) – Number of top elements to select. Return all elements if k < 1.

  • axis (int, optional) – Axis along which to sort the input tensor.

  • ret_type (str, optional) – The return type [both, values, indices]. “both”: return both top k data and indices. “values”: return top k data only. “indices”: return top k indices only.

  • is_ascend (boolean, optional) – Whether to sort in ascending or descending order.

  • dtype (string, optional) – The data type of the indices output.

Returns

out – The computed result.

Return type

tvm.te.Tensor or List[tvm.te.Tensor]

tvm.topi.transpose(a, axes=None)

Permute the dimensions of an array.

Parameters
  • a (tvm.te.Tensor) – The tensor to be transposed.

  • axes (tuple of ints, optional) – By default, reverse the dimensions.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.trunc(x)

Take truncated value of input x, element-wise.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.unravel_index(indices, shape)

Convert a flat index or array of flat indices into a tuple of coordinate arrays.

Example: unravel_index([22, 41, 37], [7, 6]) = [[3, 6, 6], [4, 5, 1]]

Parameters
  • indices (relay.Expr) – An integer array containing indices.

  • shape (relay.Expr) – The shape of the array.

Returns

result – The tuple of coordinate arrays.

Return type

relay.Expr

tvm.topi.where(condition, x, y)

Get the elements, either from x or y, depending on the condition.

Parameters
Returns

result – A Tensor selected from x or y depending on condition.

Return type

tvm.te.Tensor

tvm.topi.within_index(b, e, s, i)

Return a boolean value that indicates if i is within the given index.

Parameters
  • b (Expr) – beginning of the index

  • e (Expr) – end of the index

  • s (Expr) – strides of index

  • i (Expr) – array position

Returns

selected – bool expression that is True if the array position would be selected by the index and False otherwise

Return type

Expr

exception tvm.topi.InvalidShapeError

Invalid shape for a topi function, e.g. calling a winograd template for a non-3x3 kernel.

tvm.topi.nn

Neural network operators

Classes:

Workload(in_dtype, out_dtype, height, width, …)

Functions:

adaptive_pool(data, output_size, pool_type)

Perform pooling on height and width dimension of data.

adaptive_pool3d(data, output_size, pool_type)

Perform pooling on three dimensional data.

batch_matmul(x, y[, oshape])

Computes batch matrix multiplication of x and y when x and y are data in batch.

binarize_pack(data[, axis, name])

Binarization and bit-packing along a certain axis.

binary_dense(data, weight)

Binary matrix multiplication using xor and bit-count.

bitpack(data, bits, pack_axis, bit_axis, …)

Packs data into format necessary for bitserial computation

bitserial_conv2d_legalize(attrs, inputs, types)

Legalizes Bitserial Conv2D op.

bitserial_conv2d_nchw(data, kernel, stride, …)

Bitserial Conv2D operator.

bitserial_conv2d_nhwc(data, kernel, stride, …)

Bitserial Conv2D operator.

bitserial_dense(data, weight, data_bits, …)

The default implementation of bitserial dense in topi.

concatenate(a_tuple[, axis])

Join a sequence of arrays along an existing axis.

conv1d(data, kernel[, strides, padding, …])

1D convolution forward operator.

conv1d_ncw(data, kernel[, strides, padding, …])

1D convolution forward operator for NCW layout.

conv1d_nwc(data, kernel[, strides, padding, …])

1D convolution forward operator for NWC layout.

conv1d_transpose_ncw(data, kernel, stride, …)

Transposed 1D convolution ncw forward operator.

conv2d(input, filter, strides, padding, dilation)

Conv2D operator.

conv2d_NCHWc(data, kernel, stride, padding, …)

Conv2D operator for nChw[x]c layout.

conv2d_NCHWc_int8(data, kernel, stride, …)

Conv2D operator for nChw[x]c layout.

conv2d_alter_layout(attrs, inputs, tinfos, …)

Change Conv2D layout.

conv2d_gemm_weight_transform(kernel, …)

Weight transformation for GEMM convolution

conv2d_hwcn(Input, Filter, stride, padding, …)

Convolution operator in HWCN layout.

conv2d_infer_layout(workload, cfg)

Infer input/output shapes and layouts from a workload and cfg.

conv2d_legalize(attrs, inputs, types)

Legalizes Conv2D op.

conv2d_nchw(Input, Filter, stride, padding, …)

Convolution operator in NCHW layout.

conv2d_nhwc(Input, Filter, stride, padding, …)

Convolution operator in NHWC layout.

conv2d_transpose_legalize(attrs, inputs, types)

Legalizes Transposed 2D convolution op.

conv2d_transpose_nchw(Input, Filter, …)

Transposed 2D convolution nchw forward operator.

conv2d_transpose_nchw_preprocess(data, …)

Preprocess data and kernel to make the compute pattern of conv2d_transpose the same as conv2d

conv2d_winograd_nnpack_weight_transform(…)

Weight transformation for winograd

conv2d_winograd_weight_transform(kernel, …)

Weight transformation for winograd

conv3d_alter_layout(attrs, inputs, tinfos, …)

Change Conv3D layout.

conv3d_ncdhw(Input, Filter, stride, padding, …)

Conv3D operator in NCDHW layout.

conv3d_ndhwc(Input, Filter, stride, padding, …)

Convolution operator in NDHWC layout.

conv3d_transpose_legalize(attrs, inputs, types)

Legalizes Transposed 3D convolution op.

conv3d_transpose_ncdhw(Input, Filter, …)

Transposed 3D convolution ncdhw forward operator.

conv3d_transpose_ncdhw_preprocess(data, …)

Preprocess data and kernel to make the compute pattern of conv3d_transpose the same as conv3d

conv3d_winograd_weight_transform(kernel, …)

Weight transformation for 3D winograd

correlation_nchw(data1, data2, kernel_size, …)

Correlation operator in NCHW layout.

declaration_conv2d_transpose_impl(data, …)

Implementation of conv2d transpose

declaration_conv3d_transpose_impl(data, …)

Implementation of conv3d transpose

deformable_conv2d_nchw(data, offset, kernel, …)

Deformable conv2D operator in NCHW layout.

dense(data, weight[, bias, out_dtype])

The default implementation of dense in topi.

depth_to_space(data, block_size[, layout, mode])

Perform depth to space transformation on the data

depthwise_conv2d_NCHWc(Input, Filter, …[, …])

Depthwise convolution NCHW[x]c forward operator.

depthwise_conv2d_backward_input_nhwc(Filter, …)

Depthwise convolution nhwc backward wrt input operator.

depthwise_conv2d_backward_weight_nhwc(Input, …)

Depthwise convolution nhwc backward wrt weight operator.

depthwise_conv2d_infer_layout(workload, cfg)

Infer input/output shapes and layouts from a workload and cfg.

depthwise_conv2d_nchw(Input, Filter, stride, …)

Depthwise convolution nchw forward operator.

depthwise_conv2d_nhwc(Input, Filter, stride, …)

Depthwise convolution nhwc forward operator.

dilate(data, strides[, dilation_value, name])

Dilate data with given dilation value (0 by default).

equal_const_int(expr, value)

Returns if expr equals value.

fifo_buffer(data, buffer, axis)

FIFO buffer to enable computation reuse in CNNs with sliding window input

flatten(data)

Flattens the input array into a 2-D array by collapsing the higher dimensions.

get_const_int(expr)

Verifies expr is integer and get the constant value.

get_const_tuple(in_tuple)

Verifies input tuple is IntImm or Var, returns tuple of int or Var.

get_pad_tuple(padding, kernel)

Common code to get the pad option

get_pad_tuple1d(padding, kernel)

Common code to get the pad option

get_pad_tuple3d(padding, kernel)

Common code to get the pad option

global_pool(data, pool_type[, layout])

Perform global pooling on height and width dimension of data.

group_conv2d_nchw(Input, Filter, stride, …)

Group convolution operator in NCHW layout.

group_conv2d_nhwc(Input, Filter, stride, …)

Group convolution operator in NHWC layout.

leaky_relu(x, alpha)

Take leaky relu of input x.

log_softmax(x)

Perform log softmax activation on the data

lrn(data, size[, axis, alpha, beta, bias])

Perform the across channels local response normalisation on the input data.

mirror_pad(data, pad_before[, pad_after, …])

Pad Input with mirroring either symmetric or reflected.

namedtuple(typename, field_names, *[, …])

Returns a new subclass of tuple with named fields.

pad(data, pad_before[, pad_after, …])

Pad Input with zeros.

pool(data, kernel, stride, padding, pool_type)

Perform pooling on height and width dimension of data.

pool1d(data, kernel, stride, padding, pool_type)

Perform pooling on width dimension of data.

pool3d(data, kernel, stride, padding, pool_type)

Perform pooling on depth, height and width dimension of data.

pool_grad(grads, data, kernel, stride, …)

Gradient of pooling on height and width dimension of data.

prelu(x, slope[, axis])

PReLU.

relu(x)

Take relu of input x.

scale_shift_nchw(Input, Scale, Shift)

Batch normalization operator in inference.

scale_shift_nhwc(Input, Scale, Shift)

Batch normalization operator in inference.

simplify(expr)

Simplify the expression if it is Expr, directly return if it is int.

softmax(x[, axis])

Perform softmax activation on the data

space_to_depth(data, block_size[, layout])

Perform space to depth transformation on the data

sparse_dense(data, weight_data, …)

Computes sparse-dense matrix multiplication of data and (weight_data, weight_indices, weight_indptr).T

sparse_dense_alter_layout(_attrs, _inputs, …)

Change Sparse Dense layout.

sparse_transpose(sparse_data, …)

Transpose a square sparse matrix A, where A is an n-by-n sparse matrix in the CSR format.

strided_slice(a, begin, end[, strides, …])

Slice of an array.

unpack_NCHWc_to_nchw(packed_out, out_dtype)

Unpack conv2d_NCHWc output from layout NCHWc to NCHW

upsampling(data, scale_h, scale_w[, layout, …])

Perform upsampling on the data.

upsampling3d(data, scale_d, scale_h, scale_w)

Perform upsampling on the data.

winograd_transform_matrices(tile_size, …)

Compute the A, B, and G transform matrices for tile_size as a tvm.Expr.

class tvm.topi.nn.Workload(in_dtype, out_dtype, height, width, in_filter, out_filter, hkernel, wkernel, hpad, wpad, hstride, wstride)

Attributes:

height

Alias for field number 2

hkernel

Alias for field number 6

hpad

Alias for field number 8

hstride

Alias for field number 10

in_dtype

Alias for field number 0

in_filter

Alias for field number 4

out_dtype

Alias for field number 1

out_filter

Alias for field number 5

width

Alias for field number 3

wkernel

Alias for field number 7

wpad

Alias for field number 9

wstride

Alias for field number 11

property height

Alias for field number 2

property hkernel

Alias for field number 6

property hpad

Alias for field number 8

property hstride

Alias for field number 10

property in_dtype

Alias for field number 0

property in_filter

Alias for field number 4

property out_dtype

Alias for field number 1

property out_filter

Alias for field number 5

property width

Alias for field number 3

property wkernel

Alias for field number 7

property wpad

Alias for field number 9

property wstride

Alias for field number 11

tvm.topi.nn.adaptive_pool(data, output_size, pool_type, layout='NCHW')

Perform pooling on height and width dimension of data.

The pooling kernel and stride sizes are automatically chosen for desired output sizes. It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ mean width and height respectively. Width and height dimension cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information of the layout string convention.

Parameters
  • data (tvm.te.Tensor) – n-D with shape of layout

  • output_size (tuple of int) – output height and width.

  • pool_type (str) – Pool type, ‘max’ or ‘avg’

  • layout (string) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.

Returns

output – n-D in the same layout

Return type

tvm.te.Tensor
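
For illustration, a minimal compute-declaration sketch (the input shape is an arbitrary assumption) that reduces the spatial dimensions to 1x1 with average pooling:

from tvm import te, topi

data = te.placeholder((1, 64, 56, 56), name="data")
# Adaptive average pooling to a 1x1 spatial output in NCHW layout.
out = topi.nn.adaptive_pool(data, output_size=(1, 1), pool_type="avg", layout="NCHW")
# out is a tvm.te.Tensor of shape (1, 64, 1, 1)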

tvm.topi.nn.adaptive_pool3d(data, output_size, pool_type, layout='NCDHW')

Perform pooling on three dimensional data. See the two dimensional version above for details.

tvm.topi.nn.batch_matmul(x, y, oshape=None)

Computes batch matrix multiplication of x and y when x and y are data in batch. Supports broadcasting for batch dimension.

Parameters
  • x (tvm.te.Tensor) – 3-D with shape [batch, M, K]

  • y (tvm.te.Tensor) – 3-D with shape [batch, N, K]

  • oshape (List[Optional]) – Explicit intended output shape of the computation. Can be useful in cases with dynamic input shapes.

Returns

output – 3-D with shape [batch, M, N]

Return type

tvm.te.Tensor
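
A small sketch of the expected layouts (batch and matrix sizes are arbitrary); note that y is stored with its last two dimensions transposed:

from tvm import te, topi

x = te.placeholder((8, 32, 64), name="x")   # [batch, M, K]
y = te.placeholder((8, 16, 64), name="y")   # [batch, N, K]
out = topi.nn.batch_matmul(x, y)            # shape (8, 32, 16), i.e. [batch, M, N]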

tvm.topi.nn.binarize_pack(data, axis=None, name='PackedInput')

Binarization and bit-packing along a certain axis.

Parameters
  • data (tvm.te.Tensor) – n-D input, can be any layout.

  • axis (None or int) – The axis along which to do binarization and bit-packing, default is the last axis.

  • name (str, optional) – The name prefix operators generate.

Returns

output – n-D, the same layout as input, dtype is uint32.

Return type

tvm.te.Tensor

tvm.topi.nn.binary_dense(data, weight)

Binary matrix multiplication using xor and bit-count.

Parameters
  • data (tvm.te.Tensor) – 2-D with shape [batch, in_dim], dtype is uint32.

  • weight (tvm.te.Tensor) – 2-D with shape [out_dim, in_dim], dtype is uint32.

Returns

output – 2-D with shape [batch, out_dim], dtype is float32.

Return type

tvm.te.Tensor

tvm.topi.nn.bitpack(data, bits, pack_axis, bit_axis, pack_type, name='QuantizeInput')

Packs data into format necessary for bitserial computation

Parameters
  • pack_axis (int) – index of the axis to pack in data

  • bit_axis (int) – index of axis to place bit axis in resulting packed data

tvm.topi.nn.bitserial_conv2d_legalize(attrs, inputs, types)

Legalizes Bitserial Conv2D op.

Parameters
  • attrs (tvm.ir.Attrs) – Attributes of current convolution

  • inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized

  • types (list of types) – List of input and output types

Returns

result – The legalized expr

Return type

tvm.relay.Expr

tvm.topi.nn.bitserial_conv2d_nchw(data, kernel, stride, padding, activation_bits, weight_bits, pack_dtype='uint32', out_dtype='int16', unipolar=True)

Bitserial Conv2D operator.

Parameters
  • input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]

  • filter (tvm.te.Tensor) – 4-D with shape [num_filter, in_channel, filter_height, filter_width]

  • stride (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]

  • padding (int or a list/tuple of two or four ints) – padding size, [pad_height, pad_width] or [pad_top, pad_left, pad_down, pad_right]

  • activation_bits (int) – number of bits used for activations/input elements

  • weight_bits (int) – number of bits used for weight elements

  • out_dtype (str) – return type of convolution

  • pack_dtype (str) – bit packing type

  • unipolar (bool) – if binarization style is in unipolar 1/0 format, instead of bipolar -1/+1 format

Returns

output – 4-D with shape [batch, out_channel, out_height, out_width]

Return type

tvm.te.Tensor

tvm.topi.nn.bitserial_conv2d_nhwc(data, kernel, stride, padding, activation_bits, weight_bits, pack_dtype='uint32', out_dtype='int16', unipolar=True)

Bitserial Conv2D operator.

Parameters
  • input (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]

  • filter (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]

  • stride (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]

  • padding (int or a list/tuple of two or four ints) – padding size, [pad_height, pad_width] or [pad_top, pad_left, pad_down, pad_right]

  • activation_bits (int) – number of bits used for activations/input elements

  • weight_bits (int) – number of bits used for weight elements

  • out_dtype (str) – return type of convolution

  • pack_dtype (str) – bit packing type

  • unipolar (bool) – if binarization style is in unipolar 1/0 format, instead of bipolar -1/+1 format

Returns

output – 4-D with shape [batch, out_height, out_width, out_channel]

Return type

tvm.te.Tensor

tvm.topi.nn.bitserial_dense(data, weight, data_bits, weight_bits, pack_dtype='uint32', out_dtype='int16', unipolar=True)

The default implementation of bitserial dense in topi.

Parameters
  • data (tvm.te.Tensor) – 2-D with shape [batch, in_dim]

  • weight (tvm.te.Tensor) – 2-D with shape [out_dim, in_dim] or 3-D with shape [out_dim, weight_bits, in_dim]

Returns

output – 2-D with shape [batch, out_dim]

Return type

tvm.te.Tensor

tvm.topi.nn.concatenate(a_tuple, axis=0)

Join a sequence of arrays along an existing axis.

Parameters
  • a_tuple (tuple of tvm.te.Tensor) – The arrays to concatenate

  • axis (int, optional) – The axis along which the arrays will be joined. Default is 0.

Returns

ret

Return type

tvm.te.Tensor

tvm.topi.nn.conv1d(data, kernel, strides=1, padding='VALID', dilation=1, layout='NCW', out_dtype=None)

1D convolution forward operator.

Parameters
  • data (tvm.te.Tensor) – 3-D input shape [batch, in_channel, in_width] for layout == ‘NCW’ and [batch, in_width, in_channel] for layout == ‘NWC’

  • kernel (tvm.te.Tensor) – 3-D kernel with shape [num_filter, in_channel, filter_size] for layout == ‘NCW’ and [filter_size, in_channel, num_filter] for layout == ‘NWC’

  • strides (int or tuple) – The spatial stride along width

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • dilation (int or tuple) – Dilation rate if convolution should be dilated.

  • layout (str) – How input data is laid out, must be one of [‘NCW’, ‘NWC’]

  • out_dtype (str) – The output data type. If None then output is same type as input.

tvm.topi.nn.conv1d_ncw(data, kernel, strides=1, padding='VALID', dilation=1, out_dtype=None)

1D convolution forward operator for NCW layout.

Parameters
  • data (tvm.te.Tensor) – 3-D with shape [batch, in_channel, in_width]

  • kernel (tvm.te.Tensor) – 3-D with shape [num_filter, in_channel, filter_size]

  • strides (int or tuple) – The spatial stride along width

  • padding (int, tuple, or str) – Padding size can be an integer for equal padding, a tuple of (left, right) or a string in [‘VALID’, ‘SAME’].

  • dilation (int or tuple) – Dilation rate if convolution should be dilated.

  • out_dtype (str) – The output data type. If None then output is same type as input.
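
A minimal declaration sketch for the NCW variant (shapes and padding mode are chosen only for illustration):

from tvm import te, topi

data = te.placeholder((1, 16, 128), name="data")     # [batch, in_channel, in_width]
kernel = te.placeholder((32, 16, 3), name="kernel")  # [num_filter, in_channel, filter_size]
out = topi.nn.conv1d_ncw(data, kernel, strides=1, padding="SAME", dilation=1)
# 'SAME' padding with stride 1 keeps the width, so out has shape (1, 32, 128)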

tvm.topi.nn.conv1d_nwc(data, kernel, strides=1, padding='VALID', dilation=1, out_dtype=None)

1D convolution forward operator for NWC layout.

Parameters
  • data (tvm.te.Tensor) – 3-D with shape [batch, in_width, in_channel]

  • kernel (tvm.te.Tensor) – 3-D with shape [filter_size, in_channel, num_filter]

  • strides (int or tuple) – The spatial stride along width

  • padding (int, tuple, or str) – Padding size can be an integer for equal padding, a tuple of (left, right) or a string in [‘VALID’, ‘SAME’].

  • dilation (int or tuple) – Dilation rate if convolution should be dilated.

  • out_dtype (str) – The output data type. If None then output is same type as input.

tvm.topi.nn.conv1d_transpose_ncw(data, kernel, stride, padding, out_dtype, output_padding)

Transposed 1D convolution ncw forward operator.

Parameters
  • data (tvm.te.Tensor) – 3-D with shape [batch, in_channel, in_width]

  • kernel (tvm.te.Tensor) – 3-D with shape [in_channel, num_filter, filter_width]

  • stride (ints) – The spatial stride along width

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • out_dtype (str) – The output data type. This is used for mixed precision.

  • output_padding (ints) – Used to recover the actual output shape in case there are more than one possible shape. Must be smaller than stride.

Returns

output – 3-D with shape [batch, out_channel, out_width]

Return type

tvm.te.Tensor

tvm.topi.nn.conv2d(input, filter, strides, padding, dilation, layout='NCHW', out_dtype=None)

Conv2D operator.

Parameters
  • input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]

  • filter (tvm.te.Tensor) – 4-D with shape [num_filter, in_channel, filter_height, filter_width]

  • strides (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]

  • padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • layout (str) – layout of data

Returns

output – 4-D with shape [batch, out_channel, out_height, out_width]

Return type

tvm.te.Tensor
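
A minimal end-to-end sketch, assuming an LLVM-enabled TVM build; the naive te.create_schedule below is only for illustration and is not the optimized target-specific TOPI schedule:

import tvm
from tvm import te, topi

data = te.placeholder((1, 3, 224, 224), name="data")   # NCHW input
kernel = te.placeholder((64, 3, 7, 7), name="kernel")  # OIHW weights
conv = topi.nn.conv2d(data, kernel, strides=2, padding=3, dilation=1, layout="NCHW")
# conv has shape (1, 64, 112, 112)

s = te.create_schedule(conv.op)                        # naive, unoptimized schedule
func = tvm.build(s, [data, kernel, conv], target="llvm")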

tvm.topi.nn.conv2d_NCHWc(data, kernel, stride, padding, dilation, layout, out_layout, out_dtype='float32')

Conv2D operator for nChw[x]c layout.

Parameters
  • data (tvm.te.Tensor) – 5-D with shape [batch, in_channel_chunk, in_height, in_width, in_channel_block]

  • kernel (tvm.te.Tensor) – 6-D with shape [num_filter_chunk, in_channel_chunk, filter_height, filter_width, in_channel_block, num_filter_block]

  • stride (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]

  • padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • layout (str) – Input data layout

  • out_layout (str) – Output data layout

  • out_dtype (str) – output data type

Returns

output – 5-D with shape [batch, out_channel_chunk, out_height, out_width, out_channel_block]

Return type

tvm.te.Tensor

tvm.topi.nn.conv2d_NCHWc_int8(data, kernel, stride, padding, dilation, layout, out_layout, out_dtype='int32')

Conv2D operator for nChw[x]c layout.

Parameters
  • data (tvm.te.Tensor) – 5-D with shape [batch, in_channel_chunk, in_height, in_width, in_channel_block]

  • kernel (tvm.te.Tensor) – 7-D with shape [num_filter_chunk, in_channel_chunk, filter_height, filter_width, in_channel_block/4, num_filter_block, 4]

  • stride (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]

  • padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • layout (str) – Input data layout

  • out_layout (str) – Output data layout

  • out_dtype (str) – output data type

Returns

output – 5-D with shape [batch, out_channel_chunk, out_height, out_width, out_channel_block]

Return type

tvm.te.Tensor

tvm.topi.nn.conv2d_alter_layout(attrs, inputs, tinfos, out_type)

Change Conv2D layout.

Parameters
  • attrs (tvm.ir.Attrs) – Attributes of current convolution

  • inputs (tvm.relay.Expr) – Grouped input symbols

  • tinfos (list) – Input shape and dtype

  • out_type (type) – The output type

Note

Unlike other TOPI functions, this function operates on both graph level and operator level.

tvm.topi.nn.conv2d_gemm_weight_transform(kernel, tile_rows, tile_cols)

Weight transformation for GEMM-based convolution (ConvGemm)

Parameters
  • kernel (Tensor) – The raw kernel tensor with layout “NHWC”.

  • tile_rows (int) – Tile rows of the weight transformation for ConvGemm.

  • tile_cols (int) – Tile columns of the weight transformation for ConvGemm.

Returns

output – 2-D with shape [CI*KH*KW,CO]

Return type

tvm.te.Tensor

tvm.topi.nn.conv2d_hwcn(Input, Filter, stride, padding, dilation, out_dtype=None)

Convolution operator in HWCN layout.

Parameters
  • Input (tvm.te.Tensor) – 4-D with shape [in_height, in_width, in_channel, batch]

  • Filter (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]

  • stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]

  • padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

Returns

output – 4-D with shape [out_height, out_width, out_channel, batch]

Return type

tvm.te.Tensor

tvm.topi.nn.conv2d_infer_layout(workload, cfg)

Infer input/output shapes and layouts from a workload and cfg.

Parameters
  • workload (tuple) – conv2d workload

  • cfg (tuple) – tvm.autotvm config

Returns

Output – Input shapes and layouts, and output shapes and layouts

Return type

[tuple of tuple and str, tuple of tuple and str]

tvm.topi.nn.conv2d_legalize(attrs, inputs, types)

Legalizes Conv2D op.

Parameters
  • attrs (tvm.ir.Attrs) – Attributes of current convolution

  • inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized

  • types (list of types) – List of input and output types

Returns

result – The legalized expr

Return type

tvm.relay.Expr

tvm.topi.nn.conv2d_nchw(Input, Filter, stride, padding, dilation, out_dtype=None)

Convolution operator in NCHW layout.

Parameters
  • Input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]

  • Filter (tvm.te.Tensor) – 4-D with shape [num_filter, in_channel, filter_height, filter_width]

  • stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]

  • padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

Returns

Output – 4-D with shape [batch, out_channel, out_height, out_width]

Return type

tvm.te.Tensor

tvm.topi.nn.conv2d_nhwc(Input, Filter, stride, padding, dilation, out_dtype='float32')

Convolution operator in NHWC layout.

Parameters
  • Input (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]

  • Filter (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]

  • stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]

  • padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

Returns

output – 4-D with shape [batch, out_height, out_width, out_channel]

Return type

tvm.te.Tensor

tvm.topi.nn.conv2d_transpose_legalize(attrs, inputs, types)

Legalizes Transposed 2D convolution op.

Parameters
  • attrs (tvm.ir.Attrs) – Attributes of current Transposed 2D convolution

  • inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized

  • types (list of types) – List of input and output types

Returns

result – The legalized expr

Return type

tvm.relay.Expr

tvm.topi.nn.conv2d_transpose_nchw(Input, Filter, strides, padding, out_dtype, output_padding)

Transposed 2D convolution nchw forward operator.

Parameters
  • Input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]

  • Filter (tvm.te.Tensor) – 4-D with shape [in_channel, num_filter, filter_height, filter_width]

  • strides (tuple of two ints) – The spatial stride along height and width

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • out_dtype (str) – The output data type. This is used for mixed precision.

  • output_padding (tuple of ints) – Used to get the right output shape for gradients

Returns

Output – 4-D with shape [batch, out_channel, out_height, out_width]

Return type

tvm.te.Tensor
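
A shape sketch (all sizes assumed for illustration); note that the filter layout is IOHW rather than OIHW:

from tvm import te, topi

data = te.placeholder((1, 32, 14, 14), name="data")    # [batch, in_channel, H, W]
kernel = te.placeholder((32, 16, 4, 4), name="kernel") # [in_channel, num_filter, KH, KW]
out = topi.nn.conv2d_transpose_nchw(data, kernel, (2, 2), 1, "float32", (0, 0))
# (14 - 1) * 2 - 2 * 1 + 4 + 0 = 28, so out has shape (1, 16, 28, 28)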

tvm.topi.nn.conv2d_transpose_nchw_preprocess(data, kernel, strides, padding, out_dtype, output_padding)

Preprocess data and kernel to make the compute pattern of conv2d_transpose the same as conv2d

tvm.topi.nn.conv2d_winograd_nnpack_weight_transform(kernel, convolution_algorithm, out_dtype)

Weight transformation for winograd

Parameters
  • kernel (Tensor) – The raw kernel tensor with layout “NCHW”. Only 3x3 kernel is supported for now.

  • convolution_algorithm (int) – The convolution algorithm for Winograd NNPACK.

Returns

output – 4-D with shape [alpha, alpha, CO, CI]

Return type

tvm.te.Tensor

tvm.topi.nn.conv2d_winograd_weight_transform(kernel, tile_size)

Weight transformation for winograd

Parameters
  • kernel (Tensor) – The raw kernel tensor with layout “NCHW”.

  • tile_size (int) – Tile size of winograd transform. e.g. 2 for F(2x2, 3x3) and 4 for F(4x4, 3x3)

Returns

output – 4-D with shape [alpha, alpha, CO, CI]

Return type

tvm.te.Tensor
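
A shape sketch (a 3x3 OIHW kernel is assumed, as required by the transform):

from tvm import te, topi

kernel = te.placeholder((64, 64, 3, 3), name="kernel")  # [CO, CI, 3, 3]
out = topi.nn.conv2d_winograd_weight_transform(kernel, tile_size=4)
# alpha = tile_size + 3 - 1 = 6, so out has shape (6, 6, 64, 64)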

tvm.topi.nn.conv3d_alter_layout(attrs, inputs, tinfos, out_type)

Change Conv3D layout.

Parameters
  • attrs (tvm.ir.Attrs) – Attributes of current convolution

  • inputs (tvm.relay.Expr) – Grouped input symbols

  • tinfos (list) – Input shape and dtype

  • out_type (type) – The output type

Note

Unlike other TOPI functions, this function operates on both graph level and operator level.

tvm.topi.nn.conv3d_ncdhw(Input, Filter, stride, padding, dilation, out_dtype=None)

Conv3D operator in NCDHW layout.

Parameters
  • Input (tvm.te.Tensor) – 5-D with shape [batch, in_channel, in_depth, in_height, in_width]

  • Filter (tvm.te.Tensor) – 5-D with shape [num_filter, in_channel, filter_depth, filter_height, filter_width]

  • stride (int or a list/tuple of three ints) – Stride size, or [strid_depth, stride_height, stride_width]

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • dilation (int or a list/tuple of three ints) – dilation size, or [dilation_depth, dilation_height, dilation_width]

Returns

Output – 5-D with shape [batch, out_channel, out_depth, out_height, out_width]

Return type

tvm.te.Tensor

tvm.topi.nn.conv3d_ndhwc(Input, Filter, stride, padding, dilation, out_dtype='float32')

Convolution operator in NDHWC layout.

Parameters
  • Input (tvm.te.Tensor) – 5-D with shape [batch, in_depth, in_height, in_width, in_channel]

  • Filter (tvm.te.Tensor) – 5-D with shape [filter_depth, filter_height, filter_width, in_channel, num_filter]

  • stride (int or a list/tuple of three ints) – Stride size, or [stride_depth, stride_height, stride_width]

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • dilation (int or a list/tuple of three ints) – dilation size, or [dilation_depth, dilation_height, dilation_width]

Returns

Output – 5-D with shape [batch, out_depth, out_height, out_width, out_channel]

Return type

tvm.te.Tensor

tvm.topi.nn.conv3d_transpose_legalize(attrs, inputs, types)

Legalizes Transposed 3D convolution op.

Parameters
  • attrs (tvm.ir.Attrs) – Attributes of current Transposed 3D convolution

  • inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized

  • types (list of types) – List of input and output types

Returns

result – The legalized expr

Return type

tvm.relay.Expr

tvm.topi.nn.conv3d_transpose_ncdhw(Input, Filter, strides, padding, out_dtype, output_padding)

Transposed 3D convolution ncdhw forward operator.

Parameters
  • Input (tvm.te.Tensor) – 5-D with shape [batch, in_channel, in_depth, in_height, in_width]

  • Filter (tvm.te.Tensor) – 5-D with shape [in_channel, num_filter, filter_depth, filter_height, filter_width]

  • strides (int or a list/tuple of three ints) – The spatial stride along depth,height and width

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • out_dtype (str) – The output data type. This is used for mixed precision.

  • output_padding (tuple of ints) – Used to get the right output shape for gradients

Returns

Output – 5-D with shape [batch, out_channel, out_depth, out_height, out_width]

Return type

tvm.te.Tensor

tvm.topi.nn.conv3d_transpose_ncdhw_preprocess(data, kernel, strides, padding, out_dtype, output_padding)

Preprocess data and kernel to make the compute pattern of conv3d_transpose the same as conv3d

tvm.topi.nn.conv3d_winograd_weight_transform(kernel, tile_size)

Weight transformation for 3D winograd

Parameters
  • kernel (Tensor) – The raw kernel tensor with layout “NCDHW”.

  • tile_size (int) – Tile size of winograd transform. e.g. 2 for F(2x2, 3x3) and 4 for F(4x4, 3x3)

Returns

output – 5-D with shape [alpha, alpha, alpha, CO, CI]

Return type

tvm.te.Tensor

tvm.topi.nn.correlation_nchw(data1, data2, kernel_size, max_displacement, stride1, stride2, padding, is_multiply)

Correlation operator in NCHW layout.

Parameters
  • data1 (tvm.te.Tensor) – 4-D with shape [batch, channel, height, width]

  • data2 (tvm.te.Tensor) – 4-D with shape [batch, channel, height, width]

  • kernel_size (int) – Kernel size for correlation, must be an odd number

  • max_displacement (int) – Max displacement of Correlation

  • stride1 (int) – Stride for data1

  • stride2 (int) – Stride for data2 within the neighborhood centered around data1

  • padding (int or a list/tuple of 2 or 4 ints) – Padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

  • is_multiply (bool) – operation type is either multiplication or subtraction

Returns

Output – 4-D with shape [batch, out_channel, out_height, out_width]

Return type

tvm.te.Tensor

tvm.topi.nn.declaration_conv2d_transpose_impl(data, kernel, strides, padding, out_dtype, output_padding)

Implementation of conv2d transpose

tvm.topi.nn.declaration_conv3d_transpose_impl(data, kernel, strides, padding, out_dtype, output_padding)

Implementation of conv3d transpose

tvm.topi.nn.deformable_conv2d_nchw(data, offset, kernel, strides, padding, dilation, deformable_groups, groups, out_dtype)

Deformable conv2D operator in NCHW layout.

The deformable convolution operation is described in https://arxiv.org/abs/1703.06211

Parameters
  • data (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]

  • offset (tvm.te.Tensor) – 4-D with shape [batch, deformable_groups * filter_height * filter_width * 2, out_height, out_width].

  • kernel (tvm.te.Tensor) – 4-D with shape [num_filter, in_channel, filter_height, filter_width]

  • strides (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]

  • padding (int or a list/tuple of two ints) – padding size, or [pad_height, pad_width]

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • deformable_groups (int) – number of deformable groups

  • groups (int) – number of groups

Returns

output – 4-D with shape [batch, out_channel, out_height, out_width]

Return type

tvm.te.Tensor

tvm.topi.nn.dense(data, weight, bias=None, out_dtype=None)

The default implementation of dense in topi.

Parameters
  • data (tvm.te.Tensor) – 2-D with shape [batch, in_dim]

  • weight (tvm.te.Tensor) – 2-D with shape [out_dim, in_dim]

  • bias (tvm.te.Tensor, optional) – 1-D with shape [out_dim]

  • out_dtype (str) – The output type. This is used for mixed precision.

Returns

output – 2-D with shape [batch, out_dim]

Return type

tvm.te.Tensor
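
A minimal declaration sketch (sizes are arbitrary):

from tvm import te, topi

data = te.placeholder((16, 512), name="data")       # [batch, in_dim]
weight = te.placeholder((128, 512), name="weight")  # [out_dim, in_dim]
bias = te.placeholder((128,), name="bias")
out = topi.nn.dense(data, weight, bias)             # shape (16, 128)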

tvm.topi.nn.depth_to_space(data, block_size, layout='NCHW', mode='DCR')

Perform depth to space transformation on the data

Parameters
  • data (tvm.te.Tensor) – 4-D tensor in either NCHW or NHWC layout.

  • block_size (int) – Size of blocks to compose from channel dimension.

  • layout (string) – Either NCHW or NHWC, indicating data layout.

  • mode (string) – Either DCR or CDR, indicates how channels should be accessed. In DCR, channels are interwoven in the Tensorflow style while in CDR channels are accessed sequentially as in Pytorch.

Returns

output – Output of shape [N, C / block_size**2, H * block_size, W * block_size]

Return type

tvm.te.Tensor
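
For example (shapes assumed), a block size of 2 moves a factor of 4 from the channel dimension into the spatial dimensions:

from tvm import te, topi

data = te.placeholder((1, 64, 14, 14), name="data")
out = topi.nn.depth_to_space(data, block_size=2, layout="NCHW", mode="DCR")
# out has shape (1, 16, 28, 28)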

tvm.topi.nn.depthwise_conv2d_NCHWc(Input, Filter, stride, padding, dilation, layout, out_layout, out_dtype=None)

Depthwise convolution NCHW[x]c forward operator.

Parameters
  • Input (tvm.te.Tensor) – 5-D with shape [batch, in_channel_chunk, in_height, in_width, in_channel_block]

  • Filter (tvm.te.Tensor) – 6-D with shape [out_channel_chunk, 1, filter_height, filter_width, 1, out_channel_block] In NCHWc depthwise convolution, we group kernel’s in_channel and channel_multiplier together then do the tiling.

  • stride (tuple of two ints) – The spatial stride along height and width

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • layout (str) – Input data layout

  • out_layout (str) – Output data layout

  • out_dtype (str, optional) – Output data type

Returns

Output – 5-D with shape [batch, out_channel_chunk, out_height, out_width, out_channel_block]

Return type

tvm.te.Tensor

tvm.topi.nn.depthwise_conv2d_backward_input_nhwc(Filter, Out_grad, oshape, ishape, stride, padding)

Depthwise convolution nhwc backward wrt input operator.

Parameters
  • Filter (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, channel_multiplier]

  • Out_grad (tvm.te.Tensor) – 4-D with shape [batch, out_height, out_width, out_channel]

  • stride (tuple of two ints) – The spatial stride along height and width

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

Returns

Output – 4-D with shape [batch, in_height, in_width, in_channel]

Return type

tvm.te.Tensor

tvm.topi.nn.depthwise_conv2d_backward_weight_nhwc(Input, Out_grad, oshape, fshape, stride, padding)

Depthwise convolution nhwc backward wrt weight operator.

Parameters
  • Input (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]

  • Out_grad (tvm.te.Tensor) – 4-D with shape [batch, out_height, out_width, out_channel]

  • stride (tuple of two ints) – The spatial stride along height and width

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

Returns

Output – 4-D with shape [filter_height, filter_width, in_channel, channel_multiplier]

Return type

tvm.te.Tensor

tvm.topi.nn.depthwise_conv2d_infer_layout(workload, cfg)

Infer input/output shapes and layouts from a workload and cfg.

Parameters
  • workload (tuple) – conv2d workload

  • cfg (tuple) – tvm.autotvm config

Returns

Output – Input shapes and layouts, and output shapes and layouts

Return type

[tuple of tuple and str, tuple of tuple and str]

tvm.topi.nn.depthwise_conv2d_nchw(Input, Filter, stride, padding, dilation, out_dtype=None)

Depthwise convolution nchw forward operator.

Parameters
  • Input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]

  • Filter (tvm.te.Tensor) – 4-D with shape [in_channel, channel_multiplier, filter_height, filter_width]

  • stride (tuple of two ints) – The spatial stride along height and width

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • out_dtype (str, optional) – Output data type

Returns

Output – 4-D with shape [batch, out_channel, out_height, out_width]

Return type

tvm.te.Tensor
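
A minimal declaration sketch (shapes assumed); out_channel equals in_channel * channel_multiplier:

from tvm import te, topi

data = te.placeholder((1, 32, 56, 56), name="data")
kernel = te.placeholder((32, 1, 3, 3), name="kernel")  # [in_channel, multiplier, KH, KW]
out = topi.nn.depthwise_conv2d_nchw(data, kernel, (1, 1), 1, 1)
# out has shape (1, 32, 56, 56)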

tvm.topi.nn.depthwise_conv2d_nhwc(Input, Filter, stride, padding, dilation, out_dtype=None)

Depthwise convolution nhwc forward operator.

Parameters
  • Input (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]

  • Filter (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, channel_multiplier]

  • stride (tuple of two ints) – The spatial stride along height and width

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • out_dtype (str, optional) – Output data type

Returns

Output – 4-D with shape [batch, out_height, out_width, out_channel]

Return type

tvm.te.Tensor

tvm.topi.nn.dilate(data, strides, dilation_value=0.0, name='DilatedInput')

Dilate data with given dilation value (0 by default).

Parameters
  • data (tvm.te.Tensor) – n-D, can be any layout.

  • strides (list / tuple of n ints) – Dilation stride on each dimension, 1 means no dilation.

  • dilation_value (int/float, optional) – Value used to dilate the input.

  • name (str, optional) – The name prefix for generated operators.

Returns

Output – n-D, the same layout as data.

Return type

tvm.te.Tensor
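
For example (shape and strides assumed), dilating the two spatial axes by 2 inserts one zero between neighbouring elements:

from tvm import te, topi

data = te.placeholder((1, 8, 7, 7), name="data")
out = topi.nn.dilate(data, strides=(1, 1, 2, 2))
# each dilated axis grows to (n - 1) * stride + 1, so out has shape (1, 8, 13, 13)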

tvm.topi.nn.equal_const_int(expr, value)

Returns whether expr equals value.

Parameters

expr (tvm.Expr) – The input expression.

Returns

equal – Whether they are equal.

Return type

bool

tvm.topi.nn.fifo_buffer(data, buffer, axis)

FIFO buffer to enable computation reuse in CNNs with sliding window input

Compute equivalent of

concat(buffer, data, axis=axis)
.slice_axis(axis=axis,
            begin=data.shape[axis],
            end=data.shape[axis]+buffer.shape[axis])

Useful for

  • Encoding explicit re-use of computation in convolution ops operating on a sliding window input

  • Implementing a FIFO queue to cache intermediate results, e.g. as in Fast WaveNet.

Parameters
  • data (tvm.te.Tensor) – The input data

  • buffer (tvm.te.Tensor) – Previous value of the FIFO buffer

  • axis (int) – Specify which axis should be used for buffering

Returns

result – Updated value for the buffer

Return type

tvm.te.Tensor
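
A minimal sketch for a 1-D (NCW-style) sliding window, with all sizes assumed:

from tvm import te, topi

buffer = te.placeholder((1, 16, 32), name="buffer")  # cached window of 32 time steps
data = te.placeholder((1, 16, 1), name="data")       # one new time step
out = topi.nn.fifo_buffer(data, buffer, axis=2)
# out has the buffer's shape (1, 16, 32): the oldest slice along axis 2 is
# dropped and the new data is appended at the end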

tvm.topi.nn.flatten(data)

Flattens the input array into a 2-D array by collapsing the higher dimensions.

Parameters

data (tvm.te.Tensor) – Input array.

Returns

output – 2-D array with collapsed higher dimensions.

Return type

tvm.te.Tensor
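
For example (shape assumed):

from tvm import te, topi

data = te.placeholder((4, 32, 7, 7), name="data")
out = topi.nn.flatten(data)   # shape (4, 32 * 7 * 7) = (4, 1568)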

tvm.topi.nn.get_const_int(expr)

Verifies expr is integer and get the constant value.

Parameters

expr (tvm.Expr or int) – The input expression.

Returns

out_value – The output.

Return type

int

tvm.topi.nn.get_const_tuple(in_tuple)

Verifies input tuple is IntImm or Var, returns tuple of int or Var.

Parameters

in_tuple (tuple of Expr) – The input.

Returns

out_tuple – The output.

Return type

tuple of int

tvm.topi.nn.get_pad_tuple(padding, kernel)

Common code to get the pad option

Parameters
  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • kernel (tuple of int) – Conv kernel size

Returns

  • pad_top (int) – Padding size on top

  • pad_left (int) – Padding size on left

  • pad_down (int) – Padding size on down.

  • pad_right (int) – Padding size on right.
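
For example, for a 3x3 kernel:

from tvm import topi

pads = topi.nn.get_pad_tuple("SAME", (3, 3))
# pads == (1, 1, 1, 1), i.e. (pad_top, pad_left, pad_down, pad_right);
# 'VALID' would give (0, 0, 0, 0)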

tvm.topi.nn.get_pad_tuple1d(padding, kernel)

Common code to get the pad option

Parameters
  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • kernel (tuple of int) – Conv kernel size

Returns

  • pad_left (int) – Padding size on left

  • pad_right (int) – Padding size on right.

tvm.topi.nn.get_pad_tuple3d(padding, kernel)

Common code to get the pad option

Parameters
  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • kernel (tuple of int) – Conv kernel size

Returns

  • pad_front (int) – Padding size on front.

  • pad_top (int) – Padding size on top

  • pad_left (int) – Padding size on left

  • pad_back (int) – Padding size on back.

  • pad_down (int) – Padding size on down.

  • pad_right (int) – Padding size on right.

tvm.topi.nn.global_pool(data, pool_type, layout='NCHW')
Perform global pooling on height and width dimension of data.

It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ means width and height respectively. Width and height dimension cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information of the layout string convention.

Parameters
  • data (tvm.te.Tensor) – n-D with shape of layout

  • pool_type (str) – Pool type, ‘max’ or ‘avg’

  • layout (str) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.

Returns

output – n-D in same layout with height and width dimension size of 1. e.g., for NCHW, the output shape will be [batch, channel, 1, 1]

Return type

tvm.te.Tensor
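
For example (input shape assumed):

from tvm import te, topi

data = te.placeholder((1, 128, 7, 7), name="data")
out = topi.nn.global_pool(data, pool_type="avg", layout="NCHW")
# out has shape (1, 128, 1, 1)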

tvm.topi.nn.group_conv2d_nchw(Input, Filter, stride, padding, dilation, groups, out_dtype=None)

Group convolution operator in NCHW layout.

Parameters
  • Input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]

  • Filter (tvm.te.Tensor) – 4-D with shape [num_filter, in_channel // groups, filter_height, filter_width]

  • stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]

  • padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • groups (int) – number of groups

  • out_dtype (str) – The output type. This is used for mixed precision.

Returns

Output – 4-D with shape [batch, out_channel, out_height, out_width]

Return type

tvm.te.Tensor

tvm.topi.nn.group_conv2d_nhwc(Input, Filter, stride, padding, dilation, groups, out_dtype=None)

Group convolution operator in NHWC layout.

Parameters
  • Input (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]

  • Filter (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel // groups, num_filter]

  • stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]

  • padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • groups (int) – number of groups

  • out_dtype (str) – The output type. This is used for mixed precision.

Returns

Output – 4-D with shape [batch, out_height, out_width, out_channel]

Return type

tvm.te.Tensor

tvm.topi.nn.leaky_relu(x, alpha)

Take leaky relu of input x.

Parameters
  • x (tvm.te.Tensor) – Input argument.

  • alpha (float) – The slope for the small gradient when x < 0

Returns

y – The result.

Return type

tvm.te.Tensor

tvm.topi.nn.log_softmax(x)

Perform log softmax activation on the data

Parameters

data (tvm.te.Tensor) – 2-D input data

Returns

output – 2-D output with same shape

Return type

tvm.te.Tensor

tvm.topi.nn.lrn(data, size, axis=1, alpha=0.0001, beta=0.75, bias=2)

Perform the across channels local response normalisation on the input data.

\[\text{sum\_sqr\_up}^{i}_{x, y} = \left(\text{bias} + \frac{\alpha}{\text{size}} \sum_{j=\max(0,\, i-\text{size}/2)}^{\min(N-1,\, i+\text{size}/2)} \left(\text{data}^{j}_{x, y}\right)^{2}\right)^{\beta}\]

\[\text{output}^{i}_{x, y} = \frac{\text{data}^{i}_{x, y}}{\text{sum\_sqr\_up}^{i}_{x, y}}\]

where \(N\) is the number of input channels.

Parameters
  • data (tvm.te.Tensor) – 4-D with shape [batch, channel, height, width]

  • size (int) – normalisation window size

  • axis (int) – input data layout channel axis default value is 1 for NCHW format

  • bias (float) – offset to avoid dividing by 0

  • alpha (float) – scaling coefficient; alpha/size scales the squared sum in the formula above

  • beta (float) – exponent

Returns

output – 4-D output with same shape

Return type

tvm.te.Tensor

tvm.topi.nn.mirror_pad(data, pad_before, pad_after=None, mode='SYMMETRIC', name='MirrorPadInput')

Pad Input with mirroring either symmetric or reflected.

Parameters
  • data (tvm.te.Tensor) – n-D input, can be any layout.

  • pad_before (list / tuple of n ints) – Pad width on each dimension, added before the axis begins.

  • pad_after (list / tuple of n ints, optional) – Pad width on each dimension, added after the axis ends.

  • mode (str, optional) – Type of mirror padding to apply. Must be SYMMETRIC or REFLECT

  • name (str, optional) – The name prefix for generated operators.

Returns

Output – n-D, the same layout as Input.

Return type

tvm.te.Tensor

tvm.topi.nn.namedtuple(typename, field_names, *, verbose=False, rename=False, module=None)

Returns a new subclass of tuple with named fields.

>>> Point = namedtuple('Point', ['x', 'y'])
>>> Point.__doc__                   # docstring for the new class
'Point(x, y)'
>>> p = Point(11, y=22)             # instantiate with positional args or keywords
>>> p[0] + p[1]                     # indexable like a plain tuple
33
>>> x, y = p                        # unpack like a regular tuple
>>> x, y
(11, 22)
>>> p.x + p.y                       # fields also accessible by name
33
>>> d = p._asdict()                 # convert to a dictionary
>>> d['x']
11
>>> Point(**d)                      # convert from a dictionary
Point(x=11, y=22)
>>> p._replace(x=100)               # _replace() is like str.replace() but targets named fields
Point(x=100, y=22)

tvm.topi.nn.pad(data, pad_before, pad_after=None, pad_value=0.0, name='PadInput')

Pad Input with zeros.

Parameters
  • data (tvm.te.Tensor) – n-D input, can be any layout.

  • pad_before (list / tuple of n ints) – Pad width on each dimension, added before the axis begins.

  • pad_after (list / tuple of n ints, optional) – Pad width on each dimension, added after the axis ends.

  • pad_value (float, optional) – The value to be padded.

  • name (str, optional) – The name prefix for generated operators.

Returns

Output – n-D, the same layout as Input.

Return type

tvm.te.Tensor
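
For example (shape assumed), padding only the two spatial axes of an NCHW tensor by one on each side:

from tvm import te, topi

data = te.placeholder((1, 3, 32, 32), name="data")
out = topi.nn.pad(data, pad_before=(0, 0, 1, 1), pad_after=(0, 0, 1, 1), pad_value=0.0)
# out has shape (1, 3, 34, 34)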

tvm.topi.nn.pool(data, kernel, stride, padding, pool_type, ceil_mode=False, layout='NCHW', count_include_pad=True)
Perform pooling on height and width dimension of data.

It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ means width and height respectively. Width and height dimension cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information of the layout string convention.

Parameters
  • data (tvm.te.Tensor) – n-D with shape of layout

  • kernel (list/tuple of two ints) – Kernel size, [kernel_height, kernel_width]

  • stride (list/tuple of two ints) – Stride size, [stride_height, stride_width]

  • padding (list/tuple of four ints) – Pad size, [pad_top, pad_left, pad_bottom, pad_right]]

  • pool_type (str) – Pool type, ‘max’ or ‘avg’

  • ceil_mode (bool) – Whether to use ceil when calculating output size.

  • layout (string) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.

  • count_include_pad (bool) – Whether include padding in the calculation when pool_type is ‘avg’

Returns

output – n-D in the same layout

Return type

tvm.te.Tensor
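
A minimal 2x2 max-pooling sketch in NCHW layout (the input shape is assumed):

from tvm import te, topi

data = te.placeholder((1, 64, 32, 32), name="data")
out = topi.nn.pool(data, kernel=(2, 2), stride=(2, 2), padding=(0, 0, 0, 0),
                   pool_type="max", layout="NCHW")
# out has shape (1, 64, 16, 16)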

tvm.topi.nn.pool1d(data, kernel, stride, padding, pool_type, ceil_mode=False, layout='NCW', count_include_pad=True)
Perform pooling on width dimension of data.

Width axis is determined according to the layout string, in which ‘W’ means width. Width dimension cannot be split. For example, NCW, NCW16c, etc. are valid for pool, while NCW16w is not. See parameter layout for more information of the layout string convention.

Parameters
  • data (tvm.te.Tensor) – n-D with shape of layout

  • kernel (list/tuple of one int or int) – Kernel size, [kernel_width]

  • stride (list/tuple of one int or int) – Stride size, [stride_width]

  • padding (list/tuple of two ints) – Pad size, [pad_left, pad_right]

  • pool_type (str) – Pool type, ‘max’ or ‘avg’

  • ceil_mode (bool) – Whether to use ceil when calculating output size.

  • layout (string) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCW16c can describe a 4-D tensor of [batch_size, channel, width, channel_block], in which channel_block=16 is a split of dimension channel.

  • count_include_pad (bool) – Whether include padding in the calculation when pool_type is ‘avg’

Returns

output – n-D in the same layout

Return type

tvm.te.Tensor

tvm.topi.nn.pool3d(data, kernel, stride, padding, pool_type, ceil_mode=False, layout='NCDHW', count_include_pad=True)
Perform pooling on depth, height and width dimension of data.

It decides the depth, height and width dimension according to the layout string, in which ‘D’, ‘W’ and ‘H’ means depth, width and height respectively. Depth, width and height dimension cannot be split. For example, NCDHW, NCDHW16c, etc. are valid for pool, while NCDHW16d, NCDHW16w, NCDHW16h are not. See parameter layout for more information of the layout string convention.

Parameters
  • data (tvm.te.Tensor) – n-D with shape of layout

  • kernel (list/tuple of three ints) – Kernel size, [kernel_depth, kernel_height, kernel_width]

  • stride (list/tuple of three ints) – Stride size, [stride_depth, stride_height, stride_width]

  • padding (list/tuple of six ints) – Pad size, [pad_front, pad_top, pad_left, pad_back, pad_bottom, pad_right]

  • pool_type (str) – Pool type, ‘max’ or ‘avg’

  • ceil_mode (bool) – Whether to use ceil when calculating output size.

  • layout (string) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCDHW16c can describe a 6-D tensor of [batch_size, channel, depth, height, width, channel_block], in which channel_block=16 is a split of dimension channel.

  • count_include_pad (bool) – Whether include padding in the calculation when pool_type is ‘avg’

Returns

output – n-D in the same layout

Return type

tvm.te.Tensor

tvm.topi.nn.pool_grad(grads, data, kernel, stride, padding, pool_type, ceil_mode=False, layout='NCHW', count_include_pad=True)
Gradient of pooling on height and width dimension of data.

It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ means width and height respectively. Width and height dimension cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information of the layout string convention.

Parameters
  • grads (tvm.te.Tensor) – n-D with shape of layout

  • data (tvm.te.Tensor) – n-D with shape of layout

  • kernel (list/tuple of two ints) – Kernel size, [kernel_height, kernel_width]

  • stride (list/tuple of two ints) – Stride size, [stride_height, stride_width]

  • padding (list/tuple of four ints) – Pad size, [pad_top, pad_left, pad_bottom, pad_right]]

  • pool_type (str) – Pool type, ‘max’ or ‘avg’

  • ceil_mode (bool) – Whether to use ceil when calculating output size.

  • layout (string) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.

  • count_include_pad (bool) – Whether include padding in the calculation when pool_type is ‘avg’

Returns

output – n-D in the same layout

Return type

tvm.te.Tensor

tvm.topi.nn.prelu(x, slope, axis=1)

PReLU. It accepts two arguments: an input x and a weight array W and computes the output as \(PReLU(x) = x > 0 ? x : W * x\), where \(*\) is an elementwise multiplication for each sample in the batch.

Parameters
  • x (tvm.te.Tensor) – Input argument.

  • slope (tvm.te.Tensor) – Channelised slope tensor for prelu

  • axis (int) – The axis where the channel data needs to be applied

Returns

y – The result.

Return type

tvm.te.Tensor

Links: http://arxiv.org/pdf/1502.01852v1.pdf

tvm.topi.nn.relu(x)

Take relu of input x.

Parameters

x (tvm.te.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.te.Tensor
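
TOPI activations compose directly with other compute declarations; a small sketch (shapes assumed) chaining dense, relu and softmax:

from tvm import te, topi

data = te.placeholder((16, 512), name="data")
weight = te.placeholder((10, 512), name="weight")
hidden = topi.nn.relu(topi.nn.dense(data, weight))
probs = topi.nn.softmax(hidden)   # shape (16, 10)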

tvm.topi.nn.scale_shift_nchw(Input, Scale, Shift)

Batch normalization operator in inference.

Parameters
Returns

Output – Output tensor, layout is NCHW

Return type

tvm.te.Tensor

tvm.topi.nn.scale_shift_nhwc(Input, Scale, Shift)

Batch normalization operator in inference.

Parameters
Returns

Output – Output tensor, layout is NHWC

Return type

tvm.te.Tensor

tvm.topi.nn.simplify(expr)

Simplify the expression if it is Expr, directly return if it is int.

Parameters

expr (Expr or int) – The input.

Returns

out – The simplified output

Return type

Expr or int

tvm.topi.nn.softmax(x, axis=- 1)

Perform softmax activation on the data

Parameters
Returns

output – output shape is the same as input

Return type

tvm.te.Tensor

tvm.topi.nn.space_to_depth(data, block_size, layout='NCHW')

Perform space to depth transformation on the data

Parameters
  • data (tvm.te.Tensor) – 4-D tensor in either NCHW or NHWC layout.

  • block_size (int) – Size of blocks to decompose into channel dimension.

  • layout (string) – Either NCHW or NHWC, indicating data layout.

Returns

output – Output of shape [N, C * block_size**2, H / block_size, W / block_size]

Return type

tvm.te.Tensor

tvm.topi.nn.sparse_dense(data, weight_data, weight_indices, weight_indptr)

Computes sparse-dense matrix multiplication of data and (weight_data, weight_indices, weight_indptr).T

Parameters
  • data (tvm.te.Tensor) – 2-D with shape [M, K], float32

  • weight_data (tvm.te.Tensor) – 1-D with shape [nnz] (CSR) or 3-D with shape [num_blocks, bs_r, bs_c] (BSR)

  • weight_indices (tvm.te.Tensor) – 1-D with shape [nnz] (CSR) or 1-D with shape [num_blocks] (BSR)

  • weight_indptr (tvm.te.Tensor) – 1-D with shape [N + 1] (CSR) or 1-D with shape [(N + 1) // bs_r] (BSR)

Returns

output – 2-D with shape [M, N]

Return type

tvm.te.Tensor
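
A CSR-variant declaration sketch; M, K, N and the number of stored non-zeros are assumptions chosen only to make the shapes concrete:

from tvm import te, topi

M, K, N, nnz = 64, 256, 128, 1024
data = te.placeholder((M, K), name="data")
w_data = te.placeholder((nnz,), name="w_data")                       # CSR values
w_indices = te.placeholder((nnz,), name="w_indices", dtype="int32")  # CSR column indices
w_indptr = te.placeholder((N + 1,), name="w_indptr", dtype="int32")  # CSR row pointers
out = topi.nn.sparse_dense(data, w_data, w_indices, w_indptr)        # shape (M, N)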

tvm.topi.nn.sparse_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type)

Change Sparse Dense layout.

This is used for modifying the inputs weights so they are more amenable for the target.

Parameters
  • attrs (tvm.ir.Attrs) – Attributes of current convolution

  • inputs (tvm.relay.Expr) – Grouped input symbols

  • tinfos (list) – Input shape and dtype

  • out_type (type) – The output type

Note

Unlike other TOPI functions, this function operates on both graph level and operator level.

tvm.topi.nn.sparse_transpose(sparse_data, sparse_indices, sparse_indptr)

Transpose a square sparse matrix A, an n-by-n sparse matrix in the CSR format. Currently only square matrices are supported.

Parameters
  • sparse_data (tvm.te.Tensor) – 1-D with shape [nonzeros], dtype of ‘float32’

  • sparse_indices (tvm.te.Tensor) – 1-D with shape [nonzeros], dtype of ‘int32’

  • sparse_indptr (tvm.te.Tensor) – 1-D with shape [n+1], dtype of ‘int32’

Returns

  • out_data (tvm.te.Tensor) – 1-D with shape [nonzeros], dtype of ‘float32’

  • out_indices (tvm.te.Tensor) – 1-D with shape [nonzeros], dtype of ‘int32’

  • out_indptr (tvm.te.Tensor) – 1-D with shape [n+1], dtype of ‘int32’

tvm.topi.nn.strided_slice(a, begin, end, strides=None, slice_mode='end')

Slice of an array.

Parameters
  • a (tvm.te.Tensor) – The tensor to be sliced.

  • begin (list of int) – The indices to begin with in the slicing.

  • end (list of int) – Indices indicating the end of the slice.

  • strides (list of int, optional) – Specifies the stride values, it can be negative in that case, the input tensor will be reversed in that particular axis.

  • slice_mode (str, optional) – The slice mode [end, size]. end: the ending indices for the slice [default]. size: the input strides will be ignored; in this mode end indicates the size of a slice starting at the location specified by begin. If end[i] is -1, all remaining elements in that dimension are included in the slice.

Returns

ret

Return type

tvm.te.Tensor
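
For example (shape assumed), taking every second element of the middle axis:

from tvm import te, topi

a = te.placeholder((4, 16, 16), name="a")
out = topi.nn.strided_slice(a, begin=[0, 0, 0], end=[4, 8, 16], strides=[1, 2, 1])
# out has shape (4, 4, 16)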

tvm.topi.nn.unpack_NCHWc_to_nchw(packed_out, out_dtype)

Unpack conv2d_NCHWc output from layout NCHWc to NCHW

Parameters
  • packed_out (tvm.te.Tensor) – The output tensor of conv2d_NCHWc.

  • out_dtype (str) – The output dtype.

Returns

unpacked_out – The unpacked output tensor in NCHW layout.

Return type

tvm.te.Tensor

tvm.topi.nn.upsampling(data, scale_h, scale_w, layout='NCHW', method='nearest_neighbor', align_corners=False, output_shape=None)
Perform upsampling on the data.

Nearest neighbor and bilinear upsampling are supported.

Parameters
  • inputs (tvm.te.Tensor) – inputs is a 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]

  • scale_h (float) – Scaling factor for height

  • scale_w (float) – Scaling factor for width

  • layout (string, optional) – either “NCHW” or “NHWC”

  • method ({"bilinear", "nearest_neighbor", "bicubic"}) – Method to be used for upsampling.

  • output_shape (tvm.tir.container.Array, optional) – Shape to return. If left None will be inferred (If shape is determined dynamically, pass out_dtype.shape as output_shape)

Returns

output – 4-D with shape [batch, channel, in_height*scale_h, in_width*scale_w] or [batch, in_height*scale, in_width*scale, channel]

Return type

tvm.te.Tensor
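
For example (shape assumed), doubling the spatial resolution with nearest-neighbor interpolation:

from tvm import te, topi

data = te.placeholder((1, 16, 32, 32), name="data")
out = topi.nn.upsampling(data, scale_h=2.0, scale_w=2.0, layout="NCHW",
                         method="nearest_neighbor")
# out has shape (1, 16, 64, 64)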

tvm.topi.nn.upsampling3d(data, scale_d, scale_h, scale_w, layout='NCDHW', method='nearest_neighbor', coordinate_transformation_mode='half_pixel', output_shape=None)
Perform upsampling on the data.

Nearest neighbor and bilinear upsampling are supported.

Parameters
  • inputs (tvm.te.Tensor) – inputs is a 5-D tensor with shape [batch, channel, in_depth, in_height, in_width] or [batch, in_depth, in_height, in_width, channel]

  • scale_d (float) – Scaling factor for depth

  • scale_h (float) – Scaling factor for height

  • scale_w (float) – Scaling factor for width

  • layout (string, optional) – either “NCDHW” or “NDHWC”

  • method ({"trilinear", "nearest_neighbor"}) – Method to be used for upsampling.

  • coordinate_transformation_mode (string, optional) – Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. Refer to the ONNX Resize operator specification for details. Available options are “half_pixel”, “align_corners” and “asymmetric”.

  • output_shape (tvm.tir.container.Array, optional) – Shape to return. If left None will be inferred (If shape is determined dynamically, pass out_dtype.shape as output_shape)

Returns

output – 5-D with shape [batch, channel, in_depth*scale, in_height*scale, in_width*scale] or [batch, in_depth*scale, in_height*scale, in_width*scale, channel]

Return type

tvm.te.Tensor

tvm.topi.nn.winograd_transform_matrices(tile_size, kernel_size, out_dtype)

Compute the A, B, and G transform matrices for tile_size as a tvm.Expr.

tvm.topi.image

IMAGE network operators

Functions:

affine_grid(data, target_shape)

affine_grid operator that generates 2D sampling grid.

crop_and_resize(data, boxes, box_indices, …)

Perform crop and resize operation on the data.

dilation2d_nchw(input, filter, stride, …)

Morphological dilation operator in NCHW layout.

dilation2d_nhwc(input, filter, stride, …)

Morphological 2D dilation operator in NHWC layout.

get_2d_indices(indices[, layout])

Get 2d indices

get_2d_pixel(data, layout, boxes, …)

Get 2d pixel

get_pad_tuple(padding, kernel)

Common code to get the pad option

grid_sample(data, grid[, method, layout])

Applies bilinear sampling to input feature map.

nchw_pack_layout(layout_info)

Check whether the layout type is NCHWinic

nchw_xc_layout(layout_info)

Check whether the layout type is NCHWxc

pad(data, pad_before[, pad_after, …])

Pad Input with zeros.

resize(data, size[, layout, method, …])

Perform resize operation on the data.

resize3d(data, size[, layout, method, …])

Perform resize operation on the data.

resize_bicubic(indices, data, image_height, …)

Perform resize operation with bicubic method on the data.

resize_bilinear(indices, data, image_height, …)

Perform resize operation with bilinear method on the data.

resize_nearest_neighbor(indices, data, …)

Perform resize operation with nearest neighbor method on the data.

simplify(expr)

Simplify the expression if it is Expr, directly return if it is int.

tvm.topi.image.affine_grid(data, target_shape)

affine_grid operator that generates 2D sampling grid.

This operation is described in https://arxiv.org/pdf/1506.02025.pdf. It generates a uniform sampling grid within the target shape and normalizes it to [-1, 1]. The provided affine transformation is then applied on the sampling grid.

Parameters
  • data (tvm.Tensor) – 3-D with shape [batch, 2, 3]. The affine matrix.

  • target_shape (list/tuple of two int) – Specifies the output shape (H, W).

Returns

Output – 4-D with shape [batch, 2, target_height, target_width]

Return type

tvm.Tensor

tvm.topi.image.crop_and_resize(data, boxes, box_indices, crop_size, layout='NCHW', method='bilinear', extrapolation_value=0, out_dtype=None)

Perform crop and resize operation on the data.

Parameters
  • data (tvm.te.Tensor) – inputs is a 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]

  • boxes (tvm.te.Tensor) – A 2-D tensor of shape [num_boxes, 4]. Each row of the tensor specifies the coordinates of a box.

  • box_indices (tvm.te.Tensor) – A 1-D tensor of shape [num_boxes], box_indices[i] specifies the data that the i-th box refers to.

  • crop_size (Tuple) – The target size of each box.

  • layout (string, optional) – “NCHW”, “NHWC”

  • method ({"bilinear", "nearest_neighbor"}) – Method to be used for resizing.

  • extrapolation_value (float, optional) – Value used for extrapolation, when applicable.

  • out_dtype (string, optional) – Type to return. If left None will be same as input type.

Returns

output – 4-D with shape [num_boxes, channel, crop_height, crop_width] or [num_boxes, crop_height, crop_width, channel]

Return type

tvm.te.Tensor

tvm.topi.image.dilation2d_nchw(input, filter, stride, padding, dilations, out_dtype=None)

Morphological dilation operator in NCHW layout.

Parameters
  • input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]

  • filter (tvm.te.Tensor) – 3-D with shape [ in_channel, filter_height, filter_width]

  • stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]

  • padding (int or str) – Padding size

  • dilations (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • out_dtype (Optional[str]) – Specifies the output data type.

Returns

Output – 4-D with shape [batch, in_channel, out_height, out_width]

Return type

tvm.te.Tensor

tvm.topi.image.dilation2d_nhwc(input, filter, stride, padding, dilations, out_dtype=None)

Morphological 2D dilation operator in NHWC layout.

Parameters
  • input (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]

  • filter (tvm.te.Tensor) – 3-D with shape [filter_height, filter_width, in_channel]

  • stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]

  • padding (int) – Padding size

  • dilations (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • out_dtype (Optional[str]) – Specifies the output data type.

Returns

Output – 4-D with shape [batch, out_height, out_width, in_channel]

Return type

tvm.te.Tensor

tvm.topi.image.get_2d_indices(indices, layout='NCHW')

Get 2d indices

tvm.topi.image.get_2d_pixel(data, layout, boxes, image_height, image_width, n, c, y, x, cc, ib, ic)

Get 2d pixel

tvm.topi.image.get_pad_tuple(padding, kernel)

Common code to get the pad option

Parameters
  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • kernel (tuple of int) – Conv kernel size

Returns

  • pad_top (int) – Padding size on top

  • pad_left (int) – Padding size on left

  • pad_down (int) – Padding size on down.

  • pad_right (int) – Padding size on right.

tvm.topi.image.grid_sample(data, grid, method='bilinear', layout='NCHW')

Applies bilinear sampling to input feature map.

Given \(data\) and \(grid\), and assuming NCHW layout, the output is computed by

\[x_{src} = grid[batch, 0, y_{dst}, x_{dst}]\]

\[y_{src} = grid[batch, 1, y_{dst}, x_{dst}]\]

\[output[batch, channel, y_{dst}, x_{dst}] = G(data[batch, channel, y_{src}, x_{src}])\]

\(x_{dst}\), \(y_{dst}\) enumerate all spatial locations in \(output\), and \(G()\) denotes the interpolation method. The out-boundary points will be padded with zeros. The shape of the output will be (data.shape[0], data.shape[1], grid.shape[2], grid.shape[3]).

The operator assumes that \(grid\) has been normalized to [-1, 1].

grid_sample often cooperates with affine_grid which generates sampling grids for grid_sample.

Parameters
  • data (tvm.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]

  • grid (tvm.Tensor) – 4-D with shape [batch, 2, out_height, out_width]

  • method (str) – The interpolation method. Only ‘bilinear’ is supported.

  • layout (str) – The layout of input data and the output.

Returns

Output – 4-D with shape [batch, in_channel, out_height, out_width]

Return type

tvm.Tensor
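
A minimal sketch that samples an NCHW feature map with an externally supplied grid; in practice the grid is often produced by tvm.topi.image.affine_grid, as noted above. Shapes are illustrative.

    import tvm
    from tvm import te, topi

    data = te.placeholder((1, 8, 32, 32), name="data")  # [batch, channel, H, W]
    grid = te.placeholder((1, 2, 16, 16), name="grid")  # [batch, 2, out_H, out_W], values in [-1, 1]
    out = topi.image.grid_sample(data, grid, method="bilinear", layout="NCHW")
    # out: [1, 8, 16, 16]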

tvm.topi.image.nchw_pack_layout(layout_info)

Check whether the layout type is NCHWinic

tvm.topi.image.nchw_xc_layout(layout_info)

Check whether the layout type is NCHWxc

tvm.topi.image.pad(data, pad_before, pad_after=None, pad_value=0.0, name='PadInput')

Pad the input with a constant value (zeros by default).

Parameters
  • data (tvm.te.Tensor) – n-D input, can be any layout.

  • pad_before (list / tuple of n ints) – Pad width on each dimension, applied before the start of the axis.

  • pad_after (list / tuple of n ints, optional) – Pad width on each dimension, applied after the end of the axis.

  • pad_value (float, optional) – The value to be padded.

  • name (str, optional) – The name prefix for the generated operators.

Returns

Output – n-D, the same layout as Input.

Return type

tvm.te.Tensor
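
A short sketch: pad the two spatial axes of an NCHW tensor by one pixel on each side, leaving the batch and channel axes untouched.

    import tvm
    from tvm import te, topi

    data = te.placeholder((1, 3, 32, 32), name="data")
    out = topi.image.pad(data, pad_before=[0, 0, 1, 1],
                         pad_after=[0, 0, 1, 1], pad_value=0.0)
    # out: [1, 3, 34, 34]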

tvm.topi.image.resize(data, size, layout='NCHW', method='bilinear', coordinate_transformation_mode='half_pixel', out_dtype=None, output_shape=None)

Perform resize operation on the data.

Parameters
  • data (tvm.te.Tensor) – A 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]

  • size (Tuple) – The output resolution to scale to

  • layout (string, optional) – “NCHW”, “NHWC”, or “NCHWc”.

  • coordinate_transformation_mode (string, optional) – Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. Refer to the ONNX Resize operator specification for details. Available options are “half_pixel”, “align_corners” and “asymmetric”.

  • method ({"bilinear", "nearest_neighbor", "bicubic"}) – Method to be used for resizing.

  • out_dtype (string, optional) – Type to return. If left None, it will be the same as the input type.

  • output_shape (tvm.tir.container.Array, optional) – Shape to return. If left None, it will be inferred. (If the shape is determined dynamically, pass out_dtype.shape as output_shape.)

Returns

output – 4-D with shape [batch, channel, in_height*scale, in_width*scale] or [batch, in_height*scale, in_width*scale, channel] or 5-D with shape [batch, channel-major, in_height*scale, in_width*scale, channel-minor]

Return type

tvm.te.Tensor
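
A minimal sketch matching the signature above: upscale a 32x32 NCHW feature map to 64x64 with bilinear interpolation.

    import tvm
    from tvm import te, topi

    data = te.placeholder((1, 3, 32, 32), name="data")
    out = topi.image.resize(data, (64, 64), layout="NCHW", method="bilinear",
                            coordinate_transformation_mode="half_pixel")
    # out: [1, 3, 64, 64]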

tvm.topi.image.resize3d(data, size, layout='NCDHW', method='nearest_neighbor', coordinate_transformation_mode='align_corners', out_dtype=None)

Perform resize operation on the data.

Parameters
  • data (tvm.te.Tensor) – A 5-D tensor with shape [batch, channel, in_depth, in_height, in_width] or [batch, in_depth, in_height, in_width, channel]

  • size (Tuple) – The output resolution to scale to

  • layout (string, optional) – “NCDHW”, “NDHWC”, or “NCDHWc”.

  • coordinate_transformation_mode (string, optional) –

    Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. Refer to the ONNX Resize operator specification for details.

    Available options are “half_pixel”, “align_corners” and “asymmetric”.

  • method ({"trilinear", "nearest_neighbor"}) – Method to be used for resizing.

  • out_dtype (string, optional) – Type to return. If left None, it will be the same as the input type.

Returns

output – 5-D with shape [batch, channel, in_depth*scale, in_height*scale, in_width*scale] or [batch, in_depth*scale, in_height*scale, in_width*scale, channel], or 6-D with shape [batch, channel-major, in_depth*scale, in_height*scale, in_width*scale, channel-minor]

Return type

tvm.te.Tensor

tvm.topi.image.resize_bicubic(indices, data, image_height, image_width, target_height, target_width, boxes=None, box_indices=None, extrapolation_value=None, layout='NCHW', coordinate_transformation_mode='align_corners', out_dtype=None)

Perform resize operation with the bicubic method on the data. For more details about bicubic interpolation, please refer to https://en.wikipedia.org/wiki/Bicubic_interpolation.

Parameters
  • indices (tuple) – The indices of input data

  • data (tvm.te.Tensor) – A 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]

  • image_height (integer) – Input image height

  • image_width (integer) – Input image width

  • target_height (integer) – The target resized image height

  • target_width (integer) – The target resized image width

  • boxes (tvm.te.Tensor, optional) – A 2-D tensor of shape [num_boxes, 4]. Each row of the tensor specifies the coordinates of a box.

  • box_indices (tvm.te.Tensor, optional) – A 1-D tensor of shape [num_boxes], box_indices[i] specifies the data that the i-th box refers to.

  • extrapolation_value (float, optional) – Value used for extrapolation, when applicable.

  • layout (string, optional) – “NCHW”, “NHWC”, or “NCHWc”.

  • coordinate_transformation_mode (string, optional) – Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. Refer to the ONNX Resize operator specification for details. Available options are “half_pixel”, “align_corners” and “asymmetric”.

  • out_dtype (string, optional) – Type to return. If left None, it will be the same as the input type.

Returns

output – The computed result with type out_dtype

Return type

out_dtype

tvm.topi.image.resize_bilinear(indices, data, image_height, image_width, target_height, target_width, boxes=None, box_indices=None, extrapolation_value=None, layout='NCHW', coordinate_transformation_mode='align_corners', out_dtype=None)

Perform resize operation with the bilinear method on the data. For details about bilinear interpolation, please refer to https://en.wikipedia.org/wiki/Bilinear_interpolation.

Parameters
  • indices (tuple) – The indices of input data

  • data (tvm.te.Tensor) – A 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]

  • image_height (integer) – Input image height

  • image_width (integer) – Input image width

  • target_height (integer) – The target resized image height

  • target_width (integer) – The target resized image width

  • boxes (tvm.te.Tensor, optional) – A 2-D tensor of shape [num_boxes, 4]. Each row of the tensor specifies the coordinates of a box.

  • box_indices (tvm.te.Tensor, optional) – A 1-D tensor of shape [num_boxes], box_indices[i] specifies the data that the i-th box refers to.

  • extrapolation_value (float, optional) – Value used for extrapolation, when applicable.

  • layout (string, optional) – “NCHW”, “NHWC”, or “NCHWc”.

  • coordinate_transformation_mode (string, optional) – Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. Refer to the ONNX Resize operator specification for details. Available options are “half_pixel”, “align_corners” and “asymmetric”.

  • out_dtype (string, optional) – Type to return. If left None, it will be the same as the input type.

Returns

output – The computed result with type out_dtype

Return type

out_dtype

tvm.topi.image.resize_nearest_neighbor(indices, data, image_height, image_width, target_height, target_width, boxes=None, box_indices=None, extrapolation_value=None, layout='NCHW', coordinate_transformation_mode='align_corners', out_dtype=None)

Perform resize operation with the nearest-neighbor method on the data. For details about nearest-neighbor interpolation, please refer to https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation.

Parameters
  • indices (tuple) – The indices of input data

  • data (tvm.te.Tensor) – A 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]

  • image_height (integer) – Input image height

  • image_width (integer) – Input image width

  • target_height (integer) – The target resized image height

  • target_width (integer) – The target resized image width

  • boxes (tvm.te.Tensor, optional) – A 2-D tensor of shape [num_boxes, 4]. Each row of the tensor specifies the coordinates of a box.

  • box_indices (tvm.te.Tensor, optional) – A 1-D tensor of shape [num_boxes], box_indices[i] specifies the data that the i-th box refers to.

  • extrapolation_value (float, optional) – Value used for extrapolation, when applicable.

  • layout (string, optional) – “NCHW”, “NHWC”, or “NCHWc”.

  • coordinate_transformation_mode (string, optional) – Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. Refer to the ONNX Resize operator specification for details. Available options are “half_pixel”, “align_corners” and “asymmetric”.

  • out_dtype (string, optional) – Type to return. If left None, it will be the same as the input type.

Returns

output – The computed result with type out_dtype

Return type

out_dtype
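
resize_bicubic, resize_bilinear and resize_nearest_neighbor are per-pixel helpers: they take the destination indices and return the interpolated value, which is why their return type is an expression of type out_dtype rather than a tensor. They are meant to be called from inside a te.compute body, roughly as tvm.topi.image.resize does internally. A hedged sketch, with illustrative shapes:

    import tvm
    from tvm import te, topi

    data = te.placeholder((1, 3, 32, 32), name="data")
    th, tw = 64, 64
    out = te.compute(
        (1, 3, th, tw),
        lambda n, c, y, x: topi.image.resize_bilinear(
            (n, c, y, x), data, 32, 32, th, tw,
            layout="NCHW", coordinate_transformation_mode="half_pixel"),
        name="resized")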

tvm.topi.image.simplify(expr)

Simplify the expression if it is an Expr; return it directly if it is an int.

Parameters

expr (Expr or int) – The input.

Returns

out – The simplified output

Return type

Expr or int

tvm.topi.sparse

Sparse operators

Functions:

csrmm(a, b[, c])

The csrmm routine performs a matrix-matrix operation defined as \(C := A*B + C\), where B and C are dense matrices, A is an m-by-k sparse matrix in the CSR format.

csrmv(a, x[, y])

The csrmv routine performs a matrix-vector operation defined as \(y := A*x + y\), where x and y are vectors, A is an m-by-k sparse matrix in the CSR format.

dense(data, weight[, bias])

Applies a linear transformation: \(Y = XW^T + b\).

tvm.topi.sparse.csrmv(a, x, y=None)

The csrmv routine performs a matrix-vector operation defined as \(y := A*x + y\), where x and y are vectors, A is an m-by-k sparse matrix in the CSR format.

Parameters
Returns

output – 2-D dense matrix with shape [m, 1]

Return type

tvm.te.Tensor

tvm.topi.sparse.csrmm(a, b, c=None)

The csrmm routine performs a matrix-matrix operation defined as \(C := A*B + C\), where B and C are dense matrices, A is an m-by-k sparse matrix in the CSR format.

Parameters
Returns

output – 2-D with shape [m, n]

Return type

tvm.te.Tensor
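
A hedged sketch of a csrmm compute declaration. It assumes the CSR operand is declared with tvm.contrib.sparse.placeholder, so that it carries the data, indices, and indptr arrays the routine expects; shapes and the nonzeros count are illustrative.

    import tvm
    from tvm import te, topi
    from tvm.contrib import sparse as tvmsp

    m, k, n, nnz = 64, 32, 16, 128
    a = tvmsp.placeholder(shape=(m, k), nonzeros=nnz, dtype="float32", name="a")  # sparse A in CSR format
    b = te.placeholder((k, n), name="b")                                          # dense B
    c = te.placeholder((m, n), name="c")                                          # dense C to accumulate into
    out = topi.sparse.csrmm(a, b, c)
    # out: 2-D dense tensor with shape [m, n], computing C := A*B + C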

tvm.topi.sparse.dense(data, weight, bias=None)

Applies a linear transformation: \(Y = XW^T + b\). Either data or weight should be tvm.contrib.sparse.CSRNDArray.

Parameters
Returns

output – 2-D with shape [batch, out_dim]

Return type

tvm.te.Tensor