tvm.topi
TVM Operator Inventory.
TOPI is the operator collection library for TVM; it provides sugar for constructing compute declarations as well as optimized schedules.
Some of the schedule functions may have been specially optimized for a specific workload.
Classes:
- Integer arithmetic analyzer
- Cast expression.
- Base class of all primitive expressions.
Functions:
- Take absolute value of the input of x, element-wise.
- Take arc cos of input x.
- Take arc cosh of input x.
- Addition with auto-broadcasting
- Numpy style indexing with tensors.
- Logical AND of array elements over a given axis or a list of axes
- Logical OR of array elements over a given axis or a list of axes
- Creates a tensor with evenly spaced values within a given interval.
- Returns the indices of the maximum values along an axis.
- Returns the indices of the minimum values along an axis.
- Performs sorting along the given axis and returns an array of indices having the same shape as an input array that index data in sorted order.
- Find the indices of elements of a tensor that are non-zero.
- Take arc sin of input x.
- Take arc sinh of input x.
- Take atan of input x.
- Take atanh of input x.
- Common IR generator for binary search used by CPU and GPU backends.
- Compute element-wise bitwise and of data.
- Compute element-wise bitwise not of data.
- Compute element-wise bitwise or of data.
- Compute element-wise bitwise xor of data.
- Broadcast the src to the target shape
- Cast input to specified data type.
- Take ceil of input x.
- Compute integer ceil log2 with a special code path for Vulkan, whose SPIR-V does not support log2 on fp64.
- Clip (limit) the values in an array.
- Return a summation of data to the given shape.
- Join a sequence of arrays along an existing axis.
- Convert a constant numpy 1-dimensional vector to a tvm tensor.
- Take cos of input x.
- Take cosh of input x.
- Numpy style cumprod op.
- Numpy style cumsum op.
- Declare a new symbolic buffer.
- Computes the discrete Fourier transform of input (calculation along the last axis).
- Compute a / b as in C/C++ semantics.
- Division with auto-broadcasting
- Slice of an array.
- Evaluates the Einstein summation convention on the operands.
- Perform element-wise sum on inputs
- Compute (lhs==rhs) with auto-broadcasting
- Take gauss error function of input x.
- Legalizes ERF op.
- Take exponential of input x.
- Expand the shape of an array.
- Expand an input array with the shape of second array.
- Compute several tensors via an extern function.
- Generate an identity matrix or a matrix with ones on the k-th diagonal.
- Take gauss error function of input x using fast_erf implementation.
- Take exponential of input x using fast_exp implementation
- Take hyperbolic tangent of input x using fast_tanh implementation
- Fixed point multiplication between data and a fixed point constant expressed as multiplier * 2^(-shift), where multiplier is a Q-number with 31 fractional bits
- Fixed point multiplication between data and a fixed point constant expressed as multiplier * 2^(-shift), where multiplier is a Q-number with 31 fractional bits
- Flip/reverse elements of an array in a particular axis.
- Take floor of input x.
- Floor division with auto-broadcasting
- Floor modulus with auto-broadcasting
- Compute the floordiv of two expressions.
- Compute the floormod of two expressions.
- Fill tensor with fill_value
- Construct a tensor with the same shape as the input tensor, then fill it with fill_value.
- Gather values along given axis from given indices.
- Gather elements from an n-dimensional array.
- Verifies input tuple is IntImm or Var, returns tuple of int or Var.
- Compute (lhs>rhs) with auto-broadcasting
- Compute (lhs>=rhs) with auto-broadcasting
- Find the indices of elements of a 1-D tensor that are non-zero.
- Find the indices of elements of a 2-D tensor that are non-zero.
- Find the indices of elements of a 3-D tensor that are non-zero.
- Find the indices of elements of a 4-D tensor that are non-zero.
- Find the indices of elements of a 5-D tensor that are non-zero.
- Take identity of input x.
- Computes the inverse permutation of data.
- Check if value of x is finite, element-wise.
- Check if value of x is infinite, element-wise.
- Check if value of x is NaN, element-wise.
- Transform the layout according to src_layout and dst_layout
- Left shift with auto-broadcasting
- Compute (lhs<rhs) with auto-broadcasting
- Compute (lhs<=rhs) with auto-broadcasting
- Take logarithm of input x.
- Take logarithm to the base 10 of input x.
- Take logarithm to the base 2 of input x.
- Compute element-wise logical and of data.
- Compute element-wise logical not of data.
- Compute element-wise logical or of data.
- Compute element-wise logical xor of data.
- Return the array position in the selection that corresponds to an array position in the full array.
- Creates an operation that calculates a matrix multiplication (row-major notation): A(i, k) * B(k, j) if trans_a == trans_b; the usual transposed combinations otherwise.
- Returns a tensor with the diagonals of input tensor replaced with the provided diagonal values.
- Maximum of array elements over a given axis or a list of axes
- Take element-wise maximum of two tensors with auto-broadcasting
- Create coordinate matrices from coordinate vectors.
- Minimum of array elements over a given axis or a list of axes
- Take element-wise minimum of two tensors with auto-broadcasting
- Modulus with auto-broadcasting
- Multiplication with auto-broadcasting
- Get the number of elements of input array
- Take negation of input x.
- Compute (lhs!=rhs) with auto-broadcasting
- Returns a one-hot tensor where the locations represented by indices take value on_value, other locations take value off_value.
- Power with auto-broadcasting
- Product of array elements over a given axis or a list of axes
- Reinterpret input to specified data type.
- Repeats elements of an array.
- Reshape the array
- Reverse the tensor for variable length slices.
- Right shift with auto-broadcasting
- Round elements of x to nearest integer.
- Take inverse square root of input x.
- Cumulative binary operator (scan) with similar axis behavior as np.cumsum and np.cumprod.
- Scatter elements from updates to corresponding indices of copied data.
- Scatter elements from an n-dimensional array.
- Find indices where elements should be inserted to maintain order.
- Sets all elements outside the expected length of the sequence to a constant value.
- Get the shape of input array
- Take sigmoid of input x.
- Returns -1, 0, 1 based on sign of x.
- Take sin of input x.
- Take sinh of input x.
- Slide a window over the data tensor.
- Performs sorting along the given axis and returns an array in sorted order.
- Reshape a Sparse Tensor
- Converts a sparse representation into a dense tensor.
- Split an array into multiple sub-arrays.
- Take square root of input x.
- Remove single-dimensional entries from the shape of an array.
- Repeats the whole array multiple times.
- The STFT computes the Fourier transform of short overlapping windows of the input.
- Set slice of an array.
- Slice of an array.
- Subtraction with auto-broadcasting
- Sum of array elements over a given axis or a list of axes
- Take elements from an array along an axis.
- Legalizes dyn.topk op.
- Take tan of input x.
- Take hyperbolic tanh of input x.
- A generalization of matrix multiplication to tensor.
- Repeats the whole array multiple times.
- Get the top k elements in an input tensor along the given axis.
- Permute the dimensions of an array.
- Given a 2-D matrix or batches of 2-D matrices, returns the upper or lower triangular part of the tensor.
- Take truncated value of the input of x, element-wise.
- Find the unique elements of a 1-D tensor.
- Convert a flat index or array of flat indices into a tuple of coordinate arrays.
- Get the elements, either from x or y, depending on the condition.
- Return a boolean value that indicates if i is within the given index.
Exceptions:
- Invalid shape for a topi function.
- class tvm.topi.Analyzer
Integer arithmetic analyzer
This is a stateful analyzer class that can be used to perform various symbolic integer analysis.
Methods:
- const_int_bound(expr) – Find constant integer bound for expr.
- modular_set(expr) – Find a modular set that expr belongs to.
- simplify(expr[, steps]) – Simplify expression via both rewrite and canonicalization.
- rewrite_simplify(expr) – Simplify expression via rewriting rules.
- canonical_simplify(expr) – Simplify expression via canonicalization.
- int_set(expr, dom_map) – Compute a symbolic IntSet that covers expr for all values in dom_map.
- can_prove(expr[, strength]) – Check whether we can prove expr to be true.
- bind(var, expr) – Bind a variable to the expression.
- constraint_scope(constraint) – Create a constraint scope.
- update(var, info[, override]) – Update information about var.
- can_prove_equal(lhs, rhs) – Whether we can prove that lhs == rhs.
Attributes:
- enabled_extensions – Return the currently enabled extensions.
- const_int_bound(expr)
Find constant integer bound for expr.
- Parameters:
expr (PrimExpr) – The expression.
- Returns:
bound – The result bound
- Return type:
ConstIntBound
- modular_set(expr)
Find a modular set that expr belongs to.
- Parameters:
expr (PrimExpr) – The expression.
- Returns:
result – The result.
- Return type:
ModularSet
- simplify(expr, steps=2)
Simplify expression via both rewrite and canonicalization.
- Parameters:
expr (PrimExpr) – The expression.
steps (int, optional) – The simplification runs in the order rewrite_simplify (step 1) -> canonical_simplify (step 2) -> rewrite_simplify (step 3) -> canonical_simplify (step 4) -> …; steps controls how many steps to run. The default is 2, i.e., rewrite_simplify + canonical_simplify.
- Returns:
result – The result.
- Return type:
Expr
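As a quick illustration (not part of the original docstring; the variable name is an arbitrary assumption), simplify folds a small symbolic expression:
from tvm import te, arith

ana = arith.Analyzer()
n = te.var("n")
# (n + 4) - (n + 2) simplifies to the constant 2
print(ana.simplify((n + 4) - (n + 2)))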
- rewrite_simplify(expr)
Simplify expression via rewriting rules.
- Parameters:
expr (PrimExpr) – The expression.
- Returns:
result – The result.
- Return type:
Expr
- canonical_simplify(expr)
Simplify expression via canonicalization.
- Parameters:
expr (PrimExpr) – The expression.
- Returns:
result – The result.
- Return type:
Expr
- int_set(expr, dom_map)
Compute a symbolic IntSet that covers expr for all values in dom_map.
- Parameters:
expr (PrimExpr) – The expression.
dom_map (Dict[tvm.tir.Var, tvm.arith.IntSet]) – The domain for variables to be relaxed.
- Returns:
result – The result.
- Return type:
IntSet
- can_prove(expr, strength=ProofStrength.DEFAULT)
Check whether we can prove expr to be true.
- Parameters:
expr (PrimExpr) – The expression.
strength (ProofStrength) – The proof strength
- Returns:
result – The result.
- Return type:
Expr
- bind(var: Var, expr: PrimExpr | Range)
Bind a variable to the expression.
- Parameters:
var (tvm.tir.Var) – The variable.
expr (Union[tir.PrimExpr, ir.Range]) – The expression or the range to bind to.
- constraint_scope(constraint)
Create a constraint scope.
- Parameters:
constraint (PrimExpr) – The constraint expression.
- Returns:
scope – The constraint scope
- Return type:
ConstraintScope
Examples
x = te.var("x") analyzer = tvm.arith.Analyzer() with analzyer.constraint_scope(x % 3 == 0): # constraint in effect assert analyzer.modular_set(x).coeff == 3 # constraint no longer in effect assert analyzer.modular_set(x).coeff != 3
- update(var, info, override=False)
Update information about var.
- Parameters:
var (tvm.tir.Var) – The variable.
info (tvm.Object) – Related information.
override (bool) – Whether allow override.
- property enabled_extensions: Extension
Return the currently enabled extensions
- class tvm.topi.PrimExpr
Base class of all primitive expressions.
PrimExpr is used in the low-level code optimizations and integer analysis.
- tvm.topi.abs(x)
Take absolute value of the input of x, element-wise.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.acos(x)
Take arc cos of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.acosh(x)
Take arc cosh of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.add(lhs, rhs)
Addition with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
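For illustration, a minimal compute-declaration sketch of the auto-broadcasting behavior (the shapes below are arbitrary assumptions, not from the original docstring):
from tvm import te, topi

A = te.placeholder((2, 3), name="A")
B = te.placeholder((3,), name="B")
# B is broadcast along the first axis; C has shape (2, 3)
C = topi.add(A, B)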
- tvm.topi.adv_index(data, indices)
Numpy style indexing with tensors.
- Parameters:
data (tvm.te.Tensor) – Input data.
indices (A list of tvm.te.Tensor) – Tensor index.
- Returns:
result – Output tensor
- Return type:
- tvm.topi.all(data, axis=None, keepdims=False)
Logical AND of array elements over a given axis or a list of axes
- Parameters:
data (tvm.te.Tensor) – The input tvm boolean tensor
axis (None or int or tuple of int) – Axis or axes along which a logical AND is performed. The default, axis=None, will perform logical AND over all elements of the input array. If axis is negative it counts from the last to the first axis.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
- Returns:
ret
- Return type:
- tvm.topi.any(data, axis=None, keepdims=False)
Logical OR of array elements over a given axis or a list of axes
- Parameters:
data (tvm.te.Tensor) – The input tvm boolean tensor
axis (None or int or tuple of int) – Axis or axes along which a logical OR is performed. The default, axis=None, will perform logical OR over all elements of the input array. If axis is negative it counts from the last to the first axis.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
- Returns:
ret
- Return type:
- tvm.topi.arange(start, stop=None, step=1, dtype='float32')
Creates a tensor with evenly spaced values within a given interval.
- Parameters:
start (tvm.Expr, optional) – Start of interval. The interval includes this value. The default start value is 0.
stop (tvm.Expr) – Stop of interval. The interval does not include this value.
step (tvm.Expr, optional) – Spacing between values. The default step size is 1.
dtype (str, optional) – The target data type.
- Returns:
result – The resulting tensor.
- Return type:
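A minimal sketch (the values and dtype below are illustrative assumptions):
from tvm import topi

# Evenly spaced int32 values 0, 2, 4, 6, 8 -> a 1-D tensor of length 5
T = topi.arange(0, 10, 2, dtype="int32")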
- tvm.topi.argmax(data, axis=None, keepdims=False, select_last_index=False)
Returns the indices of the maximum values along an axis.
- Parameters:
data (tvm.te.Tensor) – The input tvm tensor
axis (None or int or tuple of int) – Axis or axes along which an argmax operation is performed. The default, axis=None, will find the indices of the maximum element over all of the elements of the input array. If axis is negative it counts from the last to the first axis.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
select_last_index (bool) – Whether to select the last index if the maximum element appears multiple times, else select the first index.
- Returns:
ret
- Return type:
- tvm.topi.argmin(data, axis=None, keepdims=False, select_last_index=False)
Returns the indices of the minimum values along an axis.
- Parameters:
data (tvm.te.Tensor) – The input tvm tensor
axis (None or int or tuple of int) – Axis or axes along which an argmin operation is performed. The default, axis=None, will find the indices of the minimum element over all of the elements of the input array. If axis is negative it counts from the last to the first axis.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
select_last_index (bool) – Whether to select the last index if the minimum element appears multiple times, else select the first index.
- Returns:
ret
- Return type:
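A small sketch of the argmax/argmin reduction behavior (shapes are illustrative assumptions):
from tvm import te, topi

data = te.placeholder((2, 3), name="data")
idx = topi.argmax(data, axis=1)                      # shape (2,)
idx_keep = topi.argmin(data, axis=1, keepdims=True)  # shape (2, 1)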
- tvm.topi.argsort(data, valid_count=None, axis=-1, is_ascend=1, dtype='float32')
Performs sorting along the given axis and returns an array of indices having the same shape as an input array that index data in sorted order.
- Parameters:
data (tvm.te.Tensor) – The input tensor.
valid_count (tvm.te.Tensor, optional) – 1-D tensor for valid number of boxes.
axis (int, optional) – Axis along which to sort the input tensor. By default the flattened array is used.
is_ascend (boolean, optional) – Whether to sort in ascending or descending order.
dtype (string, optional) – DType of the output indices.
- Returns:
out – Sorted index tensor.
- Return type:
Example
# An example to use argsort
dshape = (1, 5, 6)
data = te.placeholder(dshape, name="data")
axis = 0
is_ascend = False
out = argsort(data, axis=axis, is_ascend=is_ascend)
np_data = np.random.uniform(size=dshape).astype(data.dtype)
s = topi.generic.schedule_argsort(out)
f = tvm.build(s, [data, out], "llvm")
dev = tvm.cpu()
tvm_data = tvm.nd.array(np_data, dev)
tvm_out = tvm.nd.array(np.zeros(dshape, dtype=data.dtype), dev)
f(tvm_data, tvm_out)
- tvm.topi.argwhere(output_shape, condition)
Find the indices of elements of a tensor that are non-zero.
- Parameters:
condition (tvm.te.Tensor) – Tensor with boolean values.
- Returns:
out – Indices of non-zero elements.
- Return type:
- tvm.topi.asin(x)
Take arc sin of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.asinh(x)
Take arc sinh of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.atan(x)
Take atan of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.atanh(x)
Take atanh of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.binary_search(ib, sequence_offset, search_range, sorted_sequence, value, right, out_dtype)
Common IR generator for binary search used by CPU and GPU backends.
sorted_sequence is an N-D Buffer whose innermost dimension we want to search for value, and search_range is the size of the innermost dimension. sequence_offset is a 1-D linearized offset specifying which of the innermost sequences to search.
So the search for value is performed over sorted_sequence[sequence_offset:(sequence_offset + search_range)]. Note that we index the N-D Buffer by 1-D linearized indices.
- tvm.topi.bitwise_and(lhs, rhs)
Compute element-wise bitwise and of data.
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.bitwise_not(data)
Compute element-wise bitwise not of data.
- Parameters:
data (tvm.te.Tensor or Expr)
- Returns:
ret – Returns Expr if the operand is an Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.bitwise_or(lhs, rhs)
Compute element-wise bitwise or of data.
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.bitwise_xor(lhs, rhs)
Compute element-wise bitwise xor of data.
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.broadcast_to(data, shape)
Broadcast the src to the target shape
We follow the numpy broadcasting rule. See also https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html
- Parameters:
data (tvm.te.Tensor) – The input data
- Returns:
ret
- Return type:
- tvm.topi.cast(x, dtype, span=None)
Cast input to specified data type.
- Parameters:
x (tvm.te.Tensor or Expr) – Input argument.
dtype (str) – Data type.
span (Optional[Span]) – The location of the cast in the source.
- Returns:
y – The result.
- Return type:
- tvm.topi.ceil(x)
Take ceil of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.ceil_log2(x)
Compute integer ceil log2 with a special code path for Vulkan, since SPIR-V does not support log2 on fp64. Instead, we compute integer ceil_log2 via the clz intrinsic when the target is Vulkan.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.clip(x, a_min, a_max)
Clip (limit) the values in an array. Given an interval, values outside the interval are clipped to the interval edges.
- Parameters:
x (tvm.te.Tensor) – Input argument.
a_min (tvm.tir.PrimExpr) – Minimum value.
a_max (tvm.tir.PrimExpr) – Maximum value.
- Returns:
y – The result.
- Return type:
- tvm.topi.collapse_sum(data, target_shape)
Return a summation of data to the given shape.
collapse_sum is intended as the backward operator of topi broadcast operators in the automatic differentiation process.
We expect that data is the result of broadcasting some tensor of target_shape in some broadcast operation. Thus target_shape and data.shape must follow broadcast rules.
During computation, the axes of data.shape and target_shape are checked from right to left. For every axis, if it either exists in data but not in target_shape, or is larger than 1 in data and equal to 1 in target_shape, data will be summed over this axis.
- Parameters:
data (tvm.te.Tensor) – The input tensor.
- Returns:
ret – The result tensor after summation.
- Return type:
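For example, a sketch (shapes assumed for illustration) where a broadcast gradient is collapsed back to the original shape:
from tvm import te, topi

# grad has the broadcast shape (4, 3); collapse it back to (1, 3)
grad = te.placeholder((4, 3), name="grad")
collapsed = topi.collapse_sum(grad, (1, 3))  # sums over the axis that was broadcast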
- tvm.topi.concatenate(a_tuple, axis=0)
Join a sequence of arrays along an existing axis.
- Parameters:
a_tuple (tuple of tvm.te.Tensor) – The arrays to concatenate
axis (int, optional) – The axis along which the arrays will be joined. Default is 0.
- Returns:
ret
- Return type:
- tvm.topi.const_vector(vector, name='const_vector')
Convert a constant numpy 1-dimensional vector to a tvm tensor.
- tvm.topi.cos(x)
Take cos of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.cosh(x)
Take cosh of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.cumprod(data: Tensor, axis: int | None = None, dtype: str | None = None, exclusive: bool | None = None) Tensor
Numpy style cumprod op. Return the cumulative product of the elements along a given axis.
- Parameters:
data (tvm.te.Tensor) – The input data to the operator.
axis (int, optional) – Axis along which the cumulative product is computed. The default (None) is to compute the cumproduct over the flattened array.
dtype (string, optional) – Type of the returned array and of the accumulator in which the elements are multiplied. If dtype is not specified, it defaults to the dtype of data.
exclusive (bool, optional) – If True, will return exclusive product in which the first element is not included. In other terms, if True, the j-th output element would be the product of the first (j-1) elements. Otherwise, it would be the product of the first j elements.
- Returns:
result – The result has the same size as data, and the same shape as data if axis is not None. If axis is None, the result is a 1-d array.
- Return type:
- tvm.topi.cumsum(data: Tensor, axis: int | None = None, dtype: str | None = None, exclusive: bool | None = None) Tensor
Numpy style cumsum op. Return the cumulative sum of the elements along a given axis.
- Parameters:
data (tvm.te.Tensor) – The input data to the operator.
axis (int, optional) – Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.
dtype (string, optional) – Type of the returned array and of the accumulator in which the elements are summed. If dtype is not specified, it defaults to the dtype of data.
exclusive (bool, optional) – If True, will return exclusive sum in which the first element is not included. In other terms, if True, the j-th output element would be the sum of the first (j-1) elements. Otherwise, it would be the sum of the first j elements.
- Returns:
result – The result has the same size as data, and the same shape as data if axis is not None. If axis is None, the result is a 1-d array.
- Return type:
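To make the exclusive flag concrete, a small sketch (the input values are illustrative assumptions):
from tvm import te, topi

data = te.placeholder((4,), name="data", dtype="float32")
# For input [1, 2, 3, 4]:
#   inclusive cumsum -> [1, 3, 6, 10]
#   exclusive cumsum -> [0, 1, 3, 6]
inclusive = topi.cumsum(data)
exclusive = topi.cumsum(data, exclusive=True)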
- tvm.topi.decl_buffer(shape, dtype=None, name='buffer', data=None, strides=None, elem_offset=None, scope='', data_alignment=-1, offset_factor=0, buffer_type='', axis_separators=None, span=None)
Declare a new symbolic buffer.
Normally buffer is created automatically during lower and build. This is only needed if the user wants to specify their own buffer layout.
See the note below for detailed discussion on usage of buffer.
- Parameters:
shape (tuple of Expr) – The shape of the buffer.
dtype (str, optional) – The data type of the buffer.
name (str, optional) – The name of the buffer.
data (tir.Var, optional) – The data pointer in the buffer.
strides (array of Expr) – The stride of the buffer.
elem_offset (Expr, optional) – The beginning offset of the array to data. In terms of number of elements of dtype.
scope (str, optional) – The storage scope of the buffer, if not global. If scope equals empty string, it means it is global memory.
data_alignment (int, optional) – The alignment of data pointer in bytes. If -1 is passed, the alignment will be set to TVM’s internal default.
offset_factor (int, optional) – The factor of the elem_offset field; when set, elem_offset is required to be a multiple of offset_factor. If 0 is passed, the alignment will be set to 1. If non-zero is passed, we will create a Var for elem_offset if elem_offset is not None.
buffer_type (str, optional, {"", "auto_broadcast"}) – auto_broadcast buffer allows one to implement broadcast computation without considering whether dimension size equals to one. TVM maps buffer[i][j][k] -> buffer[i][0][k] if dimension j’s shape equals 1.
axis_separators (list of int, optional) – If passed, a list of separators between groups of axes, each of which is flattened to an output axis. For flat memory spaces, should either be None, or an empty list.
span (Optional[Span]) – The location of the decl_buffer creation in the source.
- Returns:
buffer – The created buffer
- Return type:
Example
Here’s an example of how a broadcast buffer can be used to define a symbolic broadcast operation:
m0, m1, m2 = te.var("m0"), te.var("m1"), te.var("m2")
n0, n1, n2 = te.var("n0"), te.var("n1"), te.var("n2")
o0, o1, o2 = te.var("o0"), te.var("o1"), te.var("o2")
A = te.placeholder((m0, m1, m2), name='A')
B = te.placeholder((n0, n1, n2), name='B')
C = te.compute((o0, o1, o2), lambda i, j, k: A[i, j, k] + B[i, j, k], name='C')
Ab = tvm.tir.decl_buffer(A.shape, A.dtype, name="Ab", buffer_type="auto_broadcast")
Bb = tvm.tir.decl_buffer(B.shape, B.dtype, name="Bb", buffer_type="auto_broadcast")
s = te.create_schedule(C.op)
fadd = tvm.build(s, [A, B, C], target='llvm', name='bcast_add', binds={A: Ab, B: Bb})
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.uniform(size=(2, 4, 3)).astype(A.dtype), dev)
b = tvm.nd.array(np.random.uniform(size=(2, 1, 3)).astype(B.dtype), dev)
c = tvm.nd.array(np.zeros((2, 4, 3), dtype=C.dtype), dev)
fadd(a, b, c)
tvm.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy())
Note
Buffer data structure reflects the DLTensor structure in dlpack. While the DLTensor data structure is very general, it is usually helpful to create a function that only handles a specific case of the data structure so that the compiled function can benefit from it.
If the user passes strides and elem_offset as None when constructing the function, then the function will be specialized for a DLTensor that is compact and aligned. If the user passes a fully generic symbolic array as the strides, then the resulting function becomes fully generic.
- tvm.topi.dft(re_data: Tensor, im_data: Tensor, inverse: IntImm)
Computes the discrete Fourier transform of input (calculation along the last axis). This gives frequency components of the signal as they change over time.
- Parameters:
re_data (relay.Expr) – N-D tensor, real part of the input signal.
im_data (relay.Expr) – N-D tensor, imaginary part of the input signal. If the signal is real, then the values of this tensor are zeros.
inverse (bool) – Whether to perform the inverse discrete fourier transform.
- Returns:
re_output (relay.Expr) – The Fourier Transform of the input (Real part).
im_output (relay.Expr) – The Fourier Transform of the input (Imaginary part).
- tvm.topi.div(a, b, span=None)
Compute a / b as in C/C++ semantics.
- Parameters:
- Returns:
res – The result expression.
- Return type:
Note
When operands are integers, returns truncdiv(a, b, span).
- tvm.topi.divide(lhs, rhs)
Division with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.dynamic_strided_slice(a, begin, end, strides, output_shape)
Slice of an array.
- Parameters:
a (tvm.te.Tensor) – The tensor to be sliced.
begin (tvm.te.Tensor) – The indices to begin with in the slicing.
end (tvm.te.Tensor) – Indices indicating end of the slice.
strides (tvm.te.Tensor) – Specifies the stride values; it can be negative, in which case the input tensor will be reversed in that particular axis.
output_shape (list of PrimExpr) – Specifies the output shape
- Returns:
ret
- Return type:
- tvm.topi.einsum(subscripts, *operand)
Evaluates the Einstein summation convention on the operands.
- Parameters:
subscripts (string) – Specifies the subscripts for summation as comma separated list of subscript labels. An implicit (classical Einstein summation) calculation is performed unless the explicit indicator ‘->’ is included as well as subscript labels of the precise output form.
a_tuple (tuple of tvm.te.Tensor) – These are the Tensors for the operation. The only difference between einsum in tvm and numpy is that it needs extra brackets for the tensors. For example, topi.einsum(“ij, jk -> ik”, (A, B)).
- Returns:
out – The calculation based on the Einstein summation convention.
- Return type:
- tvm.topi.elemwise_sum(xs)
Perform element-wise sum on inputs
- Parameters:
xs (list of tvm.te.Tensor) – Input arguments.
- Returns:
y – The result.
- Return type:
- tvm.topi.equal(lhs, rhs)
Compute (lhs==rhs) with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.erf(x)
Take gauss error function of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.erf_legalize(attrs, inputs, types)
Legalizes ERF op.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current convolution
inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized
types (list of types) – List of input and output types
- Returns:
result – The legalized expr.
- Return type:
tvm.relay.Expr
- tvm.topi.exp(x)
Take exponential of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.expand_dims(a, axis, num_newaxis=1)
Expand the shape of an array.
- Parameters:
a (tvm.te.Tensor) – The tensor to be expanded.
num_newaxis (int, optional) – Number of newaxis to be inserted on axis
- Returns:
ret
- Return type:
- tvm.topi.expand_like(a, shape_like, axis)
Expand an input array with the shape of second array. This operation can always be composed of unsqueezing and expanding dims on those unsqueezed axes.
Examples
input = [ 12. 19. 27.]
input.shape = (3,)
new_shape_array = [[[1,2],[2,3],[1,3]],
                   [[1,4],[4,3],[5,2]],
                   [[7,1],[7,2],[7,3]]]
new_shape_array.shape = (3, 3, 2)
expand_like(input, [1,2], new_shape_array) =
    [[[12,12],[12,12],[12,12]],
     [[19,19],[19,19],[19,19]],
     [[27,27],[27,27],[27,27]]]
- Parameters:
a (tvm.te.Tensor) – The tensor to be expanded.
shape_like (tvm.te.Tensor) – The tensor with the target shape.
- Returns:
ret
- Return type:
- tvm.topi.extern(shape, inputs, fcompute, name='extern', dtype=None, in_buffers=None, out_buffers=None, tag='', attrs=None)
Compute several tensors via an extern function.
- Parameters:
shape (tuple or list of tuples.) – The shape of the outputs.
fcompute (lambda function of inputs, outputs-> stmt) –
Specifies the IR statement to do the computation. See the following note for function signature of fcompute
Note
Parameters
- ins (list of tvm.tir.Buffer) – Placeholder for each input
- outs (list of tvm.tir.Buffer) – Placeholder for each output
Returns
- stmt (tvm.tir.Stmt) – The statement that carries out the array computation.
name (str, optional) – The name hint of the tensor
dtype (str or list of str, optional) – The data types of outputs, by default dtype will be same as inputs.
in_buffers (tvm.tir.Buffer or list of tvm.tir.Buffer, optional) – Input buffers.
out_buffers (tvm.tir.Buffer or list of tvm.tir.Buffer, optional) – Output buffers.
tag (str, optional) – Additional tag information about the compute.
attrs (dict, optional) – The additional auxiliary attributes about the compute.
- Returns:
tensor – The created tensor or tuple of tensors contains multiple outputs.
- Return type:
Example
In the code below, C is generated by calling external PackedFunc tvm.contrib.cblas.matmul
A = te.placeholder((n, l), name="A")
B = te.placeholder((l, m), name="B")
C = te.extern((n, m), [A, B],
              lambda ins, outs: tvm.tir.call_packed(
                  "tvm.contrib.cblas.matmul", ins[0], ins[1], outs[0], 0, 0),
              name="C")
- tvm.topi.eye(n: int, m: int | None = None, k: int = 0, dtype: str = 'float32') Tensor
Generate an identity matrix or a matrix with ones on the k-th diagonal.
- Parameters:
n (int) – Number of rows
m (int, optional) – Number of columns. If None, defaults to n.
k (int, optional) – Index of the diagonal. 0 (default) refers to the main diagonal. A positive value refers to an upper diagonal, and a negative value to a lower diagonal.
dtype (str, optional) – Data type of the returned array.
- Returns:
y – The result.
- Return type:
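A minimal sketch (the sizes are illustrative assumptions):
from tvm import topi

I3 = topi.eye(3)       # 3x3 identity matrix
U = topi.eye(3, k=1)   # ones on the first superdiagonal, zeros elsewhere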
- tvm.topi.fast_erf(x)
Take gauss error function of input x using fast_erf implementation.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.fast_exp(x)
Take exponential of input x using fast_exp implementation
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.fast_tanh(x)
Take hyperbolic tangent of input x using fast_tanh implementation
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.fixed_point_multiply(x, multiplier, shift)
Fixed point multiplication between data and a fixed point constant expressed as multiplier * 2^(-shift), where multiplier is a Q-number with 31 fractional bits
- Parameters:
x (tvm.te.Tensor or Expr) – Input argument.
multiplier (int) – Multiplier of a fixed floating point number described as multiplier*2^(-shift).
shift (int) – Shift of a fixed floating point number described as multiplier*2^(-shift).
- Returns:
y – The result.
- Return type:
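As an illustrative sketch (the scale value and shape are assumptions, not from the original docstring): a real-valued scale of 0.25 can be written as 0.5 * 2^(-1), so the Q31 multiplier is round(0.5 * 2^31) = 1073741824 with shift = 1:
from tvm import te, topi

x = te.placeholder((16,), name="x", dtype="int32")
# 0.25 == (1073741824 / 2**31) * 2**(-1)
y = topi.fixed_point_multiply(x, 1073741824, 1)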
- tvm.topi.fixed_point_multiply_per_axis(x: Tensor, y: Tensor, lshift: Tensor, rshift: Tensor, is_lshift_required: int, is_rshift_required: int, axes)
Fixed point multiplication between data and a fixed point constant expressed as multiplier * 2^(-shift), where multiplier is a Q-number with 31 fractional bits
- Parameters:
x (tvm.te.Tensor) – Input argument.
y (tvm.te.Tensor) – Multiplier of a fixed floating point number described as multiplier*2^(-shift).
lshift (tvm.te.Tensor) – Left shifts of a fixed floating point number described as multiplier*2^(-shift).
rshift (tvm.te.Tensor) – Right shifts of a fixed floating point number described as multiplier*2^(-shift).
is_lshift_required (int) – Whether we need to do left shift or not.
is_rshift_required (int) – Whether we need to do right shift or not.
- Returns:
z – The result.
- Return type:
- tvm.topi.flip(a, axis=0)
Flip/reverse elements of an array in a particular axis.
- Parameters:
a (tvm.te.Tensor) – The tensor to be flipped.
axis (int, optional) – The axis along which the tensor will be reversed.
- Returns:
ret
- Return type:
- tvm.topi.floor(x)
Take floor of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.floor_divide(lhs, rhs)
Floor division with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.floor_mod(lhs, rhs)
Floor modulus with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.floordiv(a, b, span=None)
Compute the floordiv of two expressions.
- tvm.topi.floormod(a, b, span=None)
Compute the floormod of two expressions.
- tvm.topi.full(shape, dtype, fill_value)
Fill tensor with fill_value
- Parameters:
- Returns:
y – The result.
- Return type:
- tvm.topi.full_like(x, fill_value)
Construct a tensor with the same shape as the input tensor, then fill it with fill_value.
- Parameters:
x (tvm.te.Tensor) – Input argument.
fill_value (float) – Value to be filled
- Returns:
y – The result.
- Return type:
- tvm.topi.gather(data, axis, indices)
Gather values along given axis from given indices.
E.g. for a 3D tensor, output is computed as:
out[i][j][k] = data[indices[i][j][k]][j][k]  # if axis == 0
out[i][j][k] = data[i][indices[i][j][k]][k]  # if axis == 1
out[i][j][k] = data[i][j][indices[i][j][k]]  # if axis == 2
indices must have the same shape as data, except at dimension axis, which must just be non-empty. The output will have the same shape as indices.
- Parameters:
data (tvm.te.Tensor) – The input data to the operator.
axis (int) – The axis along which to index.
indices (tvm.te.Tensor) – The indices of the values to extract.
- Returns:
ret
- Return type:
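A small sketch of the axis=1 case from the formula above (shapes are illustrative assumptions):
from tvm import te, topi

data = te.placeholder((3, 4), name="data")
indices = te.placeholder((3, 2), name="indices", dtype="int32")
# out[i][j] = data[i][indices[i][j]]; out has the same shape as indices, i.e. (3, 2)
out = topi.gather(data, 1, indices)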
- tvm.topi.gather_nd(a, indices, batch_dims=0)
Gather elements from an n-dimensional array.
- Parameters:
a (tvm.te.Tensor) – The source array.
indices (tvm.te.Tensor) – The indices of the values to extract.
- Returns:
ret
- Return type:
- tvm.topi.get_const_tuple(in_tuple)
Verifies input tuple is IntImm or Var, returns tuple of int or Var.
- tvm.topi.greater(lhs, rhs)
Compute (lhs>rhs) with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.greater_equal(lhs, rhs)
Compute (lhs>=rhs) with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.hybrid_argwhere_1d(output_shape, condition)
Find the indices of elements of a 1-D tensor that are non-zero.
- Parameters:
condition (tvm.te.Tensor) – 1-D tensor with boolean values.
- Returns:
out – Indices of non-zero elements.
- Return type:
- tvm.topi.hybrid_argwhere_2d(output_shape, condition)
Find the indices of elements of a 2-D tensor that are non-zero.
- Parameters:
condition (tvm.te.Tensor) – 2-D tensor with boolean values.
- Returns:
out – Indices of non-zero elements.
- Return type:
- tvm.topi.hybrid_argwhere_3d(output_shape, condition)
Find the indices of elements of a 3-D tensor that are non-zero.
- Parameters:
condition (tvm.te.Tensor) – 3-D tensor with boolean values.
- Returns:
out – Indices of non-zero elements.
- Return type:
- tvm.topi.hybrid_argwhere_4d(output_shape, condition)
Find the indices of elements of a 4-D tensor that are non-zero.
- Parameters:
condition (tvm.te.Tensor) – 4-D tensor with boolean values.
- Returns:
out – Indices of non-zero elements.
- Return type:
- tvm.topi.hybrid_argwhere_5d(output_shape, condition)
Find the indices of elements of a 5-D tensor that are non-zero.
- Parameters:
condition (tvm.te.Tensor) – 5-D tensor with boolean values.
- Returns:
out – Indices of non-zero elements.
- Return type:
- tvm.topi.identity(x)
Take identity of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.invert_permutation(data)
Computes the inverse permutation of data.
- Parameters:
data (tvm.te.Tensor) – Input data
- Returns:
result – Output tensor
- Return type:
Examples
data = [3, 4, 0, 2, 1]
topi.invert_permutation(data) = [2, 4, 3, 0, 1]
- tvm.topi.isfinite(x)
Check if value of x is finite, element-wise.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.isinf(x)
Check if value of x is infinite, element-wise.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.isnan(x)
Check if value of x is NaN, element-wise.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.layout_transform(array, src_layout, dst_layout, schedule_rule='None')
Transform the layout according to src_layout and dst_layout
- Parameters:
array (tvm.te.Tensor) – The source array.
src_layout (str) – the source layout.
dst_layout (str) – the destination layout.
schedule_rule (str) – the schedule rule to apply if any
- tvm.topi.left_shift(lhs, rhs)
Left shift with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.less(lhs, rhs)
Compute (lhs<rhs) with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.less_equal(lhs, rhs)
Compute (lhs<=rhs) with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.log(x)
Take logarithm of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.log10(x)
Take logarithm to the base 10 of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.log2(x)
Take logarithm to the base 2 of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.logical_and(lhs, rhs)
Compute element-wise logical and of data.
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.logical_not(data)
Compute element-wise logical not of data.
- Parameters:
data (tvm.te.Tensor or Expr)
- Returns:
ret – Returns Expr if the operand is an Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.logical_or(lhs, rhs)
Compute element-wise logical or of data.
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.logical_xor(lhs, rhs)
Compute element-wise logical xor of data.
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.make_idx(b, e, s, z, i)
Return the array position in the selection that corresponds to an array position in the full array.
The returned value is only meaningful if within_index() returns True for the same set of parameters.
- Parameters:
b (Expr) – beginning of the index
e (Expr) – end of the index
s (Expr) – strides of index
z (Expr) – size of the indexed dimension
i (Expr) – array position
- Returns:
position – int expression that corresponds to an array position in the selection.
- Return type:
Expr
- tvm.topi.matmul(a, b, transp_a=False, transp_b=False)
Creates an operation that calculates a matrix multiplication (row-major notation): A(i, k) * B(k, j) if trans_a == trans_b; the usual transposed combinations otherwise.
- Parameters:
a (The matrix A)
b (The matrix B)
transp_a (Is A's layout transposed?)
transp_b (Is B's layout transposed?)
- Return type:
A Tensor whose op member is the matmul operation
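A minimal sketch with symbolic dimensions (the variable names are arbitrary assumptions):
from tvm import te, topi

n, k, m = te.var("n"), te.var("k"), te.var("m")
A = te.placeholder((n, k), name="A")
B = te.placeholder((k, m), name="B")
# With transp_a=False and transp_b=False this is the plain row-major product; C has shape (n, m)
C = topi.matmul(A, B)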
- tvm.topi.matrix_set_diag(data, diagonal, k=0, align='RIGHT_LEFT')
Returns a tensor with the diagonals of input tensor replaced with the provided diagonal values.
- Parameters:
data (relay.Expr) – Input Tensor.
diagonal (relay.Expr) – Values to be filled in the diagonal.
k (int or tuple of int, optional) – Diagonal Offset(s). The diagonal or range of diagonals to set. (0 by default) Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. k can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. k[0] must not be larger than k[1].
align (string, optional) – Some diagonals are shorter than max_diag_len and need to be padded. align is a string specifying how superdiagonals and subdiagonals should be aligned, respectively. There are four possible alignments: “RIGHT_LEFT” (default), “LEFT_RIGHT”, “LEFT_LEFT”, and “RIGHT_RIGHT”. “RIGHT_LEFT” aligns superdiagonals to the right (left-pads the row) and subdiagonals to the left (right-pads the row). It is the packing format LAPACK uses. cuSPARSE uses “LEFT_RIGHT”, which is the opposite alignment.
- Returns:
result – New tensor with given diagonal values.
- Return type:
relay.Expr
Examples
data = [[[7, 7, 7, 7],
         [7, 7, 7, 7],
         [7, 7, 7, 7]],
        [[7, 7, 7, 7],
         [7, 7, 7, 7],
         [7, 7, 7, 7]]]
diagonal = [[1, 2, 3], [4, 5, 6]]
topi.matrix_set_diag(data, diagonal) =
    [[[1, 7, 7, 7],
      [7, 2, 7, 7],
      [7, 7, 3, 7]],
     [[4, 7, 7, 7],
      [7, 5, 7, 7],
      [7, 7, 6, 7]]]
- tvm.topi.max(data, axis=None, keepdims=False)
Maximum of array elements over a given axis or a list of axes
- Parameters:
data (tvm.te.Tensor) – The input tvm tensor
axis (None or int or tuple of int) – Axis or axes along which the max operation is performed. The default, axis=None, will find the max element from all of the elements of the input array. If axis is negative it counts from the last to the first axis.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
- Returns:
ret
- Return type:
- tvm.topi.maximum(lhs, rhs)
Take element-wise maximum of two tensors with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.meshgrid(a_tuple, indexing)
Create coordinate matrices from coordinate vectors.
- Parameters:
a_tuple (tuple of tvm.te.Tensor) – The coordinate vectors or scalars.
indexing (str) – Indexing mode, either “ij” or “xy”.
- Returns:
result – The resulting grids for each axis.
- Return type:
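A minimal sketch (shapes are illustrative assumptions):
from tvm import te, topi

x = te.placeholder((3,), name="x")
y = te.placeholder((2,), name="y")
# "ij" indexing: both grids have shape (3, 2)
xx, yy = topi.meshgrid((x, y), "ij")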
- tvm.topi.min(data, axis=None, keepdims=False)
Minimum of array elements over a given axis or a list of axes
- Parameters:
data (tvm.te.Tensor) – The input tvm tensor
axis (None or int or tuple of int) – Axis or axes along which a minimum operation is performed. The default, axis=None, will find the minimum element from all of the elements of the input array. If axis is negative it counts from the last to the first axis.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
- Returns:
ret
- Return type:
- tvm.topi.minimum(lhs, rhs)
Take element-wise minimum of two tensors with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.mod(lhs, rhs)
Modulus with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.multiply(lhs, rhs)
Multiplication with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.ndarray_size(array, dtype='int32')
Get the number of elements of input array
- Parameters:
array (tvm.te.Tensor) – The source tensor.
dtype (str, optional) – The target data type.
- Returns:
result – The resulting tensor.
- Return type:
- tvm.topi.negative(x)
Take negation of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.not_equal(lhs, rhs)
Compute (lhs!=rhs) with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.one_hot(indices, on_value, off_value, depth, axis, dtype)
Returns a one-hot tensor where the locations represented by indices take value on_value, while other locations take value off_value. The final dimension is <indices outer dimensions> x depth x <indices inner dimensions>.
- Parameters:
indices (tvm.te.Tensor) – Locations to set to on_value.
on_value (tvm.te.Tensor) – Value to fill at indices.
off_value (tvm.te.Tensor) – Value to fill at all other positions besides indices.
depth (int) – Depth of the one-hot dimension.
axis (int) – Axis to fill.
dtype (relay.DataType) – Data type of the output tensor.
- Returns:
ret – The one-hot tensor.
- Return type:
relay.Expr
Examples
indices = [0, 1, 2]
relay.one_hot(indices, 3) =
    [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
- tvm.topi.power(lhs, rhs)
Power with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.prod(data, axis=None, keepdims=False)
Product of array elements over a given axis or a list of axes
- Parameters:
data (tvm.te.Tensor) – The input tvm tensor
axis (None or int or tuple of int) – Axis or axes along which a prod operation is performed. The default, axis=None, will get the prod element over all of the elements of the input array. If axis is negative it counts from the last to the first axis.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
- Returns:
ret
- Return type:
- tvm.topi.reinterpret(x, dtype)
Reinterpret input to specified data type.
- Parameters:
x (tvm.te.Tensor) – Input argument.
dtype (str) – Data type.
- Returns:
y – The result.
- Return type:
- tvm.topi.repeat(a, repeats, axis)
Repeats elements of an array.
- Parameters:
a (tvm.te.Tensor) – The tensor to be repeated.
repeats (int, required) – Number of repetitions for each element
axis (int, optional) – The axis along which to repeat values
- Returns:
ret
- Return type:
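A small sketch (the shape is an illustrative assumption):
from tvm import te, topi

A = te.placeholder((2, 2), name="A")
# Each element along axis 1 is repeated twice, giving shape (2, 4)
B = topi.repeat(A, repeats=2, axis=1)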
- tvm.topi.reshape(a, newshape)
Reshape the array
- Parameters:
a (tvm.te.Tensor) – The tensor to be reshaped
newshape (tuple of ints) – The new shape
- Returns:
ret
- Return type:
- tvm.topi.reverse_sequence(a, seq_lengths, seq_axis=1, batch_axis=0)
Reverse the tensor for variable length slices. Input is first sliced along batch axis and then elements are reversed along seq axis.
- Parameters:
a (tvm.te.Tensor) – The tensor to be reversed.
seq_lengths (tvm.te.Tensor) – A 1D Tensor with length a.dims[batch_axis]. Must be one of the following types: int32, int64. If seq_lengths[i] > a.dims[seq_axis], it is rounded to a.dims[seq_axis]; if seq_lengths[i] < 1, it is rounded to 1.
seq_axis (int, optional) – The axis along which the elements will be reversed. Default is 1.
batch_axis (int, optional) – The axis along which the tensor will be sliced. Default is 0.
- Returns:
ret – The computed result of same shape and type as of input.
- Return type:
- tvm.topi.right_shift(lhs, rhs)
Right shift with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.round(x)
Round elements of x to nearest integer.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.rsqrt(x)
Take inverse square root of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.scanop(data: Tensor, binop: Callable[[tvm.Expr, tvm.Expr], tvm.Expr], identity_value: tvm.Expr, op_name: str, axis: int | None = None, dtype: str | None = None, exclusive: bool | None = None) Tensor
Cumulative binary operator (scan) with similar axis behavior as np.cumsum and np.cumprod.
See cumprod and cumsum for an example of use.
E.g. if * is your binary operator and the input tensor is [1, 2, 3, 4] the output may be [1, 1 * 2, 1 * 2 * 3, 1 * 2 * 3 * 4]
- Parameters:
data (tvm.te.Tensor) – The input data to the operator.
binop (Callable (tvm.Expr, tvm.Expr) -> tvm.Expr) – A binary operator which should be associative and commutative. E.g. if * is your operator then a * (b * c) = (a * b) * c and a * b = b * a
identity_value (tvm.Expr) – A value for the binary operation which provides the identity property. E.g. if * is your operator and i is the identity_value then a * i = a for all a in the domain of your operation.
axis (int, optional) – Axis along which the operation is computed. The default (None) is to compute the cumulative operation over the flattened array.
dtype (string, optional) – Type of the returned array and of the accumulator in which the elements are computed. If dtype is not specified, it defaults to the dtype of data.
exclusive (bool, optional) – If True will return exclusive cumulative operation in which the first element is not included. In other terms, if True, the j-th output element would be the cumulative operation of the first (j-1) elements. Otherwise, it would be the cumulative operation of the first j elements. The cumulative operation of zero elements is assumed to be the identity_value.
- Returns:
result – The result has the same size as data, and the same shape as data if axis is not None. If axis is None, the result is a 1-d array.
- Return type:
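As a sketch of how a custom scan might be expressed with scanop, the following builds a cumulative maximum; the binop, identity_value, and op_name choices are illustrative assumptions, so check the cumsum/cumprod sources for the exact conventions used in your TVM version:
import tvm
from tvm import te, topi

data = te.placeholder((8,), name="data", dtype="float32")
cummax = topi.scanop(
    data,
    binop=lambda a, b: tvm.tir.max(a, b),         # associative and commutative
    identity_value=tvm.tir.min_value("float32"),  # identity element for max
    op_name="cummax",
    axis=0,
)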
- tvm.topi.scatter_elements(data, indices, updates, axis=0, reduction='update')
Scatter elements from updates to corresponding indices of copied data.
Data, indices, updates and output have the same shape. Indices can not have duplicates (if idx1 != idx2, then indices[idx1] != indices[idx2]) if reduction == “update”.
output[indices[i][j]][j] = f(output[indices[i][j]][j], updates[i][j])  # if axis = 0
output[i][indices[i][j]] = f(output[i][indices[i][j]], updates[i][j])  # if axis = 1
where the update function f is determined by the reduction. Five types of the function are supported: “update”, “add”, “mul”, “min” and “max” (see below)
- Parameters:
data (tvm.te.Tensor) – The source array.
indices (tvm.te.Tensor) – The indices of the values to extract.
updates (tvm.te.Tensor) – The updates to apply at the Indices
axis (optional, int) – The axis to scatter on. It is zero by default.
reduction (optional, string) – The update mode for the algorithm, either “update”, “add”, “mul”, “min” or “max”. If “update”, the update values replace the input data; if “add”, the update values are added to the input data; if “mul”, the input data is multiplied by the update values; if “min” or “max”, the elementwise minimum or maximum of the update values and the input data is chosen. It is “update” by default.
- Returns:
ret
- Return type:
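A declaration-only sketch of scatter_elements with the “add” reduction, using made-up shapes; data, indices and updates follow the same-shape convention described above:
from tvm import te, topi

data = te.placeholder((4, 4), dtype="float32", name="data")
indices = te.placeholder((4, 4), dtype="int64", name="indices")
updates = te.placeholder((4, 4), dtype="float32", name="updates")
# output[indices[i][j]][j] += updates[i][j]  (axis = 0, reduction = "add")
out = topi.scatter_elements(data, indices, updates, axis=0, reduction="add")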
- tvm.topi.scatter_nd(data, indices, updates, mode)
Scatter elements from a n-dimension array.
Given updates with shape (Y_0, …, Y_{K-1}, X_M, …, X_{N-1}), indices with shape (M, Y_0, …, Y_{K-1}), and output copied from data with shape (X_0, X_1, …, X_{N-1}), scatter_nd computes
output[indices[0, y_0, ..., y_{K-1}], ..., indices[M-1, y_0, ..., y_{K-1}], x_M, ..., x_{N-1} ] = f(output[...], updates[y_0, ..., y_{K-1}, x_M, ..., x_{N-1}])
where the update function f is determined by the mode.
- Parameters:
data (tvm.te.Tensor) – The source array.
indices (tvm.te.Tensor) – The indices of the values to extract.
updates (tvm.te.Tensor) – The updates to apply at the indices
mode (string) – The update mode for the algorithm, either “update” or “add” If update, the update values will replace the input data If add, the update values will be added to the input data
- Returns:
ret
- Return type:
- tvm.topi.searchsorted(sorted_sequence, values, right=False, out_dtype='int64')
Find indices where elements should be inserted to maintain order.
If sorted_sequence is N-dimensional, the innermost dimension of values are searched in the corresponding dimension of sorted_sequence.
- Parameters:
sorted_sequence (te.Tensor) – N-D or 1-D Tensor, containing monotonically increasing sequence on the innermost dimension.
values (te.Tensor) – N-D Tensor containing the search values. When sorted_sequence is 1-D, the shape of values can be arbitrary. Otherwise, ranks of sorted_sequence and values must be the same, and outer N-1 axes must have the same size.
right (bool, optional) – Controls which index is returned if a value lands exactly on one of the sorted values. If False, the index of the first suitable location found is given. If True, return the last such index. If there is no suitable index, return either 0 or N (where N is the size of the innermost dimension).
out_dtype (string, optional) – The data type of the output indices.
- Returns:
indices – Tensor with same shape as values, representing the indices of elements of values if they are inserted in sorted_sequence.
- Return type:
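A declaration-only sketch of searchsorted for a 1-D sorted sequence (shapes are illustrative):
from tvm import te, topi

sorted_sequence = te.placeholder((128,), dtype="float32", name="sorted_sequence")
values = te.placeholder((10,), dtype="float32", name="values")
# indices[i] is the insertion point of values[i] in sorted_sequence
indices = topi.searchsorted(sorted_sequence, values, right=False, out_dtype="int64")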
- tvm.topi.sequence_mask(data, valid_length, mask_value=0, axis=0)
Sets all elements outside the expected length of the sequence to a constant value.
This function takes an n-dimensional input array of the form [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] and returns an array of the same shape.
axis means the axis of the length dimension and can only be 0 or 1. If axis is 0, the data must have shape [MAX_LENGTH, batch_size, …]. Otherwise (axis=1), the data must have shape [batch_size, MAX_LENGTH, …].
valid_length gives the length of each sequence. valid_length should be a 1D int array with positive ints and has dimension [batch_size,].
- Parameters:
data (tvm.te.Tensor) – N-D with shape [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] depending on the value of axis.
valid_length (tvm.te.Tensor) – 1-D with shape [batch_size,]
mask_value (float, optional) – The masking value, default 0
axis (int, optional) – axis of the length dimension, must be 0 or 1, default 0
- Returns:
output – N-D with shape [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] depending on the value of axis.
- Return type:
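A declaration-only sketch of sequence_mask on a [MAX_LENGTH, batch_size, feature] input (axis=0), with made-up shapes:
from tvm import te, topi

data = te.placeholder((10, 4, 16), dtype="float32", name="data")          # [MAX_LENGTH, batch, feature]
valid_length = te.placeholder((4,), dtype="int32", name="valid_length")   # one length per batch element
masked = topi.sequence_mask(data, valid_length, mask_value=0.0, axis=0)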
- tvm.topi.shape(array, dtype='int32')
Get the shape of input array
- Parameters:
array (tvm.te.Tensor) – The source tensor.
dtype (str, optional) – The target data type.
- Returns:
result – The resulting tensor.
- Return type:
- tvm.topi.sigmoid(x)
Take sigmoid of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.sign(x)
Returns -1, 0, 1 based on sign of x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.sin(x)
Take sin of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.sinh(x)
Take sinh of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.sliding_window(data, axis, window_shape, strides)
Slide a window over the data tensor.
- Parameters:
data (relay.Expr) – The input data to the operator.
axis (int) – What axis the window begins sliding over. Window will be slid over this axis and all following axes. The axis value determines the window shape (and thus, the number of strides): window shape and strides must both be of length data.ndim-axis.
window_shape (List[int]) – The window shape to form over the input. Window shape must be of length data.ndim-axis.
strides (List[int]) – How to stride the window along each dimension. Strides must be of length data.ndim-axis.
- Returns:
result – The resulting tensor.
- Return type:
relay.Expr
- tvm.topi.sort(data, axis=-1, is_ascend=1)
Performs sorting along the given axis and returns an array in sorted order.
- Parameters:
data (tvm.te.Tensor) – The input tensor.
axis (int, optional) – Axis along which to sort the input tensor. By default the flattened array is used.
is_ascend (boolean, optional) – Whether to sort in ascending or descending order.
- Returns:
out – The sorted tensor.
- Return type:
- tvm.topi.sparse_reshape(sparse_indices, prev_shape, new_shape, new_sparse_indices_shape, new_shape_shape)
Reshape a Sparse Tensor
- Parameters:
sparse_indices (relay.Expr) – A 2-D tensor[N, n_dim] of integers containing location of sparse values, where N is the number of sparse values and n_dim is the number of dimensions of the dense_shape
prev_shape (relay.Expr) – A 1-D tensor containing the previous shape of the dense tensor
new_shape (relay.Expr) – A 1-D tensor containing the new shape of the dense tensor
- Returns:
result – Output tensor.
- Return type:
relay.Expr
Examples
sparse_indices = [[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 2, 3]]
prev_shape = [2, 3, 4]
new_shape = [9, -1]
new_sparse_indices, new_shape = relay.sparse_reshape(sparse_indices, prev_shape, new_shape)
new_sparse_indices = [[0, 0], [0, 1], [1, 2], [4, 2], [8, 1]]
new_shape = [9, 4]
- tvm.topi.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0)
Converts a sparse representation into a dense tensor.
Example: sparse_to_dense([[0, 0], [1, 1]], [2, 2], [3, 3], 0) = [[3, 0], [0, 3]]
- Parameters:
sparse_indices (tvm.te.Tensor) – A 0-D, 1-D, or 2-D tensor of integers containing location of sparse values.
output_shape (A list of integers) – Shape of the dense output tensor.
sparse_values (tvm.te.Tensor) – A 0-D or 1-D tensor containing the sparse values for the sparse indices.
default_value (tvm.te.Tensor) – A 0-D tensor containing the default value for the remaining locations. Defaults to 0.
- Returns:
result – Dense tensor of shape output_shape. Has the same type as sparse_values.
- Return type:
- tvm.topi.split(ary, indices_or_sections, axis=0)
Split an array into multiple sub-arrays.
- Parameters:
ary (tvm.te.Tensor)
indices_or_sections (int or 1-D array)
axis (int)
- Returns:
ret
- Return type:
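A declaration-only sketch of split into three equal sections along axis 0 (illustrative shapes):
from tvm import te, topi

x = te.placeholder((6, 4), dtype="float32", name="x")
parts = topi.split(x, 3, axis=0)   # three tensors, each of shape (2, 4)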
- tvm.topi.sqrt(x)
Take square root of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.squeeze(a, axis=None)
Remove single-dimensional entries from the shape of an array.
- Parameters:
a (tvm.te.Tensor)
axis (None or int or tuple of ints, optional) – Selects a subset of the single-dimensional entries in the shape. If an axis is selected with shape entry greater than one, an error is raised.
- Returns:
squeezed
- Return type:
- tvm.topi.stack(a, axis)
Join a sequence of tensors along a new axis.
- Parameters:
a (tvm.te.Tensor) – The tensor to be stacked.
axis (int, optional) – The axis in the result array along which the input arrays are stacked.
- Returns:
ret
- Return type:
- tvm.topi.stft(data, n_fft, hop_length, win_length, window, normalized, onesided, output_shape)
The STFT computes the Fourier transform of short overlapping windows of the input. This gives frequency components of the signal as they change over time.
- Parameters:
data (relay.Expr) – Either a 1-D tensor or a 2-D batch tensor
n_fft (int) – The size of Fourier transform
hop_length (int) – The distance between neighboring sliding window frames
win_length (int) – The size of window frame and STFT filter
window (relay.Expr) – A 1-D tensor window frame
normalized (bool) – Whether to return the normalized STFT results
onesided (bool) – Whether to return onesided result or fill with conjugate symmetry
- Returns:
output – Tensor containing the STFT result
- Return type:
relay.Expr
Examples
data = [1, 2, 3, 4, 5, 6]
window = [4, 3, 2]
[n_fft, hop_length, win_length, normalized, onesided] = [3, 3, 3, False, True]
relay.stft(data, n_fft, hop_length, win_length, window, normalized, onesided)
-> [[[15.0000, 0.0000], [34.0000, 0.0000]], [[4.5000, 0.8660], [1.0000, -1.7321]]]
- tvm.topi.strided_set(a, v, begin, end, strides=None)
Set slice of an array.
- Parameters:
a (tvm.te.Tensor) – The tensor to be sliced.
v (tvm.te.Tensor) – The values to set
begin (tvm.te.Tensor) – The indices to begin with in the slicing.
end (tvm.te.Tensor) – Indices indicating end of the slice.
strides (tvm.te.Tensor, optional) – Specifies the stride values; it can be negative, in which case the input tensor will be reversed in that particular axis.
- Returns:
ret
- Return type:
- tvm.topi.strided_slice(a, begin, end, strides=None, axes=None, slice_mode='end', assume_inbound=True)
Slice of an array.
- Parameters:
a (tvm.te.Tensor) – The tensor to be sliced.
begin (list of int) – The indices to begin with in the slicing.
end (list of int) – Indices indicating the end of the slice.
strides (list of int, optional) – Specifies the stride values; it can be negative, in which case the input tensor will be reversed in that particular axis.
axes (list of int, optional) – Axes along which slicing is applied. When it is specified, begin, end, strides, and axes need to be lists of integers of the same length.
slice_mode (str, optional) – The slice mode [end, size]. end - The ending indices for the slice [default]. size - The input strides will be ignored; in this mode, end indicates the size of a slice starting at the location specified by begin. If end[i] is -1, all remaining elements in that dimension are included in the slice.
assume_inbound (bool, optional) – A flag to indicate if all indices are assumed to be inbound
- Returns:
ret
- Return type:
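A declaration-only sketch of strided_slice taking rows 1..3 and every other column (illustrative shapes):
from tvm import te, topi

x = te.placeholder((5, 8), dtype="float32", name="x")
y = topi.strided_slice(x, [1, 0], [4, 8], [1, 2])   # result shape (3, 4)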
- tvm.topi.subtract(lhs, rhs)
Subtraction with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.sum(data, axis=None, keepdims=False)
Sum of array elements over a given axis or a list of axes
- Parameters:
data (tvm.te.Tensor) – The input tvm tensor
axis (None or int or tuple of int) – Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
- Returns:
ret
- Return type:
- tvm.topi.take(a, indices, axis=None, batch_dims=0, mode='clip')
Take elements from an array along an axis.
- Parameters:
a (tvm.te.Tensor) – The source array.
indices (tvm.te.Tensor) – The indices of the values to extract.
axis (int, optional) – The axis over which to select values. By default, the flattened input array is used.
batch_dims (int) – The number of batch dimensions. It is 0 by default.
mode (str, optional) – Specifies how out-of-bound indices will behave. clip - clip to the range (default); wrap - wrap around the indices; fast - no clip or wrap around (the user must make sure indices are in-bound).
- Returns:
ret
- Return type:
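A declaration-only sketch of take gathering rows of a matrix by index along axis 0, with the default “clip” mode and made-up shapes:
from tvm import te, topi

table = te.placeholder((100, 16), dtype="float32", name="table")
idx = te.placeholder((32,), dtype="int32", name="idx")
rows = topi.take(table, idx, axis=0)   # result shape (32, 16)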
- tvm.topi.take_legalize(attrs, inputs, types)
Legalizes take op.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current op
inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized
types (list of types) – List of input and output types
- Returns:
result – The legalized expr
- Return type:
tvm.relay.Expr
- tvm.topi.tan(x)
Take tan of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.tanh(x)
Take hyperbolic tangent of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.tensordot(a, b, axes)
A generalization of matrix multiplication to tensor.
- Parameters:
a (The tensor A)
b (The tensor B)
axes (The number of dimensions to reduce over)
- Return type:
A Tensor computing the result
- tvm.topi.tile(a, reps)
Repeats the whole array multiple times.
- Parameters:
a (tvm.te.Tensor) – The tensor to be tiled.
reps (tuple of ints, required) – The number of times for repeating the tensor
- Returns:
ret
- Return type:
- tvm.topi.topk(data, k=1, axis=-1, ret_type='both', is_ascend=False, dtype='int64')
Get the top k elements in an input tensor along the given axis.
- Parameters:
data (tvm.te.Tensor) – The input tensor.
k (int or tvm.te.Tensor, optional) – Number of top elements to select. Return all elements if k < 1.
axis (int, optional) – Axis along which to sort the input tensor.
ret_type (str, optional) – The return type [both, values, indices]. “both”: return both top k data and indices. “values”: return top k data only. “indices”: return top k indices only.
is_ascend (boolean, optional) – Whether to sort in ascending or descending order.
dtype (string, optional) – The data type of the indices output.
- Returns:
out – The computed result.
- Return type:
tvm.te.Tensor or List[tvm.te.Tensor]
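A declaration-only sketch of topk returning both the top-3 values and their indices along the last axis (illustrative shapes):
from tvm import te, topi

scores = te.placeholder((8, 1000), dtype="float32", name="scores")
values, indices = topi.topk(scores, k=3, axis=-1, ret_type="both", is_ascend=False)
# values: (8, 3) float32, indices: (8, 3) int64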
- tvm.topi.transpose(a, axes=None)
Permute the dimensions of an array.
- Parameters:
a (tvm.te.Tensor) – The tensor to be transposed.
axes (tuple of ints, optional) – By default, reverse the dimensions.
- Returns:
ret
- Return type:
- tvm.topi.trilu(data, k, upper)
Given a 2-D matrix or batches of 2-D matrices, returns the upper or lower triangular part of the tensor.
- Parameters:
data (tvm.te.Tensor) – The tensor that trilu will be applied to. Must be either a 2D matrix or a tensor of batches of 2D matrices.
k (tvm.te.Tensor) – The number of diagonals above or below the main diagonal to exclude or include.
upper (bool) – If True, only upper triangular values of input are kept, if False, the lower triangular values are kept.
- Returns:
ret – The new tensor with appropriate diagonals set to zero.
- Return type:
relay.Expr
Examples
x = [[0, 1, 2],
     [3, 4, 5],
     [6, 7, 8]]
relay.trilu(x, 0, True) = [[0, 1, 2],
                           [0, 4, 5],
                           [0, 0, 8]]
- tvm.topi.trunc(x)
Take truncated value of the input of x, element-wise.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.unique(data, is_sorted=True, return_counts=False)
Find the unique elements of a 1-D tensor. Note that the outputs and counts are all padded to the same length as data, and elements with index >= num_unique[0] have undefined values.
- Parameters:
data (tvm.te.Tensor) – A 1-D tensor of integers.
is_sorted (bool) – Whether to sort the unique elements in ascending order before returning as output.
return_counts (bool) – Whether to return the count of each unique element.
- Returns:
unique (tvm.te.Tensor) – A 1-D tensor containing the unique elements of the input data tensor. The same size as the input data. If there are less unique elements than input data, the end of the tensor is padded with zeros.
indices (tvm.te.Tensor) – A 1-D tensor. The same size as output. For each entry in output, it contains the index of its first occurrence in the input data. The end of the tensor is padded with the length of the input data.
inverse_indices (tvm.te.Tensor) – A 1-D tensor. For each entry in data, it contains the index of that data element in the unique array. (Note that inverse_indices is very similar to indices if output is not sorted.)
num_unique (tvm.te.Tensor) – A 1-D tensor with size=1 containing the number of unique elements in the input data tensor.
counts (optional) (tvm.te.Tensor) – A 1-D tensor containing the count of each unique element in the output.
Examples
[output, indices, num_unique] = unique([4, 5, 1, 2, 3, 3, 4, 5], False, False)
output          = [4, 5, 1, 2, 3, _, _, _]
indices         = [0, 1, 2, 3, 4, _, _, _]
inverse_indices = [0, 1, 2, 3, 4, 4, 0, 1]
num_unique      = [5]

[output, indices, num_unique, counts] = unique([4, 5, 1, 2, 3, 3, 4, 5], False, True)
output          = [4, 5, 1, 2, 3, _, _, _]
indices         = [0, 1, 2, 3, 4, _, _, _]
inverse_indices = [0, 1, 2, 3, 4, 4, 0, 1]
num_unique      = [5]
counts          = [2, 2, 1, 1, 2, _, _, _]

[output, indices, num_unique] = unique([4, 5, 1, 2, 3, 3, 4, 5], True)
output          = [1, 2, 3, 4, 5, _, _, _]
indices         = [2, 3, 4, 0, 1, _, _, _]
inverse_indices = [3, 4, 0, 1, 2, 2, 3, 4]
num_unique      = [5]
- tvm.topi.unravel_index(indices, shape)
Convert a flat index or array of flat indices into a tuple of coordinate arrays.
Example: unravel_index([22, 41, 37], [7, 6]) = [[3, 6, 6], [4, 5, 1]]
- Parameters:
indices (relay.Expr) – An integer array containing indices.
shape (relay.Expr) – The shape of the array.
- Returns:
result – The tuple of coordinate arrays.
- Return type:
relay.Expr
- tvm.topi.where(condition, x, y)
Get the elements, either from x or y, depending on the condition.
- Parameters:
condition (tvm.te.Tensor) – The condition array.
x (tvm.te.Tensor) – First array to be selected.
y (tvm.te.Tensor) – Second array to be selected.
- Returns:
result – A Tensor selected from x or y depending on condition.
- Return type:
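A declaration-only sketch of where, selecting elementwise between two tensors (illustrative shapes):
from tvm import te, topi

cond = te.placeholder((4, 4), dtype="bool", name="cond")
x = te.placeholder((4, 4), dtype="float32", name="x")
y = te.placeholder((4, 4), dtype="float32", name="y")
out = topi.where(cond, x, y)   # out[i, j] = x[i, j] if cond[i, j] else y[i, j]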
- tvm.topi.within_index(b, e, s, i)
Return a boolean value that indicates if i is within the given index.
- Parameters:
b (Expr) – beginning of the index
e (Expr) – end of the index
s (Expr) – strides of index
i (Expr) – array position
- Returns:
selected – bool expression that is True if the array position would be selected by the index and False otherwise
- Return type:
Expr
- exception tvm.topi.InvalidShapeError
Invalid shape for a topi function (e.g. calling a winograd template for a non-3x3 kernel).
tvm.topi.nn
Neural network operators
Classes:
|
Functions:
|
Perform pooling on height and width dimension of data. |
|
Perform pooling on three dimensional data. |
|
Perform pooling on three dimensional data. |
|
Addition with auto-broadcasting |
|
Change add layout. |
|
Compute batch matrix multiplication of tensor_a and tensor_b. |
|
Legalizes batch_matmul op. |
|
Batch normalization layer (Ioffe and Szegedy, 2014). |
|
Perform space to batch transformation on the data |
|
Legalize bias_add layout. |
|
Binarization and bit-packing along a certain axis. |
|
Binary matrix multiplication using xor and bit-count. |
|
Packs data into format necessary for bitserial computation |
|
Legalizes Bitserial Conv2D op. |
|
Bitserial Conv2D operator. |
|
Bitserial Conv2D operator. |
|
The default implementation of bitserial dense in topi. |
|
Join a sequence of arrays along an existing axis. |
|
Convolution operator in NCHW or NHWC layout. |
|
1D convolution forward operator. |
|
1D convolution in NCW layout. |
|
1D convolution in NWC layout. |
|
Transposed 1D convolution ncw forward operator. |
|
Conv2D operator. |
|
Conv2D operator for nChw[x]c layout. |
|
Conv2D operator for nChw[x]c layout. |
|
Change Conv2D layout. |
|
Weight transformation for winograd |
|
Convolution operator in HWCN layout. |
|
Infer input/output shapes and layouts from a workload and cfg. |
|
Legalizes Conv2D op. |
|
Convolution operator in NCHW layout. |
|
Convolution operator in NHWC layout. |
|
Change Conv2D_Transpose layout. |
|
Legalizes Transposed 2D convolution op. |
|
Transposed 2D convolution nchw forward operator. |
|
Preprocess data and kernel to make the compute pattern of conv2d_transpose the same as conv2d |
|
Conv2D Winograd in NCHW layout. |
Conv2D Winograd without layout transform in NCHW layout. |
|
|
Conv2D Winograd in NHWC layout. |
Conv2D Winograd without layout transform in NHWC layout. |
|
Weight transformation for winograd |
|
|
Weight transformation for winograd |
|
Change Conv3D layout. |
|
Conv3D operator in NCDHW layout. |
|
Convolution operator in NDHWC layout. |
|
Legalizes Transposed 3D convolution op. |
|
Transposed 3D convolution ncdhw forward operator. |
|
Preprocess data and kernel to make the compute pattern of conv3d_transpose the same as conv3d |
|
Weight transformation for 3D winograd |
|
Correlation operator in NCHW layout. |
|
Implementation of conv2d transpose |
|
Implementation of conv3d transpose |
|
Deformable conv2D operator in NCHW layout. |
|
Deformable conv2D operator in NHWC layout. |
|
The default implementation of dense in topi. |
|
Change dense layout. |
|
Legalizes dense op. |
|
The default implementation of dense_pack in topi. |
|
Perform depth to space transformation on the data |
|
Depthwise convolution NCHW[x]c forward operator. |
|
Depthwise convolution nhwc backward wrt input operator. |
|
Depthwise convolution nhwc backward wrt weight operator. |
|
Infer input/output shapes and layouts from a workload and cfg. |
|
Depthwise convolution nchw forward operator. |
|
Depthwise convolution nhwc forward operator. |
|
Dilate data with given dilation value (0 by default). |
|
Returns if expr equals value. |
|
Perform softmax activation on the data. |
|
FIFO buffer to enable computation reuse in CNNs with sliding indow input |
|
Flattens the input array into a 2-D array by collapsing the higher dimensions. |
|
Verifies expr is integer and get the constant value. |
|
Verifies input tuple is IntImm or Var, returns tuple of int or Var. |
|
Common code to get the pad option |
|
Common code to get the pad option |
|
Common code to get the pad option |
|
Common code to get the pad option |
|
Perform global pooling on height and width dimension of data. |
|
1D convolution forward operator for NCW layout. |
|
1D convolution forward operator for NWC layout. |
|
Transposed 1D group convolution ncw forward operator. |
|
Group convolution operator in NCHW layout. |
|
Group convolution operator in NHWC layout. |
|
Group convolution operator in NCHW layout. |
|
Transposed group 3D convolution ncdhw forward operator. |
|
Transform a tensor with the current layout to the desired layout. |
|
Take leaky relu of input x. |
|
Perform log softmax activation on the data |
|
Perform the across channels local response normalisation on the input data. |
|
General LSTM implemented using TE scan. |
|
The default implementation of matmul in topi. |
|
Legalizes matmul op. |
|
Pad Input with mirroring either symmetric or reflected. |
|
Returns a new subclass of tuple with named fields. |
|
Negative log likelihood loss on the input data. |
|
Pad Input with zeros. |
|
Perform pooling on width dimension of data. |
|
Perform pooling on height and width dimension of data. |
|
Perform pooling on depth, height and width dimension of data. |
|
Gradient of pooling on height and width dimension of data. |
|
PReLU. |
|
Change qnn.conv2d layout. |
|
Change qnn.dense layout. |
|
Change requantize layout. |
|
Apply a function of two arguments cumulatively to the items of a sequence, from left to right, so as to reduce the sequence to a single value. |
|
Take relu of input x. |
|
Batch normalization operator in inference. |
|
Batch normalization operator in inference. |
|
Batch normalization operator in inference. |
|
Simplify the expression if it is Expr, directly return if it is int. |
|
Simulated QNN dequantize operator that mimics QNN outputs without changing datatype. |
|
Simulated QNN quantize operator that mimics QNN outputs without changing datatype. |
|
Perform softmax activation on the data. |
|
The common part of softmax and fast_softmax |
|
Perform batch to space transformation on the data |
|
Perform space to depth transformation on the data |
|
Computes sparse-dense addition |
|
Computes sparse-conv2d(1*1) of |
|
Computes sparse-dense matrix multiplication of data and (weight_data, weight_indices, weight_indptr).T, if sparse_lhs=False or Computes sparse-dense matrix multiplication of (data_data, data_indices, data_indptr) and weight.T, if sparse_lhs=True |
|
Change Sparse Dense layout. |
|
Computes sparse-dense matrix multiplication of (data_data, data_indices, data_indptr) and weight.T |
|
Computes sparse-dense matrix multiplication of data and (weight_data, weight_indices, weight_indptr).T |
|
Transpose a square sparse matrix, A is an n-by-n sparse matrix in the CSR format. |
|
Slice of an array. |
Analyze the input data from the given args. |
|
|
Analyze the input data from the given args. |
|
Unpack conv2d_NCHWc output from layout NCHWc to NCHW |
|
Perform upsampling on the data. |
|
Perform upsampling on the data. |
|
Compute the A, B, and G transform matrices for tile_size as a tvm.Expr. |
|
Instance normalization operator. |
|
Layer normalization operator. |
|
Group normalization operator. |
|
Root mean square normalization operator. |
- class tvm.topi.nn.Workload(in_dtype, out_dtype, height, width, in_filter, out_filter, kernel_h, kernel_w, padt, padl, padb, padr, dilation_h, dilation_w, stride_h, stride_w)
Attributes:
- dilation_h
Alias for field number 12
- dilation_w
Alias for field number 13
- height
Alias for field number 2
- in_dtype
Alias for field number 0
- in_filter
Alias for field number 4
- kernel_h
Alias for field number 6
- kernel_w
Alias for field number 7
- out_dtype
Alias for field number 1
- out_filter
Alias for field number 5
- padb
Alias for field number 10
- padl
Alias for field number 9
- padr
Alias for field number 11
- padt
Alias for field number 8
- stride_h
Alias for field number 14
- stride_w
Alias for field number 15
- width
Alias for field number 3
- tvm.topi.nn.adaptive_pool(data, output_size, pool_type, layout='NCHW')
Perform pooling on height and width dimension of data.
The pooling kernel and stride sizes are automatically chosen for the desired output sizes. It decides the height and width dimensions according to the layout string, in which 'W' and 'H' mean width and height respectively. Width and height dimensions cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w and NCHW16h are not. See the layout parameter for more information on the layout string convention.
- Parameters:
data (tvm.te.Tensor) – n-D with shape of layout
pool_type (str) – Pool type, ‘max’ or ‘avg’
layout (string) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.
- Returns:
output – n-D in the same layout
- Return type:
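A declaration-only sketch of adaptive average pooling an NCHW feature map down to a 1x1 spatial output (shapes are made up for illustration):
from tvm import te, topi

x = te.placeholder((1, 64, 56, 56), dtype="float32", name="x")
y = topi.nn.adaptive_pool(x, (1, 1), "avg", layout="NCHW")   # result shape (1, 64, 1, 1)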
- tvm.topi.nn.adaptive_pool1d(data, output_size, pool_type, layout='NCW')
Perform pooling on three dimensional data. See the two dimensional version above for details.
- tvm.topi.nn.adaptive_pool3d(data, output_size, pool_type, layout='NCDHW')
Perform pooling on three dimensional data. See the two dimensional version above for details.
- tvm.topi.nn.add(lhs, rhs)
Addition with auto-broadcasting
- Parameters:
lhs (tvm.te.Tensor or Expr) – The left operand
rhs (tvm.te.Tensor or Expr) – The right operand
- Returns:
ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.
- Return type:
tvm.te.Tensor or Expr
- tvm.topi.nn.add_alter_layout(_attrs, _inputs, _tinfos, _out_type)
Change add layout.
Add is not a QNN-specific function, but this generic exists so that bias add operations can be fused with input zero point add optimizations, which only happens if the previous operator is quantized.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current convolution
inputs (tvm.relay.Expr) – Grouped input symbols
tinfos (list) – Input shape and dtype
out_type (type) – The output type
Note
Unlike other TOPI functions, this function operates on both graph level and operator level.
- tvm.topi.nn.batch_matmul(tensor_a, tensor_b, oshape=None, out_dtype=None, transpose_a=False, transpose_b=True, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)
Compute batch matrix multiplication of tensor_a and tensor_b.
Both tensor_a and tensor_b can be transposed. For legacy reasons, we use the NT format (transpose_a=False, transpose_b=True) by default.
- Parameters:
tensor_a (tvm.te.Tensor) – 3-D with shape [batch, M, K] or [batch, K, M].
tensor_b (tvm.te.Tensor) – 3-D with shape [batch, K, N] or [batch, N, K].
oshape (List[Optional]) – Explicit intended output shape of the computation. Can be useful in cases with dynamic input shapes.
out_dtype (Optional[str]) – Specifies the output data type for mixed precision batch matmul.
transpose_a (Optional[bool] = False) – Whether the first tensor is in transposed format.
transpose_b (Optional[bool] = True) – Whether the second tensor is in transposed format.
auto_scheduler_rewritten_layout (Optional[str] = "") – The layout after auto-scheduler’s layout rewrite pass.
meta_schedule_original_shape (Optional[List[PrimExpr]] = None) – The original shape of the tensor
- Returns:
output – 3-D with shape [batch, M, N]
- Return type:
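A declaration-only sketch of batch_matmul in the default NT format, where tensor_b is [batch, N, K] and transposed internally (illustrative shapes):
from tvm import te, topi

A = te.placeholder((8, 32, 64), dtype="float32", name="A")   # [batch, M, K]
B = te.placeholder((8, 16, 64), dtype="float32", name="B")   # [batch, N, K]
C = topi.nn.batch_matmul(A, B)                               # [batch, M, N] = (8, 32, 16)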
- tvm.topi.nn.batch_matmul_legalize(attrs, inputs, types)
Legalizes batch_matmul op.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current batch_matmul
inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized
types (list of types) – List of input and output types
- Returns:
result – The legalized expr
- Return type:
tvm.relay.Expr
- tvm.topi.nn.batch_norm(data: Tensor, gamma: Tensor, beta: Tensor, moving_mean: Tensor, moving_var: Tensor, axis: int | None = None, epsilon: float | None = None, center: bool | None = None, scale: bool | None = None, training: bool | None = None, momentum: float | None = None) List[Tensor]
Batch normalization layer (Ioffe and Szegedy, 2014).
Normalizes the input at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1.
- Parameters:
data (tvm.te.Tensor) – Input to be batch-normalized.
gamma (tvm.te.Tensor) – Scale factor to be applied to the normalized tensor.
beta (tvm.te.Tensor) – Offset to be applied to the normalized tensor.
moving_mean (tvm.te.Tensor) – Running mean of input.
moving_var (tvm.te.Tensor) – Running variance of input.
axis (int, optional, default=1) – Specify along which shape axis the normalization should occur.
epsilon (float, optional, default=1e-5) – Small float added to variance to avoid dividing by zero.
center (bool, optional, default=True) – If True, add offset of beta to normalized tensor, If False, beta is ignored.
scale (bool, optional, default=True) – If True, scale the normalized tensor by gamma. If False, gamma is ignored.
training (bool, optional, default=False) – Indicates whether it is in training mode. If True, update moving_mean and moving_var.
momentum (float, optional, default=0.1) – The value used for the moving_mean and moving_var update.
- Returns:
output (list of tvm.te.Tensor) – Normalized data with same shape as input
moving_mean (tvm.te.Tensor) – Running mean of input.
moving_var (tvm.te.Tensor) – Running variance of input.
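A declaration-only sketch of inference-mode batch_norm over the channel axis of an NCHW tensor (shapes and names are illustrative):
from tvm import te, topi

x = te.placeholder((1, 64, 56, 56), dtype="float32", name="x")
gamma = te.placeholder((64,), dtype="float32", name="gamma")
beta = te.placeholder((64,), dtype="float32", name="beta")
mean = te.placeholder((64,), dtype="float32", name="mean")
var = te.placeholder((64,), dtype="float32", name="var")
out, new_mean, new_var = topi.nn.batch_norm(x, gamma, beta, mean, var, axis=1, epsilon=1e-5)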
- tvm.topi.nn.batch_to_space_nd(data, block_shape, crop_begin_list, crop_end_list)
Perform batch to space transformation on the data
- Parameters:
data (tvm.te.Tensor) – N-D Tensor with shape [batch, spatial_shape, remaining_shapes], where spatial_shape has M dimensions.
block_shape (list of ints) – list of size [M] where M is number of spatial dims, specifies block size for each spatial dimension.
crop_begin_list (list of ints) – list of shape [M] where M is number of spatial dims, specifies begin crop size for each spatial dimension.
crop_end_list (list of ints) – list of shape [M] where M is number of spatial dims, specifies end crop size for each spatial dimension.
- Returns:
output
- Return type:
- tvm.topi.nn.bias_add_legalize(_attrs, _inputs, _tinfos)
Legalize bias_add layout.
Bias add is not a QNN-specific function, but this generic exists so that empty channels can be excised from quantized conv2d operators and folded into bias adds.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current convolution
inputs (tvm.relay.Expr) – Grouped input symbols
tinfos (list) – Input shape and dtype
- tvm.topi.nn.binarize_pack(data, axis=None, name='PackedInput')
Binarization and bit-packing along a certain axis.
- Parameters:
data (tvm.te.Tensor) – n-D input, can be any layout.
axis (None or int) – The axis along which to do binarization and bit-packing, default is the last axis.
name (str, optional) – The name prefix for the generated operators.
- Returns:
output – n-D, the same layout as input, dtype is uint32.
- Return type:
- tvm.topi.nn.binary_dense(data, weight)
Binary matrix multiplication using xor and bit-count.
- Parameters:
data (tvm.te.Tensor) – 2-D with shape [batch, in_dim], dtype is uint32.
weight (tvm.te.Tensor) – 2-D with shape [out_dim, in_dim], dtype is uint32.
- Returns:
output – 2-D with shape [batch, out_dim], dtype is float32.
- Return type:
- tvm.topi.nn.bitpack(data, bits, pack_axis, bit_axis, pack_type, name='QuantizeInput')
Packs data into format necessary for bitserial computation
- tvm.topi.nn.bitserial_conv2d_legalize(attrs, inputs, types)
Legalizes Bitserial Conv2D op.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current convolution
inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized
types (list of types) – List of input and output types
- Returns:
result – The legalized expr
- Return type:
tvm.relay.Expr
- tvm.topi.nn.bitserial_conv2d_nchw(data, kernel, stride, padding, activation_bits, weight_bits, pack_dtype='uint32', out_dtype='int16', unipolar=True)
Bitserial Conv2D operator.
- Parameters:
data (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
kernel (tvm.te.Tensor) – 4-D with shape [num_filter, in_channel, filter_height, filter_width]
stride (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]
padding (int or a list/tuple of two or four ints) – padding size, [pad_height, pad_width], [pad_top, pad_left, pad_down, pad_right]
activation_bits (int) – number of bits used for activations/input elements
weight_bits (int) – number of bits used for weight elements
out_dtype (str) – return type of convolution
pack_dtype (str) – bit packing type
unipolar (bool) – if binarization style is in unipolar 1/0 format, instead of bipolar -1/+1 format
- Returns:
output – 4-D with shape [batch, out_channel, out_height, out_width]
- Return type:
- tvm.topi.nn.bitserial_conv2d_nhwc(data, kernel, stride, padding, activation_bits, weight_bits, pack_dtype='uint32', out_dtype='int16', unipolar=True)
Bitserial Conv2D operator.
- Parameters:
data (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]
kernel (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]
stride (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]
padding (int or a list/tuple of two or four ints) – padding size, [pad_height, pad_width], [pad_top, pad_left, pad_down, pad_right]
activation_bits (int) – number of bits used for activations/input elements
weight_bits (int) – number of bits used for weight elements
out_dtype (str) – return type of convolution
pack_dtype (str) – bit packing type
unipolar (bool) – if binarization style is in unipolar 1/0 format, instead of bipolar -1/+1 format
- Returns:
output – 4-D with shape [batch, out_height, out_width, out_channel]
- Return type:
- tvm.topi.nn.bitserial_dense(data, weight, data_bits, weight_bits, pack_dtype='uint32', out_dtype='int16', unipolar=True)
The default implementation of bitserial dense in topi.
- Parameters:
data (tvm.te.Tensor) – 2-D with shape [batch, in_dim]
weight (tvm.te.Tensor) – 2-D with shape [out_dim, in_dim] or 3-D with shape [out_dim, weight_bits, in_dim]
- Returns:
output – 2-D with shape [batch, out_dim]
- Return type:
- tvm.topi.nn.concatenate(a_tuple, axis=0)
Join a sequence of arrays along an existing axis.
- Parameters:
a_tuple (tuple of tvm.te.Tensor) – The arrays to concatenate
axis (int, optional) – The axis along which the arrays will be joined. Default is 0.
- Returns:
ret
- Return type:
- tvm.topi.nn.conv(inp: Tensor, filt: Tensor, stride: int | Sequence[int], padding: int | Sequence[int], dilation: int | Sequence[int], groups: int, data_layout: str, kernel_layout: str = '', out_dtype: str | None = None, auto_scheduler_rewritten_layout: str | None = None, meta_schedule_original_shape=None, auto_scheduler_should_rewrite_layout: bool = False)
Convolution operator in NCHW or NHWC layout.
Supports 1D, 2D, 3D, … and grouping.
- Parameters:
inp (tvm.te.Tensor) – N-D with shape [batch, in_channel, in_height, in_width, …] in data_layout
filt (tvm.te.Tensor) – N-D with shape [num_filter, in_channel // groups, filter_height, filter_width, …] in kernel_layout
stride (int or a list/tuple of dim ints) – (where dim=2 for NCHW, dim=1 for NCH, etc.) Stride size, or [stride_height, stride_width, …]
padding (int or a list/tuple of dim or 2*dim ints) – (where dim=2 for NCHW, dim=1 for NCH, etc.) padding size, or [pad_height, pad_width, …] for dim ints, or [pad_top, pad_left, pad_bottom, pad_right] for 2*dim ints
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
groups (int) – number of groups
data_layout (str) – Layout of the input. N indicates batch dimension, C indicates channels, any other character indicates HW (or H or HWD for 1D and 3D).
kernel_layout (Optional[str]) – Layout of the filter. I indicates input channels, O indicates output channels, any other character indicates HW dimension of the filter (or H or HWD for 1D and 3D). If kernel_layout is empty, use data_layout to infer the default kernel_layout. Default kernel_layout is OIHW for NCHW data layout, HWIO for NHWC data layout.
out_dtype (str) – Elements are converted to this type before elementwise multiplication and summation.
auto_scheduler_rewritten_layout (str) – Layout from the auto-scheduler's layout rewriting.
meta_schedule_original_shape (Optional[List[PrimExpr]]) – The original shape of the input tensor.
auto_scheduler_should_rewrite_layout (bool) – Should auto scheduler be allowed to rewrite the layout of the filter tensor. Defaults to false. This can cause errors if used with grouped convs.
- Returns:
Output – N-D with shape [batch, out_channel, out_height, out_width, …] in data_layout
- Return type:
- tvm.topi.nn.conv1d(data, kernel, strides=1, padding='VALID', dilation=1, groups=1, data_layout='NCW', kernel_layout='', out_dtype=None)
1D convolution forward operator.
- Parameters:
data (tvm.te.Tensor) – 3-D input shape [batch, in_channel, in_width] for data_layout == ‘NCW’ and [batch, in_width, in_channel] for data_layout == ‘NWC’
kernel (tvm.te.Tensor) – 3-D kernel with shape [num_filter, in_channel, filter_size] for kernel_layout == ‘OIW’ and [filter_size, in_channel, num_filter] for kernel_layout == ‘WIO’
dilation (int or tuple) – Dilation rate if convolution should be dilated.
data_layout (str) – How input data is laid out, must be one of [‘NCW’, ‘NWC’]
kernel_layout (Optional[str]) – The layout of the kernel. If unspecified, use the default layout: "OIW" if data_layout == "NCW", "WIO" if data_layout == "NWC".
out_dtype (str) – The output data type. If None then output is same type as input.
- tvm.topi.nn.conv1d_ncw(data, kernel, strides=1, padding='VALID', dilation=1, out_dtype=None)
1D convolution in NCW layout. See
conv()
for details on parameters
- tvm.topi.nn.conv1d_nwc(data, kernel, strides=1, padding='VALID', dilation=1, out_dtype=None)
1D convolution in NWC layout. See
conv()
for details on parameters
- tvm.topi.nn.conv1d_transpose_ncw(data, kernel, stride, padding, out_dtype, output_padding)
Transposed 1D convolution ncw forward operator.
- Parameters:
data (tvm.te.Tensor) – 3-D with shape [batch, in_channel, in_width]
kernel (tvm.te.Tensor) – 3-D with shape [in_channel, num_filter, filter_width]
stride (ints) – The spatial stride along width
out_dtype (str) – The output data type. This is used for mixed precision.
output_padding (ints) – Used to recover the actual output shape in case there is more than one possible shape. Must be smaller than stride.
- Returns:
output – 3-D with shape [batch, out_channel, out_width]
- Return type:
- tvm.topi.nn.conv2d(input, filter, strides, padding, dilation, data_layout='NCHW', kernel_layout='', out_dtype=None)
Conv2D operator.
- Parameters:
input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width] in data_layout
filter (tvm.te.Tensor) – 4-D with shape [num_filter, in_channel, filter_height, filter_width] in kernel_layout
strides (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]
padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
data_layout (str) – layout of data
kernel_layout (Optional[str]) – layout of kernel. If unspecified, use default layout inferred from data_layout. “OIHW” if data_layout == “NCHW”, “HWIO” if data_layout == “NHWC”.
- Returns:
output – 4-D with shape [batch, out_channel, out_height, out_width]
- Return type:
- tvm.topi.nn.conv2d_NCHWc(data, kernel, stride, padding, dilation, layout, out_layout, out_dtype='float32')
Conv2D operator for nChw[x]c layout.
- Parameters:
data (tvm.te.Tensor) – 5-D with shape [batch, in_channel_chunk, in_height, in_width, in_channel_block]
kernel (tvm.te.Tensor) – 6-D with shape [num_filter_chunk, in_channel_chunk, filter_height, filter_width, in_channel_block, num_filter_block]
stride (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]
padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
layout (str) – Input data layout
out_layout (str) – Output data layout
out_dtype (str) – output data type
- Returns:
output – 5-D with shape [batch, out_channel_chunk, out_height, out_width, out_channel_block]
- Return type:
- tvm.topi.nn.conv2d_NCHWc_int8(data, kernel, stride, padding, dilation, layout, out_layout, out_dtype='int32', n_elems=4)
Conv2D operator for nChw[x]c layout.
- Parameters:
data (tvm.te.Tensor) – 5-D with shape [batch, in_channel_chunk, in_height, in_width, in_channel_block]
kernel (tvm.te.Tensor) – 7-D with shape [num_filter_chunk, in_channel_chunk, filter_height, filter_width, in_channel_block/4, num_filter_block, 4]
stride (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]
padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
layout (str) – Input data layout
out_layout (str) – Output data layout
out_dtype (str) – output data type
n_elems (int) – number of int8 elements accumulated
- Returns:
output – 5-D with shape [batch, out_channel_chunk, out_height, out_width, out_channel_block]
- Return type:
- tvm.topi.nn.conv2d_alter_layout(attrs, inputs, tinfos, out_type)
Change Conv2D layout.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current convolution
inputs (tvm.relay.Expr) – Grouped input symbols
tinfos (list) – Input shape and dtype
out_type (type) – The output type
Note
Unlike other TOPI functions, this function operates on both graph level and operator level.
- tvm.topi.nn.conv2d_gemm_weight_transform(kernel, tile_N, tile_K, use_scalable_vectors=False, use_sme=False)
Weight transformation for GEMM-based Conv2D (ConvGemm).
- Parameters:
kernel (Tensor) – The raw kernel tensor with layout “NHWC”.
tile_N (int) – Tile size across N axis of the weight transformation for ConvGemm. (N = OC)
tile_K (int) – Tile size across K axis of the weight transformation for ConvGemm. (K = KW * KH * IC)
use_scalable_vectors (bool) – determines if operations on scalable vectors are expected
use_sme (bool) – determines if SME operations on scalable vectors are expected
- Returns:
output – 2-D with shape [CI*KH*KW,CO]
- Return type:
- tvm.topi.nn.conv2d_hwcn(Input, Filter, stride, padding, dilation, out_dtype=None)
Convolution operator in HWCN layout.
- Parameters:
Input (tvm.te.Tensor) – 4-D with shape [in_height, in_width, in_channel, batch]
Filter (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]
stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]
padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
- Returns:
output – 4-D with shape [out_height, out_width, out_channel, batch]
- Return type:
- tvm.topi.nn.conv2d_infer_layout(workload, cfg)
Infer input/output shapes and layouts from a workload and cfg.
- tvm.topi.nn.conv2d_legalize(attrs, inputs, types)
Legalizes Conv2D op.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current convolution
inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized
types (list of types) – List of input and output types
- Returns:
result – The legalized expr
- Return type:
tvm.relay.Expr
- tvm.topi.nn.conv2d_nchw(Input, Filter, stride, padding, dilation, out_dtype=None)
Convolution operator in NCHW layout.
- Parameters:
Input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
Filter (tvm.te.Tensor) – 4-D with shape [num_filter, in_channel, filter_height, filter_width]
stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]
padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
- Returns:
Output – 4-D with shape [batch, out_channel, out_height, out_width]
- Return type:
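A declaration-only sketch of a 3x3 NCHW convolution with stride 1, padding 1 and no dilation (shapes are made up; a schedule is still required to execute it):
from tvm import te, topi

data = te.placeholder((1, 16, 32, 32), dtype="float32", name="data")     # NCHW
weight = te.placeholder((32, 16, 3, 3), dtype="float32", name="weight")  # OIHW
out = topi.nn.conv2d_nchw(data, weight, 1, 1, 1)                         # result shape (1, 32, 32, 32)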
- tvm.topi.nn.conv2d_nhwc(Input, Filter, stride, padding, dilation, out_dtype='float32', auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)
Convolution operator in NHWC layout.
- Parameters:
Input (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]
Filter (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]
stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]
padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
out_dtype (str = "float32",) – The type of output tensor
auto_scheduler_rewritten_layout (str = "") – The layout after auto-scheduler’s layout rewrite pass.
meta_schedule_original_shape (Optional[List[PrimExpr]] = None) – The original shape of the input tensor.
- Returns:
output – 4-D with shape [batch, out_height, out_width, out_channel]
- Return type:
- tvm.topi.nn.conv2d_transpose_alter_layout(attrs, inputs, tinfos, out_type)
Change Conv2D_Transpose layout.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current convolution
inputs (tvm.relay.Expr) – Grouped input symbols
tinfos (list) – Input shape and dtype
out_type (type) – The output type
Note
Unlike other TOPI functions, this function operates on both graph level and operator level.
- tvm.topi.nn.conv2d_transpose_legalize(attrs, inputs, types)
Legalizes Transposed 2D convolution op.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current Transposed 2D convolution
inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized
types (list of types) – List of input and output types
- Returns:
result – The legalized expr
- Return type:
tvm.relay.Expr
- tvm.topi.nn.conv2d_transpose_nchw(Input, Filter, strides, padding, out_dtype, output_padding)
Transposed 2D convolution nchw forward operator.
- Parameters:
Input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
Filter (tvm.te.Tensor) – 4-D with shape [in_channel, num_filter, filter_height, filter_width]
strides (tuple of two ints) – The spatial stride along height and width
out_dtype (str) – The output data type. This is used for mixed precision.
output_padding (tuple of ints) – Used to get the right output shape for gradients
- Returns:
Output – 4-D with shape [batch, out_channel, out_height, out_width]
- Return type:
- tvm.topi.nn.conv2d_transpose_nchw_preprocess(data, kernel, strides, padding, out_dtype, output_padding)
Preprocess data and kernel to make the compute pattern of conv2d_transpose the same as conv2d
- tvm.topi.nn.conv2d_winograd_nchw(data, weight, strides, padding, dilation, out_dtype, pre_computed=False, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)
Conv2D Winograd in NCHW layout. This is a clean version to be used by the auto-scheduler for both CPU and GPU.
- Parameters:
data (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
weight (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]
strides (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]
padding (int or a list/tuple of two ints) – padding size, or [pad_height, pad_width]
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
out_dtype (str, optional) – Specifies the output data type.
pre_computed (bool) – Whether the kernel is precomputed
auto_scheduler_rewritten_layout (str = "") – The layout after auto-scheduler’s layout rewrite pass.
meta_schedule_original_shape (Optional[List[PrimExpr]] = None) – The original shape of the input tensor.
- Returns:
output – 4-D with shape [batch, out_height, out_width, out_channel]
- Return type:
- tvm.topi.nn.conv2d_winograd_nchw_without_weight_transform(data, weight, strides, padding, dilation, out_dtype, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)
Conv2D Winograd without layout transform in NCHW layout. This is a clean version to be used by meta-schedule for both CPU and GPU.
- Parameters:
data (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]
weight (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]
strides (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]
padding (int or a list/tuple of two ints) – padding size, or [pad_height, pad_width]
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
out_dtype (str, optional) – Specifies the output data type.
auto_scheduler_rewritten_layout (str = "") – The layout after auto-scheduler’s layout rewrite pass.
meta_schedule_original_shape (Optional[List[PrimExpr]] = None) – The original shape of the input tensor.
- Returns:
output – 4-D with shape [batch, out_height, out_width, out_channel]
- Return type:
- tvm.topi.nn.conv2d_winograd_nhwc(data, weight, strides, padding, dilation, out_dtype, pre_computed=False, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)
Conv2D Winograd in NHWC layout. This is a clean version to be used by the auto-scheduler for both CPU and GPU.
- Parameters:
data (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]
weight (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]
strides (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]
padding (int or a list/tuple of two ints) – padding size, or [pad_height, pad_width]
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
out_dtype (str, optional) – Specifies the output data type.
pre_computed (bool) – Whether the kernel is precomputed
auto_scheduler_rewritten_layout (str = "") – The layout after auto-scheduler’s layout rewrite pass.
meta_schedule_original_shape (Optional[List[PrimExpr]] = None) – The original shape of the input tensor.
- Returns:
output – 4-D with shape [batch, out_height, out_width, out_channel]
- Return type:
- tvm.topi.nn.conv2d_winograd_nhwc_without_weight_transform(data, weight, strides, padding, dilation, out_dtype, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)
Conv2D Winograd without layout transform in NHWC layout. This is a clean version to be used by the auto-scheduler for both CPU and GPU.
- Parameters:
data (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]
weight (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]
strides (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]
padding (int or a list/tuple of two ints) – padding size, or [pad_height, pad_width]
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
out_dtype (str, optional) – Specifies the output data type.
auto_scheduler_rewritten_layout (str = "") – The layout after auto-scheduler’s layout rewrite pass.
meta_schedule_original_shape (Optional[List[PrimExpr]] = None) – The original shape of the input tensor.
- Returns:
output – 4-D with shape [batch, out_height, out_width, out_channel]
- Return type:
- tvm.topi.nn.conv2d_winograd_nnpack_weight_transform(kernel, convolution_algorithm, out_dtype)
Weight transformation for winograd
- Parameters:
- Returns:
output – 4-D with shape [alpha, alpha, CO, CI]
- Return type:
- tvm.topi.nn.conv2d_winograd_weight_transform(kernel, tile_size)
Weight transformation for winograd
- Parameters:
- Returns:
output – 4-D with shape [alpha, alpha, CO, CI]
- Return type:
- tvm.topi.nn.conv3d_alter_layout(attrs, inputs, tinfos, out_type)
Change Conv3D layout.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current convolution
inputs (tvm.relay.Expr) – Grouped input symbols
tinfos (list) – Input shape and dtype
out_type (type) – The output type
Note
Unlike other TOPI functions, this function operates on both graph level and operator level.
- tvm.topi.nn.conv3d_ncdhw(Input, Filter, stride, padding, dilation, groups, out_dtype=None)
Conv3D operator in NCDHW layout.
- Parameters:
Input (tvm.te.Tensor) – 5-D with shape [batch, in_channel, in_depth, in_height, in_width]
Filter (tvm.te.Tensor) – 5-D with shape [num_filter, in_channel, filter_depth, filter_height, filter_width]
stride (int or a list/tuple of three ints) – Stride size, or [stride_depth, stride_height, stride_width]
dilation (int or a list/tuple of three ints) – dilation size, or [dilation_depth, dilation_height, dilation_width]
groups (int) – Number of groups.
- Returns:
Output – 5-D with shape [batch, out_channel, out_depth, out_height, out_width]
- Return type:
- tvm.topi.nn.conv3d_ndhwc(Input, Filter, stride, padding, dilation, groups, out_dtype='float32', auto_scheduler_rewritten_layout='', meta_schedule_origin_shape=None)
Convolution operator in NDHWC layout.
- Parameters:
Input (tvm.te.Tensor) – 5-D with shape [batch, in_depth, in_height, in_width, in_channel]
Filter (tvm.te.Tensor) – 5-D with shape [filter_depth, filter_height, filter_width, in_channel, num_filter]
stride (int or a list/tuple of three ints) – Stride size, or [stride_depth, stride_height, stride_width]
dilation (int or a list/tuple of three ints) – dilation size, or [dilation_depth, dilation_height, dilation_width]
groups (int) – Number of groups.
out_dtype (str = "float32",) – The type of output tensor
auto_scheduler_rewritten_layout (str = "") – The layout after auto-scheduler’s layout rewrite pass.
meta_schedule_origin_shape (Optional[List[PrimExpr]] = None) – The original shape of the input tensor.
- Returns:
Output – 5-D with shape [batch, out_depth, out_height, out_width, out_channel]
- Return type:
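For illustration, a minimal sketch of declaring this compute. The shapes, tensor names and hyper-parameters below are assumptions chosen for the example, not part of the API:
from tvm import te, topi

# Illustrative NDHWC input and DHWIO kernel: batch 1, 8x32x32 spatial, 16 input channels, 32 filters of 3x3x3.
data = te.placeholder((1, 8, 32, 32, 16), name="data", dtype="float32")
kernel = te.placeholder((3, 3, 3, 16, 32), name="kernel", dtype="float32")
# stride=1, padding=1, dilation=1 keep the spatial dimensions unchanged.
out = topi.nn.conv3d_ndhwc(data, kernel, stride=1, padding=1, dilation=1, groups=1)
print(out.shape)  # [1, 8, 32, 32, 32]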
- tvm.topi.nn.conv3d_transpose_legalize(attrs, inputs, types)
Legalizes Transposed 3D convolution op.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current Transposed 3D convolution
inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized
types (list of types) – List of input and output types
- Returns:
result – The legalized expr
- Return type:
tvm.relay.Expr
- tvm.topi.nn.conv3d_transpose_ncdhw(Input, Filter, strides, padding, out_dtype, output_padding)
Transposed 3D convolution ncdhw forward operator.
- Parameters:
Input (tvm.te.Tensor) – 5-D with shape [batch, in_channel, in_depth, in_height, in_width]
Filter (tvm.te.Tensor) – 5-D with shape [in_channel, num_filter, filter_depth, filter_height, filter_width]
strides (int or a list/tuple of three ints) – The spatial stride along depth, height and width
out_dtype (str) – The output data type. This is used for mixed precision.
output_padding (tuple of ints) – Used to get the right output shape for gradients
- Returns:
Output – 5-D with shape [batch, out_channel, out_depth, out_height, out_width]
- Return type:
- tvm.topi.nn.conv3d_transpose_ncdhw_preprocess(data, kernel, strides, padding, out_dtype, output_padding)
Preprocess data and kernel to make the compute pattern of conv3d_transpose the same as conv3d
- tvm.topi.nn.conv3d_winograd_weight_transform(kernel, tile_size)
Weight transformation for 3D winograd
- Parameters:
- Returns:
output – 5-D with shape [alpha, alpha, alpha, CO, CI]
- Return type:
- tvm.topi.nn.correlation_nchw(data1, data2, kernel_size, max_displacement, stride1, stride2, padding, is_multiply)
Correlation operator in NCHW layout.
- Parameters:
data1 (tvm.te.Tensor) – 4-D with shape [batch, channel, height, width]
data2 (tvm.te.Tensor) – 4-D with shape [batch, channel, height, width]
kernel_size (int) – Kernel size for correlation, must be an odd number
max_displacement (int) – Max displacement of Correlation
stride1 (int) – Stride for data1
stride2 (int) – Stride for data2 within the neighborhood centered around data1
padding (int or a list/tuple of 2 or 4 ints) – Padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints
is_multiply (bool) – Whether the operation type is multiplication or subtraction
- Returns:
Output – 4-D with shape [batch, out_channel, out_height, out_width]
- Return type:
- tvm.topi.nn.declaration_conv2d_transpose_impl(data, kernel, strides, padding, out_dtype, output_padding)
Implementation of conv2d transpose
- tvm.topi.nn.declaration_conv3d_transpose_impl(data, kernel, strides, padding, out_dtype, output_padding)
Implementation of conv3d transpose
- tvm.topi.nn.deformable_conv2d_nchw(data, offset, kernel, strides, padding, dilation, deformable_groups, groups, out_dtype)
Deformable conv2D operator in NCHW layout.
The deformable convolution operation is described in https://arxiv.org/abs/1703.06211
- Parameters:
data (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
offset (tvm.te.Tensor) – 4-D with shape [batch, deformable_groups * filter_height * filter_width * 2, out_height, out_width].
kernel (tvm.te.Tensor) – 4-D with shape [num_filter, in_channel, filter_height, filter_width]
strides (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]
padding (int or a list/tuple of two ints) – padding size, or [pad_height, pad_width]
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
deformable_groups (int) – number of deformable groups
groups (int) – number of groups
- Returns:
output – 4-D with shape [batch, out_channel, out_height, out_width]
- Return type:
- tvm.topi.nn.deformable_conv2d_nhwc(data, offset, kernel, strides, padding, dilation, deformable_groups, groups, out_dtype)
Deformable conv2D operator in NHWC layout.
The deformable convolution operation is described in https://arxiv.org/abs/1703.06211
- Parameters:
data (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]
offset (tvm.te.Tensor) – 4-D with shape [batch, out_height, out_width, deformable_groups * filter_height * filter_width * 2].
kernel (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]
strides (int or a list/tuple of two ints) – stride size, or [stride_height, stride_width]
padding (int or a list/tuple of two ints) – padding size, or [pad_height, pad_width]
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
deformable_groups (int) – number of deformable groups
groups (int) – number of groups
- Returns:
output – 4-D with shape [batch, out_height, out_width, out_channel]
- Return type:
- tvm.topi.nn.dense(data, weight, bias=None, out_dtype=None, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)
The default implementation of dense in topi. This is an alias of matmul_nt operator for data tensor in non-transposed format and weight tensor in transposed format.
- Parameters:
data (tvm.te.Tensor) – 2-D with shape [batch, in_dim]
weight (tvm.te.Tensor) – 2-D with shape [out_dim, in_dim]
bias (Optional[tvm.te.Tensor]) – 1-D with shape [out_dim]
out_dtype (Optional[str]) – The output type. This is used for mixed precision.
auto_scheduler_rewritten_layout (str = "") – The layout after auto-scheduler’s layout rewrite pass.
meta_schedule_original_shape (Optional[List[PrimExpr]] = None) – The original shape of the input tensor.
- Returns:
output – 2-D with shape [batch, out_dim]
- Return type:
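As a rough usage sketch (the shapes below are illustrative assumptions):
from tvm import te, topi

data = te.placeholder((4, 128), name="data")         # [batch, in_dim]
weight = te.placeholder((256, 128), name="weight")   # [out_dim, in_dim], i.e. stored transposed
bias = te.placeholder((256,), name="bias")
out = topi.nn.dense(data, weight, bias)
print(out.shape)  # [4, 256]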
- tvm.topi.nn.dense_alter_layout(attrs, inputs, tinfos, out_type)
Change dense layout.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current convolution
inputs (tvm.relay.Expr) – Grouped input symbols
tinfos (list) – Input shape and dtype
out_type (type) – The output type
Note
Unlike other TOPI functions, this function operates on both graph level and operator level.
- tvm.topi.nn.dense_legalize(attrs, inputs, types)
Legalizes dense op.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current dense
inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized
types (list of types) – List of input and output types
- Returns:
result – The legalized expr
- Return type:
tvm.relay.Expr
- tvm.topi.nn.dense_pack(data, weight, bias=None, out_dtype=None)
The default implementation of dense_pack in topi.
- Parameters:
data (tvm.te.Tensor) – 2-D with shape [batch, in_dim]
weight (tvm.te.Tensor) – 2-D with shape [out_dim, in_dim]
bias (Optional[tvm.te.Tensor]) – 1-D with shape [out_dim]
out_dtype (Optional[str]) – The output type. This is used for mixed precision.
- Returns:
output – 2-D with shape [batch, out_dim]
- Return type:
- tvm.topi.nn.depth_to_space(data, block_size, layout='NCHW', mode='DCR')
Perform depth to space transformation on the data
- Parameters:
data (tvm.te.Tensor) – 4-D tensor in either NCHW or NHWC layout.
block_size (int) – Size of blocks to compose from channel dimension.
layout (string) – Either NCHW or NHWC, indicating data layout.
mode (string) – Either DCR or CDR, indicates how channels should be accessed. In DCR, channels are interwoven in the TensorFlow style, while in CDR channels are accessed sequentially as in PyTorch.
- Returns:
output – Output of shape [N, C / block_size**2, H * block_size, W * block_size]
- Return type:
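A minimal sketch with an assumed NCHW input whose channel dimension is divisible by block_size**2:
from tvm import te, topi

# 16 channels / 2**2 = 4 output channels; H and W grow by the block size.
data = te.placeholder((1, 16, 8, 8), name="data")
out = topi.nn.depth_to_space(data, block_size=2, layout="NCHW", mode="DCR")
print(out.shape)  # [1, 4, 16, 16]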
- tvm.topi.nn.depthwise_conv2d_NCHWc(Input, Filter, stride, padding, dilation, layout, out_layout, out_dtype=None)
Depthwise convolution NCHW[x]c forward operator.
- Parameters:
Input (tvm.te.Tensor) – 5-D with shape [batch, in_channel_chunk, in_height, in_width, in_channel_block]
Filter (tvm.te.Tensor) – 6-D with shape [out_channel_chunk, 1, filter_height, filter_width, 1, out_channel_block] In NCHWc depthwise convolution, we group kernel’s in_channel and channel_multiplier together then do the tiling.
stride (tuple of two ints) – The spatial stride along height and width
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
layout (str) – Input data layout
out_layout (str) – Output data layout
out_dtype (str, optional) – Output data type
- Returns:
Output – 5-D with shape [batch, out_channel_chunk, out_height, out_width, out_channel_block]
- Return type:
- tvm.topi.nn.depthwise_conv2d_backward_input_nhwc(Filter, Out_grad, oshape, ishape, stride, padding)
Depthwise convolution nhwc backward wrt input operator.
- Parameters:
Filter (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, channel_multiplier]
Out_grad (tvm.te.Tensor) – 4-D with shape [batch, out_height, out_width, out_channel]
stride (tuple of two ints) – The spatial stride along height and width
- Returns:
Output – 4-D with shape [batch, in_height, in_width, in_channel]
- Return type:
- tvm.topi.nn.depthwise_conv2d_backward_weight_nhwc(Input, Out_grad, oshape, fshape, stride, padding)
Depthwise convolution nhwc backward wrt weight operator.
- Parameters:
Input (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]
Out_grad (tvm.te.Tensor) – 4-D with shape [batch, out_height, out_width, out_channel]
stride (tuple of two ints) – The spatial stride along height and width
- Returns:
Output – 4-D with shape [filter_height, filter_width, in_channel, channel_multiplier]
- Return type:
- tvm.topi.nn.depthwise_conv2d_infer_layout(workload, cfg)
Infer input/output shapes and layouts from a workload and cfg.
- tvm.topi.nn.depthwise_conv2d_nchw(Input, Filter, stride, padding, dilation, out_dtype=None)
Depthwise convolution nchw forward operator.
- Parameters:
Input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
Filter (tvm.te.Tensor) – 4-D with shape [in_channel, channel_multiplier, filter_height, filter_width]
stride (int or a list/tuple of two ints) – The spatial stride, or (stride_height, stride_width).
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
out_dtype (str, optional) – Output data type
- Returns:
Output – 4-D with shape [batch, out_channel, out_height, out_width]
- Return type:
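For illustration, a minimal compute declaration with assumed shapes (8 channels, channel_multiplier 1, 3x3 filter):
from tvm import te, topi

data = te.placeholder((1, 8, 32, 32), name="data")    # [batch, in_channel, H, W]
kernel = te.placeholder((8, 1, 3, 3), name="kernel")  # [in_channel, channel_multiplier, kh, kw]
out = topi.nn.depthwise_conv2d_nchw(data, kernel, stride=1, padding=1, dilation=1)
print(out.shape)  # [1, 8, 32, 32]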
- tvm.topi.nn.depthwise_conv2d_nhwc(Input, Filter, stride, padding, dilation, kernel_layout='HWOI', out_dtype=None)
Depthwise convolution nhwc forward operator.
- Parameters:
Input (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]
Filter (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, channel_multiplier]
stride (tuple of two ints) – The spatial stride along height and width
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
out_dtype (str, optional) – Output data type
- Returns:
Output – 4-D with shape [batch, out_height, out_width, out_channel]
- Return type:
- tvm.topi.nn.dilate(data, strides, dilation_value=0.0, name='DilatedInput')
Dilate data with given dilation value (0 by default).
- Parameters:
data (tvm.te.Tensor) – n-D, can be any layout.
strides (list / tuple of n ints) – Dilation stride on each dimension, 1 means no dilation.
dilation_value (int/float, optional) – Value used to dilate the input.
name (str, optional) – The name prefix for generated operators
- Returns:
Output – n-D, the same layout as data.
- Return type:
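A short sketch of the shape effect (the 4x4 input is an illustrative assumption):
from tvm import te, topi

# A stride of 2 on each axis inserts one dilation_value between neighbouring elements.
data = te.placeholder((4, 4), name="data")
out = topi.nn.dilate(data, (2, 2))
print(out.shape)  # [7, 7]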
- tvm.topi.nn.equal_const_int(expr, value)
Returns whether expr equals value.
- Parameters:
expr (tvm.Expr) – The input expression.
- Returns:
equal – Whether they are equal.
- Return type:
- tvm.topi.nn.fast_softmax(x, axis=-1)
Perform softmax activation on the data. Use approximation to compute exponent for faster speed.
- Parameters:
data (tvm.te.Tensor) – can be any dimension
axis (int) – channel axis
- Returns:
output – output shape is the same as input
- Return type:
- tvm.topi.nn.fifo_buffer(data, buffer, axis)
FIFO buffer to enable computation reuse in CNNs with sliding window input
Compute equivalent of
concat(buffer, data, axis=axis).slice_axis(axis=axis, begin=data.shape[axis], end=data.shape[axis]+buffer.shape[axis])
Useful for
Encoding explicit re-use of computation in convolution ops operated on a sliding window input
Implementing a FIFO queue to cache intermediate results, e.g. as in Fast WaveNet.
- Parameters:
data (tvm.te.Tensor) – The input data
buffer (tvm.te.Tensor) – Previous value of the FIFO buffer
axis (int) – Specify which axis should be used for buffering
- Returns:
result – Updated value for the buffer
- Return type:
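A minimal sketch of the sliding-window use, assuming an NCW layout and illustrative sizes:
from tvm import te, topi

buffer = te.placeholder((1, 16, 32), name="buffer")  # previous buffer contents
data = te.placeholder((1, 16, 4), name="data")       # new chunk appended along the width axis
out = topi.nn.fifo_buffer(data, buffer, axis=2)
print(out.shape)  # [1, 16, 32] -- the updated buffer keeps the buffer's shape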
- tvm.topi.nn.flatten(data)
Flattens the input array into a 2-D array by collapsing the higher dimensions.
- Parameters:
data (tvm.te.Tensor) – Input array.
- Returns:
output – 2-D array with collapsed higher dimensions.
- Return type:
- tvm.topi.nn.get_const_int(expr)
Verifies that expr is an integer and gets the constant value.
- tvm.topi.nn.get_const_tuple(in_tuple)
Verifies input tuple is IntImm or Var, returns tuple of int or Var.
- tvm.topi.nn.get_pad_tuple(padding, kernel)
Common code to get the pad option
- tvm.topi.nn.get_pad_tuple1d(padding, kernel)
Common code to get the pad option
- tvm.topi.nn.get_pad_tuple3d(padding, kernel)
Common code to get the pad option
- Parameters:
- Returns:
pad_front (int) – Padding size on front.
pad_top (int) – Padding size on top
pad_left (int) – Padding size on left
pad_back (int) – Padding size on back.
pad_down (int) – Padding size on down.
pad_right (int) – Padding size on right.
- tvm.topi.nn.get_pad_tuple_generic(padding, kernel)
Common code to get the pad option
- tvm.topi.nn.global_pool(data, pool_type, layout='NCHW')
- Perform global pooling on height and width dimension of data.
It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ mean width and height respectively. Width and height dimensions cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information on the layout string convention.
- Parameters:
data (tvm.te.Tensor) – n-D with shape of layout
pool_type (str) – Pool type, ‘max’ or ‘avg’
layout (str) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.
- Returns:
output – n-D in same layout with height and width dimension size of 1. e.g., for NCHW, the output shape will be [batch, channel, 1, 1]
- Return type:
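For example, a minimal sketch of global average pooling over an assumed NCHW input:
from tvm import te, topi

data = te.placeholder((1, 64, 7, 7), name="data")
out = topi.nn.global_pool(data, pool_type="avg", layout="NCHW")
print(out.shape)  # [1, 64, 1, 1]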
- tvm.topi.nn.group_conv1d_ncw(data, kernel, strides=1, padding='VALID', dilation=1, groups=1, out_dtype=None)
1D convolution forward operator for NCW layout.
- Parameters:
data (tvm.te.Tensor) – 3-D with shape [batch, in_channel, in_width]
kernel (tvm.te.Tensor) – 3-D with shape [num_filter, in_channel, filter_size]
padding (int, tuple, or str) – Padding size can be an integer for equal padding, a tuple of (left, right) or a string in [‘VALID’, ‘SAME’].
dilation (int or tuple) – Dilation rate if convolution should be dilated.
groups (int) – Number of groups
out_dtype (str) – The output data type. If None then output is same type as input.
- tvm.topi.nn.group_conv1d_nwc(data, kernel, strides=1, padding='VALID', dilation=1, groups=1, out_dtype=None)
1D convolution forward operator for NWC layout.
- Parameters:
data (tvm.te.Tensor) – 3-D with shape [batch, in_width, in_channel]
kernel (tvm.te.Tensor) – 3-D with shape [filter_size, in_channel, num_filter]
padding (int, tuple, or str) – Padding size can be an integer for equal padding, a tuple of (left, right) or a string in [‘VALID’, ‘SAME’].
dilation (int or tuple) – Dilation rate if convolution should be dilated.
groups (int) – Number of groups
out_dtype (str) – The output data type. If None then output is same type as input.
- tvm.topi.nn.group_conv1d_transpose_ncw(data, kernel, stride, padding, out_dtype, output_padding, groups)
Transposed 1D group convolution ncw forward operator.
- Parameters:
data (tvm.te.Tensor) – 3-D with shape [batch, in_channel, in_width]
kernel (tvm.te.Tensor) – 3-D with shape [in_channel, num_filter, filter_width]
stride (ints) – The spatial stride along width
out_dtype (str) – The output data type. This is used for mixed precision.
output_padding (ints) – Used to recover the actual output shape in case there is more than one possible shape. Must be smaller than stride.
groups (int) – number of groups
- Returns:
output – 3-D with shape [batch, out_channel, out_width]
- Return type:
- tvm.topi.nn.group_conv2d_nchw(Input, Filter, stride, padding, dilation, groups, out_dtype=None)
Group convolution operator in NCHW layout.
- Parameters:
Input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
Filter (tvm.te.Tensor) – 4-D with shape [num_filter, in_channel // groups, filter_height, filter_width]
stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]
padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
groups (int) – number of groups
out_dtype (str) – The output type. This is used for mixed precision.
- Returns:
Output – 4-D with shape [batch, out_channel, out_height, out_width]
- Return type:
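A rough sketch with assumed shapes: 16 input channels split into 4 groups, so the kernel's channel axis is in_channel // groups = 4:
from tvm import te, topi

data = te.placeholder((1, 16, 28, 28), name="data")
kernel = te.placeholder((32, 4, 3, 3), name="kernel")  # [num_filter, in_channel // groups, kh, kw]
out = topi.nn.group_conv2d_nchw(data, kernel, stride=1, padding=1, dilation=1, groups=4)
print(out.shape)  # [1, 32, 28, 28]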
- tvm.topi.nn.group_conv2d_nhwc(Input, Filter, stride, padding, dilation, groups, out_dtype=None)
Group convolution operator in NHWC layout.
- Parameters:
Input (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel, …]
Filter (tvm.te.Tensor) – 4-D with shape [filter_height, filter_width, in_channel // groups, num_filter]
stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]
padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints
dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
groups (int) – number of groups
out_dtype (str) – The output type. This is used for mixed precision.
- Returns:
Output – 4-D with shape [batch, out_height, out_width, out_channel]
- Return type:
- tvm.topi.nn.group_conv2d_transpose_nchw(data, kernel, stride, padding, out_dtype, output_padding, groups)
Group convolution operator in NCHW layout.
- Parameters:
data (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
kernel (tvm.te.Tensor) – 4-D with shape [in_channel, out_channel // groups, filter_height, filter_width]
stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]
padding (int or a list/tuple of 2 or 4 ints) – padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints
out_dtype (str) – The output data type. This is used for mixed precision.
output_padding (tuple of ints) – Used to get the right output shape for gradients
groups (int) – number of groups
- Returns:
Output – 4-D with shape [batch, out_channel, out_height, out_width]
- Return type:
- tvm.topi.nn.group_conv3d_transpose_ncdhw(data, kernel, strides, padding, out_dtype, output_padding, groups)
Transposed group 3D convolution ncdhw forward operator.
- Parameters:
data (tvm.te.Tensor) – 5-D with shape [batch, in_channel, in_depth, in_height, in_width]
kernel (tvm.te.Tensor) – 5-D with shape [in_channel, num_filter, filter_depth, filter_height, filter_width]
strides (int or a list/tuple of three ints) – The spatial stride along depth, height and width
out_dtype (str) – The output data type. This is used for mixed precision.
output_padding (tuple of ints) – Used to get the right output shape for gradients
groups (int) – number of groups
- Returns:
Output – 5-D with shape [batch, out_channel, out_depth, out_height, out_width]
- Return type:
- tvm.topi.nn.layout_transform(tensor: RelayExpr, current_layout: str, desired_layout: str)
Transform a tensor with the current layout to the desired layout.
E.g. layout_transform(t, “NCHW”, “CNHW”) -> relay.transpose(t, [1, 0, 2, 3])
- tvm.topi.nn.leaky_relu(x, alpha)
Take leaky relu of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
alpha (float) – The slope for the small gradient when x < 0
- Returns:
y – The result.
- Return type:
- tvm.topi.nn.log_softmax(x, axis=-1)
Perform log softmax activation on the data
- Parameters:
data (tvm.te.Tensor) – N-D input data
- Returns:
output – N-D output with same shape
- Return type:
- tvm.topi.nn.lrn(data, size, axis=1, alpha=0.0001, beta=0.75, bias=2)
Perform the across channels local response normalisation on the input data.
\[sum\_sqr\_up^{i}_{x, y} = \left(bias + \frac{\alpha}{size} \sum_{j=\max(0,\, i-size/2)}^{\min(N-1,\, i+size/2)} \left(data^{j}_{x, y}\right)^{2}\right)^{\beta}\]
\[output^{i}_{x, y} = \frac{data^{i}_{x, y}}{sum\_sqr\_up^{i}_{x, y}}\]
where N is the number of input channels.
- Parameters:
- Returns:
output – 4-D output with same shape
- Return type:
- tvm.topi.nn.lstm(Xs, Wi, Wh, Bi=None, Bh=None, h_init=None, c_init=None, proj=None, p_i=None, p_f=None, p_o=None, f_act=<function sigmoid>, g_act=<function tanh>, h_act=<function tanh>, reverse=False, weight_layout: str = 'IFGO')
General LSTM implemented using TE scan.
- Parameters:
Xs (te.Tensor) – Input sequence with shape (seq_len, batch_size, in_dim)
Wi (te.Tensor) – Input weight matrix with shape (4 * hidden_dim, in_dim). The weights are packed according to weight_layout.
Wh (te.Tensor) – Hidden weight matrix with shape (4 * hidden_dim, hidden_dim or proj_dim). Packed as Wh.
Bi (te.Tensor, optional) – Input bias with shape (4 * hidden_dim,), by default None. Packed as Wh.
Bh (te.Tensor, optional) – Hidden bias with shape as Bi, by default None. Packed as Wh.
h_init (te.Tensor, optional) – Initial hidden state with shape (batch_size, hidden_dim or proj_dim), zero if None
c_init (te.Tensor, optional) – Initial cell state with same shape as h_init, zero if None
proj (te.Tensor, optional) – Projection matrix with shape (proj_dim, hidden_dim), by default None
p_i (te.Tensor, optional) – Peephole LSTM matrices with shape (batch_size, hidden_dim), by default None
p_f (te.Tensor, optional) – Peephole LSTM matrices with shape (batch_size, hidden_dim), by default None
p_o (te.Tensor, optional) – Peephole LSTM matrices with shape (batch_size, hidden_dim), by default None
f_act (F, optional) – Gate activation functions
g_act (F, optional) – Gate activation functions
h_act (F, optional) – Gate activation functions
reverse (bool, optional) – Whether to process Xs in reverse, by default False
weight_layout (str, optional) – The packed weight layout for gates, by default “IFGO”. Note: I = input, F = forget, G = cell, O = output.
- Returns:
result – Tuple of hidden states (with shape (seq_len, batch_size, hidden_dim or proj_dim)), and cell states (with shape (seq_len, batch_size, hidden_dim)).
- Return type:
- tvm.topi.nn.matmul(tensor_a, tensor_b, bias=None, out_dtype=None, transpose_a=False, transpose_b=False, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)
The default implementation of matmul in topi.
- Parameters:
tensor_a (tvm.te.Tensor) – 2-D with shape [batch, in_dim]
tensor_b (tvm.te.Tensor) – 2-D with shape [out_dim, in_dim]
bias (Optional[tvm.te.Tensor]) – 1-D with shape [out_dim]
out_dtype (Optional[str]) – The output type. This is used for mixed precision.
transpose_a (Optional[bool] = False) – Whether the tensor_a is in transposed format.
transpose_b (Optional[bool] = False) – Whether the tensor_b is in transposed format.
auto_scheduler_rewritten_layout (Optional[str] = "") – The layout after auto-scheduler’s layout rewrite pass.
meta_schedule_original_shape (Optional[List[PrimExpr]] = None) – The original shape of the input tensor.
- Returns:
output – 2-D with shape [batch, out_dim]
- Return type:
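For illustration, a minimal sketch with the second operand stored transposed ([out_dim, in_dim]); the shapes are assumptions:
from tvm import te, topi

a = te.placeholder((8, 64), name="a")   # [batch, in_dim]
b = te.placeholder((32, 64), name="b")  # [out_dim, in_dim]
out = topi.nn.matmul(a, b, transpose_b=True)
print(out.shape)  # [8, 32]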
- tvm.topi.nn.matmul_legalize(attrs, inputs, types)
Legalizes matmul op.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current matmul
inputs (list of tvm.relay.Expr) – The args of the Relay expr to be legalized
types (list of types) – List of input and output types
- Returns:
result – The legalized expr
- Return type:
tvm.relay.Expr
- tvm.topi.nn.mirror_pad(data, pad_before, pad_after=None, mode='SYMMETRIC', name='MirrorPadInput')
Pad Input with mirroring either symmetric or reflected.
- Parameters:
data (tvm.te.Tensor) – n-D input, can be any layout.
pad_before (list / tuple of n ints) – Pad width on each dimension to pad before the axis begin.
pad_after (list / tuple of n ints, optional) – Pad width on each dimension to pad after the axis end.
mode (str, optional) – Type of mirror padding to apply. Must be SYMMETRIC or REFLECT
name (str, optional) – The name prefix for generated operators
- Returns:
Output – n-D, the same layout as Input.
- Return type:
- tvm.topi.nn.namedtuple(typename, field_names, *, rename=False, defaults=None, module=None)
Returns a new subclass of tuple with named fields.
>>> Point = namedtuple('Point', ['x', 'y'])
>>> Point.__doc__                   # docstring for the new class
'Point(x, y)'
>>> p = Point(11, y=22)             # instantiate with positional args or keywords
>>> p[0] + p[1]                     # indexable like a plain tuple
33
>>> x, y = p                        # unpack like a regular tuple
>>> x, y
(11, 22)
>>> p.x + p.y                       # fields also accessible by name
33
>>> d = p._asdict()                 # convert to a dictionary
>>> d['x']
11
>>> Point(**d)                      # convert from a dictionary
Point(x=11, y=22)
>>> p._replace(x=100)               # _replace() is like str.replace() but targets named fields
Point(x=100, y=22)
- tvm.topi.nn.nll_loss(predictions, targets, weights, reduction, ignore_index)
Negative log likelihood loss on the input data.
output{n, i_1, i_2, …, i_k} = -p * w, where
t = target{n, i_1, i_2, …, i_k}
p = predictions{n, t, i_1, i_2, …, i_k}
w = weights{n, i_1, i_2, …, i_k} if t != ignore_index else 0
result = reduction(output)
- Parameters:
predictions (tvm.te.Tensor) – (k+2)-D with shape (N, C, d_1, d_2, …, d_k), where C is the number of target classes
targets (tvm.te.Tensor) – (k+1)-D with shape (N, d_1, d_2, …, d_k) The target value of the input.
weights (tvm.te.Tensor) – 1-D with shape (C,) The weight of each target value.
reduction (string) – The reduction method to apply to output. Can be “mean”, “sum” or “none”.
ignore_index (int) – The target value to ignore.
- Returns:
output – a scalar if the reduction type is “mean” or “sum”, otherwise the same shape as target.
- Return type:
- tvm.topi.nn.pad(data, pad_before, pad_after=None, pad_value=0.0, name='PadInput', attrs=None)
Pad Input with zeros.
- Parameters:
data (tvm.te.Tensor) – n-D input, can be any layout.
pad_before (list / tuple of n ints) – Pad width on each dimension to pad before the axis begin.
pad_after (list / tuple of n ints, optional) – Pad width on each dimension to pad after the axis end.
pad_value (float, optional) – The value to be padded.
name (str, optional) – The name prefix for generated operators
- Returns:
Output – n-D, the same layout as Input.
- Return type:
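A minimal sketch, assuming an NCHW input and one element of zero padding on each side of H and W:
from tvm import te, topi

data = te.placeholder((1, 3, 32, 32), name="data")
out = topi.nn.pad(data, pad_before=(0, 0, 1, 1), pad_after=(0, 0, 1, 1), pad_value=0.0)
print(out.shape)  # [1, 3, 34, 34]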
- tvm.topi.nn.pool1d(data, kernel, stride, dilation, padding, pool_type, ceil_mode=False, layout='NCW', count_include_pad=True)
- Perform pooling on width dimension of data.
Width axis is determined according to the layout string, in which ‘W’ means width. Width dimension cannot be split. For example, NCW, NCW16c, etc. are valid for pool, while NCW16w is not. See parameter layout for more information on the layout string convention.
- Parameters:
data (tvm.te.Tensor) – n-D with shape of layout
kernel (list/tuple of one int or int) – Kernel size, [kernel_width]
stride (list/tuple of one int or int) – Stride size, [stride_width]
dilation (list/tuple of one int or int) – Dilation size, [dilation_width]
padding (list/tuple of two ints) – Pad size, [pad_left, pad_right]
pool_type (str) – Pool type, ‘max’ or ‘avg’
ceil_mode (bool) – Whether to use ceil when calculating output size.
layout (string) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCW16c can describe a 4-D tensor of [batch_size, channel, width, channel_block], in which channel_block=16 is a split of dimension channel.
count_include_pad (bool) – Whether include padding in the calculation when pool_type is ‘avg’
- Returns:
output – n-D in the same layout
- Return type:
- tvm.topi.nn.pool2d(data, kernel, stride, dilation, padding, pool_type, ceil_mode=False, layout='NCHW', count_include_pad=True)
- Perform pooling on height and width dimension of data.
It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ mean width and height respectively. Width and height dimensions cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information on the layout string convention.
- Parameters:
data (tvm.te.Tensor) – n-D with shape of layout
kernel (list/tuple of two ints) – Kernel size, [kernel_height, kernel_width]
stride (list/tuple of two ints) – Stride size, [stride_height, stride_width]
dilation (list/tuple of two ints) – Dilation size, [dilation_height, dilation_width]
padding (list/tuple of four ints) – Pad size, [pad_top, pad_left, pad_bottom, pad_right]
pool_type (str) – Pool type, ‘max’ or ‘avg’
ceil_mode (bool) – Whether to use ceil when calculating output size.
layout (string) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.
count_include_pad (bool) – Whether include padding in the calculation when pool_type is ‘avg’
- Returns:
output – n-D in the same layout
- Return type:
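For example, a minimal sketch of 2x2 max pooling with stride 2 on an assumed NCHW input:
from tvm import te, topi

data = te.placeholder((1, 16, 32, 32), name="data")
out = topi.nn.pool2d(data, kernel=(2, 2), stride=(2, 2), dilation=(1, 1),
                     padding=(0, 0, 0, 0), pool_type="max")
print(out.shape)  # [1, 16, 16, 16]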
- tvm.topi.nn.pool3d(data, kernel, stride, dilation, padding, pool_type, ceil_mode=False, layout='NCDHW', count_include_pad=True)
- Perform pooling on depth, height and width dimension of data.
It decides the depth, height and width dimension according to the layout string, in which ‘D’, ‘W’ and ‘H’ mean depth, width and height respectively. Depth, width and height dimensions cannot be split. For example, NCDHW, NCDHW16c, etc. are valid for pool, while NCDHW16d, NCDHW16w, NCDHW16h are not. See parameter layout for more information on the layout string convention.
- Parameters:
data (tvm.te.Tensor) – n-D with shape of layout
kernel (list/tuple of three ints) – Kernel size, [kernel_depth, kernel_height, kernel_width]
stride (list/tuple of three ints) – Stride size, [stride_depth, stride_height, stride_width]
dilation (list/tuple of three ints) – Dilation size, [dilation_depth, dilation_height, dilation_width]
padding (list/tuple of six ints) – Pad size, [pad_front, pad_top, pad_left, pad_back, pad_bottom, pad_right]
pool_type (str) – Pool type, ‘max’ or ‘avg’
ceil_mode (bool) – Whether to use ceil when calculating output size.
layout (string) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCDHW16c can describe a 6-D tensor of [batch_size, channel, depth, height, width, channel_block], in which channel_block=16 is a split of dimension channel.
count_include_pad (bool) – Whether include padding in the calculation when pool_type is ‘avg’
- Returns:
output – n-D in the same layout
- Return type:
- tvm.topi.nn.pool_grad(grads, data, kernel, stride, padding, pool_type, ceil_mode=False, count_include_pad=True, layout='NCHW')
- Gradient of pooling on height and width dimension of data.
It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ mean width and height respectively. Width and height dimensions cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information on the layout string convention.
- Parameters:
grads (tvm.te.Tensor) – n-D with shape of layout
data (tvm.te.Tensor) – n-D with shape of layout
kernel (list/tuple of two ints) – Kernel size, [kernel_height, kernel_width]
stride (list/tuple of two ints) – Stride size, [stride_height, stride_width]
padding (list/tuple of four ints) – Pad size, [pad_top, pad_left, pad_bottom, pad_right]
pool_type (str) – Pool type, ‘max’ or ‘avg’
ceil_mode (bool) – Whether to use ceil when calculating output size.
count_include_pad (bool) – Whether include padding in the calculation when pool_type is ‘avg’
layout (string) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.
- Returns:
output – n-D in the same layout
- Return type:
- tvm.topi.nn.prelu(x, slope, axis=1)
PReLU. It accepts two arguments: an input x and a weight array W, and computes the output as \(PReLU(x) = x > 0 ? x : W * x\), where \(*\) is an elementwise multiplication for each sample in the batch.
- Parameters:
x (tvm.te.Tensor) – Input argument.
slope (tvm.te.Tensor) – Channelised slope tensor for prelu
axis (int) – The axis where the channel data needs to be applied
- Returns:
y (tvm.te.Tensor) – The result.
Links
http://arxiv.org/pdf/1502.01852v1.pdf
- tvm.topi.nn.qnn_conv2d_alter_layout(_attrs, _inputs, _tinfos, _out_type)
Change qnn.conv2d layout.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current convolution
inputs (tvm.relay.Expr) – Grouped input symbols
tinfos (list) – Input shape and dtype
out_type (type) – The output type
Note
Unlike other TOPI functions, this function operates on both graph level and operator level.
- tvm.topi.nn.qnn_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type)
Change qnn.dense layout. Does not change the layout by default.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current dense op
inputs (tvm.relay.Expr) – Grouped input symbols
tinfos (list) – Input shape and dtype
out_type (type) – The output type
- tvm.topi.nn.qnn_requantize_alter_layout(_attrs, _inputs, _tinfos, _out_type)
Change requantize layout.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current convolution
inputs (tvm.relay.Expr) – Grouped input symbols
tinfos (list) – Input shape and dtype
out_type (type) – The output type
Note
Unlike other TOPI functions, this function operates on both graph level and operator level.
- tvm.topi.nn.reduce(function, sequence[, initial]) -> value
Apply a function of two arguments cumulatively to the items of a sequence, from left to right, so as to reduce the sequence to a single value. For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates ((((1+2)+3)+4)+5). If initial is present, it is placed before the items of the sequence in the calculation, and serves as a default when the sequence is empty.
- tvm.topi.nn.relu(x)
Take relu of input x.
- Parameters:
x (tvm.te.Tensor) – Input argument.
- Returns:
y – The result.
- Return type:
- tvm.topi.nn.scale_shift_nchw(Input, Scale, Shift)
Batch normalization operator in inference.
- Parameters:
Input (tvm.te.Tensor) – 4-D input tensor, NCHW layout [batch, channel, height, width]
Scale (tvm.te.Tensor) – Scale tensor, 1-D of size channel number
Shift (tvm.te.Tensor) – Shift tensor, 1-D of size channel number
- Returns:
Output – Output tensor, layout is NCHW
- Return type:
- tvm.topi.nn.scale_shift_nchwc(Input, Scale, Shift)
Batch normalization operator in inference.
- Parameters:
Input (tvm.te.Tensor) – 5-D input tensor, NCHWc layout [batch, channel_chunk, height, width, channel_block]
Scale (tvm.te.Tensor) – Scale tensor, 2-D of size [channel_chunk, channel_block]
Shift (tvm.te.Tensor) – Shift tensor, 2-D of size [channel_chunk, channel_block]
- Returns:
Output – Output tensor, layout is NCHWc
- Return type:
- tvm.topi.nn.scale_shift_nhwc(Input, Scale, Shift)
Batch normalization operator in inference.
- Parameters:
Input (tvm.te.Tensor) – 4-D input tensor, NHWC layout [batch, height, width, channel]
Scale (tvm.te.Tensor) – Scale tensor, 1-D of size channel number
Shift (tvm.te.Tensor) – Shift tensor, 1-D of size channel number
- Returns:
Output – Output tensor, layout is NHWC
- Return type:
- tvm.topi.nn.simplify(expr)
Simplify the expression if it is Expr, directly return if it is int.
- tvm.topi.nn.simulated_dequantize(data, in_dtype, input_scale=None, input_zero_point=None, axis=-1)
Simulated QNN dequantize operator that mimics QNN outputs without changing datatype. The benefit of this operator over true QNN dequantize is that this operator allows dynamic datatype selection and can operate on both per-channel and scalar scales and zero points while QNN dequantize requires both of these to be fixed at compile time.
- Parameters:
data (tvm.te.Tensor) – An N-D input tensor to the operator.
in_dtype (tvm.te.Tensor) – A scalar variable that indicates which datatype to simulate dequantization with. Use SQNN_DTYPE_TO_CODE to convert a dtype string into the corresponding variable value.
input_scale (tvm.te.Tensor, optional) – A scalar tensor representing the scale to use when dequantizing from integer datatypes. When it contains more than a single value, N must match the number of channels in data.
input_zero_point (tvm.te.Tensor, optional) – A 1-D tensor representing the zero point to use when dequantizing from integer datatypes. When it contains more than a single value, N must match the number of channels in data.
axis (int, optional) – The channel axis for quantization. Default value is -1 which corresponds to the last axis.
- tvm.topi.nn.simulated_quantize(data, out_dtype, output_scale=None, output_zero_point=None, axis=-1)
Simulated QNN quantize operator that mimics QNN outputs without changing datatype. The benefit of this operator over true QNN quantize is that this operator allows dynamic datatype selection and can operate on both per-channel and scalar scales and zero points while QNN quantize requires both of these to be fixed at compile time.
- Parameters:
data (tvm.te.Tensor) – An N-D input tensor to the operator.
out_dtype (tvm.te.Tensor) – A scalar variable that indicates which datatype to simulate quantization with. Use SQNN_DTYPE_TO_CODE to convert a dtype string into the corresponding variable value.
output_scale (tvm.te.Tensor, optional) – A scalar tensor representing the scale to use when quantizing to integer datatypes. When it contains more than a single value, N must match the number of channels in data.
output_zero_point (tvm.te.Tensor, optional) – A 1-D tensor representing the zero point to use when quantizing to integer datatypes. When it contains more than a single value, N must match the number of channels in data.
axis (int, optional) – The channel axis for quantization. Default value is -1 which corresponds to the last axis.
- tvm.topi.nn.softmax(x, axis=-1)
Perform softmax activation on the data.
- Parameters:
data (tvm.te.Tensor) – can be any dimension
axis (int) – channel axis
- Returns:
output – output shape is the same as input
- Return type:
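For example, a minimal sketch over an assumed [batch, classes] tensor:
from tvm import te, topi

logits = te.placeholder((4, 10), name="logits")
probs = topi.nn.softmax(logits, axis=-1)
print(probs.shape)  # [4, 10]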
- tvm.topi.nn.softmax_common(x, axis, use_fast_exp)
The common part of softmax and fast_softmax
- tvm.topi.nn.space_to_batch_nd(data, block_shape, pad_before, pad_after, pad_value=0.0)
Perform space to batch transformation on the data
- Parameters:
data (tvm.te.Tensor) – N-D Tensor with shape [batch, spatial_shape, remaining_shapes], where spatial_shape has M dimensions.
block_shape (list of ints) – list of size [M] where M is number of spatial dims, specifies block size for each spatial dimension.
pad_before (list of ints) – list of shape [M] where M is number of spatial dims, specifies zero-padding size before each spatial dimension.
pad_after (list of ints) – list of shape [M] where M is number of spatial dims, specifies zero-padding size after each spatial dimension.
pad_value (float, optional) – The value used for padding.
- Returns:
output
- Return type:
- tvm.topi.nn.space_to_depth(data, block_size, layout='NCHW')
Perform space to depth transformation on the data
- Parameters:
data (tvm.te.Tensor) – 4-D tensor in either NCHW or NHWC layout.
block_size (int) – Size of blocks to decompose into channel dimension.
layout (string) – Either NCHW or NHWC, indicating data layout.
- Returns:
output – Output of shape [N, C * block_size**2, H / block_size, W / block_size]
- Return type:
- tvm.topi.nn.sparse_add(dense_data, sparse_data, sparse_indices, sparse_indptr)
Computes sparse-dense addition
- Parameters:
dense_data (tvm.te.Tensor) – 2-D with shape [M, N]
sparse_data (tvm.te.Tensor) – 1-D with shape [nnz] (CSR)
sparse_indices (tvm.te.Tensor) – 1-D with shape [nnz] (CSR)
sparse_indptr (tvm.te.Tensor) – 1-D with shape [M + 1] (CSR)
- Returns:
output – 2-D with shape [M, N]
- Return type:
- tvm.topi.nn.sparse_conv2d(dense_data, sparse_data, sparse_indices, sparse_indptr, layout='NHWC', kernel_size=1)
Computes sparse-conv2d(1*1) of data and (weight_data, weight_indices, weight_indptr)
- Parameters:
dense_data (tvm.te.Tensor) – 4-D with shape [M, H, W, K] (layout=NHWC) or 4-D with shape [M, K, H, W] (layout=NCHW)
sparse_data (tvm.te.Tensor) – 2-D with shape [num_blocks, bs_r] (BSR) or 3-D with shape [num_blocks, bs_r, bs_c] (BSR)
sparse_indices (tvm.te.Tensor) – 1-D with shape [num_blocks] (BSR)
sparse_indptr (tvm.te.Tensor) – 1-D with shape [(N + 1) // bs_r] (BSR)
layout (str) – layout of data
- Returns:
output – 4-D with shape [M, H, W, N] (layout=NHWC) or 4-D with shape [M, N, H, W] (layout=NCHW)
- Return type:
- tvm.topi.nn.sparse_dense(dense_data, sparse_data, sparse_indices, sparse_indptr, sparse_lhs=False)
Computes sparse-dense matrix multiplication of data and (weight_data, weight_indices, weight_indptr).T, if sparse_lhs=False or Computes sparse-dense matrix multiplication of (data_data, data_indices, data_indptr) and weight.T, if sparse_lhs=True
- Parameters:
dense_data (tvm.te.Tensor) – 2-D with shape [M, K]
sparse_data (tvm.te.Tensor) – 1-D with shape [nnz] (CSR) or 3-D with shape [num_blocks, bs_r, bs_c] (BSR)
sparse_indices (tvm.te.Tensor) – 1-D with shape [nnz] (CSR) or 1-D with shape [num_blocks] (BSR)
sparse_indptr (tvm.te.Tensor) – 1-D with shape [N + 1] (CSR) or 1-D with shape [(N + 1) // bs_r] (BSR)
sparse_lhs (bool, optional) – Indicates whether lhs or rhs matrix is sparse. Default value is False.
- Returns:
output – 2-D with shape [M, N]
- Return type:
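A rough sketch of the CSR-weight case (sparse_lhs=False). The sizes, including the non-zero count, are illustrative assumptions:
from tvm import te, topi

# Dense activations [M, K] times a CSR-encoded weight of logical shape [N, K].
M, K, N, nnz = 4, 16, 8, 10  # nnz = number of stored non-zeros
dense_data = te.placeholder((M, K), name="dense_data")
w_data = te.placeholder((nnz,), name="w_data")
w_indices = te.placeholder((nnz,), name="w_indices", dtype="int32")
w_indptr = te.placeholder((N + 1,), name="w_indptr", dtype="int32")
out = topi.nn.sparse_dense(dense_data, w_data, w_indices, w_indptr)
print(out.shape)  # [4, 8]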
- tvm.topi.nn.sparse_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type)
Change Sparse Dense layout.
This is used for modifying the inputs weights so they are more amenable for the target.
- Parameters:
attrs (tvm.ir.Attrs) – Attributes of current convolution
inputs (tvm.relay.Expr) – Grouped input symbols
tinfos (list) – Input shape and dtype
out_type (type) – The output type
Note
Unlike other TOPI functions, this function operates on both graph level and operator level.
- tvm.topi.nn.sparse_dense_sp_lhs(data_data, data_indices, data_indptr, weight)
Computes sparse-dense matrix multiplication of (data_data, data_indices, data_indptr) and weight.T
- Parameters:
data_data – 1-D with shape [nnz] (CSR) or 3-D with shape [num_blocks, bs_r, bs_c] (BSR)
data_indices – 1-D with shape [nnz] (CSR) or 1-D with shape [num_blocks] (BSR)
data_indptr – 1-D with shape [M + 1] (CSR) or 1-D with shape [(M + 1) // bs_r] (BSR)
weight – 2-D with shape [N, K]
- Returns:
output – 2-D with shape [M, N]
- Return type:
- tvm.topi.nn.sparse_dense_sp_rhs(data, weight_data, weight_indices, weight_indptr)
Computes sparse-dense matrix multiplication of data and (weight_data, weight_indices, weight_indptr).T
- Parameters:
data (tvm.te.Tensor) – 2-D with shape [M, K]
weight_data (tvm.te.Tensor) – 1-D with shape [nnz] (CSR) or 3-D with shape [num_blocks, bs_r, bs_c] (BSR)
weight_indices (tvm.te.Tensor) – 1-D with shape [nnz] (CSR) or 1-D with shape [num_blocks] (BSR)
weight_indptr (tvm.te.Tensor) – 1-D with shape [N + 1] (CSR) or 1-D with shape [(N + 1) // bs_r] (BSR)
- Returns:
output – 2-D with shape [M, N]
- Return type:
- tvm.topi.nn.sparse_transpose(sparse_data, sparse_indices, sparse_indptr)
Transpose a square sparse matrix. A is an n-by-n sparse matrix in the CSR format. Currently only square matrices are supported.
- Parameters:
sparse_data (tvm.te.Tensor) – 1-D with shape [nonzeros]
sparse_indices (tvm.te.Tensor) – 1-D with shape [nonzeros], dtype of ‘int32’
sparse_indptr (tvm.te.Tensor) – 1-D with shape [n+1], dtype of ‘int32’
- Returns:
out_data (tvm.te.Tensor) – 1-D with shape [nonzeros]
out_indices (tvm.te.Tensor) – 1-D with shape [nonzeros], dtype of ‘int32’
out_indptr (tvm.te.Tensor) – 1-D with shape [n+1], dtype of ‘int32’
- tvm.topi.nn.strided_slice(a, begin, end, strides=None, axes=None, slice_mode='end', assume_inbound=True)
Slice of an array.
- Parameters:
a (tvm.te.Tensor) – The tensor to be sliced.
begin (list of int) – The indices to begin with in the slicing.
strides (list of int, optional) – Specifies the stride values, it can be negative in that case, the input tensor will be reversed in that particular axis.
axes (list of int, optional) – Axes along which slicing is applied. When it is specified, begin, end, strides, and axes need to be lists of integers of the same length.
slice_mode (str, optional) – The slice mode [end, size]. end - The ending indices for the slice [default]. size - The input strides will be ignored, input end in this mode indicates the size of a slice starting at the location specified by begin. If end[i] is -1, all remaining elements in that dimension are included in the slice.
assume_inbound (bool, optional) – A flag to indicate if all indices are assumed to be inbound
- Returns:
ret
- Return type:
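For illustration, a minimal sketch on an assumed 8x8 tensor:
from tvm import te, topi

a = te.placeholder((8, 8), name="a")
# Rows 1..3 (end is exclusive) and every second column.
out = topi.nn.strided_slice(a, begin=[1, 0], end=[4, 8], strides=[1, 2])
print(out.shape)  # [3, 4]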
- tvm.topi.nn.try_get_conv2d_sparse_input(args)
Analyze the input data from the given args.
- Parameters:
args (List[Tensor]) – Input/output Tensor of a TVM subgraph.
- Returns:
Map from the input Tensor to its buffer name.
- Return type:
Notes
The buffer name is specially designed, and these buffers should be provided in SearchTask(…, task_inputs={…}).
- tvm.topi.nn.try_get_sparse_input(args)
Analyze the input data from the given args.
- Parameters:
args (List[Tensor]) – Input/output Tensor of a TVM subgraph.
- Returns:
Map from the input Tensor to its buffer name.
- Return type:
Notes
The buffer name is specially designed, and these buffers should be provided in SearchTask(…, task_inputs={…}).
- tvm.topi.nn.unpack_NCHWc_to_nchw(packed_out, out_dtype)
Unpack conv2d_NCHWc output from layout NCHWc to NCHW
- Parameters:
packed_out (tvm.te.Tensor) – The output tensor of conv2d_NCHWc.
out_dtype (str) – The output dtype.
- Returns:
unpacked_out – The unpacked output tensor in NCHW layout.
- Return type:
- tvm.topi.nn.upsampling(data, scale_h, scale_w, layout='NCHW', method='nearest_neighbor', align_corners=False, output_shape=None)
- Perform upsampling on the data.
Nearest neighbor and bilinear upsampling are supported.
- Parameters:
inputs (tvm.te.Tensor) – inputs is a 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]
scale_h (float) – Scaling factor for height
scale_w (float) – Scaling factor for width
layout (string, optional) – either “NCHW” or “NHWC”
method ({"bilinear", "nearest_neighbor", "bicubic"}) – Method to be used for upsampling.
output_shape (tvm.tir.container.Array, optional) – Shape to return. If left as None, the shape will be inferred. (If the shape is determined dynamically, pass out_dtype.shape as output_shape.)
- Returns:
output – 4-D with shape [batch, channel, in_height*scale_h, in_width*scale_w] or [batch, in_height*scale_h, in_width*scale_w, channel]
- Return type:
- tvm.topi.nn.upsampling3d(data, scale_d, scale_h, scale_w, layout='NCDHW', method='nearest_neighbor', coordinate_transformation_mode='half_pixel', output_shape=None)
- Perform upsampling on the data.
Nearest neighbor and bilinear upsampling are supported.
- Parameters:
inputs (tvm.te.Tensor) – inputs is a 5-D tensor with shape [batch, channel, in_depth, in_height, in_width] or [batch, in_depth, in_height, in_width, channel]
scale_d (float) – Scaling factor for depth
scale_h (float) – Scaling factor for height
scale_w (float) – Scaling factor for width
layout (string, optional) – either “NCDHW” or “NDHWC”
method ({"trilinear", "nearest_neighbor"}) – Method to be used for upsampling.
coordinate_transformation_mode (string, optional) – Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. Refer to the ONNX Resize operator specification for details. Available options are “half_pixel”, “align_corners” and “asymmetric”.
output_shape (tvm.tir.container.Array, optional) – Shape to return. If left as None, the shape will be inferred. (If the shape is determined dynamically, pass out_dtype.shape as output_shape.)
- Returns:
output – 5-D with shape [batch, channel, in_depth*scale_d, in_height*scale_h, in_width*scale_w] or [batch, in_depth*scale_d, in_height*scale_h, in_width*scale_w, channel]
- Return type:
- tvm.topi.nn.winograd_transform_matrices(tile_size, kernel_size, out_dtype)
Compute the A, B, and G transform matrices for tile_size as a tvm.Expr.
- tvm.topi.nn.instance_norm(data, gamma, beta, axis, epsilon=1e-05)
Instance normalization operator.
- Parameters:
data (tvm.te.Tensor) – N-D with shape (d_0, d_1, …, d_{N-1})
gamma (tvm.te.Tensor) – K-D with shape (r_0, r_1, …, r_{K-1}) where K == len(axis) and d_{axis_k} == r_k
beta (tvm.te.Tensor) – Optional, K-D with shape (r_0, r_1, …, r_{K-1}) where K == len(axis) and d_{axis_k} == r_k
axis (list of int) – Axis over the normalization applied (the axis along which the mean and variance are computed)
epsilon (float) – The epsilon value to avoid division by zero.
- Returns:
result – N-D with shape (d_0, d_1, …, d_{N-1})
- Return type:
- tvm.topi.nn.layer_norm(data, gamma, beta, axis, epsilon=1e-05)
Layer normalization operator. It accepts fp16 and fp32 as input data type. It will cast the input to fp32 to perform the computation. The output will have the same data type as input.
- Parameters:
data (tvm.te.Tensor) – N-D with shape (d_0, d_1, …, d_{N-1})
gamma (tvm.te.Tensor) – K-D with shape (r_0, r_1, …, r_{K-1}) where K == len(axis) and d_{axis_k} == r_k
beta (tvm.te.Tensor) – Optional, K-D with shape (r_0, r_1, …, r_{K-1}) where K == len(axis) and d_{axis_k} == r_k
epsilon (float) – The epsilon value to avoid division by zero.
- Returns:
result – N-D with shape (d_0, d_1, …, d_{N-1})
- Return type:
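A minimal sketch, assuming a [batch, seq, hidden] input normalized over its last axis (axis given as a list of normalization axes):
from tvm import te, topi

data = te.placeholder((2, 16, 64), name="data")
gamma = te.placeholder((64,), name="gamma")
beta = te.placeholder((64,), name="beta")
out = topi.nn.layer_norm(data, gamma, beta, axis=[2])
print(out.shape)  # [2, 16, 64]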
- tvm.topi.nn.group_norm(data, gamma, beta, num_groups, channel_axis, axes, epsilon=1e-05)
Group normalization operator. It accepts fp16 and fp32 as input data type. It will cast the input to fp32 to perform the computation. The output will have the same data type as input.
- Parameters:
data (tvm.te.Tensor) – N-D with shape (d_0, d_1, …, d_{N-1})
gamma (tvm.te.Tensor) – 1-D with shape (r_0) where r_0 == d_{channel_axis}
beta (tvm.te.Tensor) – Optional, 1-D with shape (r_0) where r_0 == d_{channel_axis}
num_groups (int) – The number of groups
channel_axis (int) – The channel axis
axes (list of int) – Axis over the normalization applied, excluding the channel axis
epsilon (float) – The epsilon value to avoid division by zero.
- Returns:
result – N-D with shape (d_0, d_1, …, d_{N-1})
- Return type:
- tvm.topi.nn.rms_norm(data, weight, axis, epsilon=1e-05)
Root mean square normalization operator. The output will have the same data type as input.
- Parameters:
data (tvm.te.Tensor) – N-D with shape (d_0, d_1, …, d_{N-1})
weight (tvm.te.Tensor) – K-D with shape (r_0, r_1, …, r_{K-1}) where K == len(axis) and d_{axis_k} == r_k
epsilon (float) – The epsilon value to avoid division by zero.
- Returns:
result – N-D with shape (d_0, d_1, …, d_{N-1})
- Return type:
tvm.topi.image
IMAGE network operators
Functions:
|
affine_grid operator that generates 2D sampling grid. |
|
Check whether multiplication can be converted to division |
|
Perform crop and resize operation on the data. |
|
Morphological dilation operator in NCHW layout. |
|
Morphological 2d dilation NHWC layout. |
|
Get 1d indices |
|
Get 1d pixel |
|
Get 2d indices |
|
Get 2d pixel |
|
Get 3d indices |
|
Get 3d pixel |
|
get the closest index to a value based on a certain rounding method |
|
Infer input x from output x with various coordinate transformation methods |
|
Common code to get the pad option |
|
Applies grid sampling to input feature map. |
|
Check whether the layout type is NCHWinic |
|
Check whether the layout type is NCHWxc |
|
Pad Input with zeros. |
|
Perform resize operation on the data. |
|
Perform resize operation on the data. |
|
Perform resize operation on the data. |
|
Simplify the expression if it is Expr, directly return if it is int. |
- tvm.topi.image.affine_grid(data, target_shape)
affine_grid operator that generates 2D sampling grid.
This operation is described in https://arxiv.org/pdf/1506.02025.pdf. It generates a uniform sampling grid within the target shape and normalizes it to [-1, 1]. The provided affine transformation is then applied on the sampling grid.
- Parameters:
data (tvm.Tensor) – 3-D with shape [batch, 2, 3]. The affine matrix.
target_shape (list/tuple of two int) – Specifies the output shape (H, W).
- Returns:
Output – 4-D with shape [batch, 2, target_height, target_width]
- Return type:
tvm.Tensor
- tvm.topi.image.can_convert_multiply_to_intdiv(origin_size, scaled_size)
Check whether multiplication can be converted to division
- tvm.topi.image.crop_and_resize(data, boxes, box_indices, crop_size, layout='NCHW', method='bilinear', extrapolation_value=None, out_dtype=None)
Perform crop and resize operation on the data.
- Parameters:
data (tvm.te.Tensor) – inputs is a 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]
boxes (tvm.te.Tensor) – A 2-D tensor of shape [num_boxes, 4]. Each row of the tensor specifies the coordinates of a box.
box_indices (tvm.te.Tensor) – A 1-D tensor of shape [num_boxes], box_indices[i] specifies the data that the i-th box refers to.
crop_size (Tuple) – The target size of each box.
layout (string, optional) – “NCHW”, “NHWC”
method ({"bilinear", "nearest_neighbor"}) – Method to be used for resizing.
extrapolation_value (float, optional) – Value used for extrapolation, when applicable.
out_dtype (string, optional) – Type to return. If left None will be same as input type.
- Returns:
output – 4-D with shape [num_boxes, channel, crop_height, crop_width] or [num_boxes, crop_height, crop_width, channel]
- Return type:
- tvm.topi.image.dilation2d_nchw(input, filter, stride, padding, dilations, out_dtype=None)
Morphological dilation operator in NCHW layout.
- Parameters:
input (tvm.te.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
filter (tvm.te.Tensor) – 3-D with shape [ in_channel, filter_height, filter_width]
stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]
dilations (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
out_dtype (Optional[str]) – Specifies the output data type.
- Returns:
Output – 4-D with shape [batch, in_channel, out_height, out_width]
- Return type:
- tvm.topi.image.dilation2d_nhwc(input, filter, stride, padding, dilations, out_dtype=None)
Morphological 2d dilation NHWC layout.
- Parameters:
input (tvm.te.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]
filter (tvm.te.Tensor) – 3-D with shape [filter_height, filter_width, in_channel]
stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]
padding (int) – Padding size
dilations (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
out_dtype (Optional[str]) – Specifies the output data type.
- Returns:
Output – 4-D with shape [batch, out_height, out_width, in_channel]
- Return type:
- tvm.topi.image.get_1d_indices(indices, layout='NCW')
Get 1d indices
- tvm.topi.image.get_1d_pixel(data, layout, image_width, n, c, x, cc, ib, ic)
Get 1d pixel
- tvm.topi.image.get_2d_indices(indices, layout='NCHW')
Get 2d indices
- tvm.topi.image.get_2d_pixel(data, layout, image_height, image_width, n, c, y, x, cc, ib, ic)
Get 2d pixel
- tvm.topi.image.get_3d_indices(indices, layout='NCDHW')
Get 3d indices
- tvm.topi.image.get_3d_pixel(data, layout, image_depth, image_height, image_width, n, c, z, y, x, cc)
Get 3d pixel
- tvm.topi.image.get_closest_index(in_x, rounding_method, boxes, use_int_div=False)
Get the closest index to a value based on a certain rounding method
- tvm.topi.image.get_inx(x, image_width, target_width, coordinate_transformation_mode, start_x=0, end_x=-1, use_int_div=False)
Infer input x from output x with various coordinate transformation methods
- tvm.topi.image.get_pad_tuple(padding, kernel)
Common code to get the pad option
- tvm.topi.image.grid_sample(data, grid, method='bilinear', layout='NCHW', padding_mode='zeros', align_corners=True)
Applies grid sampling to input feature map.
Given \(data\) and \(grid\), the 4-D output is computed by
\[x_{src} = grid[batch, 0, y_{dst}, x_{dst}]\]
\[y_{src} = grid[batch, 1, y_{dst}, x_{dst}]\]
\[output[batch, channel, y_{dst}, x_{dst}] = G(data[batch, channel, y_{src}, x_{src}])\]
where \(x_{dst}\), \(y_{dst}\) enumerate all spatial locations in \(output\), and \(G()\) denotes the interpolation function.
The out-boundary points will be padded with zeros if padding_mode is “zeros”, or border pixel value if padding_mode is “border”, or inner pixel value if padding_mode is “reflection”.
The left-top corner (-1, -1) and right-bottom corner (1, 1) in grid will be mapped to (0, 0) and (h - 1, w - 1) of data if align_corners is “True”, or to (-0.5, -0.5) and (h - 0.5, w - 0.5) of data if align_corners is “False”.
The shape of the output will be 4-D (data.shape[0], data.shape[1], grid.shape[2], grid.shape[3]), or 5-D (data.shape[0], data.shape[1], grid.shape[2], grid.shape[3], grid.shape[4]).
The operator assumes that \(grid\) has been normalized to [-1, 1].
grid_sample often cooperates with affine_grid which generates sampling grids for grid_sample.
- Parameters:
data (tvm.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width], or 5-D with shape [batch, in_channel, in_depth, in_height, in_width]
grid (tvm.Tensor) – 4-D with shape [batch, 2, out_height, out_width], or 5-D with shape [batch, 3, out_depth, out_height, out_width]
method (str) – The interpolation method. For 4-D input, “nearest”, “bilinear” and “bicubic” are supported; for 5-D input, “nearest” and “bilinear” (“trilinear”) are supported.
layout (str) – The layout of input data and the output.
padding_mode (str) – The padding mode for outside grid values, “zeros”, “border”, “reflection” are supported.
align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to “True”, the extrema (“-1” and “1”) are considered as referring to the center points of the input corner pixels. If set to “False”, they are instead considered as referring to the corner points of the input corner pixels, making the sampling more resolution agnostic.
- Returns:
Output – 4-D with shape [batch, in_channel, out_height, out_width], or 5-D with shape [batch, in_channel, out_depth, out_height, out_width]
- Return type:
tvm.Tensor
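A minimal sketch combining affine_grid and grid_sample (the shapes and the 16 x 16 grid size are arbitrary assumptions):

    import tvm
    from tvm import te, topi

    data = te.placeholder((1, 3, 32, 32), name="data", dtype="float32")
    theta = te.placeholder((1, 2, 3), name="theta", dtype="float32")
    # affine_grid yields a normalized [-1, 1] grid of shape (1, 2, 16, 16) ...
    grid = topi.image.affine_grid(theta, (16, 16))
    # ... which grid_sample uses to resample the feature map.
    out = topi.image.grid_sample(data, grid, method="bilinear", layout="NCHW",
                                 padding_mode="zeros", align_corners=True)
    # out has shape (1, 3, 16, 16)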
- tvm.topi.image.nchw_pack_layout(layout_info)
Check whether the layout type is NCHWinic
- tvm.topi.image.nchw_xc_layout(layout_info)
Check whether the layout type is NCHWxc
- tvm.topi.image.pad(data, pad_before, pad_after=None, pad_value=0.0, name='PadInput', attrs=None)
Pad Input with zeros.
- Parameters:
data (tvm.te.Tensor) – n-D input, can be any layout.
pad_before (list / tuple of n ints) – Pad width on each dimension, applied before the start of that axis.
pad_after (list / tuple of n ints, optional) – Pad width on each dimension, applied after the end of that axis.
pad_value (float, optional) – The value to pad with.
name (str, optional) – The name prefix of the generated operators.
- Returns:
Output – n-D, the same layout as Input.
- Return type:
tvm.te.Tensor
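A minimal declaration sketch (the 4-D shape and one-pixel spatial padding are assumptions):

    import tvm
    from tvm import te, topi

    x = te.placeholder((1, 3, 32, 32), name="x", dtype="float32")
    # Pad one element of zeros on each side of the two spatial axes only.
    y = topi.image.pad(x, pad_before=(0, 0, 1, 1), pad_after=(0, 0, 1, 1),
                       pad_value=0.0)
    # y has shape (1, 3, 34, 34)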
- tvm.topi.image.resize1d(data, roi, size, layout='NCW', method='linear', coordinate_transformation_mode='half_pixel', rounding_method='', bicubic_alpha=-0.5, bicubic_exclude=0, extrapolation_value=0.0, out_dtype=None, output_shape=None)
Perform resize operation on the data.
- Parameters:
data (tvm.te.Tensor) – A 3-D tensor with shape [batch, channel, in_width] or [batch, in_width, channel]
roi (Tuple of Float or Expr) – The region of interest for cropping the input image. Expected to be of size 2, and format [start_w, end_w]. Only used if coordinate_transformation_mode is tf_crop_and_resize.
size (Tuple) – Output resolution to scale to
layout (string, optional) – “NCW”, “NWC”, or “NCWc”.
method (string, optional) – Method of interpolation (“nearest”, “linear”, “bicubic”)
coordinate_transformation_mode (string, optional) – Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. Refer to the ONNX Resize operator specification for details. [half_pixel, align_corners, asymmetric, pytorch_half_pixel, tf_half_pixel_for_nn, and tf_crop_and_resize].
rounding_method – Method for rounding coordinate locations
bicubic_alpha (float, optional) – Bicubic spline coefficient
bicubic_exclude (bool, optional) – Exclude values outside the image for bicubic interpolation
extrapolation_value (float, optional) – Value used for extrapolation, when applicable.
out_dtype (string, optional) – Type to return. If left None, it will be the same as the input type.
output_shape (tvm.tir.container.Array, optional) – Shape to return. If left None, it will be inferred (If shape is determined dynamically, pass out_dtype.shape as output_shape)
- Returns:
output – 3-D with shape [batch, channel, in_width*scale] or [batch, in_width*scale, channel] or 4-D with shape [batch, channel-major, in_width*scale, channel-minor]
- Return type:
tvm.te.Tensor
- tvm.topi.image.resize2d(data, roi, size, layout='NCHW', method='linear', coordinate_transformation_mode='half_pixel', rounding_method='', bicubic_alpha=-0.5, bicubic_exclude=0, extrapolation_value=0.0, out_dtype=None, output_shape=None)
Perform resize operation on the data.
- Parameters:
data (tvm.te.Tensor) – A 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]
roi (Tuple of Float or Expr) – The region of interest for cropping the input image. Expected to be of size 4, and format [start_h, start_w, end_h, end_w]. Only used if coordinate_transformation_mode is tf_crop_and_resize.
size (Tuple) – Output resolution to scale to
layout (string, optional) – “NCHW”, “NHWC”, or “NCHWc”.
method (string, optional) – Method of interpolation (“nearest”, “linear”, “bicubic”)
coordinate_transformation_mode (string, optional) – Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. [half_pixel, align_corners, asymmetric, pytorch_half_pixel, tf_half_pixel_for_nn, and tf_crop_and_resize].
rounding_method – Method for rounding coordinate locations
bicubic_alpha (float, optional) – Bicubic spline coefficient
bicubic_exclude (bool, optional) – Exclude values outside the image for bicubic interpolation
extrapolation_value (float, optional) – Value used for extrapolation, when applicable.
out_dtype (string, optional) – Type to return. If left None, it will be the same as the input type.
output_shape (tvm.tir.container.Array, optional) – Shape to return. If left None, it will be inferred (If shape is determined dynamically, pass out_dtype.shape as output_shape)
- Returns:
output – 4-D with shape [batch, channel, in_height*scale, in_width*scale] or [batch, in_height*scale, in_width*scale, channel] or 5-D with shape [batch, channel-major, in_height*scale, in_width*scale, channel-minor]
- Return type:
tvm.te.Tensor
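A minimal declaration sketch; the input shape and target size are assumptions, and the roi values are placeholders since the default half_pixel mode ignores them:

    import tvm
    from tvm import te, topi

    data = te.placeholder((1, 3, 32, 32), name="data", dtype="float32")
    out = topi.image.resize2d(
        data,
        roi=(0.0, 0.0, 1.0, 1.0),   # only consulted for tf_crop_and_resize
        size=(64, 64),
        layout="NCHW",
        method="linear",
        coordinate_transformation_mode="half_pixel",
    )
    # out has shape (1, 3, 64, 64)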
- tvm.topi.image.resize3d(data, roi, size, layout='NCDHW', method='linear', coordinate_transformation_mode='half_pixel', rounding_method='', bicubic_alpha=-0.5, bicubic_exclude=0, extrapolation_value=0.0, out_dtype=None, output_shape=None)
Perform resize operation on the data.
- Parameters:
data (tvm.te.Tensor) – A 5-D tensor with shape [batch, channel, in_depth, in_height, in_width] or [batch, in_depth, in_height, in_width, channel]
roi (Tuple of Float or Expr) – The region of interest for cropping the input image. Expected to be of size 6, and format [start_d, start_h, start_w, end_d, end_h, end_w]. Only used if coordinate_transformation_mode is tf_crop_and_resize.
size (Tuple) – Output resolution to scale to
layout (string, optional) – “NCDHW”, “NDHWC”, or “NCDHWc”.
method (string, optional) – Method of interpolation (“nearest”, “linear”, “bicubic”)
coordinate_transformation_mode (string, optional) – Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. [half_pixel, align_corners, asymmetric, pytorch_half_pixel, tf_half_pixel_for_nn, and tf_crop_and_resize].
rounding_method – Method for rounding coordinate locations
bicubic_alpha (float, optional) – Bicubic spline coefficient
bicubic_exclude (bool, optional) – Exclude values outside the image for bicubic interpolation
extrapolation_value (float, optional) – Value used for extrapolation, when applicable.
out_dtype (string, optional) – Type to return. If left None, it will be the same as the input type.
output_shape (tvm.tir.container.Array, optional) – Shape to return. If left None, it will be inferred (If shape is determined dynamically, pass out_dtype.shape as output_shape)
- Returns:
output – 5-D with shape [batch, channel, in_depth*scale, in_height*scale, in_width*scale] or [batch, in_depth*scale, in_height*scale, in_width*scale, channel] or 6-D with shape [batch, channel-major, in_depth*scale, in_height*scale, in_width*scale, channel-minor]
- Return type:
tvm.te.Tensor
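A similar sketch for the volumetric case (again with hypothetical shapes):

    import tvm
    from tvm import te, topi

    data = te.placeholder((1, 3, 8, 32, 32), name="data", dtype="float32")
    out = topi.image.resize3d(
        data,
        roi=(0.0, 0.0, 0.0, 1.0, 1.0, 1.0),  # only consulted for tf_crop_and_resize
        size=(16, 64, 64),
        layout="NCDHW",
        method="linear",
    )
    # out has shape (1, 3, 16, 64, 64)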
tvm.topi.sparse
Sparse operators
Functions:
|
The csrmv routine performs a matrix-vector operation defined as \(y := A*x + y\), where x and y are vectors, A is an m-by-k sparse matrix in the CSR format. |
|
The csrmm routine performs a matrix-matrix operation defined as \(C := A*B + C\), where B and C are dense matrices, A is an m-by-k sparse matrix in the CSR format. |
|
Applies a linear transformation: \(Y = XW^T + b\). |
- tvm.topi.sparse.csrmv(a, x, y=None)
The csrmv routine performs a matrix-vector operation defined as \(y := A*x + y\), where x and y are vectors, A is an m-by-k sparse matrix in the CSR format.
- Parameters:
a (tvm.contrib.sparse.CSRNDArray) – 2-D sparse matrix with shape [m, k]
x (tvm.te.Tensor) – 2-D dense matrix with shape [k, 1]
y (tvm.te.Tensor, optional) – 1-D dense vector with shape [1]
- Returns:
output – 2-D dense matrix with shape [m, 1]
- Return type:
tvm.te.Tensor
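A minimal declaration sketch, assuming tvm.contrib.sparse.placeholder is used to declare the CSR operand; the matrix shape and non-zero count are hypothetical:

    import tvm
    from tvm import te, topi
    from tvm.contrib import sparse

    # CSR placeholder for a 128 x 64 sparse matrix with 256 stored values.
    a = sparse.placeholder((128, 64), nonzeros=256, dtype="float32", name="A")
    x = te.placeholder((64, 1), name="x", dtype="float32")
    y = topi.sparse.csrmv(a, x)
    # y has shape (128, 1)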
- tvm.topi.sparse.csrmm(a, b, c=None)
The csrmm routine performs a matrix-matrix operation defined as \(C := A*B + C\), where B and C are dense matrices, A is an m-by-k sparse matrix in the CSR format.
- Parameters:
a (tvm.contrib.sparse.CSRNDArray) – 2-D sparse matrix with shape [m, k]
b (tvm.te.Tensor) – 2-D dense matrix with shape [k, n]
c (tvm.te.Tensor, optional) – 1-D dense vector with shape [n]
- Returns:
output – 2-D with shape [m, n]
- Return type:
tvm.te.Tensor
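A matching sketch for the matrix-matrix case, under the same assumptions as above:

    import tvm
    from tvm import te, topi
    from tvm.contrib import sparse

    a = sparse.placeholder((128, 64), nonzeros=256, dtype="float32", name="A")
    b = te.placeholder((64, 32), name="B", dtype="float32")
    c = topi.sparse.csrmm(a, b)
    # c has shape (128, 32)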
- tvm.topi.sparse.dense(data, weight, bias=None)
Applies a linear transformation: \(Y = XW^T + b\). Either data or weight should be tvm.contrib.sparse.CSRNDArray.
- Parameters:
data (tvm.contrib.sparse.CSRNDArray or te.tensor.Tensor) – 2-D with shape [batch, in_dim]
weight (te.tensor.Tensor or tvm.contrib.sparse.CSRNDArray) – 2-D with shape [out_dim, in_dim]
bias (te.tensor.Tensor, optional) – 1-D with shape [out_dim]
- Returns:
output – 2-D with shape [batch, out_dim]
- Return type:
tvm.te.Tensor
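A minimal sketch of the dense-data / sparse-weight arrangement, one of the two combinations documented above; shapes and the non-zero count are hypothetical:

    import tvm
    from tvm import te, topi
    from tvm.contrib import sparse

    data = te.placeholder((16, 64), name="data", dtype="float32")
    # 32 x 64 weight matrix stored in CSR form.
    weight = sparse.placeholder((32, 64), nonzeros=128, dtype="float32", name="weight")
    bias = te.placeholder((32,), name="bias", dtype="float32")
    out = topi.sparse.dense(data, weight, bias)
    # out has shape (16, 32)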