tvm
Enumerations
enum | PoolType : int { kAvgPool , kMaxPool } |
Pooling type. More... | |
Functions
tvm::te::Tensor | bias_add (const tvm::te::Tensor &data, const tvm::te::Tensor &bias, int axis) |
Creates an operation that calculates data + bias. More... | |
tvm::te::Tensor | binarize_pack (const tvm::te::Tensor &data, int axis, std::string name="PackedInput", std::string tag="binarize_pack") |
Binarization and bit-packing along a certain axis. More... | |
tvm::te::Tensor | binary_dense (const tvm::te::Tensor &data, const tvm::te::Tensor &weight) |
Binary matrix multiplication using xor and bit-count. More... | |
tvm::te::Tensor | dense (const tvm::te::Tensor &data, const tvm::te::Tensor &weight, const tvm::te::Tensor &bias, const DataType &out_dtype) |
Creates an operation that calculates data * weight^T + bias. More... | |
PrimExpr | all (Array< PrimExpr > args) |
Create a new expression of the logical and of all conditions in the arguments. More... | |
Tensor | dilate (const Tensor &x, Array< PrimExpr > strides, double dilation_value, std::string name="tensor", std::string tag=kInjective) |
Dilate data with given dilation value (0 by default). More... | |
Tensor | flatten (const Tensor &x, std::string name="tensor", std::string tag=kInjective) |
Flattens the input tensor into a 2-D tensor by collapsing higher dimensions. This requires the input tensor to have constant sized dimensions. More... | |
Tensor | group_norm (const Tensor &data, const Tensor &gamma, const Tensor &beta, int num_groups, int channel_axis, const Array< Integer > &axes, double epsilon, std::string name="T_group_norm", std::string tag=kInjective) |
Tensor | instance_norm (const Tensor &data, const Tensor &gamma, const Tensor &beta, const Array< Integer > &axis, double epsilon, std::string name="T_instance_norm", std::string tag=kInjective) |
Instance normalization. More... | |
Tensor | layer_norm (const Tensor &data, const Tensor &gamma, const Tensor &beta, const Array< Integer > &axis, double epsilon, std::string name="T_layer_norm", std::string tag=kInjective) |
Layer normalization. More... | |
Tensor | lrn (const Tensor &data, int size, int axis=1, float alpha=0.0001, float beta=0.75, float bias=2, std::string name="tensor", std::string tag=kBroadcast) |
Local response normalization inference operator. More... | |
Tensor | scale_shift_nchw (const Tensor &x, const Tensor &scale, const Tensor &shift, std::string name="ScaleShift", std::string tag=kBroadcast) |
Scale and shift with NCHW order. More... | |
Tensor | scale_shift_nhwc (const Tensor &x, const Tensor &scale, const Tensor &shift, std::string name="ScaleShift", std::string tag=kBroadcast) |
Scale and shift with NHWC order. More... | |
Tensor | pool_grad_impl (const Tensor &out_grad, const Tensor &x, const Array< PrimExpr > &kernel_size, const Array< PrimExpr > &stride_size, const Array< PrimExpr > &padding_size, PoolType pool_type, bool ceil_mode, const size_t height_axis, const size_t width_axis, bool count_include_pad) |
bool | find_depth_height_width (const std::string &layout, int *depth_axis, int *height_axis, int *width_axis) |
Find index of Depth, Height or Width dimension in a layout string. More... | |
bool | find_height_width (const std::string &layout, int *height_axis, int *width_axis) |
bool | find_width (const std::string &layout, int *width_axis) |
Tensor | pool_grad (const Tensor &out_grad, const Tensor &x, const Array< PrimExpr > &kernel_size, const Array< PrimExpr > &stride_size, const Array< PrimExpr > &padding_size, PoolType pool_type, bool ceil_mode, const std::string &layout="NCHW", bool count_include_pad=true) |
Calculate the gradient of pooling on the height and width dimensions of data. It decides the height and width dimensions according to the layout string, in which 'W' and 'H' mean width and height respectively. The width and height dimensions cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w and NCHW16h are not. See layout for more information on the layout string convention. More... | |
PrimExpr | start_index (const Var &out_index, const PrimExpr &odim, const PrimExpr &idim) |
PrimExpr | end_index (const Var &out_index, const PrimExpr &odim, const PrimExpr &idim) |
Tensor | adaptive_pool_impl (const Tensor &x, const Array< PrimExpr > &output_size, PoolType pool_type, const std::vector< int > &axes) |
Perform adaptive pooling on N-dimensional data. More... | |
Tensor | adaptive_pool (const Tensor &x, const Array< PrimExpr > &output_size, PoolType pool_type, const std::string &layout="NCHW") |
Adaptively perform pooling on the height and width dimensions of data. The pooling kernel and stride sizes are automatically chosen for the desired output sizes. It decides the height and width dimensions according to the layout string, in which 'W' and 'H' mean width and height respectively. The width and height dimensions cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w and NCHW16h are not. See layout for more information on the layout string convention. More... | |
Tensor | adaptive_pool3d (const Tensor &x, const Array< PrimExpr > &output_size, PoolType pool_type, const std::string &layout="NCDHW") |
Adaptively perform pooling on three dimensional data. See the two dimensional version above for details. More... | |
Tensor | adaptive_pool1d (const Tensor &x, const Array< PrimExpr > &output_size, PoolType pool_type, const std::string &layout="NCW") |
Adaptively perform pooling on one dimensional data. See the two dimensional version above for details. More... | |
Tensor | global_pool (const Tensor &x, PoolType pool_type, const std::string &layout="NCHW") |
Perform global pooling on the height and width dimensions of data. It decides the height and width dimensions according to the layout string, in which 'W' and 'H' mean width and height respectively. The width and height dimensions cannot be split. For example, NCHW, NCHW16c, ... are valid for global_pool, while NCHW16w and NCHW16h are not. See layout for more information on the layout string convention. More... | |
Tensor | pool_impl_nd (const Tensor &x, const Array< PrimExpr > &kernel_size, const Array< PrimExpr > &stride_size, const Array< PrimExpr > &dilation_size, const Array< PrimExpr > &padding_size, PoolType pool_type, bool ceil_mode, const std::vector< int > &axis, bool count_include_pad) |
Perform pooling on N dimensions of data. More... | |
Tensor | pool1d (const Tensor &x, const Array< PrimExpr > &kernel_size, const Array< PrimExpr > &stride_size, const Array< PrimExpr > &dilation_size, const Array< PrimExpr > &padding_size, PoolType pool_type, bool ceil_mode, const std::string &layout="NCW", bool count_include_pad=true) |
Perform pooling on the width dimension of data. The width axis is determined by the layout string, in which 'W' means width. The width dimension cannot be split. For example, NCW, NCW16c, etc. are valid for pool, while NCW16w is not. See layout for more information on the layout string convention. More... | |
Tensor | pool2d (const Tensor &x, const Array< PrimExpr > &kernel_size, const Array< PrimExpr > &stride_size, const Array< PrimExpr > &dilation_size, const Array< PrimExpr > &padding_size, PoolType pool_type, bool ceil_mode, const std::string &layout="NCHW", bool count_include_pad=true) |
Perform pooling on the height and width dimensions of data. It decides the height and width dimensions according to the layout string, in which 'W' and 'H' mean width and height respectively. The width and height dimensions cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w and NCHW16h are not. See layout for more information on the layout string convention. More... | |
Tensor | pool3d (const Tensor &x, const Array< PrimExpr > &kernel_size, const Array< PrimExpr > &stride_size, const Array< PrimExpr > &dilation_size, const Array< PrimExpr > &padding_size, PoolType pool_type, bool ceil_mode, const std::string &layout="NCDHW", bool count_include_pad=true) |
Perform pooling on the depth, height and width dimensions of data. It decides the depth, height and width dimensions according to the layout string, in which 'D', 'W' and 'H' mean depth, width and height respectively. The depth, height and width dimensions cannot be split. For example, NCDHW, NCDHW16c, etc. are valid for pool, while NCDHW16d, NCDHW16w and NCDHW16h are not. See layout for more information on the layout string convention. More... | |
Tensor | rms_norm (const Tensor &data, const Tensor &weight, const Array< Integer > &axis, double epsilon, std::string name="T_rms_norm", std::string tag=kInjective) |
Root mean square normalization. More... | |
Tensor | softmax (const Tensor &x, int axis=-1, std::string name="tensor", std::string tag="softmax_output") |
Softmax activation. More... | |
Tensor | log_softmax (const Tensor &x, std::string name="tensor", std::string tag="log_softmax_output") |
Log softmax activation. More... | |
enum tvm::topi::nn::PoolType : int |
Pooling type.

adaptive_pool() [inline]
Adaptively perform pooling on the height and width dimensions of data. The pooling kernel and stride sizes are automatically chosen for the desired output sizes. It decides the height and width dimensions according to the layout string, in which 'W' and 'H' mean width and height respectively. The width and height dimensions cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w and NCHW16h are not. See layout for more information on the layout string convention. A usage sketch follows the parameter list.
x | The input tensor |
output_size | Vector of two ints: {output_height, output_width} |
pool_type | The type of pooling operator |
layout | The input layout. Pooling supports any layout as long as 'H' and 'W' appear. The layout is expected to be composed of uppercase letters, lowercase letters and (optional) numbers, where an uppercase letter indicates a dimension and the corresponding lowercase letter (with its factor size) indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block]; the factor size 16 is not used in pooling, but other operators can use it to decide the output shape. Since pooling does not care about the factor size of dimensions other than H and W, one can pass NCHWc as well. |
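A minimal C++ usage sketch (not part of the header documentation): it builds a placeholder input with tvm::te::placeholder and adaptively pools it to an 8x8 spatial output. The include paths, shapes and output size are illustrative assumptions.

  #include <tvm/te/operation.h>     // tvm::te::placeholder (assumed header path)
  #include <tvm/topi/nn/pooling.h>  // adaptive_pool (assumed header path)

  // 1x3x32x32 NCHW input, adaptively average-pooled to an 8x8 spatial output.
  tvm::te::Tensor x =
      tvm::te::placeholder({1, 3, 32, 32}, tvm::DataType::Float(32), "x");
  tvm::te::Tensor y = tvm::topi::nn::adaptive_pool(
      x, {8, 8}, tvm::topi::nn::kAvgPool, "NCHW");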

adaptive_pool1d() [inline]
Adaptively perform pooling on one dimensional data. See the two dimensional version above for details.
x | The input tensor |
output_size | Vector of one int: {output_width} |
pool_type | The type of pooling operator |
layout | The input layout. The default is "NCW". |

adaptive_pool3d() [inline]
Adaptively perform pooling on three dimensional data. See the two dimensional version above for details.
x | The input tensor |
output_size | Vector of three ints: {output_depth, output_height, output_width} |
pool_type | The type of pooling operator |
layout | The input layout. The default is "NCDHW". |

adaptive_pool_impl() [inline]
Perform adaptive pooling on N-dimensional data.
x | The input tensor |
output_size | Vector of ints giving the output size in each pooled dimension |
pool_type | The type of pooling operator |
axes | Indices of the dimensions on which pooling is performed |

all()
Create a new expression of the logical and of all conditions in the arguments.
args | The arguments to find the logical conjunction of |

bias_add() [inline]
Creates an operation that calculates data + bias.
data | Tensor with shape [batch, in_dim] |
bias | Tensor with shape [batch]. |
axis | The axis to add the bias to. |
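A hedged usage sketch, assuming the bias length matches the size of the chosen axis; the include paths and shapes are illustrative, not taken from this header.

  #include <tvm/te/operation.h>
  #include <tvm/topi/nn/bias_add.h>  // bias_add (assumed header path)

  // Add a per-feature bias along axis 1 of a [batch, in_dim] tensor.
  tvm::te::Tensor data =
      tvm::te::placeholder({4, 16}, tvm::DataType::Float(32), "data");
  tvm::te::Tensor bias =
      tvm::te::placeholder({16}, tvm::DataType::Float(32), "bias");
  tvm::te::Tensor out = tvm::topi::nn::bias_add(data, bias, /*axis=*/1);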

binarize_pack() [inline]
Binarization and bit-packing along a certain axis.
data | N-D tensor, can be any layout |
axis | The axis along which to do binarization and bit-packing. This axis must have a size equal to an integer multiple of 32. |
name | The name of the operation |
tag | The tag to mark the operation |

binary_dense() [inline]
Binary matrix multiplication using xor and bit-count.
data | Tensor with shape [batch, in_dim], dtype is uint32 |
weight | Tensor with shape [out_dim, in_dim], dtype is uint32 |
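A sketch of the binarized pipeline, assuming the packing axis has a size that is a multiple of 32 (here 64): first binarize_pack, then binary_dense on the packed uint32 tensors. Include paths and shapes are illustrative.

  #include <tvm/te/operation.h>
  #include <tvm/topi/nn/bnn.h>  // binarize_pack / binary_dense (assumed header path)

  // Pack along the last axis (size must be a multiple of 32), then do the
  // xor + bit-count matmul on the packed uint32 tensors.
  tvm::te::Tensor data =
      tvm::te::placeholder({8, 64}, tvm::DataType::Float(32), "data");
  tvm::te::Tensor weight =
      tvm::te::placeholder({32, 64}, tvm::DataType::Float(32), "weight");
  tvm::te::Tensor packed_data = tvm::topi::nn::binarize_pack(data, /*axis=*/1);
  tvm::te::Tensor packed_weight = tvm::topi::nn::binarize_pack(weight, /*axis=*/1);
  tvm::te::Tensor out = tvm::topi::nn::binary_dense(packed_data, packed_weight);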

dense() [inline]
Creates an operation that calculates data * weight^T + bias.
data | Tensor with shape [batch, in_dim] |
weight | Tensor with shape [out_dim, in_dim] |
bias | Tensor with shape [out_dim]. Optional; to omit bias, pass Tensor() |
out_dtype | Output data type. Used for mixed precision. |
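A minimal sketch of dense with an explicit bias; the include paths, shapes and out_dtype choice are illustrative assumptions.

  #include <tvm/te/operation.h>
  #include <tvm/topi/nn/dense.h>  // dense (assumed header path)

  // data:[batch, in_dim] * weight:[out_dim, in_dim]^T + bias:[out_dim]
  tvm::te::Tensor data =
      tvm::te::placeholder({4, 128}, tvm::DataType::Float(32), "data");
  tvm::te::Tensor weight =
      tvm::te::placeholder({64, 128}, tvm::DataType::Float(32), "weight");
  tvm::te::Tensor bias =
      tvm::te::placeholder({64}, tvm::DataType::Float(32), "bias");
  tvm::te::Tensor out =
      tvm::topi::nn::dense(data, weight, bias, tvm::DataType::Float(32));
  // To omit the bias, pass tvm::te::Tensor() instead (see above).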

dilate() [inline]
Dilate data with given dilation value (0 by default).
x | The input tensor, this can have any number of dimensions and any layout. |
strides | Dilation stride for each dimension. Stride 1 means no dilation. |
dilation_value | Value used to dilate the input. |
name | The name of the operation |
tag | The tag to mark the operation |
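A sketch that dilates only the spatial axes of an NCHW tensor; include paths and shapes are illustrative.

  #include <tvm/te/operation.h>
  #include <tvm/topi/nn/dilate.h>  // dilate (assumed header path)

  // Insert zeros between elements of the two spatial axes:
  // stride 1 on N and C (no dilation), stride 2 on H and W.
  tvm::te::Tensor x =
      tvm::te::placeholder({1, 3, 8, 8}, tvm::DataType::Float(32), "x");
  tvm::te::Tensor y =
      tvm::topi::nn::dilate(x, {1, 1, 2, 2}, /*dilation_value=*/0.0);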

end_index() [inline]

find_depth_height_width() [inline]
Find index of Depth, Height or Width dimension in a layout string.
layout | The layout string |
depth_axis | Set to the index of depth ('D') if not nullptr. |
height_axis | Set to the index of height ('H') if not nullptr. |
width_axis | Set to the index of width ('W') if not nullptr. |

find_height_width() [inline]

find_width() [inline]

flatten() [inline]
Flattens the input tensor into a 2-D tensor by collapsing higher dimensions. This requires the input tensor to have constant sized dimensions.
x | The input tensor. |
name | The name of the operation |
tag | The tag to mark the operation |
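A minimal sketch; the include path and shape are illustrative.

  #include <tvm/te/operation.h>
  #include <tvm/topi/nn/flatten.h>  // flatten (assumed header path)

  // Collapse [1, 3, 32, 32] into [1, 3*32*32] = [1, 3072].
  tvm::te::Tensor x =
      tvm::te::placeholder({1, 3, 32, 32}, tvm::DataType::Float(32), "x");
  tvm::te::Tensor y = tvm::topi::nn::flatten(x);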

global_pool() [inline]
Perform global pooling on the height and width dimensions of data. It decides the height and width dimensions according to the layout string, in which 'W' and 'H' mean width and height respectively. The width and height dimensions cannot be split. For example, NCHW, NCHW16c, ... are valid for global_pool, while NCHW16w and NCHW16h are not. See layout for more information on the layout string convention. A usage sketch follows the parameter list.
x | The input tensor, represented in the given layout |
pool_type | The type of pooling operator |
layout | The input layout. Global pooling supports any layout as long as 'H' and 'W' appear. The layout is expected to be composed of uppercase letters, lowercase letters and (optional) numbers, where an uppercase letter indicates a dimension and the corresponding lowercase letter (with its factor size) indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block]; the factor size 16 is not used in pooling, but other operators can use it to decide the output shape. Since pooling does not care about the factor size of dimensions other than H and W, one can pass NCHWc as well. |
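A minimal sketch; the include path and shape are illustrative.

  #include <tvm/te/operation.h>
  #include <tvm/topi/nn/pooling.h>  // global_pool (assumed header path)

  // Global max pooling over H and W of an NCHW tensor: output is [1, 3, 1, 1].
  tvm::te::Tensor x =
      tvm::te::placeholder({1, 3, 32, 32}, tvm::DataType::Float(32), "x");
  tvm::te::Tensor y =
      tvm::topi::nn::global_pool(x, tvm::topi::nn::kMaxPool, "NCHW");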

group_norm() [inline]

instance_norm() [inline]
Instance normalization.
data | N-D tensor with shape [d_0, d_1, ..., d_{N-1}] |
gamma | K-D tensor with shape [r_0, r_1, ..., r_{K-1}] where K == len(axis) and d_{axis_k} == r_k |
beta | Optional, K-D tensor with shape [r_0, r_1, ..., r_{K-1}] where d_{axis_k} == r_k |
axis | The axis to normalize over (the axis along which mean and variance are computed). |
epsilon | The epsilon value to avoid division by zero. |
name | The name of the operation. |
tag | The tag to mark the operation. |

layer_norm() [inline]
Layer normalization.
data | N-D tensor with shape [d_0, d_1, ..., d_{N-1}] |
gamma | K-D tensor with shape [r_0, r_1, ..., r_{K-1}] where K == len(axis) and d_{axis_k} == r_k |
beta | Optional, K-D tensor with shape [r_0, r_1, ..., r_{K-1}] where d_{axis_k} == r_k |
axis | The axis to normalize over. |
epsilon | The epsilon value to avoid division by zero. |
name | The name of the operation. |
tag | The tag to mark the operation. |
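A sketch that normalizes over the last axis of a [batch, hidden] tensor; the include path, shapes and epsilon are illustrative assumptions.

  #include <tvm/te/operation.h>
  #include <tvm/topi/nn/layer_norm.h>  // layer_norm (assumed header path)

  // Normalize over axis 1 of an [8, 256] tensor.
  tvm::te::Tensor data =
      tvm::te::placeholder({8, 256}, tvm::DataType::Float(32), "data");
  tvm::te::Tensor gamma =
      tvm::te::placeholder({256}, tvm::DataType::Float(32), "gamma");
  tvm::te::Tensor beta =
      tvm::te::placeholder({256}, tvm::DataType::Float(32), "beta");
  tvm::te::Tensor out =
      tvm::topi::nn::layer_norm(data, gamma, beta, {1}, /*epsilon=*/1e-5);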

log_softmax() [inline]
Log softmax activation.
x | The input tensor. Must be 2-D; log softmax is performed along the second dimension |
name | The name of the operation |
tag | The tag to mark the operation |

lrn() [inline]
Local response normalization inference operator.
data | The input tensor. 4-D shape NCHW or NHWC |
size | Integer defining the normalization window size |
axis | Input data layout channel axis |
alpha | Float scaling factor |
beta | Exponent value |
bias | Offset to avoid dividing by zero |
name | The name of the operation |
tag | The tag to mark the operation |
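A sketch with the documented default parameters spelled out explicitly; the include path and shapes are illustrative.

  #include <tvm/te/operation.h>
  #include <tvm/topi/nn/local_response_norm.h>  // lrn (assumed header path)

  // Local response normalization over the channel axis of an NCHW tensor,
  // with a window of 5 channels.
  tvm::te::Tensor data =
      tvm::te::placeholder({1, 16, 28, 28}, tvm::DataType::Float(32), "data");
  tvm::te::Tensor out = tvm::topi::nn::lrn(
      data, /*size=*/5, /*axis=*/1, /*alpha=*/0.0001f, /*beta=*/0.75f, /*bias=*/2.0f);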

pool1d() [inline]
Perform pooling on the width dimension of data. The width axis is determined by the layout string, in which 'W' means width. The width dimension cannot be split. For example, NCW, NCW16c, etc. are valid for pool, while NCW16w is not. See layout for more information on the layout string convention.
x | The input tensor. |
kernel_size | Vector of one int: {kernel_width} |
stride_size | Vector of one int: {stride_width} |
dilation_size | Vector of one int: {dilation_width} |
padding_size | Vector of two ints: {head_pad_width, tail_pad_width} |
pool_type | The type of pooling operator |
ceil_mode | Whether to use ceil when calculating the output size |
layout | The input layout. Pooling supports any layout as long as 'W' appears. The layout is expected to be composed of uppercase letters, lowercase letters and (optional) numbers, where an uppercase letter indicates a dimension and the corresponding lowercase letter (with its factor size) indicates the split dimension. For example, NCW16c can describe a 4-D tensor of [batch_size, channel, width, channel_block]; the factor size 16 is not used in pooling, but other operators can use it to decide the output shape. Since pooling does not care about the factor size of dimensions other than W, one can pass NCWc as well. |
count_include_pad | Whether to include padding in the calculation when pool_type is 'avg' |

pool2d() [inline]
Perform pooling on the height and width dimensions of data. It decides the height and width dimensions according to the layout string, in which 'W' and 'H' mean width and height respectively. The width and height dimensions cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w and NCHW16h are not. See layout for more information on the layout string convention. A usage sketch follows the parameter list.
x | The input tensor. |
kernel_size | Vector of two ints: {kernel_height, kernel_width} |
stride_size | Vector of two ints: {stride_height, stride_width} |
dilation_size | Vector of two ints: {dilation_height, dilation_width} |
padding_size | Vector of four ints: {head_pad_height, head_pad_width, tail_pad_height, tail_pad_width} |
pool_type | The type of pooling operator |
ceil_mode | Whether to use ceil when calculating the output size |
layout | The input layout. Pooling supports any layout as long as 'H' and 'W' appear. The layout is expected to be composed of uppercase letters, lowercase letters and (optional) numbers, where an uppercase letter indicates a dimension and the corresponding lowercase letter (with its factor size) indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block]; the factor size 16 is not used in pooling, but other operators can use it to decide the output shape. Since pooling does not care about the factor size of dimensions other than H and W, one can pass NCHWc as well. |
count_include_pad | Whether to include padding in the calculation when pool_type is 'avg' |
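A sketch of a 3x3 max pooling. It passes the padding as head/tail pairs per pooled axis (N*2 values, matching pool_impl_nd below); the include path and shapes are illustrative assumptions.

  #include <tvm/te/operation.h>
  #include <tvm/topi/nn/pooling.h>  // pool2d (assumed header path)

  // 3x3 max pooling, stride 2, no dilation, 1 pixel of padding on every side.
  tvm::te::Tensor x =
      tvm::te::placeholder({1, 3, 32, 32}, tvm::DataType::Float(32), "x");
  tvm::te::Tensor y = tvm::topi::nn::pool2d(
      x, /*kernel_size=*/{3, 3}, /*stride_size=*/{2, 2}, /*dilation_size=*/{1, 1},
      /*padding_size=*/{1, 1, 1, 1}, tvm::topi::nn::kMaxPool,
      /*ceil_mode=*/false, "NCHW", /*count_include_pad=*/true);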

pool3d() [inline]
Perform pooling on the depth, height and width dimensions of data. It decides the depth, height and width dimensions according to the layout string, in which 'D', 'W' and 'H' mean depth, width and height respectively. The depth, height and width dimensions cannot be split. For example, NCDHW, NCDHW16c, etc. are valid for pool, while NCDHW16d, NCDHW16w and NCDHW16h are not. See layout for more information on the layout string convention.
x | The input tensor. |
kernel_size | Vector of three ints: {kernel_depth, kernel_height, kernel_width} |
stride_size | Vector of three ints: {stride_depth, stride_height, stride_width} |
dilation_size | Vector of three ints: {dilation_depth, dilation_height, dilation_width} |
padding_size | Vector of six ints: {head_pad_depth, head_pad_height, head_pad_width, tail_pad_depth, tail_pad_height, tail_pad_width} |
pool_type | The type of pooling operator |
ceil_mode | Whether to use ceil when calculating the output size |
layout | The input layout. Pooling supports any layout as long as 'D', 'H' and 'W' appear. The layout is expected to be composed of uppercase letters, lowercase letters and (optional) numbers, where an uppercase letter indicates a dimension and the corresponding lowercase letter (with its factor size) indicates the split dimension. For example, NCDHW16c can describe a 6-D tensor of [batch_size, channel, depth, height, width, channel_block]; the factor size 16 is not used in pooling, but other operators can use it to decide the output shape. Since pooling does not care about the factor size of dimensions other than D, H and W, one can pass NCDHWc as well. |
count_include_pad | Whether to include padding in the calculation when pool_type is 'avg' |

pool_grad() [inline]
Calculate the gradient of pooling on the height and width dimensions of data. It decides the height and width dimensions according to the layout string, in which 'W' and 'H' mean width and height respectively. The width and height dimensions cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w and NCHW16h are not. See layout for more information on the layout string convention. A usage sketch follows the parameter list.
out_grad | The output gradient tensor. |
x | The input tensor. |
kernel_size | Vector of two ints: {kernel_height, kernel_width} |
stride_size | Vector of two ints: {stride_height, stride_width} |
padding_size | Vector of four ints: {head_pad_height, head_pad_width, tail_pad_height, tail_pad_width} |
pool_type | The type of pooling operator |
ceil_mode | Whether to use ceil when calculating the output size |
layout | The input layout. Pooling supports any layout as long as 'H' and 'W' appear. The layout is expected to be composed of uppercase letters, lowercase letters and (optional) numbers, where an uppercase letter indicates a dimension and the corresponding lowercase letter (with its factor size) indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block]; the factor size 16 is not used in pooling, but other operators can use it to decide the output shape. Since pooling does not care about the factor size of dimensions other than H and W, one can pass NCHWc as well. |
count_include_pad | Whether to include padding in the calculation when pool_type is 'avg' |
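A sketch of the gradient of a 2x2, stride-2 max pooling. out_grad is assumed to carry the pooled output shape, and padding is passed as head/tail pairs; include path and shapes are illustrative.

  #include <tvm/te/operation.h>
  #include <tvm/topi/nn/pooling.h>  // pool_grad (assumed header path)

  // Gradient of a 2x2 / stride-2 max pooling: out_grad has the pooled
  // shape [1, 3, 16, 16], dx has the input shape [1, 3, 32, 32].
  tvm::te::Tensor x =
      tvm::te::placeholder({1, 3, 32, 32}, tvm::DataType::Float(32), "x");
  tvm::te::Tensor out_grad =
      tvm::te::placeholder({1, 3, 16, 16}, tvm::DataType::Float(32), "out_grad");
  tvm::te::Tensor dx = tvm::topi::nn::pool_grad(
      out_grad, x, /*kernel_size=*/{2, 2}, /*stride_size=*/{2, 2},
      /*padding_size=*/{0, 0, 0, 0}, tvm::topi::nn::kMaxPool,
      /*ceil_mode=*/false, "NCHW", /*count_include_pad=*/true);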

pool_grad_impl() [inline]

pool_impl_nd() [inline]
Perform pooling on N dimensions of data.
x | The input tensor |
kernel_size | Vector of N ints |
stride_size | Vector of N ints |
dilation_size | Vector of N ints |
padding_size | Vector of N*2 ints [head_pad_d1, head_pad_d2, ..., head_pad_dN, tail_pad_d1, tail_pad_d2, ..., tail_pad_dN] |
pool_type | The type of pooling operator |
ceil_mode | Whether to use ceil when calculating the output size |
axis | Vector of indices for the N dimensions |
count_include_pad | Whether to include padding in the calculation |

rms_norm() [inline]
Root mean square normalization.
data | N-D tensor with shape [d_0, d_1, ..., d_{N-1}] |
weight | K-D tensor with shape [r_0, r_1, ..., r_{K-1}] where K == len(axis) and d_{axis_k} == r_k |
axis | The axis to normalize over. |
epsilon | The epsilon value to avoid division by zero. |
name | The name of the operation. |
tag | The tag to mark the operation. |
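A sketch that RMS-normalizes over the last axis; the include path, shapes and epsilon are illustrative assumptions.

  #include <tvm/te/operation.h>
  #include <tvm/topi/nn/rms_norm.h>  // rms_norm (assumed header path)

  // RMS-normalize over axis 1 of an [8, 256] tensor.
  tvm::te::Tensor data =
      tvm::te::placeholder({8, 256}, tvm::DataType::Float(32), "data");
  tvm::te::Tensor weight =
      tvm::te::placeholder({256}, tvm::DataType::Float(32), "weight");
  tvm::te::Tensor out =
      tvm::topi::nn::rms_norm(data, weight, {1}, /*epsilon=*/1e-5);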

scale_shift_nchw() [inline]
Scale and shift with NCHW order.
x | The input tensor. |
scale | Scale tensor, 1-D of size channel |
shift | Shift tensor, 1-D of size channel |
name | The name of the operation |
tag | The tag to mark the operation |

scale_shift_nhwc() [inline]
Scale and shift with NHWC order.
x | The input tensor. |
scale | Scale tensor, 1-D of size channel |
shift | Shift tensor, 1-D of size channel |
name | The name of the operation |
tag | The tag to mark the operation |

softmax() [inline]
Softmax activation.
x | The input tensor. Can be any dimension |
axis | The channel axis along which softmax is performed |
name | The name of the operation |
tag | The tag to mark the operation |
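A sketch of softmax and log_softmax on a 2-D tensor; the include path and shapes are illustrative.

  #include <tvm/te/operation.h>
  #include <tvm/topi/nn/softmax.h>  // softmax / log_softmax (assumed header path)

  // Softmax over the last axis, and log softmax over the second axis
  // of an [8, 10] tensor.
  tvm::te::Tensor x =
      tvm::te::placeholder({8, 10}, tvm::DataType::Float(32), "x");
  tvm::te::Tensor probs = tvm::topi::nn::softmax(x, /*axis=*/-1);
  tvm::te::Tensor log_probs = tvm::topi::nn::log_softmax(x);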