tvm::relax::transform Namespace Reference

Classes

class  FusionPatternNode
 The pattern object used as the input of FuseOpsByPattern. For a binding to be fused, it must match the pattern, and the check function must return true. More...
 
class  FusionPattern
 
class  PatternCheckContextNode
 The input of FusionPattern::check. More...
 
class  PatternCheckContext
 

Typedefs

using Pass = tvm::transform::Pass
 
using PassInfo = tvm::transform::PassInfo
 
using PassContext = tvm::transform::PassContext
 
using Function = tvm::relax::Function
 
using DataflowBlock = tvm::relax::DataflowBlock
 

Functions

Pass LowerRuntimeBuiltin ()
 Perform builtin lowering to map most of the ops to VM builtin functions. More...
 
Pass VMShapeLower ()
 Lower the shape expression in relax to VM shape heap and TIR functions. More...
 
Pass CreateFunctionPass (const runtime::TypedPackedFunc< Function(Function, IRModule, PassContext)> &pass_func, int opt_level, String name, tvm::Array< String > required, bool traceable=false)
 Create a function pass. More...
 
Pass CreateDataflowBlockPass (const runtime::TypedPackedFunc< DataflowBlock(DataflowBlock, IRModule, PassContext)> &pass_func, int opt_level, String name, tvm::Array< String > required, bool traceable=false)
 Create a dataflow block pass. More...
 
Pass LambdaLift ()
 Perform lambda lifting to lift nested functions into the global scope. More...
 
Pass ToNonDataflow ()
 Transform all dataflow structure to non-dataflow version. More...
 
Pass RemovePurityChecking ()
 Activate force_pure on all pure functions in the module and unwrap all pure override ops into the normal versions. More...
 
Pass CallTIRRewrite ()
 Perform explicit tensor allocation for call_tir and call_dps_packed. More...
 
Pass RewriteDataflowReshape ()
 Convert all reshape-like call_tir nodes whose corresponding binding vars are DataflowVars to relax.reshape operator calls. The relax.reshape calls will be lowered to an external builtin function call in a subsequent pass, where the external builtin function performs a CreateView operation at runtime instead of a real data copy. Here "reshape-like" includes reshape, expand_dims, flatten, etc. More...
 
Pass StaticPlanBlockMemory ()
 The static memory planning pass on BindingBlock level. The pass reuses allocated memory on a best-effort basis to reduce the total amount of memory allocated. More...
 
Pass AttachGlobalSymbol ()
 Attach global_symbol to Relax functions and TIR PrimFuncs for codegen. More...
 
Pass Normalize ()
 Transform Relax IR to normal form: transform AST to A-normal form, and fill the checked_type_ and shape_ of expressions. More...
 
Pass NormalizeGlobalVar ()
 Possibly rename GlobalVars in an IRModule to ensure these properties: More...
 
Pass CanonicalizeBindings ()
 Simplify a Relax module by folding var bindings and match shape nodes, as well as tuple indices. Best used alongside constant folding and eliminating unused bindings. More...
 
Pass EliminateCommonSubexpr (bool call_only=false)
 
Pass BindParams (String func_name, Map< ObjectRef, ObjectRef > params)
 Bind params of function of the module to constant tensors. More...
 
Pass BindSymbolicVars (Map< ObjectRef, PrimExpr > binding_map, Optional< String > func_name=NullOpt)
 Bind symbolic vars to constant shape values. More...
 
Pass FoldConstant ()
 Fold constant expressions within dataflow blocks. More...
 
Pass LegalizeOps (Optional< Map< String, PackedFunc >> cmap, bool enable_warning=false)
 Legalize high-level operator calls in Relax functions to call_tir with corresponding low-level TIR PrimFuncs. More...
 
Pass RealizeVDevice ()
 Propagate virtual device information. More...
 
Pass AttachAttrLayoutFreeBuffers ()
 Attach layout free buffers to the tir::PrimFunc. More...
 
Pass SplitLayoutRewritePreproc ()
 Split the layout rewrite preproc block to a separate tir::PrimFunc. More...
 
Pass LiftTransformParams (Variant< Bool, Array< String >> shared_transform=Bool(false))
 Lift transformation of the parameters of a function. More...
 
Pass UpdateVDevice (VDevice new_vdevice, int64_t index)
 Update virtual device. More...
 
Pass ExpandTupleArguments ()
 Expand tuple arguments to internal functions. More...
 
Pass RemoveUnusedParameters ()
 Remove unused parameters to internal functions. More...
 
Pass RemoveUnusedOutputs ()
 Remove unused outputs from internal functions. More...
 
Pass AnnotateTIROpPattern ()
 Annotate Op Pattern Kind for TIR functions, which is used in FuseOps. More...
 
Pass FuseOps (int fuse_opt_level=-1)
 This pass groups bindings in a dataflow block of Relax functions and generates a new grouped Relax function for each group, according to the fusion algorithm described in the pass implementation. By grouping bindings into new Relax functions, we replace those bindings in the function being manipulated with calls to the new grouped functions. More...
 
Pass Gradient (String func_name, Optional< Array< Var >> require_grads=NullOpt, int target_index=0)
 Reverse-mode automatic differentiation. More...
 
Pass FuseOpsByPattern (const tvm::Array< FusionPattern > &patterns, bool bind_constants=true, bool annotate_codegen=false, const tvm::Array< String > &entry_function_names={})
 Apply pattern matching to each function in the given module, and group matched expressions into a new function. The end result is similar to FuseOps, but fusion is driven completely by the provided patterns. More...
 
Pass MergeCompositeFunctions ()
 Group one or multiple composite functions created by FuseOpsByPattern into a new function. The new function will be annotated with kCodegen and GlobalSymbol attributes, and it is intended to be offloaded to an external backend. More...
 
Pass FuseTIR ()
 Fuse Relax sub-functions into larger TIR functions when possible. This pass works together with FuseOps to perform operator fusion. More...
 
Pass RunCodegen (Optional< Map< String, Map< String, ObjectRef >>> target_options, Array< runtime::String > entry_functions)
 Run codegen. More...
 
Pass DecomposeOpsForInference (Optional< String > func_name)
 Decompose composite operators during inference. For example, the result of batch norm (a triple) will be simplified. Operators like Attention, Erf, etc. can also be simplified into several operators. More...
 
Pass DecomposeOpsForTraining (Optional< String > func_name)
 Decompose composite operators during training. For example, the result of batch norm (a triple) will be simplified. Operators like Attention, Erf, etc. can also be simplified into several operators. More...
 
Pass AlterOpImpl (const Map< String, tir::PrimFunc > &op_impl_map, const Map< String, Array< tir::IndexMap >> &op_buffer_transforms, const Map< String, Array< Array< IntImm >>> &axis_separators, const Map< String, Array< Array< IntImm >>> &input_axis_separators)
 Returns a pass that replaces PrimFuncs whose kOperatorName attribute matches an entry in op_impl_map with a replacement PrimFunc that may use different layouts on its I/O buffers. The layout transformations on the I/O buffers are given in op_buffer_transforms. The pass inserts layout transformations at the call sites of the replaced PrimFuncs to bring the I/O buffers into the expected layout. More...
 
Pass ConvertLayout (Map< String, Array< String >> desired_layouts)
 Layout conversion pass. More...
 
Pass ConvertToDataflow (int min_size=2)
 A pass that converts consecutive dataflow operations inside binding blocks into dataflow blocks. More...
 
Pass DeadCodeElimination (Array< runtime::String > entry_functions={})
 Dead code elimination. More...
 
Pass DataflowUseInplaceCalls ()
 Pass that changes calls to operators that can be done in-place (generally, these are elementwise operations) in dataflow blocks into in-place implementations. Supported operators will be replaced by calls to call_tir_inplace that invoke in-place PrimFunc implementations of those operators (which are based on the legalizations of those operators). More...
 
Pass ToMixedPrecision (const DataType &out_dtype, Optional< Array< String >> fp16_input_names=NullOpt)
 Automatic mixed precision pass. Currently the pass assumes the input module to be fp32 only, and will automatically cast fp32 to fp16 for certain ops. More...
 
Pass RewriteCUDAGraph ()
 Rewrite a Relax module for executing with CUDA graph. This pass identifies the regions that can be executed with CUDA graph and lifts them into new functions for runtime graph capturing. More...
 
Pass FewShotTuning (int valid_count, bool benchmark)
 The pass is designed for few-shot tuning of static-shape PrimFuncs. It examines all the blocks within the PrimFunc and conducts loop fusion, splitting, and other transformations based on MetaSchedule schedule rules, but directly samples from the search space instead of using the tuning algorithm. Users can specify the number of valid candidates to try and whether to use a runner for benchmarking. More...
 

Typedef Documentation

◆ DataflowBlock

using tvm::relax::transform::DataflowBlock = tvm::relax::DataflowBlock

◆ Function

using tvm::relax::transform::Function = tvm::relax::Function

◆ Pass

using tvm::relax::transform::Pass = tvm::transform::Pass

◆ PassContext

using tvm::relax::transform::PassContext = tvm::transform::PassContext

◆ PassInfo

using tvm::relax::transform::PassInfo = tvm::transform::PassInfo

Function Documentation

◆ AlterOpImpl()

Pass tvm::relax::transform::AlterOpImpl ( const Map< String, tir::PrimFunc > &  op_impl_map,
const Map< String, Array< tir::IndexMap >> &  op_buffer_transforms,
const Map< String, Array< Array< IntImm >>> &  axis_separators,
const Map< String, Array< Array< IntImm >>> &  input_axis_separators 
)

Returns a pass that replaces PrimFuncs whose kOperatorName attribute matches an entry in op_impl_map with a replacement PrimFunc that may use different layouts on its I/O buffers. The layout transformations on the I/O buffers are given in op_buffer_transforms. The pass inserts layout transformations at the call sites of the replaced PrimFuncs to bring the I/O buffers into the expected layout.

Parameters
op_impl_map: Map from the kOperatorName attr (e.g., relax.conv2d) to the replacement PrimFunc.
op_buffer_transforms: Map from the kOperatorName attr to the layout transformations on each of the PrimFunc's I/O buffers.
axis_separators: Map from the kOperatorName attr to the axis_separators of each buffer transform.
input_axis_separators: Map from the kOperatorName attr to the axis_separators for the input buffers.
Returns
The Pass.

◆ AnnotateTIROpPattern()

Pass tvm::relax::transform::AnnotateTIROpPattern ( )

Annotate Op Pattern Kind for TIR functions, which is used in FuseOps.

Note
It is an auto-detect pass for "unscheduled prim_funcs"; the op_pattern will be "opaque" if we cannot detect it. Users can also manually annotate the op_pattern attribute on a prim_func.
Returns
The Pass.

◆ AttachAttrLayoutFreeBuffers()

Pass tvm::relax::transform::AttachAttrLayoutFreeBuffers ( )

Attach layout free buffers to the tir::PrimFunc.

This pass is used to attach layout free buffers to the tir::PrimFunc according to the function usage in the relax function. Currently, the layout free buffers are the model weights and relax constants.

Note
We recommend applying CanonicalizeBindings before this pass.
Returns
The Pass.

◆ AttachGlobalSymbol()

Pass tvm::relax::transform::AttachGlobalSymbol ( )

Attach global_symbol to Relax functions and TIR PrimFuncs for codegen.

Returns
The Pass.

◆ BindParams()

Pass tvm::relax::transform::BindParams ( String  func_name,
Map< ObjectRef, ObjectRef >  params 
)

Bind params of function of the module to constant tensors.

Parameters
func_name: The name of the function in which to bind parameters.
params: The parameters to bind.
Returns
The Pass.
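
As a hedged illustration of calling this pass from C++ (the function name "main", the parameter name "weight", and the tensor shape are assumptions for the example, not part of the API):

    #include <tvm/relax/transform.h>
    #include <tvm/runtime/ndarray.h>

    using namespace tvm;

    IRModule BindWeight(IRModule mod) {
      // Hypothetical constant tensor for a parameter named "weight".
      runtime::NDArray weight = runtime::NDArray::Empty(
          {64, 64}, DLDataType{kDLFloat, 32, 1}, DLDevice{kDLCPU, 0});
      Map<ObjectRef, ObjectRef> params;
      params.Set(String("weight"), weight);
      // Bind the constant into "main" and return the updated module.
      return relax::transform::BindParams("main", params)(mod);
    }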

◆ BindSymbolicVars()

Pass tvm::relax::transform::BindSymbolicVars ( Map< ObjectRef, PrimExpr >  binding_map,
Optional< String >  func_name = NullOpt 
)

Bind symbolic vars to constant shape values.

Parameters
binding_map: The dictionary of symbolic variables and their constant shape values. Dictionary keys may be either a tir.Var or the string name of the variable. If a variable is referred to by name, the name must uniquely identify a symbolic variable in each function where it is used.
func_name: The name of the function in which to bind shape values. If NullOpt, all functions in the module will be updated.
Returns
The Pass.
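
A minimal sketch, assuming a symbolic variable named "n" appears in the module's function signatures:

    #include <tvm/relax/transform.h>

    using namespace tvm;

    IRModule BindN(IRModule mod) {
      Map<ObjectRef, PrimExpr> binding_map;
      // Refer to the symbolic variable by its string name (a tir::Var key also works).
      binding_map.Set(String("n"), IntImm(DataType::Int(64), 1024));
      // NullOpt: update every function in the module.
      return relax::transform::BindSymbolicVars(binding_map, NullOpt)(mod);
    }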

◆ CallTIRRewrite()

Pass tvm::relax::transform::CallTIRRewrite ( )

Perform explicit tensor allocation for call_tir and call_dps_packed.

Returns
The Pass.

◆ CanonicalizeBindings()

Pass tvm::relax::transform::CanonicalizeBindings ( )

Simplify a Relax module by folding var bindings and match shape nodes, as well as tuple indices. Best used alongside constant folding and eliminating unused bindings.

Note
If a dataflow var is used only in a binding to the dataflow block's output var (i.e., a non-dataflow var), this pass will also remove the dataflow var and replace the output var's binding with the dataflow var's direct definition.
Returns
The Pass.

◆ ConvertLayout()

Pass tvm::relax::transform::ConvertLayout ( Map< String, Array< String >>  desired_layouts)

Layout conversion pass.

Parameters
desired_layouts: The desired layouts for some operators.
Returns
The Pass.
Note
Operates only on dataflow blocks. ConvertToDataflow may need to be called first.
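
A sketch of building a desired_layouts map; the key "relax.nn.conv2d" and the data/kernel layout pair follow the convention of the Python API and are illustrative assumptions:

    #include <tvm/relax/transform.h>

    using namespace tvm;

    IRModule ToNHWC(IRModule mod) {
      Map<String, Array<String>> desired_layouts;
      // Assumed convention: first entry is the data layout, second the kernel layout.
      desired_layouts.Set("relax.nn.conv2d", Array<String>{"NHWC", "OHWI"});
      return relax::transform::ConvertLayout(desired_layouts)(mod);
    }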

◆ ConvertToDataflow()

Pass tvm::relax::transform::ConvertToDataflow ( int  min_size = 2)

A pass that converts consecutive dataflow operations inside binding blocks into dataflow blocks.

Parameters
min_size: The minimum number of consecutive dataflow bindings required for the pass to create a new dataflow block.
Returns
The Pass.
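
Several passes below note that they operate only on dataflow blocks; this pass is the usual way to provide them. A small sketch applying it with the default threshold:

    #include <tvm/relax/transform.h>

    using namespace tvm;

    IRModule MakeDataflowBlocks(IRModule mod) {
      // Require at least two consecutive dataflow bindings to form a block.
      return relax::transform::ConvertToDataflow(/*min_size=*/2)(mod);
    }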

◆ CreateDataflowBlockPass()

Pass tvm::relax::transform::CreateDataflowBlockPass ( const runtime::TypedPackedFunc< DataflowBlock(DataflowBlock, IRModule, PassContext)> &  pass_func,
int  opt_level,
String  name,
tvm::Array< String >  required,
bool  traceable = false 
)

Create a dataflow block pass.

Parameters
pass_func: The packed function that contains the optimization.
opt_level: The optimization level of the dataflow block pass.
name: The name of the dataflow block pass.
required: The list of passes that the dataflow block pass depends on.
traceable: Whether the dataflow block pass is traceable.
Returns
The created dataflowblock pass.

◆ CreateFunctionPass()

Pass tvm::relax::transform::CreateFunctionPass ( const runtime::TypedPackedFunc< Function(Function, IRModule, PassContext)> &  pass_func,
int  opt_level,
String  name,
tvm::Array< String >  required,
bool  traceable = false 
)

Create a function pass.

Parameters
pass_func: The packed function that contains the optimization.
opt_level: The optimization level of the function pass.
name: The name of the function pass.
required: The list of passes that the function pass depends on.
traceable: Whether the function pass is traceable.
Returns
The created function pass.
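
A hedged sketch of a trivial identity function pass built from a lambda (the pass name "IdentityPass" is arbitrary); CreateDataflowBlockPass is used analogously with a DataflowBlock-typed lambda:

    #include <tvm/relax/transform.h>

    using namespace tvm;
    using tvm::relax::Function;

    tvm::transform::Pass MakeIdentityPass() {
      // The lambda returns the function unchanged.
      runtime::TypedPackedFunc<Function(Function, IRModule, tvm::transform::PassContext)>
          pass_func = [](Function f, IRModule, tvm::transform::PassContext) { return f; };
      // opt_level 0 so the pass always runs; no required passes.
      return relax::transform::CreateFunctionPass(pass_func, /*opt_level=*/0,
                                                  "IdentityPass", /*required=*/{});
    }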

◆ DataflowUseInplaceCalls()

Pass tvm::relax::transform::DataflowUseInplaceCalls ( )

Pass that changes calls to operators that can be done in-place (generally, these are elementwise operations) in dataflow blocks into in-place implementations. Supported operators will be replaced by calls to call_tir_inplace that invoke in-place PrimFunc implementations of those operators (which are based on the legalizations of those operators).

Note
ConvertToDataflow may need to be called first to provide dataflow blocks.
Returns
The pass.

◆ DeadCodeElimination()

Pass tvm::relax::transform::DeadCodeElimination ( Array< runtime::String >  entry_functions = {})

Dead code elimination.

See also
RemoveAllUnused

Currently it removes:
  1. Unused local VarBindings (those where the bound var is unused and no impure operation is used).
  2. Unused Relax functions in the module. We detect the call chain from the entry functions and remove all unused functions.

Any binding blocks that are left empty will be removed by the normalizer.

Parameters
entry_functions: Names of functions that should be considered entry points, in addition to any externally exposed functions.
Returns
The Pass.

◆ DecomposeOpsForInference()

Pass tvm::relax::transform::DecomposeOpsForInference ( Optional< String >  func_name)

Decompose composite operators during inference. For example, the result of batch norm (a triple) will be simplified. Operators like Attention, Erf, etc. can also be simplified into several operators.

Parameters
func_name: The name of the specified function. If not specified, the pass will run on all functions.

◆ DecomposeOpsForTraining()

Pass tvm::relax::transform::DecomposeOpsForTraining ( Optional< String >  func_name)

Decompose composite operators during training. For example, the result of batch norm (a triple) will be simplified. Operators like Attention, Erf, etc. can also be simplified into several operators.

Parameters
func_name: The name of the specified function. If not specified, the pass will run on all functions.

◆ EliminateCommonSubexpr()

Pass tvm::relax::transform::EliminateCommonSubexpr ( bool  call_only = false)

Eliminate common subexpressions within functions.

Returns
The pass that eliminates common subexpressions.
Note
For nested functions, this pass performs CSE within those functions.
Parameters
call_only: If true, eliminate only call nodes.

◆ ExpandTupleArguments()

Pass tvm::relax::transform::ExpandTupleArguments ( )

Expand tuple arguments to internal functions.

Returns
The Pass

◆ FewShotTuning()

Pass tvm::relax::transform::FewShotTuning ( int  valid_count,
bool  benchmark 
)

The pass is designed for few-shot tuning of static-shape PrimFuncs. It examines all the blocks within the PrimFunc and conducts loop fusion, splitting, and other transformations based on MetaSchedule schedule rules, but directly samples from the search space instead of using the tuning algorithm. Users can specify the number of valid candidates to try and whether to use a runner for benchmarking.

Parameters
valid_count: The number of valid candidates to try.
benchmark: Whether to use a runner for benchmarking.
Returns
The Pass.

◆ FoldConstant()

Pass tvm::relax::transform::FoldConstant ( )

Fold constant expressions within dataflow blocks.

Note
ConvertToDataflow may need to be called first to provide dataflow blocks.
Returns
The Pass.

◆ FuseOps()

Pass tvm::relax::transform::FuseOps ( int  fuse_opt_level = -1)

This pass groups bindings in a dataflow block of Relax functions and generates a new grouped Relax function for each group, according to the fusion algorithm described in the pass implementation. By grouping bindings into new Relax functions, we replace those bindings in the function being manipulated with calls to the new grouped functions.

A follow-up pass named "FuseTIR" will generate a TIR PrimFunc for each grouped function.

Parameters
fuse_opt_level: The level of fuse optimization. -1 indicates that the level will be inferred from the pass context.
Returns
The Pass.

◆ FuseOpsByPattern()

Pass tvm::relax::transform::FuseOpsByPattern ( const tvm::Array< FusionPattern > &  patterns,
bool  bind_constants = true,
bool  annotate_codegen = false,
const tvm::Array< String > &  entry_function_names = {} 
)

Apply pattern matching to each function in the given module, and group matched expressions into a new function. The end result is similar to FuseOps, but fusion is driven completely by the provided patterns.

Parameters
patterns: The patterns to detect. The order of the patterns determines the order of priority in which they are matched. Higher-priority patterns should come earlier in the list.
bind_constants: Whether or not to keep bound constants of the grouped function.
annotate_codegen: If true, wrap each created composite function with another function, whose body consists only of a call to the composite function, and annotate the outer function with kCodegen and kGlobalSymbol attributes. The kCodegen attribute is set to the prefix of the corresponding pattern name, e.g. "dnnl" if the pattern name is "dnnl.conv2d_relu". This must be true if the created composite functions are intended to be offloaded to an external backend without using the MergeCompositeFunctions pass.
entry_function_names: The names of functions that should be considered entry points. If not specified, all externally exposed functions will be considered entry points.
Returns
The Pass.
Note
Only operates within dataflow blocks. ConvertToDataflow may need to be called first.

◆ FuseTIR()

Pass tvm::relax::transform::FuseTIR ( )

Fuse Relax sub-functions into larger TIR functions when possible. This pass works together with FuseOps to perform operator fusion.

Returns
The Pass.
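
Since FuseTIR is the follow-up to FuseOps, the two typically run back to back; a sketch composing them with tvm::transform::Sequential:

    #include <tvm/ir/transform.h>
    #include <tvm/relax/transform.h>

    using namespace tvm;

    IRModule RunFusion(IRModule mod) {
      transform::Pass seq = transform::Sequential(
          {relax::transform::FuseOps(),   // group bindings into grouped Relax functions
           relax::transform::FuseTIR()},  // fuse each group into a single TIR PrimFunc
          "RunFusion");
      return seq(mod);
    }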

◆ Gradient()

Pass tvm::relax::transform::Gradient ( String  func_name,
Optional< Array< Var >>  require_grads = NullOpt,
int  target_index = 0 
)

Reverse-mode automatic differentiation.

This pass will differentiate one function in the IRModule. Currently, the input function must have only one dataflow block.

For a given function specified by func_name, it generates a new function with the name func_name + "_adjoint". The new function computes the gradient of the differentiation target with respect to the arguments specified by require_grads of the original function.

If the function has only one return value, that value is the target. If the function has more than one return value, the target is the target_index-th return value. The target must be a scalar (0-dim tensor).

Parameters
func_name: The name of the specified function.
require_grads: The relax variables whose adjoints are needed. They must be parameters of the given function and must not contain duplicates. If not specified, the adjoints of all parameters will be computed.
target_index: If the specified function has more than one return value, the index of the return value to use as the target. If not specified, the first return value will be the target.
Returns
The Pass.
Note
ConvertToDataflow may need to be called first to provide dataflow blocks.
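
A minimal sketch; the function name "main" is an assumption, and NullOpt requests adjoints for all parameters:

    #include <tvm/relax/transform.h>

    using namespace tvm;

    IRModule Differentiate(IRModule mod) {
      // Produces "main_adjoint" alongside "main"; the target is the first return value.
      return relax::transform::Gradient("main", NullOpt, /*target_index=*/0)(mod);
    }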

◆ LambdaLift()

Pass tvm::relax::transform::LambdaLift ( )

Perform lambda lifting to lift nested functions into the global scope.

Returns
The Pass.

◆ LegalizeOps()

Pass tvm::relax::transform::LegalizeOps ( Optional< Map< String, PackedFunc >>  cmap,
bool  enable_warning = false 
)

Legalize high-level operator calls in Relax functions to call_tir with corresponding low-level TIR PrimFuncs.

For each high-level operator, we register the way of legalizing it as a function, which takes a context BlockBuilder and the Call being legalized as input, and returns the legalized call. Here the input BlockBuilder is mainly used for adding the PrimFunc created by call_te into the context IRModule.

The legalization function for each operator is registered as an attribute (with attribute key FLegalize) of the operator.

For customizability, users can pass their own legalizations via an optional customized map, whose keys are operator names and whose values are legalization functions. The default legalization function will be overridden by the customized one.

Parameters
cmap: The customized operator legalization function map. The customized function will override the default one.
enable_warning: Whether to print warnings for TIR functions not showing up in the database.
Returns
The Pass.
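
A sketch using only the default FLegalize registrations; a Map<String, runtime::PackedFunc> could be supplied instead of NullOpt to override specific operators:

    #include <tvm/relax/transform.h>

    using namespace tvm;

    IRModule Legalize(IRModule mod) {
      // NullOpt: no customized legalization map; also enable warnings.
      return relax::transform::LegalizeOps(NullOpt, /*enable_warning=*/true)(mod);
    }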

◆ LiftTransformParams()

Pass tvm::relax::transform::LiftTransformParams ( Variant< Bool, Array< String >>  shared_transform = Bool(false))

Lift transformation of the parameters of a function.

When some inputs of the function are marked as 'parameters' (the model weights), this pass identifies the transformations of those parameters and lifts them into a separate function called transform_params. transform_params takes a tuple of the original parameters as input and returns a tuple of the transformed parameters. The original function will be rewritten to accept a tuple of transformed parameters as input.

Users are expected to invoke the transform_params function at runtime and pass the transformed parameters to the original function as input.

Parameters
shared_transform: Indicates how the parameter transformation function will be produced.
  • False (default): A separate parameter transformation function will be produced for each function with the "num_input" attribute.
  • True: A single parameter transformation function will be produced, containing the preprocessing steps common across all functions with the "num_input" attribute.
  • List[str]: A single parameter transformation function will be produced, containing the preprocessing steps common across each function whose name is in the list. Passing a list of all functions with the "num_input" attribute or an empty list is equivalent to passing True.
Returns
The Pass.
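
A sketch selecting the shared variant; Bool(true) asks for a single transform_params shared across all functions carrying the "num_input" attribute:

    #include <tvm/relax/transform.h>

    using namespace tvm;

    IRModule LiftParams(IRModule mod) {
      // Variant<Bool, Array<String>>: pass Bool(true) for one shared function,
      // or an Array<String> of function names to restrict which functions share it.
      return relax::transform::LiftTransformParams(Bool(true))(mod);
    }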

◆ LowerRuntimeBuiltin()

Pass tvm::relax::transform::LowerRuntimeBuiltin ( )

Perform builtin lowering to map most of the ops to VM builtin functions.

Returns
The Pass.

◆ MergeCompositeFunctions()

Pass tvm::relax::transform::MergeCompositeFunctions ( )

Group one or multiple composite functions created by FuseOpsByPattern into a new function. The new function will be annotated with kCodegen and GlobalSymbol attributes, and it is intended to be offloaded to an external backend.

Returns
The Pass.

◆ Normalize()

Pass tvm::relax::transform::Normalize ( )

Transform Relax IR to normal form: transform AST to A-normal form, and fill the checked_type_ and shape_ of expressions.

Returns
The Pass.

◆ NormalizeGlobalVar()

Pass tvm::relax::transform::NormalizeGlobalVar ( )

Possibly rename GlobalVars in an IRModule to ensure these properties:

  1. (Invariant) First ensure every public function has the same name as its "global_symbol" attribute;
  2. To ensure 1., we may need to rename private functions with conflicting names;
  3. Finally, the name of every GlobalVar is unique in the IRModule.

◆ RealizeVDevice()

Pass tvm::relax::transform::RealizeVDevice ( )

Propagate virtual device information.

Returns
The Pass.

◆ RemovePurityChecking()

Pass tvm::relax::transform::RemovePurityChecking ( )

Activate force_pure on all pure functions in the module and unwrap all pure override ops into the normal versions.

This effectively means that there will be no more purity tracking; this is useful for low-level code generation.

Returns
The Pass.
Note
Should be used after ToNonDataflow()
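
Per the note, a sketch sequencing this pass after ToNonDataflow:

    #include <tvm/ir/transform.h>
    #include <tvm/relax/transform.h>

    using namespace tvm;

    IRModule StripPurity(IRModule mod) {
      transform::Pass seq = transform::Sequential(
          {relax::transform::ToNonDataflow(),  // remove dataflow blocks first
           relax::transform::RemovePurityChecking()});
      return seq(mod);
    }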

◆ RemoveUnusedOutputs()

Pass tvm::relax::transform::RemoveUnusedOutputs ( )

Remove unused outputs from internal functions.

Returns
The Pass

◆ RemoveUnusedParameters()

Pass tvm::relax::transform::RemoveUnusedParameters ( )

Remove unused parameters to internal functions.

Returns
The Pass

◆ RewriteCUDAGraph()

Pass tvm::relax::transform::RewriteCUDAGraph ( )

Rewrite a Relax module for executing with CUDA graph. This pass identifies the regions that can be executed with CUDA graph and lifts them into new functions for runtime graph capturing.

◆ RewriteDataflowReshape()

Pass tvm::relax::transform::RewriteDataflowReshape ( )

Convert all reshape-like call_tir nodes whose corresponding binding vars are DataflowVars to relax.reshape operator calls. The relax.reshape calls will be lowered to an external builtin function call in a subsequent pass, where the external builtin function performs a CreateView operation at runtime instead of a real data copy. Here "reshape-like" includes reshape, expand_dims, flatten, etc.

Returns
The Pass.
Note
The pass is applied at the first stage of Relax VM build, before rewriting call_tir, as this pass requires dataflow information.

◆ RunCodegen()

Pass tvm::relax::transform::RunCodegen ( Optional< Map< String, Map< String, ObjectRef >>>  target_options,
Array< runtime::String >  entry_functions 
)

Run codegen.

Parameters
target_options: Pairs of target name and compilation options.
entry_functions: List of entry functions.
Returns
The Pass.
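
A hedged sketch; the "tensorrt" target name, the empty option map, and the "main" entry function are placeholders, and real option keys depend on the external backend:

    #include <tvm/relax/transform.h>

    using namespace tvm;

    IRModule Offload(IRModule mod) {
      // Hypothetical backend name with no extra compilation options.
      Map<String, Map<String, ObjectRef>> target_options;
      target_options.Set("tensorrt", Map<String, ObjectRef>());
      return relax::transform::RunCodegen(target_options, {"main"})(mod);
    }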

◆ SplitLayoutRewritePreproc()

Pass tvm::relax::transform::SplitLayoutRewritePreproc ( )

Split the layout rewrite preproc block to a separate tir::PrimFunc.

This pass is used for weight pre-packing after meta_schedule tuning.

Returns
The Pass.

◆ StaticPlanBlockMemory()

Pass tvm::relax::transform::StaticPlanBlockMemory ( )

The static memory planning pass on BindingBlock level. The pass reuses allocated memory on a best-effort basis to reduce the total amount of memory allocated.

The pass "supports" dynamic shape in the way of TIR variable upper bound annotation. We can optionally annotate the attribute "tir_var_upper_bound" to Relax functions. The attribute value is a dict from strings to integers, denoting the name of TIR variables to the upper bound values of the TIR vars. Note: The annotated upper bound attribute only applies to TIR vars in the function signature for clarity.

For example, we can annotate a Relax function with R.func_attr({"tir_var_upper_bound": {"n": 1024}}). This means the variable named "n" in the function signature has an upper bound of 1024, and 1024 will be used as its value during memory planning.

Returns
The pass.
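
The C++ counterpart of the R.func_attr example above, attaching the upper-bound attribute with WithAttr (a sketch; the variable name "n" and the bound 1024 come from that example):

    #include <tvm/ir/function.h>
    #include <tvm/relax/expr.h>

    using namespace tvm;

    relax::Function AnnotateUpperBound(relax::Function func) {
      // Equivalent of R.func_attr({"tir_var_upper_bound": {"n": 1024}}).
      Map<String, Integer> upper_bounds;
      upper_bounds.Set("n", Integer(1024));
      return WithAttr(std::move(func), "tir_var_upper_bound", upper_bounds);
    }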

◆ ToMixedPrecision()

Pass tvm::relax::transform::ToMixedPrecision ( const DataType &  out_dtype,
Optional< Array< String >>  fp16_input_names = NullOpt 
)

Automatic mixed precision pass. Currently the pass assumes the input module to be fp32 only, and will automatically cast fp32 to fp16 for certain ops.

Parameters
out_dtype: The output data type of gemm/conv, which is the data type of the accumulator.
fp16_input_names: The names of function parameters whose dtype should become fp16. The function signature will change accordingly.
Returns
The Pass.
Note
Mainly operates within dataflow blocks. ConvertToDataflow may need to be called first.
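
A sketch using an fp32 accumulator for gemm/conv, a common mixed-precision setup (an assumption for the example, not a mandated default):

    #include <tvm/relax/transform.h>

    using namespace tvm;

    IRModule MixedPrecision(IRModule mod) {
      // Compute in fp16 where possible; accumulate gemm/conv outputs in fp32.
      return relax::transform::ToMixedPrecision(DataType::Float(32), NullOpt)(mod);
    }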

◆ ToNonDataflow()

Pass tvm::relax::transform::ToNonDataflow ( )

Transform all dataflow structure to non-dataflow version.

Returns
The Pass.

◆ UpdateVDevice()

Pass tvm::relax::transform::UpdateVDevice ( VDevice  new_vdevice,
int64_t  index 
)

Update virtual device.

Parameters
new_vdevice: The new virtual device.
index: The device index indicating the device on which the update will be performed.
Returns
The Pass.

◆ VMShapeLower()

Pass tvm::relax::transform::VMShapeLower ( )

Lower the shape expression in relax to VM shape heap and TIR functions.

Returns
The Pass.