tvm::relay::transform Namespace Reference

Typedefs

using Pass = tvm::transform::Pass
 
using PassNode = tvm::transform::PassNode
 
using PassInfo = tvm::transform::PassInfo
 
using PassInfoNode = tvm::transform::PassInfoNode
 
using PassContext = tvm::transform::PassContext
 
using PassContextNode = tvm::transform::PassContextNode
 
using Sequential = tvm::transform::Sequential
 

Functions

Pass CreateFunctionPass (const runtime::TypedPackedFunc< Function(Function, IRModule, PassContext)> &pass_func, int opt_level, String name, tvm::Array< String > required)
 
Pass DeadCodeElimination (bool inline_once=false)
 Remove expressions which do not affect the program result. More...
 
Pass LazyGradientInit ()
 Convert all expressions of TensorType into GradCell, an algebraic data type defined in gradient.rly. More...
 
Pass FoldConstant ()
 Fold constant expressions. More...
 
Pass SplitArgs (int max_function_args)
 Split functions with a huge number of arguments into smaller pieces. More...
 
Pass FuseOps (int fuse_opt_level=-1)
 Fuse operations in an expression into separate functions. More...
 
Pass DefuseOps ()
 The inverse operation of FuseOps. It transforms a fused program returned by FuseOps into the program before FuseOps. (i.e. x == DefuseOps(FuseOps(x))) More...
 
Pass RewriteAnnotatedOps (int fallback_device)
 Rewrite the annotated program. More...
 
Pass ToBasicBlockNormalForm ()
 Turn an expression to Basic Block Normal Form. More...
 
Pass ToANormalForm ()
 Turn a dataflow graph into Administrative Normal Form, or A-Normal Form (ANF). More...
 
Expr ToANormalForm (const Expr &expr)
 ToANormalForm, but on an incomplete graph. More...
 
Pass ToCPS ()
 Turn an expression into continuation-passing style (CPS). More...
 
Pass ToGraphNormalForm ()
 Remove let binding and directly share via pointer instead. More...
 
Pass PartialEval ()
 Aggressive constant propagation/constant folding/inlining. More...
 
Pass SimplifyInference ()
 Simplify certain operators during inference. For example, the result of a batch norm which is indexed at tuple index 0 will be unpacked into a number of simplified operators. More...
 
Pass FastMath ()
 Replaces non-linear activation functions with their fast but approximate counterparts. More...
 
Pass DynamicToStatic ()
 Find Dynamic ops and make them static. More...
 
Pass InferType ()
 Infer the type of an expression. More...
 
Pass EliminateCommonSubexpr (runtime::PackedFunc fskip=nullptr)
 Search for and eliminate common subexpressions. For example, if two expressions evaluate to an identical value, a single variable is created and both expressions are replaced by this variable. More...
 
Pass CombineParallelConv2D (uint64_t min_num_branches=3)
 Combine parallel 2d convolutions into a single convolution if the number of branches of this conv2d operator is not less than min_num_branches. More...
 
Pass CombineParallelDense (uint64_t min_num_branches=3, bool to_batch_matmul=true)
 Combine parallel dense ops into a single batch_matmul if the number of branches of this dense operator is not less than min_num_branches. More...
 
Pass CombineParallelBatchMatmul (uint64_t min_num_branches=3)
 Combine parallel batch_matmul ops into a single batch_matmul if the number of branches of this batch_matmul operator is not less than min_num_branches. More...
 
Pass BackwardFoldScaleAxis ()
 Backward fold axis scaling into weights of conv/dense operators. More...
 
Pass ForwardFoldScaleAxis ()
 Forward fold axis scaling into weights of conv/dense operators. More...
 
Pass FoldScaleAxis ()
 A sequential pass that executes ForwardFoldScaleAxis and BackwardFoldScaleAxis passes. More...
 
Pass CanonicalizeOps ()
 Canonicalize some operators to the simplified operators. For example, bias_add can be canonicalized to expand_dims and broadcast_add. More...
 
Pass AlterOpLayout ()
 Alter the layouts of operators or replace primitive operators with other expressions. More...
 
Pass AutoSchedulerLayoutRewrite ()
 Do layout rewrite according to the tile structure created by auto-scheduler. More...
 
Pass ConvertLayout (const Map< String, Array< String >> &desired_layouts)
 Given a destination layout, this pass transforms the expr such that most of the ops' input data layouts are changed to the destination layout. In an ideal situation, there are only two layout transforms, one at the start and one at the end. More...
 
Pass Legalize (const String &legalize_map_attr_name="FTVMLegalize")
 Legalizes an expr with another expression. More...
 
Pass CanonicalizeCast ()
 Canonicalize cast expressions to make operator fusion more efficient. More...
 
Pass EtaExpand (bool expand_constructor, bool expand_global_var)
 Add abstraction over a constructor or global variable bound to a function. More...
 
Pass PartitionGraph ()
 Partition a Relay program into regions that can be executed on different backends. More...
 
Pass Inline ()
 Inline the global functions marked as inline in a given Relay IRModule. More...
 
Pass RemoveUnusedFunctions (Array< runtime::String > entry_functions)
 Remove the unused functions in the Relay IRModule. More...
 
Pass SimplifyExpr ()
 Simplify the Relay expression. More...
 
Pass RelayToTIRTargetHook ()
 Run any RelayToTIR passes registered on the functions in a module. More...
 
Pass ManifestAlloc (Target target_host, Map< tvm::Integer, tvm::Target > targets)
 A pass for manifesting explicit memory allocations and rewriting specific dialects. More...
 
Pass PlanDevices (DLDeviceType default_device_type)
 Uses existing "on_device" and "device_copy" CallNodes to infer the device on which every Relay sub-expression should run (and where the result should be stored). Captures the result of that analysis using new "on_device" and "device_copy" CallNodes. See tvm::relay::transform::{LexicalOnDeviceMixin,DeviceAwareExprVisitor,DeviceAwareExprMutator} for help recovering the device for an arbitrary sub-expression in downstream transformations. More...
 

Typedef Documentation

◆ Pass

using Pass = tvm::transform::Pass

◆ PassContext

using PassContext = tvm::transform::PassContext

◆ PassContextNode

using PassContextNode = tvm::transform::PassContextNode

◆ PassInfo

using PassInfo = tvm::transform::PassInfo

◆ PassInfoNode

using PassInfoNode = tvm::transform::PassInfoNode

◆ PassNode

using PassNode = tvm::transform::PassNode

◆ Sequential

using Sequential = tvm::transform::Sequential

Function Documentation

◆ AlterOpLayout()

Pass tvm::relay::transform::AlterOpLayout ( )

Alter the layouts of operators or replace primitive operators with other expressions.

Returns
The pass.

◆ AutoSchedulerLayoutRewrite()

Pass tvm::relay::transform::AutoSchedulerLayoutRewrite ( )

Do layout rewrite according to the tile structure created by auto-scheduler.

Returns
The pass

◆ BackwardFoldScaleAxis()

Pass tvm::relay::transform::BackwardFoldScaleAxis ( )

Backward fold axis scaling into weights of conv/dense operators.

Returns
The pass.

◆ CanonicalizeCast()

Pass tvm::relay::transform::CanonicalizeCast ( )

Canonicalize cast expressions to make operator fusion more efficient.

Returns
The pass.

◆ CanonicalizeOps()

Pass tvm::relay::transform::CanonicalizeOps ( )

Canonicalize some operators to the simplified operators. For example, bias_add can be canonicalized to expand_dims and broadcast_add.

Returns
The pass.

◆ CombineParallelBatchMatmul()

Pass tvm::relay::transform::CombineParallelBatchMatmul ( uint64_t  min_num_branches = 3)

Combine parallel batch_matmul ops into a single batch_matmul if the number of branches of this batch_matmul operator is not less than min_num_branches.

Parameters
min_num_branches: The minimum number of branches.
Returns
The pass.

◆ CombineParallelConv2D()

Pass tvm::relay::transform::CombineParallelConv2D ( uint64_t  min_num_branches = 3)

Combine parallel 2d convolutions into a single convolution if the number of branches of this conv2d operator is not less than min_num_branches.

Parameters
min_num_branches: The minimum number of branches.
Returns
The pass.

◆ CombineParallelDense()

Pass tvm::relay::transform::CombineParallelDense ( uint64_t  min_num_branches = 3,
bool  to_batch_matmul = true 
)

Combine parallel dense ops into a single batch_matmul if the number of branches of this dense operator is not less than min_num_branches.

Parameters
min_num_branches: The minimum number of branches.
to_batch_matmul: Whether to combine parallel dense ops into a batch_matmul. If false, combine them into a single dense op.
Returns
The pass.

◆ ConvertLayout()

Pass tvm::relay::transform::ConvertLayout ( const Map< String, Array< String >> &  desired_layouts)

Given a destination layout, this pass transforms the expr such that most of the ops' input data layouts are changed to the destination layout. In an ideal situation, there are only two layout transforms, one at the start and one at the end.

This pass is not a part of relay.build and is expected to be called between the framework-to-Relay parser and the relay.build call. This is very helpful for hardware backends that support or prefer only one type of data layout.

RFC - https://discuss.tvm.ai/t/layout-conversion-pass/4009

This pass uses most of the AlterOpLayout and InferCorrectLayout infrastructure. For now, new layouts can be defined for conv2d ops. Most other operators try to adapt to their input layout using the InferCorrectLayout infrastructure.

Parameters
desired_layouts: Specifies a mapping from op_name to an array of desired layouts for each input. For example, Map("nn.conv2d", Array("NHWC", "OHWI")) specifies the desired layouts for the data and kernel inputs of nn.conv2d, respectively.
Returns
The pass.

◆ CreateFunctionPass()

Pass tvm::relay::transform::CreateFunctionPass ( const runtime::TypedPackedFunc< Function(Function, IRModule, PassContext)> &  pass_func,
int  opt_level,
String  name,
tvm::Array< String required 
)

◆ DeadCodeElimination()

Pass tvm::relay::transform::DeadCodeElimination ( bool  inline_once = false)

Remove expressions which do not affect the program result.

It will remove let bindings which are not referenced, and inline let bindings that are only used once.

For example, this pass should turn let a = 1 in 2 into 2, as the value of the expression does not depend on a.

As another example, let a = 1 in a will be optimized into 1.

Parameters
inline_once: Whether or not to inline bindings that are used only once.
Returns
the pass.
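The let-binding behavior described above can be sketched on a toy let-language in plain Python rather than Relay IR: `("let", var, value, body)` binds a variable and `("var", name)` references one. The helpers below are illustrative only, not TVM APIs.

```python
# Toy let-language: ("let", var, value, body) | ("var", name) | constants.
# Unreferenced bindings are dropped; with inline_once, bindings used
# exactly once are substituted into the body.

def count_uses(expr, name):
    if isinstance(expr, tuple):
        if expr[0] == "var":
            return 1 if expr[1] == name else 0
        if expr[0] == "let":
            _, v, val, body = expr
            uses = count_uses(val, name)
            if v != name:  # shadowing stops the search in the body
                uses += count_uses(body, name)
            return uses
    return 0

def substitute(expr, name, value):
    if isinstance(expr, tuple):
        if expr[0] == "var":
            return value if expr[1] == name else expr
        if expr[0] == "let":
            _, v, val, body = expr
            val = substitute(val, name, value)
            body = body if v == name else substitute(body, name, value)
            return ("let", v, val, body)
    return expr

def dead_code_elimination(expr, inline_once=False):
    if isinstance(expr, tuple) and expr[0] == "let":
        _, v, val, body = expr
        body = dead_code_elimination(body, inline_once)
        uses = count_uses(body, v)
        if uses == 0:
            return body                      # let a = 1 in 2  ==>  2
        if uses == 1 and inline_once:
            return substitute(body, v, val)  # let a = 1 in a  ==>  1
        return ("let", v, val, body)
    return expr
```

This mirrors the two examples in the description: an unused binding disappears, and a once-used binding is inlined when inline_once is set.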

◆ DefuseOps()

Pass tvm::relay::transform::DefuseOps ( )

The inverse operation of FuseOps. It transforms a fused program returned by FuseOps into the program before FuseOps. (i.e. x == DefuseOps(FuseOps(x)))

Returns
The pass.
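The round-trip property x == DefuseOps(FuseOps(x)) can be illustrated with a toy model (hypothetical helpers, not the TVM API), where a "program" is a list of primitive op names and fusion groups consecutive ops into composite functions:

```python
def fuse(program, group_size=2):
    # Group runs of consecutive ops into fused composites (tuples).
    return [tuple(program[i:i + group_size])
            for i in range(0, len(program), group_size)]

def defuse(fused):
    # Inverse operation: splice every composite back into primitives.
    return [op for group in fused for op in group]

program = ["conv2d", "add", "relu", "dense"]
fused = fuse(program)          # [("conv2d", "add"), ("relu", "dense")]
assert defuse(fused) == program
```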

◆ DynamicToStatic()

Pass tvm::relay::transform::DynamicToStatic ( )

Find Dynamic ops and make them static.

Searches the graph for dynamic ops. If the inputs to those dynamic ops are constants, it replaces them with static ops and re-runs type inference and constant folding. The pass repeats until the graph stops changing or the iteration limit is reached.

Returns
The pass.
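The "repeat until the graph stops changing" behavior is a fixpoint iteration. A minimal sketch of that driver, with a toy rewrite step standing in for TVM's actual rewriter:

```python
def run_to_fixpoint(step, program, max_iters=100):
    # Apply a rewrite step until the program is stable or the
    # iteration budget runs out.
    for _ in range(max_iters):
        rewritten = step(program)
        if rewritten == program:   # graph stopped changing
            return rewritten
        program = rewritten
    return program                 # iteration limit reached: give up

# Toy rewrite: a "dyn" op with a constant input becomes a folded
# "const"; ops with unknown inputs are left dynamic.
def make_static(prog):
    return [("const", arg * 2) if kind == "dyn" and isinstance(arg, int)
            else (kind, arg)
            for kind, arg in prog]

result = run_to_fixpoint(make_static, [("dyn", 3), ("dyn", None)])
```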

◆ EliminateCommonSubexpr()

Pass tvm::relay::transform::EliminateCommonSubexpr ( runtime::PackedFunc  fskip = nullptr)

Search for and eliminate common subexpressions. For example, if two expressions evaluate to an identical value, a single variable is created and both expressions are replaced by this variable.

Parameters
fskip: A callback that allows certain expressions to be skipped.
Returns
The pass.
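The description above can be sketched on toy expression trees (tuples like ("add", "x", "y")), where structurally identical subtrees are bound once and reused. These helpers are illustrative only:

```python
def eliminate_common_subexpr(expr, table=None, bindings=None):
    # table maps a rebuilt subtree to the variable that holds it;
    # bindings lists (var, subtree) pairs in creation order.
    if table is None:
        table, bindings = {}, []
    if not isinstance(expr, tuple):
        return expr, bindings
    rebuilt = tuple(
        eliminate_common_subexpr(c, table, bindings)[0]
        if isinstance(c, tuple) else c
        for c in expr)
    if rebuilt not in table:          # first time we see this value
        table[rebuilt] = f"t{len(table)}"
        bindings.append((table[rebuilt], rebuilt))
    return table[rebuilt], bindings   # reuse the shared variable

# (x + y) * (x + y): both identical subexpressions collapse into t0.
root, lets = eliminate_common_subexpr(
    ("mul", ("add", "x", "y"), ("add", "x", "y")))
```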

◆ EtaExpand()

Pass tvm::relay::transform::EtaExpand ( bool  expand_constructor,
bool  expand_global_var 
)

Add abstraction over a constructor or global variable bound to a function.

For example: square is transformed to fn (x: int32) -> int32 { square(x) }.

See https://en.wikipedia.org/wiki/Lambda_calculus#%CE%B7-conversion for more details.

Parameters
expand_constructor: Whether to expand constructors.
expand_global_var: Whether to expand global variables.
Returns
The pass.
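In Python terms the transformation amounts to wrapping a bare callable in a fresh function that forwards to it. This is a conceptual sketch only; Relay performs the rewrite on IR, not on Python callables:

```python
def eta_expand(f):
    # square  ==>  fn (x) { square(x) }
    def wrapper(x):
        return f(x)        # the abstraction just forwards to f
    return wrapper

square = lambda v: v * v
expanded = eta_expand(square)
# The eta-expanded form computes the same results as the original.
assert expanded(5) == square(5) == 25
```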

◆ FastMath()

Pass tvm::relay::transform::FastMath ( )

Replaces non-linear activation functions with their fast but approximate counterparts.

Returns
The Pass.

◆ FoldConstant()

Pass tvm::relay::transform::FoldConstant ( )

Fold constant expressions.

Returns
The pass.
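A minimal sketch of constant folding on a toy expression tree (not Relay IR): subtrees whose operands are all literals are evaluated at compile time, while anything symbolic is left alone.

```python
import operator

OPS = {"add": operator.add, "mul": operator.mul}

def fold_constant(expr):
    if not isinstance(expr, tuple):
        return expr                  # literal or free variable
    op, lhs, rhs = expr
    lhs, rhs = fold_constant(lhs), fold_constant(rhs)
    if (op in OPS and isinstance(lhs, (int, float))
            and isinstance(rhs, (int, float))):
        return OPS[op](lhs, rhs)     # both operands constant: fold now
    return (op, lhs, rhs)            # something symbolic remains

assert fold_constant(("add", ("mul", 2, 3), "x")) == ("add", 6, "x")
```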

◆ FoldScaleAxis()

Pass tvm::relay::transform::FoldScaleAxis ( )

A sequential pass that executes ForwardFoldScaleAxis and BackwardFoldScaleAxis passes.

Returns
The pass.
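The folding rests on linearity: a per-channel scale applied to the input of a linear op can be moved into its weights, i.e. dot(s * x, w) == dot(x, s * w). A quick numeric check in plain Python:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

x = [1.0, 2.0, 3.0]        # input activations (one per channel)
w = [0.5, -1.0, 2.0]       # weights
s = [2.0, 0.25, 3.0]       # per-channel scale

# Scaling the input...
scaled_input = dot([si * xi for si, xi in zip(s, x)], w)
# ...equals folding the scale into the weights.
folded_weights = dot(x, [si * wi for si, wi in zip(s, w)])
assert abs(scaled_input - folded_weights) < 1e-9
```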

◆ ForwardFoldScaleAxis()

Pass tvm::relay::transform::ForwardFoldScaleAxis ( )

Forward fold axis scaling into weights of conv/dense operators.

Returns
The pass.

◆ FuseOps()

Pass tvm::relay::transform::FuseOps ( int  fuse_opt_level = -1)

Fuse operations in an expression into separate functions.

Parameters
fuse_opt_level: The optimization level. If it is -1, it will be inferred from the pass context.
Returns
The pass.

◆ InferType()

Pass tvm::relay::transform::InferType ( )

Infer the type of an expression.

The result of type checking is a new expression with unambiguous type information filled in and its checked_type field populated with the result type.

Returns
The pass.

◆ Inline()

Pass tvm::relay::transform::Inline ( )

Inline the global functions marked as inline in a given Relay IRModule.

Returns
The pass.

◆ LazyGradientInit()

Pass tvm::relay::transform::LazyGradientInit ( )

Convert all expressions of TensorType into GradCell, an algebraic data type defined in gradient.rly.

This delays or decreases memory usage. Calls to ones, ones_like, zeros, and zeros_like do not immediately instantiate a tensor in memory; a tensor is instantiated only if needed. The pass also defines + and * operations between GradCell types, which can increase performance when using zero-filled or one-filled tensors, as is the case in reverse-mode AD.

Returns
the pass

◆ Legalize()

Pass tvm::relay::transform::Legalize ( const String legalize_map_attr_name = "FTVMLegalize")

Legalizes an expr with another expression.

Parameters
legalize_map_attr_name: The op's attr name which corresponds to the legalization rule function. One can collect and isolate similar types of legalization transformations using this param. For example, transformations that apply only to dialects can be isolated into an FTVMDialectLegalize string. This pass calls only those transformations that have been registered using the supplied legalize_map_attr_name.
Returns
The pass.

◆ ManifestAlloc()

Pass tvm::relay::transform::ManifestAlloc ( Target  target_host,
Map< tvm::Integer, tvm::Target targets 
)

A pass for manifesting explicit memory allocations and rewriting specific dialects.

Parameters
target_host: The target used by the host for compilation.
targets: The device type and target pairs for compilation.
Returns
The pass.

◆ PartialEval()

Pass tvm::relay::transform::PartialEval ( )

Aggressive constant propagation/constant folding/inlining.

It does as much computation at compile time as possible. This has two benefits: it removes runtime overhead and allows more optimization (typically fusion). As a side effect, code size may explode.

Returns
The pass.

◆ PartitionGraph()

Pass tvm::relay::transform::PartitionGraph ( )

Partition a Relay program into regions that can be executed on different backends.

Returns
The pass.

◆ PlanDevices()

Pass tvm::relay::transform::PlanDevices ( DLDeviceType  default_device_type)

Uses existing "on_device" and "device_copy" CallNodes to infer the device on which every Relay sub-expression should run (and where the result should be stored). Captures the result of that analysis using new "on_device" and "device_copy" CallNodes. See tvm::relay::transform::{LexicalOnDeviceMixin,DeviceAwareExprVisitor,DeviceAwareExprMutator} for help recovering the device for an arbitrary sub-expression in downstream transformations.

Parameters
default_device_type: The DLDeviceType for the default device.

◆ RelayToTIRTargetHook()

Pass tvm::relay::transform::RelayToTIRTargetHook ( )

Run any RelayToTIR passes registered on the functions in a module.

Returns
The pass.

◆ RemoveUnusedFunctions()

Pass tvm::relay::transform::RemoveUnusedFunctions ( Array< runtime::String entry_functions)

Remove the unused functions in the Relay IRModule.

Parameters
entry_functions: The entry functions used to search for the functions that are being used.
Returns
The pass.

◆ RewriteAnnotatedOps()

Pass tvm::relay::transform::RewriteAnnotatedOps ( int  fallback_device)

Rewrite the annotated program.

Parameters
fallback_device: The fallback device, which is the default device for operators without annotation.
Returns
The pass.

◆ SimplifyExpr()

Pass tvm::relay::transform::SimplifyExpr ( )

Simplify the Relay expression.

Returns
The pass.

◆ SimplifyInference()

Pass tvm::relay::transform::SimplifyInference ( )

Simplify certain operators during inference. For example, the result of a batch norm which is indexed at tuple index 0 will be unpacked into a number of simplified operators.

Returns
The Pass.
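For the batch-norm case mentioned above, the rewrite rests on the inference-time identity bn(x) = (x - mean) / sqrt(var + eps) * gamma + beta = x * scale + shift, where scale and shift are precomputed constants. A numeric sketch (plain Python floats, not Relay ops):

```python
import math

def batch_norm(x, gamma, beta, mean, var, eps=1e-5):
    # The original fused op, as evaluated at inference time.
    return (x - mean) / math.sqrt(var + eps) * gamma + beta

def simplified(x, gamma, beta, mean, var, eps=1e-5):
    scale = gamma / math.sqrt(var + eps)   # folded into a multiply
    shift = beta - mean * scale            # folded into an add
    return x * scale + shift

# The unpacked multiply/add form matches the fused batch norm.
for x in (0.0, 1.5, -2.0):
    assert abs(batch_norm(x, 1.2, 0.3, 0.5, 2.0) -
               simplified(x, 1.2, 0.3, 0.5, 2.0)) < 1e-9
```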

◆ SplitArgs()

Pass tvm::relay::transform::SplitArgs ( int  max_function_args)

Split functions with a huge number of arguments into smaller pieces.

Returns
The pass.

◆ ToANormalForm() [1/2]

Pass tvm::relay::transform::ToANormalForm ( )

Turn a dataflow graph into Administrative Normal Form, or A-Normal Form (ANF).

It will turn an expression that is in graph form (with implicit sharing) into an expression with explicit sharing (A-Normal Form).

The scope of the root expression is the global scope.

The scope of any non-root expression is the least common ancestor of all its scopes.

Values are ordered by post-DFS order in each scope.

Returns
The pass.
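A conceptual sketch of the let-introduction on toy tuple trees (it ignores scopes and Relay's real sharing-by-object-identity): every compound subexpression is bound to a fresh variable, and bindings appear in post-DFS order.

```python
def to_anf(expr, bindings=None):
    if bindings is None:
        bindings = []
    if not isinstance(expr, tuple):
        return expr, bindings            # atoms stay as they are
    op, *args = expr
    # Normalize children first, so bindings come out in post-DFS order.
    args = [to_anf(a, bindings)[0] for a in args]
    var = f"v{len(bindings)}"
    bindings.append((var, (op, *args)))  # explicit let binding
    return var, bindings

root, lets = to_anf(("mul", ("add", "x", 1), ("add", "x", 1)))
# root is a variable; lets spell out every intermediate value.
```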

◆ ToANormalForm() [2/2]

Expr tvm::relay::transform::ToANormalForm ( const Expr expr)

ToANormalForm, but on an incomplete graph.

Parameters
expr: The graph.
Returns
The transformed program.

◆ ToBasicBlockNormalForm()

Pass tvm::relay::transform::ToBasicBlockNormalForm ( )

Turn an expression to Basic Block Normal Form.

We define a block as a group of expressions implied by the scope structure.

Each graph node can only belong to a single block.

For any value that is used in multiple blocks, it has to be referred to by a Var which is defined in a block whose scope is the least common ancestor of the blocks in which the value is used.

Returns
The pass.

◆ ToCPS()

Pass tvm::relay::transform::ToCPS ( )

Turn an expression into continuation passing style(CPS).

CPS means that every function, instead of returning the result directly, is passed an extra function (called the continuation) as an argument, and passes the result to that continuation instead.

Thus, every function call has to be passed an extra argument that represents the rest of the computation (hence the name continuation).

Similarly, all other computation is wrapped to call the continuation as well.

Returns
the pass.
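The continuation-passing discipline above can be shown with ordinary Python callables (a sketch only, not Relay IR): each CPS function takes an extra argument k and hands it the result instead of returning.

```python
def add_cps(a, b, k):
    k(a + b)              # pass the result to the continuation

def square_cps(x, k):
    k(x * x)

# Compute (3 + 1)^2 in CPS: the continuation of the addition is
# "square it, then record the answer" -- the rest of the computation.
results = []
add_cps(3, 1, lambda s: square_cps(s, results.append))
assert results == [16]
```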

◆ ToGraphNormalForm()

Pass tvm::relay::transform::ToGraphNormalForm ( )

Remove let binding and directly share via pointer instead.

It will remove all let bindings and turn all variables bound by let into direct pointer references.

Returns
The pass.
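A toy sketch of the rewrite on tuple trees instead of Relay IR. Note one simplification: in Relay the substituted occurrences share one node by pointer, whereas this sketch duplicates the value.

```python
# Toy let-language: ("let", var, value, body) | ("var", name) | ops.
def to_graph_normal_form(expr, env=None):
    env = env or {}
    if isinstance(expr, tuple):
        if expr[0] == "let":
            _, v, val, body = expr
            extended = dict(env)
            extended[v] = to_graph_normal_form(val, env)
            return to_graph_normal_form(body, extended)
        if expr[0] == "var":
            return env.get(expr[1], expr)   # free vars stay as-is
        return tuple(to_graph_normal_form(c, env) if isinstance(c, tuple)
                     else c
                     for c in expr)
    return expr

e = ("let", "a", ("add", 1, 2), ("mul", ("var", "a"), ("var", "a")))
assert to_graph_normal_form(e) == ("mul", ("add", 1, 2), ("add", 1, 2))
```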