tvm::s_tir Namespace Reference

Namespaces

 attr
 
 backend
 
 meta_schedule
 
 transform
 

Classes

struct  MemCpyDetails
 Helper struct for return value of IdentifyMemCpy. More...
 
class  InstructionKindNode
 Kind of an instruction, e.g. Split, Reorder, etc. Besides the name, every kind of instruction has its own properties, including: 1) a boolean indicating whether the instruction is pure, i.e. changes nothing in the schedule state; 2) a functor that applies the instruction to a TensorIR schedule; 3) a functor that converts the instruction to a statement in Python syntax; 4) a functor that serializes its attributes to JSON; 5) a functor that deserializes its attributes from JSON. More...
 
class  InstructionKind
 Managed reference to InstructionKindNode. More...
 
class  InstructionNode
 Schedule instructions, each of which corresponds to a schedule primitive. More...
 
class  Instruction
 Managed reference to InstructionNode. More...
 
class  InstructionKindRegEntry
 An entry in the registry of InstructionKind. More...
 
class  SBlockRVNode
 A random variable that evaluates to a TensorIR block. More...
 
class  SBlockRV
 Managed reference to SBlockRVNode. More...
 
class  LoopRVNode
 A random variable that evaluates to a TensorIR for loop. More...
 
class  LoopRV
 Managed reference to LoopRVNode. More...
 
class  ScheduleNode
 The user-facing schedule class. More...
 
class  Schedule
 Managed reference to ScheduleNode. More...
 
struct  SBlockInfo
 The information about a TensorIR block. It contains two categories of information: 1) info on the block scope rooted at a specific block, including dependency tracking and flags indicating whether the scope is a stage pipeline; 2) info on the block itself, including whether the block has a quasi-affine binding and whether the regions it reads are completely covered by their producers. More...
 
class  ScheduleStateNode
 The state of scheduling, which exposes a Replace method as the primary interface for all the scheduling primitives to manipulate the TensorIR. More...
 
class  ScheduleState
 Managed reference to ScheduleStateNode. More...
 
class  TraceNode
 An execution trace of a scheduling program. More...
 
class  Trace
 Managed reference to TraceNode. More...
 

Typedefs

using FInstructionApply = ffi::TypedFunction< ffi::Array< Any >(Schedule sch, const ffi::Array< Any > &inputs, const ffi::Array< Any > &attrs, const Any &decision)>
 Type of the functor that applies the instruction to a TensorIR schedule. More...
 
using FInstructionAsPython = ffi::TypedFunction< ffi::String(const ffi::Array< Any > &inputs, const ffi::Array< Any > &attrs, const Any &decision, const ffi::Array< ffi::String > &outputs)>
 Type of the functor that converts the instruction to a statement in Python syntax. More...
 
using FInstructionAttrsAsJSON = ffi::TypedFunction< ObjectRef(ffi::Array< Any > attrs)>
 Type of the functor that serializes its attributes to JSON. More...
 
using FInstructionAttrsFromJSON = ffi::TypedFunction< ffi::Array< Any >(ObjectRef json_attrs)>
 Type of the functor that deserializes its attributes from JSON. More...
 
using ExprRV = PrimExpr
 An expr random variable. More...
 
using ExprRVNode = PrimExprNode
 
using FTraceDecisionProvider = ffi::TypedFunction< Any(const Instruction &inst, const ffi::Array< Any > &inputs, const ffi::Array< Any > &attrs, const Any &decision)>
 A callback that allows users to mutate decisions on the fly when applying instructions. The signature of the callback is: More...
 

Enumerations

enum class  ScheduleErrorRenderLevel : int32_t { kDetail = 0 , kFast = 1 , kNone = 2 }
 The level of detailed error message rendering. More...
 
enum class  BufferIndexType : int32_t { kRead = 0 , kWrite = 1 }
 Type of buffer index. More...
 
enum  ScheduleDebugMask : uint32_t { kVerifySRefTree = 1 , kVerifyCachedFlags = 2 }
 The bitmask of the debug flag in the ScheduleStateNode. More...
 

Functions

double EstimateTIRFlops (const Stmt &stmt)
 Estimate the FLOPs of a TIR fragment. More...
 
double EstimateTIRFlops (const IRModule &mod)
 Estimate the FLOPs of TIRs in an IRModule. More...
 
bool IsPureFunction (const PrimFunc &func, bool assert_on_error=false)
 Analyze the side effect of a function. More...
 
bool VerifyGPUCode (const PrimFunc &func, ffi::Map< ffi::String, PrimExpr > constraints)
 Verify the correctness of a GPU code. More...
 
std::optional< MemCpyDetails > IdentifyMemCpy (const For &loop, arith::Analyzer *analyzer)
 Identify whether a For loop is semantically equivalent to MemCpy. More...
 
ffi::Map< ffi::String, ffi::Map< ffi::String, Integer > > CalculateAllocatedBytes (const PrimFunc &func)
 Calculate the allocated memory per scope in bytes needed inside the TIR PrimFunc. More...
 
ffi::Map< ffi::String, ffi::Map< ffi::String, Integer > > CalculateAllocatedBytes (const IRModule &mod)
 Calculate the allocated memory per scope in bytes for each function inside the module. More...
 
ffi::Array< tvm::transform::Pass > GetVTCMCompactionPasses ()
 Get the list of lowering passes to calculate the compacted VTCM allocation size. More...
 
bool VerifyVTCMLimit (const IRModule &mod, Integer limit)
 Verifies that the VTCM usage of all prim_funcs in the given IRModule is within the provided limit. More...
 
bool VerifyVTCMLimit (const PrimFunc &func, Integer limit)
 Verifies that the VTCM usage of the given prim_func is within the provided limit. More...
 
tirx::PrimFunc RenewDefs (const tirx::PrimFunc &func)
 Renew the definition nodes for a TIR, including Var, Buffer and IterVar. This pass works as a simple DeepCopy to duplicate a function with different Vars and Buffers but the same behavior. More...
 

Typedef Documentation

◆ ExprRV

using tvm::s_tir::ExprRV = typedef PrimExpr

An expr random variable.

◆ ExprRVNode

using tvm::s_tir::ExprRVNode = typedef PrimExprNode

◆ FInstructionApply

using tvm::s_tir::FInstructionApply = typedef ffi::TypedFunction<ffi::Array<Any>(Schedule sch, const ffi::Array<Any>& inputs, const ffi::Array<Any>& attrs, const Any& decision)>

Type of the functor that applies the instruction to a TensorIR schedule.

Parameters
sch: The schedule the instruction is applied to
inputs: The input random variables
attrs: Instruction attributes
decision: Decisions made on the instruction
Returns
The functor returns an array of output random variables

◆ FInstructionAsPython

using tvm::s_tir::FInstructionAsPython = typedef ffi::TypedFunction<ffi::String(const ffi::Array<Any>& inputs, const ffi::Array<Any>& attrs, const Any& decision, const ffi::Array<ffi::String>& outputs)>

Type of the functor that converts the instruction to a statement in python syntax.

Parameters
inputs: Names of the input random variables
attrs: Instruction attributes
decision: Decisions made on the instruction
outputs: Names of the output random variables
Returns
A string representing the python api call

◆ FInstructionAttrsAsJSON

using tvm::s_tir::FInstructionAttrsAsJSON = typedef ffi::TypedFunction<ObjectRef(ffi::Array<Any> attrs)>

Type of the functor that serializes its attributes to JSON.

Parameters
attrs: The attributes to be serialized
Returns
The serialized attributes
Note
This functor is nullable

◆ FInstructionAttrsFromJSON

using tvm::s_tir::FInstructionAttrsFromJSON = typedef ffi::TypedFunction<ffi::Array<Any>(ObjectRef json_attrs)>

Type of the functor that deserializes its attributes from JSON.

Parameters
json_attrs: The JSON attributes to be deserialized
Returns
An array of deserialized attributes
Note
This functor is nullable

◆ FTraceDecisionProvider

using tvm::s_tir::FTraceDecisionProvider = typedef ffi::TypedFunction<Any(const Instruction& inst, const ffi::Array<Any>& inputs, const ffi::Array<Any>& attrs, const Any& decision)>

A callback that allows users to mutate decisions on the fly when applying instructions. The signature of the callback is:

Parameters
inst: The instruction
inputs: The input random variables
attrs: The attributes
decision: The original decision
Returns
A new decision

Enumeration Type Documentation

◆ BufferIndexType

enum class tvm::s_tir::BufferIndexType : int32_t

Type of buffer index.

Enumerator
kRead 

Index of a read buffer.

kWrite 

Index of a written buffer.

◆ ScheduleDebugMask

enum tvm::s_tir::ScheduleDebugMask : uint32_t

The bitmask of the debug flag in the ScheduleStateNode.

See also
ScheduleStateNode
Enumerator
kVerifySRefTree 

Verify the correctness of the sref tree.

kVerifyCachedFlags 

Verify the correctness of affine_binding, region_cover and stage_pipeline.

◆ ScheduleErrorRenderLevel

enum class tvm::s_tir::ScheduleErrorRenderLevel : int32_t

The level of detailed error message rendering.

Enumerator
kDetail 

Render a detailed error message.

kFast 

Render the error in fast mode.

kNone 

No error message at all.

Function Documentation

◆ CalculateAllocatedBytes() [1/2]

ffi::Map<ffi::String, ffi::Map<ffi::String, Integer> > tvm::s_tir::CalculateAllocatedBytes ( const IRModule &mod )

Calculate the allocated memory per scope in bytes for each function inside the module.

Parameters
modThe IRModule for which the allocated memory size has to be calculated
Returns
Allocated memory size per scope in bytes for each function.

◆ CalculateAllocatedBytes() [2/2]

ffi::Map<ffi::String, ffi::Map<ffi::String, Integer> > tvm::s_tir::CalculateAllocatedBytes ( const PrimFunc &func )

Calculate the allocated memory per scope in bytes needed inside the TIR PrimFunc.

Parameters
funcThe TIR PrimFunc for which the allocated memory size to be calculated
Returns
Allocated memory size per scope in bytes.

◆ EstimateTIRFlops() [1/2]

double tvm::s_tir::EstimateTIRFlops ( const IRModule &mod )

Estimate the FLOPs of TIRs in an IRModule.

Parameters
modThe IRModule to be estimated.
Returns
The estimated FLOPs.

◆ EstimateTIRFlops() [2/2]

double tvm::s_tir::EstimateTIRFlops ( const Stmt &stmt )

Estimate the FLOPs of a TIR fragment.

Parameters
stmtThe TIR fragment to be estimated.
Returns
The estimated FLOPs.

◆ GetVTCMCompactionPasses()

ffi::Array<tvm::transform::Pass> tvm::s_tir::GetVTCMCompactionPasses ( )

Get the list of lowering passes to calculate the compacted VTCM allocation size.

Returns
The list of passes.

◆ IdentifyMemCpy()

std::optional<MemCpyDetails> tvm::s_tir::IdentifyMemCpy ( const For &loop,
arith::Analyzer *analyzer 
)

Identify whether a For loop is semantically equivalent to MemCpy.

Parameters
loopThe loop to be checked
analyzerThe analyzer with which to check any algebraic expressions
Returns
The source and destination regions being copied, if the loop is equivalent to memcpy.

◆ IsPureFunction()

bool tvm::s_tir::IsPureFunction ( const PrimFunc &func,
bool  assert_on_error = false 
)

Analyze the side effect of a function.

Parameters
funcThe function to be checked.
assert_on_errorIf true, an error will be thrown for an impure function.
Returns
The purity of the function.

◆ RenewDefs()

tirx::PrimFunc tvm::s_tir::RenewDefs ( const tirx::PrimFunc &func )

Renew the definition nodes for a TIR, including Var, Buffer and IterVar. This pass works as a simple DeepCopy to duplicate a function with different Vars and Buffers but the same behavior.

Parameters
funcThe input PrimFunc.
Returns
The renewed func.

◆ VerifyGPUCode()

bool tvm::s_tir::VerifyGPUCode ( const PrimFunc &func,
ffi::Map< ffi::String, PrimExpr > constraints 
)

Verify the correctness of GPU code.

Parameters
funcThe function to be checked.
constraintsThe dict to specify constraints to check.
Returns
Whether the function is valid GPU code.

◆ VerifyVTCMLimit() [1/2]

bool tvm::s_tir::VerifyVTCMLimit ( const IRModule &mod,
Integer  limit 
)

Verifies that the VTCM usage of all prim_funcs in the given IRModule is within the provided limit.

Parameters
modThe module to be checked.
limitThe limit to check.
Returns
true if the VTCM usage is within the provided limit.

◆ VerifyVTCMLimit() [2/2]

bool tvm::s_tir::VerifyVTCMLimit ( const PrimFunc &func,
Integer  limit 
)

Verifies that the VTCM usage of the given prim_func is within the provided limit.

Parameters
funcThe function to be checked.
limitThe limit to check.
Returns
true if the VTCM usage is within the provided limit.