|
| struct | MemCpyDetails |
| | Helper struct for return value of IdentifyMemCpy. More...
|
| |
| class | InstructionKindNode |
| | Kind of an instruction, e.g. Split, Reorder, etc. Besides its name, every kind of instruction has its own properties, including: 1) a boolean indicating whether the instruction is pure, i.e. changes nothing in the schedule state; 2) a functor that applies the instruction to a TensorIR schedule; 3) a functor that converts the instruction to a statement in Python syntax; 4) a functor that serializes its attributes to JSON; 5) a functor that deserializes its attributes from JSON. More...
|
| |
| class | InstructionKind |
| | Managed reference to InstructionKindNode. More...
|
| |
| class | InstructionNode |
| | Schedule instructions, each of which corresponds to a schedule primitive. More...
|
| |
| class | Instruction |
| | Managed reference to InstructionNode. More...
|
| |
| class | InstructionKindRegEntry |
| | An entry in the registry of InstructionKind. More...
|
| |
| class | SBlockRVNode |
| | A random variable that evaluates to a TensorIR block. More...
|
| |
| class | SBlockRV |
| | Managed reference to SBlockRVNode. More...
|
| |
| class | LoopRVNode |
| | A random variable that evaluates to a TensorIR for loop. More...
|
| |
| class | LoopRV |
| | Managed reference to LoopRVNode. More...
|
| |
| class | ScheduleNode |
| | The user-facing schedule class. More...
|
| |
| class | Schedule |
| | Managed reference to ScheduleNode. More...
|
| |
| struct | SBlockInfo |
| | The information about a TensorIR block. It contains two categories of information: 1) info on the block scope rooted at a specific block, including dependency tracking, flags indicating whether the scope is a stage pipeline, etc.; 2) info on the block itself, including whether the block has a quasi-affine binding, whether the regions it reads are completely covered by their producers, etc. More...
|
| |
| class | ScheduleStateNode |
| | The state of scheduling, which exposes a Replace method as the primary interface for all the scheduling primitives to manipulate the TensorIR. More...
|
| |
| class | ScheduleState |
| | Managed reference to ScheduleStateNode. More...
|
| |
| class | TraceNode |
| | An execution trace of a scheduling program. More...
|
| |
| class | Trace |
| | Managed reference to TraceNode. More...
|
| |
|
| using | FInstructionApply = ffi::TypedFunction< ffi::Array< Any >(Schedule sch, const ffi::Array< Any > &inputs, const ffi::Array< Any > &attrs, const Any &decision)> |
| | Type of the functor that applies the instruction to a TensorIR schedule. More...
|
| |
| using | FInstructionAsPython = ffi::TypedFunction< ffi::String(const ffi::Array< Any > &inputs, const ffi::Array< Any > &attrs, const Any &decision, const ffi::Array< ffi::String > &outputs)> |
| | Type of the functor that converts the instruction to a statement in python syntax. More...
|
| |
| using | FInstructionAttrsAsJSON = ffi::TypedFunction< ObjectRef(ffi::Array< Any > attrs)> |
| | Type of the functor that serializes its attributes to JSON. More...
|
| |
| using | FInstructionAttrsFromJSON = ffi::TypedFunction< ffi::Array< Any >(ObjectRef json_attrs)> |
| | Type of the functor that deserializes its attributes from JSON. More...
|
| |
| using | ExprRV = PrimExpr |
| | An expr random variable. More...
|
| |
| using | ExprRVNode = PrimExprNode |
| |
| using | FTraceDecisionProvider = ffi::TypedFunction< Any(const Instruction &inst, const ffi::Array< Any > &inputs, const ffi::Array< Any > &attrs, const Any &decision)> |
| | A callback that allows users to mutate decisions on the fly when applying instructions. The signature of the callback is: More...
|
| |
|
| double | EstimateTIRFlops (const Stmt &stmt) |
| | Estimate the FLOPs of a TIR fragment. More...
|
| |
| double | EstimateTIRFlops (const IRModule &mod) |
| | Estimate the FLOPs of all TIR functions in an IRModule. More...
|
| |
| bool | IsPureFunction (const PrimFunc &func, bool assert_on_error=false) |
| | Analyze whether a function is pure, i.e. free of side effects. More...
|
| |
| bool | VerifyGPUCode (const PrimFunc &func, ffi::Map< ffi::String, PrimExpr > constraints) |
| | Verify the correctness of GPU code against a set of constraints. More...
|
| |
| std::optional< MemCpyDetails > | IdentifyMemCpy (const For &loop, arith::Analyzer *analyzer) |
| | Identify whether a For loop is semantically equivalent to MemCpy. More...
|
| |
| ffi::Map< ffi::String, ffi::Map< ffi::String, Integer > > | CalculateAllocatedBytes (const PrimFunc &func) |
| | Calculate the allocated memory per scope in bytes needed inside the TIR PrimFunc. More...
|
| |
| ffi::Map< ffi::String, ffi::Map< ffi::String, Integer > > | CalculateAllocatedBytes (const IRModule &mod) |
| | Calculate the allocated memory per scope in bytes for each function inside the module. More...
|
| |
| ffi::Array< tvm::transform::Pass > | GetVTCMCompactionPasses () |
| | Get the list of lowering passes to calculate the compacted VTCM allocation size. More...
|
| |
| bool | VerifyVTCMLimit (const IRModule &mod, Integer limit) |
| | Verifies that the VTCM usage of all prim_funcs in the given IRModule is within the provided limit. More...
|
| |
| bool | VerifyVTCMLimit (const PrimFunc &func, Integer limit) |
| | Verifies that the VTCM usage of the given prim_func is within the provided limit. More...
|
| |
| tirx::PrimFunc | RenewDefs (const tirx::PrimFunc &func) |
| | Renew the definition nodes in a TIR function, including Var, Buffer, and IterVar. This pass works as a simple DeepCopy that duplicates a function with different Vars and Buffers but the same behavior. More...
|
| |