tvm::te::Schedule Class Reference

Global schedule container for operations and all the operations they depend on. The schedule for each Operation is called a stage. More...
#include <schedule.h>
Public Types | |
using | ContainerType = ScheduleNode |
Public Types inherited from tvm::runtime::ObjectRef | |
using | ContainerType = Object |
Type indicating the container type. More... | |
Public Member Functions | |
Schedule () | |
Schedule (ObjectPtr< Object > n) | |
Schedule (Array< Operation > ops) | |
Create a schedule for an array of ops (and their dependencies). More... | |
Schedule | copy () const |
Get a copy of current schedule. More... | |
Stage | operator[] (const Operation &op) |
Get the stage corresponding to the op. More... | |
Stage | operator[] (const Tensor &tensor) |
Shorthand for getting the stage of the tensor's operation. More... | |
Stage | create_group (const Array< Tensor > &outputs, const Array< Tensor > &inputs, bool include_inputs=false) |
Create a new stage group for all intermediate operations between inputs and outputs. More... | |
Tensor | cache_read (const Tensor &tensor, const std::string &scope, const Array< Operation > &readers) |
Create a cache read of the original tensor for readers. This will mutate the body of the readers. A new stage will be created for the tensor. More... | |
Array< Tensor > | cache_write (const Array< Tensor > &tensor, const std::string &scope) |
Create a cache write tensor for producing tensor. The tensor will take over the body of the original tensor op. More... | |
Tensor | cache_write (const Tensor &tensor, const std::string &scope) |
Create a cache write tensor for producing tensor. The tensor will take over the body of the original tensor op. More... | |
Array< Tensor > | rfactor (const Tensor &tensor, const IterVar &axis, int factor_axis=0) |
Factor a reduction axis in tensor's schedule to be an explicit axis. This will create a new stage that generates the new tensor with axis as the first dimension. The tensor's body will be rewritten as a reduction over the factored tensor. More... | |
Schedule | normalize () |
Normalize the schedule. This is needed before bound inference. Insert necessary RebaseNode to make sure all leaf_iter_vars are in the form [0, extent). More... | |
Schedule | normalize_for_feature_extraction () |
Normalize the schedule for feature extraction in auto-scheduler. This is similar to Schedule::normalize, but we do aggressive simplification of the TE compute with const_matrix=True for faster compilation and feature extraction. The resulting schedule may be wrong, but it is good enough for feature extraction purposes. More... | |
const ScheduleNode * | operator-> () const |
Access the internal node container. More... | |
ScheduleNode * | operator-> () |
Access the internal node container. More... | |
Public Member Functions inherited from tvm::runtime::ObjectRef | |
ObjectRef ()=default | |
default constructor More... | |
ObjectRef (ObjectPtr< Object > data) | |
Constructor from existing object ptr. More... | |
bool | same_as (const ObjectRef &other) const |
Comparator. More... | |
bool | operator== (const ObjectRef &other) const |
Comparator. More... | |
bool | operator!= (const ObjectRef &other) const |
Comparator. More... | |
bool | operator< (const ObjectRef &other) const |
Comparator. More... | |
bool | defined () const |
const Object * | get () const |
const Object * | operator-> () const |
bool | unique () const |
int | use_count () const |
template<typename ObjectType , typename = std::enable_if_t<std::is_base_of_v<Object, ObjectType>>> | |
const ObjectType * | as () const |
Try to downcast the internal Object to a raw pointer of a corresponding type. More... | |
template<typename ObjectRefType , typename = std::enable_if_t<std::is_base_of_v<ObjectRef, ObjectRefType>>> | |
Optional< ObjectRefType > | as () const |
Try to downcast the ObjectRef to an Optional<T> of the requested type. More... | |
Additional Inherited Members | |
Static Public Attributes inherited from tvm::runtime::ObjectRef | |
static constexpr bool | _type_is_nullable = true |
Protected Member Functions inherited from tvm::runtime::ObjectRef | |
Object * | get_mutable () const |
Static Protected Member Functions inherited from tvm::runtime::ObjectRef | |
template<typename T > | |
static T | DowncastNoCheck (ObjectRef ref) |
Internal helper function to downcast a ref without check. More... | |
static void | FFIClearAfterMove (ObjectRef *ref) |
Clear the object ref data field without DecRef after we successfully moved the field. More... | |
template<typename ObjectType > | |
static ObjectPtr< ObjectType > | GetDataPtr (const ObjectRef &ref) |
Internal helper function to get data_ as ObjectPtr of ObjectType. More... | |
Protected Attributes inherited from tvm::runtime::ObjectRef | |
ObjectPtr< Object > | data_ |
Internal pointer that backs the reference. More... | |
Global schedule container for operations and all the operations they depend on. The schedule for each Operation is called a stage.
tvm::te::Schedule::Schedule (Array< Operation > ops)   [inline]

Create a schedule for an array of ops (and their dependencies).
ops | The ops to be scheduled. |
Tensor tvm::te::Schedule::cache_read (const Tensor & tensor, const std::string & scope, const Array< Operation > & readers)
Create a cache read of the original tensor for readers. This will mutate the body of the readers. A new stage will be created for the tensor.
tensor | The tensor cached. |
scope | The scope of the cache. |
readers | The readers to redirect to the tensor. |
Array< Tensor > tvm::te::Schedule::cache_write (const Array< Tensor > & tensor, const std::string & scope)
Create a cache write tensor for producing tensor. The tensor will take over the body of the original tensor op.

This function can be used to do data layout transformation. If there is a split/fuse/reorder on the data parallel axis of tensor before cache_write is called, the intermediate cache stores the data in the layout given by the iteration order of the leaf axes. The data will be transformed back to the original layout in the original tensor. The user can further call compute_inline to inline the original layout and keep the data stored in the transformed layout.
tensor | The tensors to be produced. |
scope | The scope of the storage. |
Create a cache write tensor for producing tensor. The tensor will take over the body of the original tensor op.

This function can be used to do data layout transformation. If there is a split/fuse/reorder on the data parallel axis of tensor before cache_write is called, the intermediate cache stores the data in the layout given by the iteration order of the leaf axes. The data will be transformed back to the original layout in the original tensor. The user can further call compute_inline to inline the original layout and keep the data stored in the transformed layout.
tensor | The tensor to be produced. |
scope | The scope of the storage. |
Schedule tvm::te::Schedule::copy () const
Get a copy of current schedule.
Stage tvm::te::Schedule::create_group (const Array< Tensor > & outputs, const Array< Tensor > & inputs, bool include_inputs = false)
Create a new stage group for all intermediate operations between inputs and outputs.
outputs | The output boundary of the group. |
inputs | The input boundary of the group. |
include_inputs | Whether to include inputs if they are reachable from outputs. |
Schedule tvm::te::Schedule::normalize | ( | ) |
Normalize the schedule. This is needed before bound inference. Insert necessary RebaseNode to make sure all leaf_iter_vars are in the form [0, extent).
Schedule tvm::te::Schedule::normalize_for_feature_extraction | ( | ) |
Normalize the schedule for feature extraction in auto-scheduler. This is similar to Schedule::normalize, but we do aggressive simplification of the TE compute with const_matrix=True for faster compilation and feature extraction. The resulting schedule may be wrong, but it is good enough for feature extraction purposes.
const ScheduleNode * tvm::te::Schedule::operator-> () const   [inline]

Access the internal node container.
ScheduleNode * tvm::te::Schedule::operator-> ()   [inline]

Access the internal node container.
Stage tvm::te::Schedule::operator[] (const Operation & op)

Get the stage corresponding to the op.
op | The operation. |
Stage tvm::te::Schedule::operator[] (const Tensor & tensor)

Shorthand for getting the stage of the tensor's operation.
tensor | The tensor. |
Array< Tensor > tvm::te::Schedule::rfactor (const Tensor & tensor, const IterVar & axis, int factor_axis = 0)
Factor a reduction axis in tensor's schedule to be an explicit axis. This will create a new stage that generates the new tensor with axis as the first dimension. The tensor's body will be rewritten as a reduction over the factored tensor.
P. Suriana, A. Adams and S. Kamil. Parallel associative reductions in halide. CGO'17
tensor | The tensor to be factored. |
axis | The reduction axis in tensor's schedule to be factored. |
factor_axis | The position where the new axis is placed. |