tvm::te::Schedule Class Reference

Global schedule container for operations and all the operations they depend on. The schedule for each Operation is called a stage. More...

#include <schedule.h>

Inheritance diagram for tvm::te::Schedule:
Collaboration diagram for tvm::te::Schedule:

Public Types

using ContainerType = ScheduleNode
 
- Public Types inherited from tvm::runtime::ObjectRef
using ContainerType = Object
 Type indicating the container type. More...
 

Public Member Functions

 Schedule ()
 
 Schedule (ObjectPtr< Object > n)
 
 Schedule (Array< Operation > ops)
 Create a schedule for an array of ops (and their dependencies). More...
 
Schedule copy () const
 Get a copy of the current schedule. More...
 
Stage operator[] (const Operation &op)
 Get the stage corresponding to the op. More...
 
Stage operator[] (const Tensor &tensor)
 Shorthand for getting the stage of the tensor's operation. More...
 
Stage create_group (const Array< Tensor > &outputs, const Array< Tensor > &inputs, bool include_inputs=false)
 Create a new stage group for all intermediate operations between inputs and outputs. More...
 
Tensor cache_read (const Tensor &tensor, const std::string &scope, const Array< Operation > &readers)
 create a cache read of the original tensor for readers. This will mutate the body of the readers. A new stage will be created for the tensor. More...
 
Array< Tensor > cache_write (const Array< Tensor > &tensor, const std::string &scope)
 Create a cache-write tensor for producing the tensor. The cache tensor will take over the body of the original tensor's op. More...
 
Tensor cache_write (const Tensor &tensor, const std::string &scope)
 Create a cache-write tensor for producing the tensor. The cache tensor will take over the body of the original tensor's op. More...
 
Array< Tensor > rfactor (const Tensor &tensor, const IterVar &axis, int factor_axis=0)
 Factor a reduction axis in the tensor's schedule into an explicit axis. This will create a new stage that generates the new tensor, with axis as its first dimension. The tensor's body will be rewritten as a reduction over the factored tensor. More...
 
Schedule normalize ()
 Normalize the schedule. This is needed before bound inference. Inserts the necessary RebaseNodes to make sure all leaf_iter_vars are in the form [0, extent). More...
 
Schedule normalize_for_feature_extraction ()
 Normalize the schedule for feature extraction in the auto-scheduler. This is similar to Schedule::normalize, but aggressively simplifies the TE compute with const_matrix=True for faster compilation and feature extraction. The resulting schedule may be incorrect, but it is good enough for feature extraction purposes. More...
 
const ScheduleNode * operator-> () const
 access the internal node container More...
 
ScheduleNode * operator-> ()
 access the internal node container More...
 
- Public Member Functions inherited from tvm::runtime::ObjectRef
 ObjectRef ()=default
 default constructor More...
 
 ObjectRef (ObjectPtr< Object > data)
 Constructor from existing object ptr. More...
 
bool same_as (const ObjectRef &other) const
 Comparator. More...
 
bool operator== (const ObjectRef &other) const
 Comparator. More...
 
bool operator!= (const ObjectRef &other) const
 Comparator. More...
 
bool operator< (const ObjectRef &other) const
 Comparator. More...
 
bool defined () const
 
const Object * get () const
 
const Object * operator-> () const
 
bool unique () const
 
int use_count () const
 
template<typename ObjectType , typename = std::enable_if_t<std::is_base_of_v<Object, ObjectType>>>
const ObjectType * as () const
 Try to downcast the internal Object to a raw pointer of a corresponding type. More...
 
template<typename ObjectRefType , typename = std::enable_if_t<std::is_base_of_v<ObjectRef, ObjectRefType>>>
Optional< ObjectRefType > as () const
 Try to downcast the ObjectRef to an Optional<T> of the requested type. More...
 

Additional Inherited Members

- Static Public Attributes inherited from tvm::runtime::ObjectRef
static constexpr bool _type_is_nullable = true
 
- Protected Member Functions inherited from tvm::runtime::ObjectRef
Object * get_mutable () const
 
- Static Protected Member Functions inherited from tvm::runtime::ObjectRef
template<typename T >
static T DowncastNoCheck (ObjectRef ref)
 Internal helper function to downcast a ref without a check. More...
 
static void FFIClearAfterMove (ObjectRef *ref)
 Clear the object ref data field without DecRef after we successfully moved the field. More...
 
template<typename ObjectType >
static ObjectPtr< ObjectType > GetDataPtr (const ObjectRef &ref)
 Internal helper function to get data_ as an ObjectPtr of ObjectType. More...
 
- Protected Attributes inherited from tvm::runtime::ObjectRef
ObjectPtr< Object > data_
 Internal pointer that backs the reference. More...
 

Detailed Description

Global schedule container for operations and all the operations they depend on. The schedule for each Operation is called a stage.

Member Typedef Documentation

◆ ContainerType

using tvm::te::Schedule::ContainerType = ScheduleNode

Constructor & Destructor Documentation

◆ Schedule() [1/3]

tvm::te::Schedule::Schedule ( )
inline

◆ Schedule() [2/3]

tvm::te::Schedule::Schedule ( ObjectPtr< Object >  n )
inline explicit

◆ Schedule() [3/3]

tvm::te::Schedule::Schedule ( Array< Operation >  ops )
explicit

Create a schedule for an array of ops (and their dependencies).

Parameters
ops    The ops to be scheduled.

Member Function Documentation

◆ cache_read()

Tensor tvm::te::Schedule::cache_read ( const Tensor &  tensor,
const std::string &  scope,
const Array< Operation > &  readers 
)

create a cache read of the original tensor for readers. This will mutate the body of the readers. A new stage will be created for the tensor.

Parameters
tensor    The tensor cached.
scope    The scope of the cache.
readers    The readers to redirect to the tensor.
Returns
The created tensor.

◆ cache_write() [1/2]

Array<Tensor> tvm::te::Schedule::cache_write ( const Array< Tensor > &  tensor,
const std::string &  scope 
)

Create a cache-write tensor for producing the tensor. The cache tensor will take over the body of the original tensor's op.

This function can be used to perform a data layout transformation. If a split/fuse/reorder is applied to the data-parallel axes of the tensor before cache_write is called, the intermediate cache stores the data in the layout given by the iteration order of the leaf axes. The data will be transformed back to the original layout in the original tensor. The user can further call compute_inline to inline the original layout and keep the data stored in the transformed layout.

Parameters
tensor    The tensors to be produced.
scope    The scope of the storage.
Returns
The created tensors.

◆ cache_write() [2/2]

Tensor tvm::te::Schedule::cache_write ( const Tensor &  tensor,
const std::string &  scope 
)

Create a cache-write tensor for producing the tensor. The cache tensor will take over the body of the original tensor's op.

This function can be used to perform a data layout transformation. If a split/fuse/reorder is applied to the data-parallel axes of the tensor before cache_write is called, the intermediate cache stores the data in the layout given by the iteration order of the leaf axes. The data will be transformed back to the original layout in the original tensor. The user can further call compute_inline to inline the original layout and keep the data stored in the transformed layout.

Parameters
tensor    The tensor to be produced.
scope    The scope of the storage.
Returns
The created tensor.

◆ copy()

Schedule tvm::te::Schedule::copy ( ) const

Get a copy of the current schedule.

Returns
The copied schedule.

◆ create_group()

Stage tvm::te::Schedule::create_group ( const Array< Tensor > &  outputs,
const Array< Tensor > &  inputs,
bool  include_inputs = false 
)

Create a new stage group for all intermediate operations between inputs and outputs.

Parameters
outputs    The output boundary of the group.
inputs    The input boundary of the group.
include_inputs    Whether to include inputs if they are reachable from outputs.
Returns
The new grouped stage.

◆ normalize()

Schedule tvm::te::Schedule::normalize ( )

Normalize the schedule. This is needed before bound inference. Inserts the necessary RebaseNodes to make sure all leaf_iter_vars are in the form [0, extent).

Returns
The normalized schedule, which can be the same as the current one.

◆ normalize_for_feature_extraction()

Schedule tvm::te::Schedule::normalize_for_feature_extraction ( )

Normalize the schedule for feature extraction in the auto-scheduler. This is similar to Schedule::normalize, but aggressively simplifies the TE compute with const_matrix=True for faster compilation and feature extraction. The resulting schedule may be incorrect, but it is good enough for feature extraction purposes.

Returns
The normalized schedule, which can be the same as the current one.

◆ operator->() [1/2]

ScheduleNode * tvm::te::Schedule::operator-> ( )
inline

access the internal node container

Returns
the pointer to the internal node container

◆ operator->() [2/2]

const ScheduleNode * tvm::te::Schedule::operator-> ( ) const
inline

access the internal node container

Returns
the pointer to the internal node container

◆ operator[]() [1/2]

Stage tvm::te::Schedule::operator[] ( const Operation &  op )

Get the stage corresponding to the op.

Parameters
op    The operation.

◆ operator[]() [2/2]

Stage tvm::te::Schedule::operator[] ( const Tensor &  tensor )
inline

Shorthand for getting the stage of the tensor's operation.

Parameters
tensor    The tensor.
Returns
The stage corresponding to the tensor's op

◆ rfactor()

Array<Tensor> tvm::te::Schedule::rfactor ( const Tensor &  tensor,
const IterVar &  axis,
int  factor_axis = 0 
)

Factor a reduction axis in the tensor's schedule into an explicit axis. This will create a new stage that generates the new tensor, with axis as its first dimension. The tensor's body will be rewritten as a reduction over the factored tensor.

P. Suriana, A. Adams and S. Kamil. Parallel associative reductions in Halide. CGO'17.

Parameters
tensor    The tensor to be factored.
axis    The reduction axis in the tensor's schedule to be factored.
factor_axis    The position where the new axis is placed.
Returns
The created factored tensors.

The documentation for this class was generated from the following file: schedule.h