An implementation to run loops in parallel.
Namespaces

tvm
tvm::support
Typedefs

using tvm::support::PartitionerFuncType = std::function<std::vector<std::vector<int>>(int, int, int, int)>
Functions

std::vector<std::vector<int>> tvm::support::rr_partitioner(int begin, int end, int step, int num_threads)
    A partitioner that splits the tasks among threads in a round-robin manner.
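To illustrate the round-robin assignment, here is a minimal sketch of a partitioner matching the `PartitionerFuncType` signature. This is an illustrative re-implementation, not TVM's actual `rr_partitioner`, which may differ in detail (e.g. argument validation):

```cpp
#include <vector>

// Sketch of a round-robin partitioner: indices in [begin, end) with the
// given step are dealt out to num_threads threads one at a time, like
// dealing cards. Returns one index list per thread.
std::vector<std::vector<int>> rr_partitioner_sketch(int begin, int end,
                                                    int step, int num_threads) {
  std::vector<std::vector<int>> ret(num_threads);
  int thread = 0;
  for (int i = begin; i < end; i += step) {
    ret[thread].push_back(i);             // assign index i to the current thread
    thread = (thread + 1) % num_threads;  // move on to the next thread
  }
  return ret;
}
```

For example, partitioning `[0, 10)` with step 1 over 3 threads gives thread 0 the indices `{0, 3, 6, 9}`, thread 1 `{1, 4, 7}`, and thread 2 `{2, 5, 8}`.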
void tvm::support::parallel_for(int begin, int end, const std::function<void(int)> &f, int step = 1, const PartitionerFuncType partitioner = rr_partitioner)
    A runtime API provided to run the task function in parallel. For example, the loop
    for (int i = 0; i < 10; i++) { a[i] = i; }
    should work the same as
    parallel_for(0, 10, [&a](int index) { a[index] = index; });
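The static-partitioning idea behind this API can be sketched without TVM at all. The following is a minimal analogue built on `std::thread` (the name `parallel_for_sketch` is hypothetical; TVM's real `parallel_for` additionally takes a `step` and a pluggable partitioner, and reuses worker threads):

```cpp
#include <functional>
#include <thread>
#include <vector>

// Minimal analogue of a static parallel_for: each of num_threads threads
// handles the indices in [begin, end) congruent to its id modulo num_threads,
// then all threads are joined before returning.
void parallel_for_sketch(int begin, int end, const std::function<void(int)>& f,
                         int num_threads = 4) {
  std::vector<std::thread> workers;
  for (int t = 0; t < num_threads; ++t) {
    workers.emplace_back([&, t]() {
      for (int i = begin + t; i < end; i += num_threads) f(i);  // this thread's slice
    });
  }
  for (auto& w : workers) w.join();  // barrier: all iterations done on return
}
```

Usage mirrors the documented example; since every index is touched by exactly one thread, writing `a[index]` is race-free:

```cpp
std::vector<int> a(10);
parallel_for_sketch(0, 10, [&a](int index) { a[index] = index; });
```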
void tvm::support::parallel_for_dynamic(int begin, int end, int num_threads, const std::function<void(int thread_id, int task_id)> &f)
    An API to launch a fixed number of threads to run the given functor in parallel. Unlike parallel_for, the partition is determined dynamically on the fly: whenever a thread is idle, it fetches the next task to run. The behavior is similar to dynamic scheduling in OpenMP.
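The dynamic-scheduling behavior described above can be sketched with a shared atomic counter: idle threads claim the next unassigned task id until none remain. This is an illustrative sketch under that assumption, not TVM's implementation:

```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Sketch of dynamic scheduling: num_threads workers repeatedly claim the
// next task id from a shared atomic counter, so faster threads naturally
// pick up more tasks (similar to OpenMP's schedule(dynamic)).
void parallel_for_dynamic_sketch(int begin, int end, int num_threads,
                                 const std::function<void(int, int)>& f) {
  std::atomic<int> next{begin};
  std::vector<std::thread> workers;
  for (int t = 0; t < num_threads; ++t) {
    workers.emplace_back([&, t]() {
      for (;;) {
        int task = next.fetch_add(1);  // atomically claim the next task id
        if (task >= end) break;        // no tasks left; this thread is done
        f(t, task);                    // thread t runs task `task`
      }
    });
  }
  for (auto& w : workers) w.join();
}
```

Note that, unlike a static partition, which thread runs which task is nondeterministic; only the set of executed task ids (`begin` through `end - 1`, each exactly once) is guaranteed.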
Detailed Description

An implementation to run loops in parallel.