Writing Tunable Templates and Using the Auto-tuner

Author: Lianmin Zheng

This is an introductory tutorial to the auto-tuning module in TVM.

There are two steps in auto-tuning. The first step is defining a search space. The second step is running a search algorithm to explore this space. In this tutorial, you will learn how to perform these two steps in TVM. The whole workflow is illustrated with a matrix multiplication example.

Note that this tutorial will not run on Windows or recent versions of macOS. To get it to run, you will need to wrap the body of this tutorial in an if __name__ == "__main__": block.

Install dependencies

To use the autotvm package in TVM, we need to install some extra dependencies. Installing xgboost can be skipped if you do not plan to use the XGBTuner (change “3” to “2” if you use Python 2):

pip3 install --user psutil xgboost

To make TVM run faster during tuning, it is recommended to use Cython as the FFI of TVM. In the root directory of TVM, execute (change “3” to “2” if you use Python 2):

pip3 install --user cython
sudo make cython3

Now return to Python code. Import the packages:

import logging
import sys

import numpy as np
import tvm
from tvm import te, testing

# the module is called `autotvm`
from tvm import autotvm

Step 1: Define the search space

In this section, we will rewrite a deterministic TVM schedule into a tunable schedule template. You can regard the process of search-space definition as the parameterization of our existing schedule code.

To begin with, here is how we implement a blocked matrix multiplication in TVM.

# Matmul V0: Constant tiling factor
def matmul_v0(N, L, M, dtype):
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)

    k = te.reduce_axis((0, L), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    s = te.create_schedule(C.op)

    # schedule
    y, x = s[C].op.axis
    k = s[C].op.reduce_axis[0]

    yo, yi = s[C].split(y, 8)
    xo, xi = s[C].split(x, 8)

    s[C].reorder(yo, xo, k, yi, xi)

    return s, [A, B, C]

Parametrize the schedule

In the previous schedule code, we use a constant “8” as the tiling factor. However, it might not be the best choice because the best tiling factor depends on the real hardware environment and the input shape.

If you want the schedule code to be portable across a wider range of input shapes and target hardware, it is better to define a set of candidate values and pick the best one according to the measurement results on target hardware.

In autotvm, we can define a tunable parameter, or a “knob”, for such a value.

# Matmul V1: List candidate values
@autotvm.template("tutorial/matmul_v1")  # 1. use a decorator
def matmul_v1(N, L, M, dtype):
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)

    k = te.reduce_axis((0, L), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    s = te.create_schedule(C.op)

    # schedule
    y, x = s[C].op.axis
    k = s[C].op.reduce_axis[0]

    # 2. get the config object
    cfg = autotvm.get_config()

    # 3. define search space
    cfg.define_knob("tile_y", [1, 2, 4, 8, 16])
    cfg.define_knob("tile_x", [1, 2, 4, 8, 16])

    # 4. schedule according to config
    yo, yi = s[C].split(y, cfg["tile_y"].val)
    xo, xi = s[C].split(x, cfg["tile_x"].val)

    s[C].reorder(yo, xo, k, yi, xi)

    return s, [A, B, C]

Here we make four modifications to the previous schedule code to obtain a tunable “template”. Let us explain them one by one.

  1. Use a decorator to mark this function as a simple template.

  2. Get a config object: You can regard this cfg as an argument of this function, although we obtain it in a different way. With this argument, this function is no longer deterministic schedule code. Instead, we can pass different configurations to this function and get different schedules, so this function is a “template”.

    To make the template function more compact, we do two things in a single function. (1) define a search space and (2) schedule according to an entity in this space. To achieve this, we make cfg be either a ConfigSpace or a ConfigEntity object.

    When it is a ConfigSpace, it collects all tunable knobs in this function and builds the search space. When it is a ConfigEntity, it ignores all space-definition APIs (namely, cfg.define_XXXXX(...)). Instead, it stores deterministic values for all tunable knobs, and we schedule according to these values.

    During auto-tuning, we first call this template with a ConfigSpace object to build the search space. Then we call the template with different ConfigEntity objects in the built space to get different schedules. Finally, we measure the code generated by the different schedules and pick the best one.

  3. Define two tunable knobs. The first one is tile_y with 5 possible values. The second one is tile_x with the same list of possible values. These two knobs are independent, so they span a search space of size 5x5 = 25.

  4. Schedule according to the deterministic values in cfg
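Since the two knobs are independent, the resulting space is just the Cartesian product of their candidate lists. A minimal plain-Python sketch of that enumeration (an illustration, not the autotvm API):

```python
import itertools

# candidate values for the two independent knobs
tile_y_candidates = [1, 2, 4, 8, 16]
tile_x_candidates = [1, 2, 4, 8, 16]

# the search space is the Cartesian product of the two candidate lists
space = list(itertools.product(tile_y_candidates, tile_x_candidates))
print(len(space))  # 5 x 5 = 25 configurations
```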

Use better space definition API

In the previous template, we manually list all possible values for a knob. This is the lowest-level API for defining the space. However, we also provide another set of APIs that make space definition easier and smarter. It is recommended to use this set of high-level APIs.

In the following example, we use ConfigSpace.define_split to define a split knob. It will enumerate all the possible ways to split an axis and construct the space.

We also have ConfigSpace.define_reorder for reorder knobs and ConfigSpace.define_annotate for annotations such as unroll, vectorization, and thread binding. When the high-level API cannot meet your requirements, you can always fall back to the low-level API.

def matmul(N, L, M, dtype):
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)

    k = te.reduce_axis((0, L), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    s = te.create_schedule(C.op)

    # schedule
    y, x = s[C].op.axis
    k = s[C].op.reduce_axis[0]

    ##### define space begin #####
    cfg = autotvm.get_config()
    cfg.define_split("tile_y", y, num_outputs=2)
    cfg.define_split("tile_x", x, num_outputs=2)
    ##### define space end #####

    # schedule according to config
    yo, yi = cfg["tile_y"].apply(s, C, y)
    xo, xi = cfg["tile_x"].apply(s, C, x)

    s[C].reorder(yo, xo, k, yi, xi)

    return s, [A, B, C]


More Explanation on cfg.define_split

In this template, cfg.define_split("tile_y", y, num_outputs=2) will enumerate all possible ways to split axis y into two axes whose lengths are factors of the length of y. For example, if the length of y is 32 and we want to split it into two axes using factors of 32, then there are 6 possible values for the (length of outer axis, length of inner axis) pair, namely (32, 1), (16, 2), (8, 4), (4, 8), (2, 16) and (1, 32). These are the 6 possible values of tile_y.
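As a standalone illustration (plain Python, not the autotvm API), the factor pairs above can be enumerated like this:

```python
def factor_pairs(n):
    """Enumerate (outer, inner) pairs with outer * inner == n."""
    return [(n // f, f) for f in range(1, n + 1) if n % f == 0]

pairs = factor_pairs(32)
print(pairs)  # [(32, 1), (16, 2), (8, 4), (4, 8), (2, 16), (1, 32)]
```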

During scheduling, cfg["tile_y"] is a SplitEntity object. It stores the lengths of the outer and inner axes in cfg["tile_y"].size (a tuple with two elements). In this template, we apply it using yo, yi = cfg["tile_y"].apply(s, C, y). This is equivalent to yo, yi = s[C].split(y, cfg["tile_y"].size[1]) or yo, yi = s[C].split(y, nparts=cfg["tile_y"].size[0]).

The advantage of using cfg.apply API is that it makes multi-level split (when num_outputs >= 3) easier.
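To see why this matters, here is a plain-Python count (an illustration, not TVM code) of the ordered factorizations of an axis of length 32, showing how quickly multi-level split spaces grow:

```python
def split_ways(n, num_outputs):
    """Count ordered factorizations of n into num_outputs factors."""
    if num_outputs == 1:
        return 1
    # pick the first factor, then recursively split the remainder
    return sum(
        split_ways(n // f, num_outputs - 1)
        for f in range(1, n + 1)
        if n % f == 0
    )

print(split_ways(32, 2))  # 6, matching the factor pairs above
print(split_ways(32, 3))  # 21: a three-level split already has 21 candidates
```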

Step 2: Search through the space

In step 1, we build the search space by extending our old schedule code into a template. The next step is to pick a tuner and explore in this space.

Auto-tuners in TVM

The job of a tuner can be described by the following pseudocode:

ct = 0
while ct < max_number_of_trials:
    propose a batch of configs
    measure this batch of configs on real hardware and get results
    ct += batch_size
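This loop can be sketched in runnable Python with a mock measurement function standing in for real hardware (all names here are hypothetical, not the autotvm API):

```python
import random

def measure_batch(configs):
    # stand-in for real hardware measurement: lower cost is better
    return [abs(ty - 8) + abs(tx - 8) for ty, tx in configs]

space = [(ty, tx) for ty in (1, 2, 4, 8, 16) for tx in (1, 2, 4, 8, 16)]
max_trials, batch_size = 10, 5
best_cfg, best_cost = None, float("inf")

random.seed(0)
ct = 0
while ct < max_trials:
    batch = random.sample(space, batch_size)  # propose a batch of configs
    costs = measure_batch(batch)              # "measure" the batch
    for cfg, cost in zip(batch, costs):       # keep the best seen so far
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    ct += batch_size

print(best_cfg, best_cost)
```

A real tuner differs only in how it proposes the next batch: random sampling here, grid order, genetic mutation, or model-based prediction in the tuners listed below.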

When proposing the next batch of configs, the tuner can take different strategies. We provide four tuners with different strategies in autotvm.

  • RandomTuner: Enumerates the space in a random order

  • GridSearchTuner: Enumerates the space in a grid-search order

  • GATuner: Uses a genetic algorithm to search through the space

  • XGBTuner: Uses a model-based method. It trains an XGBoost model to predict the speed of the lowered IR and picks the next batch according to the predictions.

You can choose a tuner according to the size of your space, your time budget, and other factors. For example, if your space is very small (fewer than 1000 configs), a grid-search tuner or a random tuner is good enough. If your space is on the order of 10^9 (this is the space size of a conv2d operator on a CUDA GPU), XGBTuner can explore it more efficiently and find better configs.

Begin tuning

Here we continue our matrix multiplication example. First we create a tuning task. We can also inspect the initialized search space. In this case, for a 512x512 square matrix multiplication, the space size is 10x10=100:

N, L, M = 512, 512, 512
task = autotvm.task.create("tutorial/matmul", args=(N, L, M, "float32"), target="llvm")


ConfigSpace (len=100, space_map=
   0 tile_y: Split(policy=factors, product=512, num_outputs=2) len=10
   1 tile_x: Split(policy=factors, product=512, num_outputs=2) len=10
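The size of 100 follows from 512 = 2^9 having exactly ten divisors, so each two-output split knob has 10 candidates. A quick plain-Python check:

```python
N = 512
divisors = [f for f in range(1, N + 1) if N % f == 0]
print(len(divisors))       # 10 factor choices per split knob
print(len(divisors) ** 2)  # 100: size of the whole space
```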

Then we need to define how to measure the generated code and pick a tuner. Since our space is small, a random tuner is just okay.

We only make 10 trials in this tutorial for demonstration. In practice, you can do more trials according to your time budget. We will log the tuning results into a log file. This file can be used to get the best config later.

# logging config (for printing tuning log to the screen)
logging.getLogger("autotvm").setLevel(logging.DEBUG)
logging.getLogger("autotvm").addHandler(logging.StreamHandler(sys.stdout))

# There are two steps for measuring a config: build and run.
# By default, we use all CPU cores to compile the program. Then we measure the candidates sequentially.
# We measure 5 times and take the average to reduce variance.
measure_option = autotvm.measure_option(builder="local", runner=autotvm.LocalRunner(number=5))

# Begin tuning with RandomTuner, log records to file `matmul.log`
# You can use alternatives like XGBTuner.
tuner = autotvm.tuner.RandomTuner(task)
tuner.tune(
    n_trial=10,
    measure_option=measure_option,
    callbacks=[autotvm.callback.log_to_file("matmul.log")],
)


Get devices for measurement successfully!
No: 1   GFLOPS: 0.00/0.00       result: MeasureResult(costs=(RuntimeError('Traceback (most recent call last):\n  [bt] (5) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7f9267ab0531]\n  [bt] (4) /workspace/build/libtvm.so(+0x12eec62) [0x7f9267ae5c62]\n  [bt] (3) /workspace/build/libtvm.so(tvm::runtime::RPCWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x246) [0x7f9267ae8de6]\n  [bt] (2) /workspace/build/libtvm.so(tvm::runtime::RPCClientSession::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)> const&)+0x57) [0x7f9267ade057]\n  [bt] (1) /workspace/build/libtvm.so(tvm::runtime::RPCEndpoint::CallFunc(void*, TVMValue const*, int const*, int, std::function<void (tvm::runtime::TVMArgs)>)+0x39c) [0x7f9267ad52ec]\n  [bt] (0) /workspace/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x61) [0x7f9266d50611]\n  File "/workspace/src/runtime/rpc/rpc_endpoint.cc", line 815\nTVMError: \n---------------------------------------------------------------\nAn internal invariant was violated during the execution of TVM.\nPlease',),), error_no=4, all_cost=10.488742113113403, timestamp=1606247172.4452503)     [('tile_y', [-1, 64]), ('tile_x', [-1, 1])],None,6
No: 2   GFLOPS: 4.97/4.97       result: MeasureResult(costs=(0.0540106804,), error_no=0, all_cost=1.2689733505249023, timestamp=1606247173.709435)      [('tile_y', [-1, 512]), ('tile_x', [-1, 8])],None,39
No: 3   GFLOPS: 2.80/4.97       result: MeasureResult(costs=(0.0959330538,), error_no=0, all_cost=2.031170606613159, timestamp=1606247175.638364)       [('tile_y', [-1, 2]), ('tile_x', [-1, 8])],None,31
No: 4   GFLOPS: 16.27/16.27     result: MeasureResult(costs=(0.016497969600000002,), error_no=0, all_cost=0.7240803241729736, timestamp=1606247176.329956)      [('tile_y', [-1, 1]), ('tile_x', [-1, 32])],None,50
No: 5   GFLOPS: 21.25/21.25     result: MeasureResult(costs=(0.0126320166,), error_no=0, all_cost=0.7413387298583984, timestamp=1606247176.909276)      [('tile_y', [-1, 256]), ('tile_x', [-1, 64])],None,68
No: 6   GFLOPS: 20.21/21.25     result: MeasureResult(costs=(0.0132850592,), error_no=0, all_cost=0.6722569465637207, timestamp=1606247177.5262818)     [('tile_y', [-1, 256]), ('tile_x', [-1, 512])],None,98
No: 7   GFLOPS: 0.80/21.25      result: MeasureResult(costs=(0.3340438422,), error_no=0, all_cost=5.789602994918823, timestamp=1606247183.2634928)      [('tile_y', [-1, 128]), ('tile_x', [-1, 2])],None,17
No: 8   GFLOPS: 1.43/21.25      result: MeasureResult(costs=(0.1871098444,), error_no=0, all_cost=3.410900354385376, timestamp=1606247186.6618466)      [('tile_y', [-1, 8]), ('tile_x', [-1, 4])],None,23
No: 9   GFLOPS: 19.64/21.25     result: MeasureResult(costs=(0.013666064,), error_no=0, all_cost=0.6099884510040283, timestamp=1606247187.2684672)      [('tile_y', [-1, 256]), ('tile_x', [-1, 32])],None,58
No: 10  GFLOPS: 22.72/22.72     result: MeasureResult(costs=(0.0118135662,), error_no=0, all_cost=0.6184511184692383, timestamp=1606247187.8485868)     [('tile_y', [-1, 64]), ('tile_x', [-1, 128])],None,76

Finally, we apply the history best from the log file and check its correctness. We can call the function matmul directly under the autotvm.apply_history_best context. When we call this function, it queries the dispatch context with its arguments and gets the best config for those arguments.

# apply history best from log file
with autotvm.apply_history_best("matmul.log"):
    with tvm.target.Target("llvm"):
        s, arg_bufs = matmul(N, L, M, "float32")
        func = tvm.build(s, arg_bufs)

# check correctness
a_np = np.random.uniform(size=(N, L)).astype(np.float32)
b_np = np.random.uniform(size=(L, M)).astype(np.float32)
c_np = a_np.dot(b_np)

c_tvm = tvm.nd.empty(c_np.shape)
func(tvm.nd.array(a_np), tvm.nd.array(b_np), c_tvm)

tvm.testing.assert_allclose(c_np, c_tvm.asnumpy(), rtol=1e-2)
