Autotuning with microTVM

Authors: Andrew Reusch, Mehrdad Hessar <https://github.com/mehrdadh>

This tutorial explains how to autotune a model using the C runtime.

import numpy as np
import subprocess
import pathlib

import tvm
import tvm.autotvm
import tvm.contrib.utils
import tvm.micro
import tvm.relay

Defining the model

To begin with, define a model in Relay to be executed on-device. Then we create an IRModule from the Relay model and fill its parameters with random numbers.

data_shape = (1, 3, 10, 10)
weight_shape = (6, 3, 5, 5)

data = tvm.relay.var("data", tvm.relay.TensorType(data_shape, "float32"))
weight = tvm.relay.var("weight", tvm.relay.TensorType(weight_shape, "float32"))

y = tvm.relay.nn.conv2d(
    data,
    weight,
    padding=(2, 2),
    kernel_size=(5, 5),
    kernel_layout="OIHW",
    out_dtype="float32",
)
f = tvm.relay.Function([data, weight], y)

relay_mod = tvm.IRModule.from_expr(f)
relay_mod = tvm.relay.transform.InferType()(relay_mod)
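# Print the Relay program so we can refer to it when extracting tuning tasks below.
print(relay_mod)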

weight_sample = np.random.rand(*weight_shape).astype("float32")
params = {"weight": weight_sample}

Defining the target

Now we define the TVM target that describes the execution environment. This looks very similar to target definitions from other microTVM tutorials.

When running on physical hardware, choose a target and a board that describe the hardware. Multiple hardware targets are available in this tutorial's PLATFORM list; you can choose one by passing the --platform argument when running the tutorial.

TARGET = tvm.target.target.micro("host")

# Compiling for physical hardware
# --------------------------------------------------------------------------
#  When running on physical hardware, choose a TARGET and a BOARD that describe the hardware. The
#  STM32L4R5ZI Nucleo target and board is chosen in the example below.
#
#    TARGET = tvm.target.target.micro("stm32l4r5zi")
#    BOARD = "nucleo_l4r5zi"

Extracting tuning tasks

Not all operators in the Relay program printed above can be tuned. Some are so trivial that only a single implementation is defined; others don’t make sense as tuning tasks. Using extract_from_program, you can produce a list of tunable tasks.

Because task extraction involves running the compiler, we first configure the compiler’s transformation passes; we’ll apply the same configuration later on during autotuning.

pass_context = tvm.transform.PassContext(opt_level=3, config={"tir.disable_vectorize": True})
with pass_context:
    tasks = tvm.autotvm.task.extract_from_program(relay_mod["main"], {}, TARGET)
assert len(tasks) > 0
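
Optionally, you can inspect the extracted tasks. Each task has a name and a config_space describing the knobs the tuner will search over:

# Show each tunable task and the size of its search space.
for task in tasks:
    print(task.name, len(task.config_space))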

Configuring microTVM

Before autotuning, we need to define a module loader and pass it to a tvm.autotvm.LocalBuilder. Then we create a tvm.autotvm.LocalRunner and use both the builder and runner to generate multiple measurements for the autotuner.

In this tutorial, we have the option to use the x86 host as an example or to use different targets from Zephyr RTOS. If you pass --platform=host to this tutorial, it uses x86. You can choose other options from the PLATFORM list.

repo_root = pathlib.Path(
    subprocess.check_output(["git", "rev-parse", "--show-toplevel"], encoding="utf-8").strip()
)

module_loader = tvm.micro.AutoTvmModuleLoader(
    template_project_dir=repo_root / "src" / "runtime" / "crt" / "host",
    project_options={},
)
builder = tvm.autotvm.LocalBuilder(
    n_parallel=1,
    build_kwargs={"build_option": {"tir.disable_vectorize": True}},
    do_fork=True,
    build_func=tvm.micro.autotvm_build_func,
)
runner = tvm.autotvm.LocalRunner(number=1, repeat=1, timeout=100, module_loader=module_loader)

measure_option = tvm.autotvm.measure_option(builder=builder, runner=runner)

# Compiling for physical hardware
# --------------------------------------------------------------------------
#    module_loader = tvm.micro.AutoTvmModuleLoader(
#        template_project_dir=repo_root / "apps" / "microtvm" / "zephyr" / "template_project",
#        project_options={
#            "zephyr_board": BOARD,
#            "west_cmd": "west",
#            "verbose": 1,
#            "project_type": "host_driven",
#        },
#    )
#    builder = tvm.autotvm.LocalBuilder(
#        n_parallel=1,
#        build_kwargs={"build_option": {"tir.disable_vectorize": True}},
#        do_fork=False,
#        build_func=tvm.micro.autotvm_build_func,
#    )
#    runner = tvm.autotvm.LocalRunner(number=1, repeat=1, timeout=100, module_loader=module_loader)
#    measure_option = tvm.autotvm.measure_option(builder=builder, runner=runner)

Run Autotuning

Now we can run autotuning separately on each extracted task.

num_trials = 10
for task in tasks:
    tuner = tvm.autotvm.tuner.GATuner(task)
    tuner.tune(
        n_trial=num_trials,
        measure_option=measure_option,
        callbacks=[
            tvm.autotvm.callback.log_to_file("microtvm_autotune.log.txt"),
            tvm.autotvm.callback.progress_bar(num_trials, si_prefix="M"),
        ],
        si_prefix="M",
    )

Out:

Current/Best:    0.00/   0.00 MFLOPS | Progress: (0/10) | 0.00 s
Current/Best:  221.97/ 221.97 MFLOPS | Progress: (1/10) | 5.48 s
Current/Best:  112.44/ 221.97 MFLOPS | Progress: (2/10) | 8.18 s
Current/Best:  114.36/ 221.97 MFLOPS | Progress: (3/10) | 12.06 s
Current/Best:  260.71/ 260.71 MFLOPS | Progress: (4/10) | 15.88 s
Current/Best:  265.12/ 265.12 MFLOPS | Progress: (5/10) | 19.71 s
Current/Best:  158.71/ 265.12 MFLOPS | Progress: (6/10) | 22.32 s
Current/Best:  247.02/ 265.12 MFLOPS | Progress: (7/10) | 24.95 s
Current/Best:  113.80/ 265.12 MFLOPS | Progress: (8/10) | 27.56 s
Current/Best:  281.78/ 281.78 MFLOPS | Progress: (9/10) | 30.17 s
Current/Best:  220.41/ 281.78 MFLOPS | Progress: (10/10) | 32.78 s Done.
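
The log file accumulates one record per trial. If you plan to reuse it, you can optionally compact it to just the best record per task with tvm.autotvm.record.pick_best (the output filename here is only an example):

# Keep only the best record per task in a smaller log file.
tvm.autotvm.record.pick_best("microtvm_autotune.log.txt", "microtvm_autotune_best.log.txt")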

Timing the untuned program

For comparison, let's compile and run the graph without applying any autotuning schedules. TVM will select a fallback implementation for each operator, which should not perform as well as the tuned one.

with pass_context:
    lowered = tvm.relay.build(relay_mod, target=TARGET, params=params)

temp_dir = tvm.contrib.utils.tempdir()

project = tvm.micro.generate_project(
    str(repo_root / "src" / "runtime" / "crt" / "host"), lowered, temp_dir / "project"
)

# Compiling for physical hardware
# --------------------------------------------------------------------------
#    project = tvm.micro.generate_project(
#        str(repo_root / "apps" / "microtvm" / "zephyr" / "template_project"),
#        lowered,
#        temp_dir / "project",
#        {
#            "zephyr_board": BOARD,
#            "west_cmd": "west",
#            "verbose": 1,
#            "project_type": "host_driven",
#        },
#    )

project.build()
project.flash()
with tvm.micro.Session(project.transport()) as session:
    debug_module = tvm.micro.create_local_debug_executor(
        lowered.get_graph_json(), session.get_system_lib(), session.device
    )
    debug_module.set_input(**lowered.get_params())
    print("########## Build without Autotuning ##########")
    debug_module.run()
    del debug_module

Out:

########## Build without Autotuning ##########
Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs
---------                                     ---                                           --------  -------  -----              ------  -------
tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  295.7     98.681   (1, 2, 10, 10, 3)  2       1
tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         3.024     1.009    (1, 6, 10, 10)     1       1
tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       0.93      0.31     (1, 1, 10, 10, 3)  1       1
Total_time                                    -                                             299.653   -        -                  -       -

Timing the tuned program

Once autotuning completes, you can time execution of the entire program using the Debug Runtime:

with tvm.autotvm.apply_history_best("microtvm_autotune.log.txt"):
    with pass_context:
        lowered_tuned = tvm.relay.build(relay_mod, target=TARGET, params=params)

temp_dir = tvm.contrib.utils.tempdir()

project = tvm.micro.generate_project(
    str(repo_root / "src" / "runtime" / "crt" / "host"), lowered_tuned, temp_dir / "project"
)

# Compiling for physical hardware
# --------------------------------------------------------------------------
#    project = tvm.micro.generate_project(
#        str(repo_root / "apps" / "microtvm" / "zephyr" / "template_project"),
#        lowered_tuned,
#        temp_dir / "project",
#        {
#            "zephyr_board": BOARD,
#            "west_cmd": "west",
#            "verbose": 1,
#            "project_type": "host_driven",
#        },
#    )

project.build()
project.flash()
with tvm.micro.Session(project.transport()) as session:
    debug_module = tvm.micro.create_local_debug_executor(
        lowered_tuned.get_graph_json(), session.get_system_lib(), session.device
    )
    debug_module.set_input(**lowered_tuned.get_params())
    print("########## Build with Autotuning ##########")
    debug_module.run()
    del debug_module

Out:

########## Build with Autotuning ##########
Node Name                                     Ops                                           Time(us)  Time(%)  Shape              Inputs  Outputs
---------                                     ---                                           --------  -------  -----              ------  -------
tvmgen_default_fused_nn_contrib_conv2d_NCHWc  tvmgen_default_fused_nn_contrib_conv2d_NCHWc  219.0     98.818   (1, 1, 10, 10, 6)  2       1
tvmgen_default_fused_layout_transform         tvmgen_default_fused_layout_transform         1.802     0.813    (1, 6, 10, 10)     1       1
tvmgen_default_fused_layout_transform_1       tvmgen_default_fused_layout_transform_1       0.817     0.369    (1, 3, 10, 10, 1)  1       1
Total_time                                    -                                             221.619   -        -                  -       -
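
As a quick sanity check, we can compare the two totals reported above (values copied from the example output; your numbers will vary):

# Rough comparison of the two debug-executor totals printed above.
untuned_us, tuned_us = 299.653, 221.619
print(f"Speedup: {untuned_us / tuned_us:.2f}x")  # roughly 1.35x on this run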
