Making your Hardware Accelerator TVM-ready with UMA

Authors: Michael J. Klaiber, Christoph Gerum, Paul Palomero Bernardo

This is an introductory tutorial to the Universal Modular Accelerator Interface (UMA). UMA provides an easy-to-use API to integrate new hardware accelerators into TVM.

This tutorial gives you step-by-step guidance on how to use UMA to make your hardware accelerator TVM-ready. While there is no one-size-fits-all solution for this problem, UMA aims to provide a stable and Python-only API to integrate a number of hardware accelerator classes into TVM.

In this tutorial you will get to know the UMA API in three use cases of increasing complexity. In these use cases the three mock accelerators Vanilla, Strawberry and Chocolate are introduced and integrated into TVM using UMA.


Vanilla is a simple accelerator consisting of a MAC array and has no internal memory. It can ONLY process Conv2D layers; all other layers are executed on a CPU, which also orchestrates Vanilla. Both the CPU and Vanilla use a shared memory.

A block diagram of Vanilla

Vanilla has a C interface vanilla_conv2dnchw(...) for carrying out a Conv2D operation (including same-padding). It accepts pointers to the input feature map, weights and result, as well as the dimensions of the Conv2D: oc, iw, ih, ic, kh, kw.

int vanilla_conv2dnchw(float* ifmap, float*  weights, float*  result, int oc, int iw, int ih, int ic, int kh, int kw);
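To make the interface semantics concrete, here is a minimal Python reference model of the same computation (NCHW layout, unit stride, zero same-padding). This is an illustrative sketch only, not the C kernel shipped with the tutorial; it returns the result instead of writing through a pointer:

```python
def conv2dnchw_ref(ifmap, weights, oc, iw, ih, ic, kh, kw):
    """Reference Conv2D, NCHW layout, stride 1, zero same-padding.

    ifmap:   flat list of length ic*ih*iw (input feature map)
    weights: flat list of length oc*ic*kh*kw
    returns: flat list of length oc*ih*iw (one output map per out-channel)
    """
    ph, pw = kh // 2, kw // 2  # padding that keeps the spatial size ('same')
    result = [0.0] * (oc * ih * iw)
    for o in range(oc):
        for y in range(ih):
            for x in range(iw):
                acc = 0.0
                for c in range(ic):
                    for ky in range(kh):
                        for kx in range(kw):
                            sy, sx = y + ky - ph, x + kx - pw
                            if 0 <= sy < ih and 0 <= sx < iw:  # zero-pad border
                                acc += (
                                    ifmap[c * ih * iw + sy * iw + sx]
                                    * weights[((o * ic + c) * kh + ky) * kw + kx]
                                )
                result[o * ih * iw + y * iw + x] = acc
    return result
```

The parameter order mirrors the C signature above (minus the result pointer), so the model can serve as a golden reference when testing an accelerator implementation.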

The script uma_cli creates code skeletons with API calls into the UMA API for new accelerators.

For Vanilla we use it as follows (--tutorial vanilla adds all the additional files required for this part of the tutorial):

pip install inflection
cd $TVM_HOME/apps/uma
python uma_cli.py --add_hardware vanilla_accelerator --tutorial vanilla

This generates the files in the directory vanilla_accelerator, which we are going to revisit.

Vanilla backend

The generated backend for Vanilla is found in vanilla_accelerator/backend.py:

class VanillaAcceleratorBackend(UMABackend):
    """UMA backend for VanillaAccelerator."""

    def __init__(self):
        super().__init__()

        self._register_pattern("conv2d", conv2d_pattern())
        self._register_tir_pass(PassPhase.TIR_PHASE_0, VanillaAcceleratorConv2DPass())
        self._register_codegen(fmt="c", includes=gen_includes)

    def target_name(self):
        return "vanilla_accelerator"

Define offloaded patterns

To specify that Conv2D is offloaded to Vanilla, it is described as a Relay dataflow pattern (DFPattern) in vanilla_accelerator/patterns.py:

def conv2d_pattern():
    pattern = is_op("nn.conv2d")(wildcard(), wildcard())
    pattern = pattern.has_attr({"strides": [1, 1]})
    return pattern
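To illustrate what this pattern filters, here is a plain-Python stand-in for the matching logic (not the TVM DFPattern API; the function name and the attrs dict are made up for this sketch). A node is offloaded only if it is an nn.conv2d with unit strides, matching Vanilla's constraint:

```python
def conv2d_matches(op_name, attrs):
    """Hypothetical stand-in for conv2d_pattern(): True iff the node is an
    nn.conv2d whose strides attribute is fixed to [1, 1]."""
    return op_name == "nn.conv2d" and attrs.get("strides") == [1, 1]
```

In the real flow, TVM's pattern matcher applies the DFPattern to the Relay graph and partitions every matching subgraph into a function tagged with the target name, so only those subgraphs reach Vanilla's codegen.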

To map Conv2D operations from the input graph to Vanilla’s low-level function call vanilla_conv2dnchw(...), the TIR pass VanillaAcceleratorConv2DPass (which will be discussed later in this tutorial) is registered in VanillaAcceleratorBackend.


The file vanilla_accelerator/codegen.py defines static C code that is added to the C code generated by TVM's C codegen, via gen_includes. Here, C code is added to include Vanilla's low-level library ``vanilla_conv2dnchw()``.

def gen_includes() -> str:
    topdir = pathlib.Path(__file__).parent.absolute()

    includes = ""
    includes += f'#include "{topdir}/conv2dnchw.cc"'
    return includes
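gen_includes simply returns a string that the UMA codegen prepends to the generated C file. If an accelerator library is split across several headers, the same hook can emit one include line per file; a sketch (the header names here are hypothetical, and topdir is passed in to keep the example self-contained):

```python
import pathlib


def gen_includes_multi(topdir: pathlib.Path) -> str:
    """Variant of gen_includes that emits one #include line per header."""
    headers = ["conv2dnchw.cc", "accelerator_setup.h"]  # hypothetical names
    return "\n".join(f'#include "{topdir}/{h}"' for h in headers)
```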

As shown above in VanillaAcceleratorBackend, gen_includes is registered with UMA via self._register_codegen:

self._register_codegen(fmt="c", includes=gen_includes)

Building the Neural Network and running it on Vanilla

To demonstrate UMA’s functionality, we will generate C code for a single Conv2D layer and run it on the Vanilla accelerator. The file vanilla_accelerator/run.py provides a demo running a Conv2D layer making use of Vanilla’s C-API.

def main():
    mod, inputs, output_list, runner = create_conv2d()

    uma_backend = VanillaAcceleratorBackend()
    uma_backend.register()
    mod = uma_backend.partition(mod)
    target = tvm.target.Target("vanilla_accelerator", host=tvm.target.Target("c"))

    export_directory = tvm.contrib.utils.tempdir(keep_for_debug=True).path
    print(f"Generated files are in {export_directory}")
    compile_and_run(
        AOTModel(module=mod, inputs=inputs, outputs=output_list),
        runner,
        interface_api="c",
        use_unpacked_api=True,
        target=target,
        test_dir=str(export_directory),
    )


main()


By running vanilla_accelerator/run.py, the output files are generated in the Model Library Format (MLF).


Generated files are in /tmp/tvm-debug-mode-tempdirs/2022-07-13T13-26-22___x5u76h0p/00000

Let’s examine the generated files:


cd /tmp/tvm-debug-mode-tempdirs/2022-07-13T13-26-22___x5u76h0p/00000
cd build/
ls -1


To evaluate the generated C code, go to codegen/host/src/default_lib2.c:

cd codegen/host/src/
ls -1


In default_lib2.c you can now see that the generated code calls into Vanilla’s C-API and executes a Conv2D layer:

TVM_DLL int32_t tvmgen_default_vanilla_accelerator_main_0(float* placeholder, float* placeholder1, float* conv2d_nchw, uint8_t* global_workspace_1_var) {
     vanilla_accelerator_conv2dnchw(placeholder, placeholder1, conv2d_nchw, 32, 14, 14, 32, 3, 3);
     return 0;
}


Strawberry

Coming soon …


Chocolate

Coming soon …

Request for Community Input

If this tutorial does not fit your accelerator, please add your requirements to the UMA thread in the TVM discuss forum: Link. We are eager to extend this tutorial to provide guidance on making further classes of AI hardware accelerators TVM-ready using the UMA interface.