tvm.relay.backend

Backend codegen modules for relay.

The Python interface to the Relay reference interpreter.

class tvm.relay.backend.interpreter.ConstructorValue(tag, fields, constructor)
class tvm.relay.backend.interpreter.RefValue(value)
class tvm.relay.backend.interpreter.Executor

An abstract interface for executing Relay programs.

evaluate(expr=None, binds=None)

Evaluate a Relay expression on the executor.

Parameters
  • expr (Optional[tvm.relay.Expr]) – The expression to evaluate.

  • binds (Optional[Map[tvm.relay.Var, tvm.relay.Expr]]) – Additional bindings for free variables.

Returns

val – The evaluation result.

Return type

Union[function, Object]

class tvm.relay.backend.interpreter.Interpreter(mod, device, target)

Simple interpreter interface.

Parameters
  • mod (tvm.IRModule) – The module to support the execution.

  • device (Device) – The runtime device to run the code on.

  • target (tvm.Target) – The target option to build the function. Only homogeneous execution is supported.

CAUTION: Despite the API, the module is prepared upon each call to evaluate rather than once in create_executor. That is,

    executor = relay.create_executor(kind="debug", mod=module)
    a = executor.evaluate(expr)(args1)
    b = executor.evaluate(expr)(args2)

will prepare all the bindings in module twice. For efficiency, try to hoist calls to evaluate as high as possible, preferably immediately after create_executor:

    func = relay.create_executor(kind="debug", mod=module).evaluate(expr)
    a = func(args1)
    b = func(args2)

TE compiler engine (replacing legacy compile_engine).

class tvm.relay.backend.te_compiler.LoweredOutput(outputs, implement)

Lowered output

class tvm.relay.backend.te_compiler.CCacheKey(source_func, target)

Key in the TE Compiler.

Parameters
  • source_func (tvm.relay.Function) – The source function.

  • target (tvm.Target) – The target we want to run the function on.

class tvm.relay.backend.te_compiler.CCacheValue

Value in the TE Compiler, including usage statistics.

tvm.relay.backend.te_compiler.get_valid_implementations(op, attrs, inputs, out_type, target)

Get all valid implementations from the op strategy.

Note that this function does not support ops with symbolic input shapes.

Parameters
  • op (tvm.ir.Op) – Relay operator.

  • attrs (object) – The op attribute.

  • inputs (List[tvm.te.Tensor]) – Input tensors to the op.

  • out_type (relay.Type) – The output type.

  • target (tvm.target.Target) – The target to compile the op.

Returns

ret – The list of all valid op implementations.

Return type

List[relay.op.OpImplementation]

tvm.relay.backend.te_compiler.select_implementation(op, attrs, inputs, out_type, target, use_autotvm=True)

Select the best implementation from the op strategy.

If use_autotvm is True, it’ll first try to find the best implementation based on AutoTVM profile results. If no AutoTVM profile result is found, it’ll choose the implementation with highest plevel.

If use_autotvm is False, it’ll directly choose the implementation with highest plevel.

Note that this function does not support ops with symbolic input shapes.

Parameters
  • op (tvm.ir.Op) – Relay operator.

  • attrs (object) – The op attribute.

  • inputs (List[tvm.te.Tensor]) – Input tensors to the op.

  • out_type (relay.Type) – The output type.

  • target (tvm.target.Target) – The target to compile the op.

  • use_autotvm (bool) – Whether to query AutoTVM to pick the best implementation.

Returns

ret – The best op implementation and the corresponding output tensors.

Return type

tuple(relay.op.OpImplementation, List[tvm.te.Tensor])

tvm.relay.backend.te_compiler.get_shape(shape)

Convert the shape to correct dtype and vars.

class tvm.relay.backend.te_compiler.TECompiler

TECompiler to get lowered code.

lower(source_func, target=None, mod_name='default')

Lower a source_func to a CachedFunc.

Parameters
  • source_func (Union[tvm.relay.Function, CCacheKey]) – The source relay function.

  • target (tvm.Target) – The target platform.

Returns

cached_func – The result of lowering.

Return type

CachedFunc

jit(source_func, target=None)

JIT a source_func to a tvm.runtime.PackedFunc.

Parameters
  • source_func (Union[tvm.relay.Function, CCacheKey]) – The source relay function.

  • target (tvm.Target) – The target platform.

Returns

jited_func – The JIT-compiled function.

Return type

tvm.runtime.PackedFunc

clear()

Clear the existing cached functions.

items()

List items in the cache.

Returns

item_list – The list of items.

Return type

List[Tuple[CCacheKey, CCacheValue]]

tvm.relay.backend.te_compiler.get()

Get the global TE Compiler.

Returns

engine – The TE Compiler.

Return type

tvm.relay.backend.TECompiler

tvm.relay.backend.te_compiler.lower_to_primfunc(relay_func, target)

Lower Relay Function to TIR PrimFunc.

Parameters
  • relay_func (relay.Function) – The source primitive function, created by FuseOps.

  • target (Target) – The compilation target.

Returns

prim_func – The created prim func.

Return type

tir.PrimFunc

A compiler from a Relay expression to TVM’s graph executor.

The compiler is built from a few pieces.

First we define a compiler from a single Relay expression to the graph language. We require the expression to be a function. The function’s parameters correspond to the placeholder/inputs and model parameters found in the computation graph representation. The body of the function represents the computation graph.

The compiler’s output is a program in the graph language, which is composed of Node, NodeRef, InputNode, OpNode. This “little language” represents programs in TVM’s graph format.

To connect to the graph executor, we use a printer that converts our graph format into TVM’s JSON format. The resulting string can be loaded by contrib.graph_executor or any other TVM runtime compatible systems.

class tvm.relay.backend.graph_executor_codegen.GraphExecutorCodegen(mod, target)

The compiler from Relay to the TVM runtime system.

codegen(ir_module, func)

Compile a single function into a graph.

Parameters
  • ir_module (tvm.ir.Module) – The module to compile

  • func (tvm.relay.Expr) – The function to compile.

Returns

  • graph_json (str) – The graph json that can be consumed by runtime.

  • mod (IRModule or Dict[Target, IRModule]) – The lowered functions.

  • params (Dict[str, tvm.nd.NDArray]) – Additional constant parameters.

The Relay Virtual Machine.

Implements a Python interface to compiling and executing on the Relay VM.

tvm.relay.backend.vm.compile(mod, target=None, target_host=None, params=None)

Compile the module to a VM executable. A helper function for VMCompiler.

Parameters
  • mod (tvm.IRModule) – The Relay module to build.

  • target (any multi-target like object, see Target.canon_multi_target) – For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets.

  • target_host (None, or any target-like object, see Target.canon_target) – Host compilation target, if target is a device target. When TVM compiles device-specific programs such as CUDA, we also need host (CPU) side code to interact with the driver to set up the dimensions and parameters correctly. target_host is used to specify the host-side codegen target. By default, llvm is used if it is enabled; otherwise a stackvm interpreter is used.

  • params (dict of str to NDArray) – Input parameters to the graph that do not change during inference time. Used for constant folding.

Returns

exec – The VM executable that contains both library code and bytecode.

Return type

tvm.runtime.vm.Executable

class tvm.relay.backend.vm.VMCompiler

Compiler that compiles Relay module to VM executable.

set_params(params)

Set constant parameters for the model.

Parameters

params (dict of str to NDArray) – Input parameters to the graph that do not change during inference time. Used for constant folding.

get_params()

Return the updated weights.

lower(mod, target=None, target_host=None)

Lower the module to VM bytecode.

Parameters
  • mod (tvm.IRModule) – The Relay module to build.

  • target (any multi-target like object, see Target.canon_multi_target) – For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets.

  • target_host (any target-like object, see Target.canon_target) – Host compilation target, if target is a device target.

codegen()

Generate the kernel library.

optimize(mod, target=None, target_host=None, params=None)

Helper method that optimizes a Relay module via VM.

Parameters
  • mod (tvm.IRModule) –

  • target (any multi-target like object, see Target.canon_multi_target) – For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets.

  • target_host (any target-like object, see Target.canon_target) – Host compilation target, if target is a device target.

  • params (dict of str to NDArray) – Input parameters to the graph that do not change during inference time. Used for constant folding.

Returns

  • mod (tvm.IRModule) – The optimized relay module.

  • params (dict) – The parameters of the final module.

get_exec()

Get the VM executable.

Returns

exec – The VM executable that contains both library code and bytecode.

Return type

tvm.runtime.vm.Executable

class tvm.relay.backend.vm.VMExecutor(mod, device, target)

An implementation of the executor interface for the Relay VM.

A useful interface for experimentation and debugging; the VM can also be used directly through the API supported by tvm.runtime.vm.

Parameters
  • mod (IRModule) – The module to support the execution.

  • device (Device) – The runtime device to run the code on.

  • target (any multi-target like object, see Target.canon_multi_target) – For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets.