tvm.driver

Namespace for driver APIs

tvm.lower(inp: Schedule | PrimFunc | IRModule, args: List[Buffer | Tensor | Var] | None = None, name: str = 'main', binds: Mapping[Tensor, Buffer] | None = None, simple_mode: bool = False) → IRModule

Lowering step before building into a target.

Parameters:
  • inp (Union[tvm.te.schedule.Schedule, tvm.tir.PrimFunc, IRModule]) – The TE schedule or TensorIR PrimFunc/IRModule to be built

  • args (Optional[List[Union[tvm.tir.Buffer, tensor.Tensor, tir.Var]]]) –

    The argument lists to the function for TE schedule.

    It should be None if we want to lower TensorIR.

  • name (str) – The name of the result function.

  • binds (Optional[Mapping[tensor.Tensor, tvm.tir.Buffer]]) – Dictionary that maps each Tensor to the Buffer it is bound to, specifying the data layout requirement of the function. By default, a new compact buffer is created for each tensor in the argument.

  • simple_mode (bool) – Whether to output only a simple and compact statement; this skips LoopPartition, API wrapper generation, and unrolling.

Returns:

m – The result IRModule

Return type:

IRModule

tvm.build(inputs: Schedule | PrimFunc | IRModule | Mapping[str, IRModule], args: List[Buffer | Tensor | Var] | None = None, target: str | Target | None = None, target_host: str | Target | None = None, runtime: Runtime | None = None, name: str | None = 'default_function', binds: Mapping[Tensor, Buffer] | None = None)

Build a function with arguments as signature. Code will be generated for devices coupled with target information.

Parameters:
  • inputs (Union[tvm.te.schedule.Schedule, tvm.tir.PrimFunc, IRModule, Mapping[str, IRModule]]) – The input to be built

  • args (Optional[List[Union[tvm.tir.Buffer, tensor.Tensor, tir.Var]]]) – The argument lists to the function.

  • target (Optional[Union[str, Target]]) – The target and option of the compilation.

  • target_host (Optional[Union[str, Target]]) – Host compilation target, if target is a device target. When TVM compiles a device-specific program such as CUDA, we also need host (CPU) side code to interact with the driver and to set up the dimensions and parameters correctly. target_host is used to specify the host-side codegen target. By default, llvm is used if it is enabled; otherwise a StackVM interpreter is used.

  • runtime (Optional[Runtime]) – Runtime to generate artifacts for

  • name (Optional[str]) – The name of result function.

  • binds (Optional[Mapping[tensor.Tensor, tvm.tir.Buffer]]) – Dictionary that maps each Tensor to the symbolic Buffer it is bound to. By default, a new buffer is created for each tensor in the argument.

Returns:

ret – A module that combines both host and device code.

Return type:

tvm.module

Examples

There are two typical uses of this function, depending on the type of the argument inputs:

  1. inputs is an IRModule.

import tvm
from tvm import te

n = 2
A = te.placeholder((n,), name='A')
B = te.placeholder((n,), name='B')
C = te.compute(A.shape, lambda *i: A(*i) + B(*i), name='C')
s = tvm.te.create_schedule(C.op)
m = tvm.lower(s, [A, B, C], name="test_add")
rt_mod = tvm.build(m, target="llvm")
  2. inputs is a dict mapping compilation targets to IRModules.

import tvm
from tvm import te, topi

n = 2
A = te.placeholder((n,), name='A')
B = te.placeholder((n,), name='B')
C = te.compute(A.shape, lambda *i: A(*i) + B(*i), name='C')
s1 = tvm.te.create_schedule(C.op)
with tvm.target.cuda() as cuda_tgt:
  s2 = topi.cuda.schedule_injective(cuda_tgt, [C])
  m1 = tvm.lower(s1, [A, B, C], name="test_add1")
  m2 = tvm.lower(s2, [A, B, C], name="test_add2")
  rt_mod = tvm.build({"llvm": m1, "cuda": m2})

Note

See the note on tvm.target for the target string format.