tvm.contrib.graph_executor

Minimal graph executor that executes graphs containing TVM PackedFunc.

tvm.contrib.graph_executor.create(graph_json_str, libmod, device)

Create a runtime executor module given a graph and module.

Parameters
  • graph_json_str (str) – The graph to be deployed, in the JSON format output by the graph compiler. The graph can contain operator nodes (tvm_op) that refer to the names of PackedFuncs in libmod.

  • libmod (tvm.runtime.Module) – The module containing the corresponding compiled functions.

  • device (Device or list of Device) – The device(s) on which to deploy the module. A single Device can be local or remote. When a list is given, the first device is used as the primary device, and all devices must be supplied for heterogeneous execution.

Returns

graph_module – Runtime graph module that can be used to execute the graph.

Return type

GraphModule

Note

See also tvm.contrib.graph_executor.GraphModule for examples of directly constructing a GraphModule from an exported Relay-compiled library.
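For instance, a minimal sketch of calling create directly, assuming graph_json_str, lib, and params_bytes were produced by an earlier build step and loaded from disk (the variable names are illustrative):

import tvm
from tvm.contrib import graph_executor

# graph_json_str, lib, and params_bytes are assumed to come from a
# prior build step, e.g. loaded from disk next to the compiled library
dev = tvm.cpu(0)
gmod = graph_executor.create(graph_json_str, lib, dev)
gmod.load_params(params_bytes)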

tvm.contrib.graph_executor.get_device(libmod, device)

Parse and validate all the device(s).

Parameters
  • libmod (tvm.runtime.Module) – The compiled module to deploy.

  • device (Device or list of Device) – The device(s) to parse and validate.

Returns

  • device (list of Device) – The validated devices.

  • num_rpc_dev (int) – The number of RPC devices.

  • device_type_id (list of int) – The device type and device id pairs, flattened.
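This helper is used internally by create; a rough sketch of a direct call, assuming lib is a compiled tvm.runtime.Module, unpacking the three return values listed above:

import tvm
from tvm.contrib import graph_executor

# assumes `lib` is a compiled tvm.runtime.Module
devices, num_rpc_dev, device_type_id = graph_executor.get_device(
    lib, [tvm.cpu(0), tvm.cuda(0)]
)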

class tvm.contrib.graph_executor.GraphModule(module)

Wrapper runtime module.

This is a thin wrapper around the underlying TVM module. You can also directly call set_input, run, and get_output on the underlying module's functions.

Parameters

module (tvm.runtime.Module) – The internal tvm module that holds the actual graph functions.

module

The internal tvm module that holds the actual graph functions.

Type

tvm.runtime.Module

Examples

import tvm
from tvm import relay
from tvm.contrib import graph_executor

# build the library using the graph executor
lib = relay.build(...)
lib.export_library("compiled_lib.so")
# load it back as a runtime module
lib: tvm.runtime.Module = tvm.runtime.load_module("compiled_lib.so")
# call the library factory function for the default module and create
# a new runtime.Module, then wrap it in a GraphModule
dev = tvm.cpu(0)  # the device to run on, e.g. tvm.cuda(0) for a GPU
gmod = graph_executor.GraphModule(lib["default"](dev))
# use the graph module; `data` is an input array matching the
# model's expected shape and dtype
gmod.set_input("x", data)
gmod.run()
set_input(key=None, value=None, **params)

Set inputs to the module via kwargs

Parameters
  • key (int or str) – The input key

  • value – The input value.

  • params (dict of str to NDArray) – Additional arguments
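For example, a sketch with illustrative input and parameter names, assuming gmod is a GraphModule:

import numpy as np

# set a single input by key (name and shape are illustrative)
gmod.set_input("x", np.random.rand(1, 3, 224, 224).astype("float32"))
# set several named parameters at once; w1 and w2 stand in for
# NDArray or numpy parameter values
gmod.set_input(**{"w1": w1, "w2": w2})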

run(**input_dict)

Run forward execution of the graph

Parameters

input_dict (dict of str to NDArray) – Dict of input values to be fed to the graph.
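Inputs can also be passed directly to run, which sets them before executing (the input name x is illustrative):

gmod.run(x=data)  # equivalent to gmod.set_input(x=data) followed by gmod.run()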

get_num_outputs()

Get the number of outputs from the graph

Returns

count – The number of outputs.

Return type

int

get_num_inputs()

Get the number of inputs to the graph

Returns

count – The number of inputs.

Return type

int
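A typical use of these counts is to collect all outputs after a run, as in this sketch:

gmod.run()
outputs = [gmod.get_output(i) for i in range(gmod.get_num_outputs())]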

get_input(index, out=None)

Get the index-th input, optionally copying it into out.

Parameters
  • index (int) – The input index

  • out (NDArray) – The output array container

get_input_index(name)

Get the input index given an input name.

Parameters

name (str) – The input key name

Returns

index – The input index. -1 will be returned if the given input name is not found.

Return type

int
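For example (the input name x is illustrative):

idx = gmod.get_input_index("x")
if idx != -1:
    x_value = gmod.get_input(idx)  # read back the currently set input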

get_input_info()

Return the ‘shape’ and ‘dtype’ dictionaries of the graph.

Note

We can’t simply get the input tensors from a TVM graph because weight tensors are treated equivalently. Therefore, to find the input tensors we look at the ‘arg_nodes’ in the graph (which are either weights or inputs) and check which ones don’t appear in the params (where the weights are stored). These nodes are therefore inferred to be input tensors.

Returns

  • shape_dict (Map) – Shape dictionary - {input_name: tuple}.

  • dtype_dict (Map) – dtype dictionary - {input_name: dtype}.
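A sketch of inspecting the inputs of a loaded model:

shape_dict, dtype_dict = gmod.get_input_info()
for name, shape in shape_dict.items():
    print(name, shape, dtype_dict[name])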

get_output(index, out=None)

Get the index-th output, optionally copying it into out.

Parameters
  • index (int) – The output index

  • out (NDArray) – The output array container
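For example, after running the graph:

out = gmod.get_output(0)  # the first output as an NDArray
out_np = out.numpy()      # convert to numpy (asnumpy() in older TVM versions)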

debug_get_output(node, out)

Run the graph up to the given node and copy that node's output into out.

Parameters
  • node (int / str) – The node index or name

  • out (NDArray) – The output array container

load_params(params_bytes)

Load parameters from serialized byte array of parameter dict.

Parameters

params_bytes (bytearray) – The serialized parameter dict.
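The byte array is typically produced with relay.save_param_dict; a sketch, assuming params is the parameter dict from compilation:

from tvm import relay

params_bytes = relay.save_param_dict(params)
gmod.load_params(params_bytes)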

share_params(other, params_bytes)

Share parameters from a pre-existing GraphExecutor instance.

Parameters
  • other (GraphExecutor) – The parent GraphExecutor from which this instance should share its parameters.

  • params_bytes (bytearray) – The serialized parameter dict (used only for the parameter names).
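A sketch of sharing weights between two executors built from the same model (gmod_a and gmod_b are illustrative names):

# gmod_a already holds the parameters; gmod_b reuses them instead of
# keeping a second copy on the device
gmod_b.share_params(gmod_a, params_bytes)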

benchmark(device, func_name='run', repeat=5, number=5, min_repeat_ms=None, end_to_end=False, **kwargs)

Calculate runtime of a function by repeatedly calling it.

Use this function to get an accurate measurement of the runtime of a function. The function is run multiple times in order to account for variability in measurements, processor speed, and other external factors. Mean, median, standard deviation, min, and max runtime are all reported. On GPUs (CUDA and ROCm specifically), special on-device timers are used so that synchronization and data transfer operations are not counted towards the runtime. This allows for fair comparison of runtimes across different functions and models. The end_to_end flag switches this behavior to include data transfer operations in the runtime.

The benchmarking loop looks approximately like so:

for r in range(repeat):
    time_start = now()
    for n in range(number):
        func_name()
    time_end = now()
    total_times.append((time_end - time_start)/number)
Parameters
  • device (Device) – The device on which to run the benchmarked function.

  • func_name (str) – The function to benchmark. This is ignored if end_to_end is true.

  • repeat (int) – Number of times to run the outer loop of the timing code (see above). The output will contain repeat number of datapoints.

  • number (int) – Number of times to run the inner loop of the timing code. This inner loop is run in between the timer starting and stopping. In order to amortize any timing overhead, number should be increased when the runtime of the function is small (less than 1/10 of a millisecond).

  • min_repeat_ms (Optional[float]) – If set, the inner loop will be run until it takes longer than min_repeat_ms milliseconds. This can be used to ensure that the function is run enough to get an accurate measurement.

  • end_to_end (bool) – If set, include the time to transfer input tensors to the device and the time to transfer returned tensors back in the total runtime. This gives accurate timings for end-to-end workloads.

  • kwargs (Dict[str, Object]) – Named arguments to the function. These are cached before running timing code, so that data transfer costs are not counted in the runtime.

Returns

timing_results – Runtimes of the function. Use .mean to access the mean runtime, use .results to access the individual runtimes (in seconds).

Return type

BenchmarkResult
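For example:

import tvm

result = gmod.benchmark(tvm.cpu(0), repeat=10, number=10)
print(result)       # summary statistics: mean, median, std, min, max
print(result.mean)  # mean runtime in seconds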