Cross Compilation and RPC¶
This tutorial introduces cross compilation and remote device execution with RPC in TVM.
With cross compilation and RPC, you can compile a program on your local machine and then run it on the remote device. This is useful when the remote device's resources are limited, as on a Raspberry Pi or a mobile platform. In this tutorial, we will use the Raspberry Pi for a CPU example and the Firefly-RK3399 for an OpenCL example.
Build TVM Runtime on Device¶
The first step is to build the TVM runtime on the remote device.
All instructions in both this section and the next section should be executed on the target device, e.g. Raspberry Pi. We assume the target is running Linux.
Since we do compilation on the local machine, the remote device is only used for running the generated code. We only need to build the TVM runtime on the remote device.
```shell
git clone --recursive https://github.com/apache/incubator-tvm tvm
cd tvm
make runtime -j2
```
After building the runtime successfully, we need to set environment variables in the ~/.bashrc file. We can edit the file with vi ~/.bashrc and add a line that appends the TVM Python package to PYTHONPATH (assuming your TVM directory is in your home directory). To update the environment variables, execute source ~/.bashrc.
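As a sketch of that line, assuming TVM was cloned to ~/tvm (adjust the path to match your checkout):

```shell
# Make the TVM Python package importable on the device.
# The path is an assumption: change ~/tvm to wherever you cloned TVM.
export PYTHONPATH=$PYTHONPATH:~/tvm/python
```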
Set Up RPC Server on Device¶
To start an RPC server, run the following command on your remote device (which is the Raspberry Pi in this example).
```shell
python -m tvm.exec.rpc_server --host 0.0.0.0 --port=9090
```
If you see the line below, it means the RPC server started successfully on your device.
```
INFO:root:RPCServer: bind to 0.0.0.0:9090
```
Declare and Cross Compile Kernel on Local Machine¶
Now we go back to the local machine, which has a full TVM installed (with LLVM).
Here we will declare a simple kernel on the local machine:
Then we cross compile the kernel. The target should be 'llvm -mtriple=armv7l-linux-gnueabihf' for the Raspberry Pi 3B, but we use 'llvm' here so the tutorial can run on our webpage building server. See the detailed note in the following block.
To run this tutorial with a real remote device, change local_demo to False and replace the target in build with the appropriate target triple for your device. The target triple might differ from device to device. For example, it is 'llvm -mtriple=armv7l-linux-gnueabihf' for the Raspberry Pi 3B and 'llvm -mtriple=aarch64-linux-gnu' for the RK3399.

Usually, you can query the target triple by running gcc -v on your device and looking for the line starting with Target: (though it may still be a loose configuration).
Besides -mtriple, you can also set other compilation options, such as:

-mcpu=<cpuname>
  Specify a specific chip in the current architecture to generate code for. By default this is inferred from the target triple and autodetected to the current architecture.

-mattr=a1,+a2,-a3,...
  Override or control specific attributes of the target, such as whether SIMD operations are enabled or not. The default set of attributes is set by the current CPU. To get the list of available attributes, you can run:

```shell
llc -mtriple=<your device target triple> -mattr=help
```

These options are consistent with llc. It is recommended to set the target triple and feature set to contain the specific features available, so we can take full advantage of the features of the board. You can find more details about cross compilation attributes in the LLVM guide on cross compilation.
Run CPU Kernel Remotely by RPC¶
We show how to run the generated CPU kernel on the remote device. First, we obtain an RPC session from the remote device.

Upload the lib to the remote device, then invoke a device-local compiler to relink it. Now func is a remote module object.
```python
remote.upload(path)
func = remote.load_module("lib.tar")

# create arrays on the remote device
ctx = remote.cpu()
a = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), ctx)
b = tvm.nd.array(np.zeros(1024, dtype=A.dtype), ctx)

# the function will run on the remote device
func(a, b)
np.testing.assert_equal(b.asnumpy(), a.asnumpy() + 1)
```
When you want to evaluate the performance of the kernel on the remote device, it is important to avoid the overhead of the network. time_evaluator will return a remote function that runs the function number times, measures the cost per run on the remote device, and returns the measured cost, excluding network overhead.
```python
time_f = func.time_evaluator(func.entry_name, ctx, number=10)
cost = time_f(a, b).mean
print("%g secs/op" % cost)
```
Run OpenCL Kernel Remotely by RPC¶
For remote OpenCL devices, the workflow is almost the same as above. You can define the kernel, upload files, and run via RPC.
The Raspberry Pi does not support OpenCL; the following code is tested on a Firefly-RK3399. You may follow this tutorial to set up the OS and OpenCL driver for the RK3399.
We also need to build the runtime with OpenCL enabled on the RK3399 board. In the TVM root directory, execute:

```shell
cp cmake/config.cmake .
sed -i "s/USE_OPENCL OFF/USE_OPENCL ON/" config.cmake
make runtime -j4
```
The following function shows how we run an OpenCL kernel remotely.
```python
def run_opencl():
    # NOTE: This is the setting for my rk3399 board. You need to modify
    # them according to your environment.
    target_host = "llvm -mtriple=aarch64-linux-gnu"
    opencl_device_host = "10.77.1.145"
    opencl_device_port = 9090

    # create schedule for the above "add one" compute declaration
    s = te.create_schedule(B.op)
    xo, xi = s[B].split(B.op.axis[0], factor=32)
    s[B].bind(xo, te.thread_axis("blockIdx.x"))
    s[B].bind(xi, te.thread_axis("threadIdx.x"))
    func = tvm.build(s, [A, B], "opencl", target_host=target_host)

    remote = rpc.connect(opencl_device_host, opencl_device_port)

    # export and upload
    path = temp.relpath("lib_cl.tar")
    func.export_library(path)
    remote.upload(path)
    func = remote.load_module("lib_cl.tar")

    # run
    ctx = remote.cl()
    a = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), ctx)
    b = tvm.nd.array(np.zeros(1024, dtype=A.dtype), ctx)
    func(a, b)
    np.testing.assert_equal(b.asnumpy(), a.asnumpy() + 1)
    print("OpenCL test passed!")
```
This tutorial provides a walkthrough of cross compilation and RPC features in TVM.

- Set up an RPC server on the remote device.
- Set up the target device configuration to cross compile the kernels on the local machine.
- Upload and run the kernels remotely via the RPC API.