Deploy Deep Learning Models
TVM is capable of deploying models to a variety of different platforms. These how-tos describe how to prepare and deploy models to many of the supported backends.
Deploy the Pretrained Model on Adreno™
Deploy the Pretrained Model on Adreno™ with tvmc Interface
Deploy the Pretrained Model on Android
Deploy the Pretrained Model on Jetson Nano
Deploy the Pretrained Model on Raspberry Pi
Compile PyTorch Object Detection Models
Deploy a Framework-prequantized Model with TVM
Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)
Deploy a Quantized Model on CUDA
Deploy a Hugging Face Pruned Model on CPU
Deploy Single Shot Multibox Detector (SSD) Model
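All of the how-tos above build on the same basic flow: import a model into Relay, compile it for a target, and run the compiled module with a graph executor. The sketch below illustrates that flow using TVM's Python Relay and graph_executor APIs; the tiny hand-built Relay module stands in for a model imported from a framework frontend, and the "llvm" CPU target would be replaced by the backend-specific targets (CUDA, OpenCL/Adreno, cross-compiled ARM, and so on) covered in the individual how-tos.

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# A tiny Relay module (a single dense layer) standing in for a model
# imported via relay.frontend.from_pytorch / from_onnx / from_tflite, etc.
data = relay.var("data", shape=(1, 16), dtype="float32")
weight = relay.var("weight", shape=(8, 16), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([data, weight], relay.nn.dense(data, weight)))
params = {"weight": np.random.rand(8, 16).astype("float32")}

# Compile for a local CPU target; the per-backend how-tos swap in other
# targets such as CUDA or OpenCL, or cross-compile for a remote device.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled module with the graph executor.
dev = tvm.device(target, 0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("data", np.random.rand(1, 16).astype("float32"))
module.run()
print(module.get_output(0).numpy().shape)  # (1, 8)
```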