This document discusses some of the issues I faced while trying to deploy an ONNX object detection model on DeepStream, along with suggestions and solutions for some of them.
- In the blog, I mentioned that the ONNX model zoo has SSD and YOLOv3 models. However, I faced some issues while trying to use them:
  - The SSD model could not be converted to a TensorRT engine because of "view" layers in the original PyTorch model. You can refer to this issue for more details.
  - The opset version of the YOLOv3 model in the model zoo is 10, while DeepStream v4.0 only supports opset versions <= 9.
- To circumvent these issues, I used the Tiny YOLOv2 model instead. This model was compatible with DeepStream.
- The 5.1 branch of the onnx2trt repository must be used for building the library from source. You can clone the branch using the following command:

```shell
git clone --recursive --branch 5.1 https://github.com/onnx/onnx-tensorrt.git
```

- Now, you can build onnx2trt using the following commands. Note that the cmake command is broken into multiple lines for readability.
```shell
cd onnx-tensorrt
mkdir build
cd build
cmake .. \
    -DCMAKE_CUDA_COMPILER=/usr/local/cuda-10.0/bin/nvcc \
    -DCUDA_INCLUDE_DIRS=/usr/local/cuda-10.0/include \
    -DTENSORRT_ROOT=/usr/src/tensorrt \
    -DGPU_ARCHS="53"
make -j2
sudo make install
```

- Once built, you can run the following command to convert an ONNX model to a `.trt` file:
```shell
onnx2trt my_model.onnx -o my_engine.trt
```
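Once the engine is built, DeepStream's nvinfer element can load it directly via its config file. A sketch of the relevant `[property]` entries; the keys are standard nvinfer properties, but the paths and values below are placeholder assumptions for a Tiny YOLOv2 setup (which also needs a custom output parser, not shown here):

```ini
[property]
# Serialized TensorRT engine produced by onnx2trt (placeholder path)
model-engine-file=my_engine.trt
# Tiny YOLOv2 was trained on PASCAL VOC, which has 20 classes
num-detected-classes=20
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
gie-unique-id=1
```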