
Using Custom Vision ONNX on NVIDIA (TensorRT/DeepStream) #19

@amarrmb

Description


I am experimenting with getting a Custom Vision model (exported as ONNX) to run on an NVIDIA device using the DeepStream SDK (which uses a TensorRT engine to accelerate ONNX).

  1. I was able to follow the steps in this repository, train a model using CustomVision.AI, and run it on an NVIDIA device.
    This works great for an object-detection model.

  2. When I use a classification model trained with Custom Vision and exported as ONNX, I get several warnings during conversion:
    [W1]:Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
    [W2]:Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.

  3. Even though the TensorRT engine gets generated, it does not work: I get incorrect results (possibly due to the weights being cast down).
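One thing worth checking: the INT64-to-INT32 downcast in warning [W1] is only lossy when a weight or shape value actually falls outside the INT32 range; for typical shape/index tensors it is harmless. A minimal sketch of that range check in pure NumPy (in a real workflow you would run the same test over each tensor in the ONNX model's `graph.initializer` list using the `onnx` package; the tensors below are made up for illustration):

```python
import numpy as np

INT32_MIN = np.iinfo(np.int32).min
INT32_MAX = np.iinfo(np.int32).max

def downcast_is_safe(tensor: np.ndarray) -> bool:
    """Return True if every value survives an INT64 -> INT32 cast unchanged."""
    if tensor.dtype != np.int64:
        return True  # nothing to downcast
    return bool(((tensor >= INT32_MIN) & (tensor <= INT32_MAX)).all())

# Typical shape tensors are small and cast safely...
safe = np.array([1, 3, 224, 224], dtype=np.int64)
# ...but a value outside the INT32 range would be corrupted by the cast.
unsafe = np.array([2**40], dtype=np.int64)

print(downcast_is_safe(safe))    # True
print(downcast_is_safe(unsafe))  # False
```

If every initializer passes this check, the downcast warning is unlikely to be the cause of the wrong results, and the problem more likely lies in preprocessing (input normalization, channel order) or output interpretation.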

So I wanted to check: has Microsoft tried this setup? Is there any information on how Custom Vision uses ONNX internally?
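A way to pin down where things go wrong: run the same preprocessed input through the original ONNX model with ONNX Runtime and through the TensorRT engine, then compare the class scores. A small, hedged comparison helper (pure NumPy; in practice the two arrays would come from `onnxruntime.InferenceSession.run` and from the TensorRT/DeepStream output buffer, and the score values below are hypothetical):

```python
import numpy as np

def compare_scores(ref, trt):
    """Compare two score vectors; return (top1_match, max_abs_diff)."""
    ref = np.asarray(ref, dtype=np.float64).ravel()
    trt = np.asarray(trt, dtype=np.float64).ravel()
    top1_match = int(ref.argmax()) == int(trt.argmax())
    return top1_match, float(np.abs(ref - trt).max())

# Hypothetical scores for a 4-class classifier:
onnx_scores = [0.05, 0.80, 0.10, 0.05]  # reference (ONNX Runtime)
trt_scores  = [0.06, 0.78, 0.11, 0.05]  # TensorRT engine output

match, diff = compare_scores(onnx_scores, trt_scores)
print(match, diff)  # a correct engine should agree on top-1 with a small diff
```

If the top-1 class disagrees (or the max difference is large) on many inputs, the conversion itself is suspect; if the two agree, the bug is more likely in the DeepStream preprocessing or postprocessing configuration.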
