Training PointPillars and creating rpn.onnx and pfe.onnx files

Hi there,
I want to train a PointPillars model and use the exported ONNX models in the package developed by Autoware, but when I train a model, the output is some .tckpt files. How can I generate the pfe.onnx and rpn.onnx files to load them in the PointPillars node?

Please go to the PointPillars repository (https://github.com/nutonomy/second.pytorch) to get information on how to train the model. Once you have your new model, you can use Nvidia’s TensorRT model optimizer (https://developer.nvidia.com/tensorrt).
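In case it helps, below is a minimal sketch of feeding an ONNX file to TensorRT from Python. It assumes an older TensorRT release with the implicit-batch builder API (the interface has changed across versions), and pfe.onnx is just an example file name; this is not the exact conversion script used for the Autoware pretrained models.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, max_batch_size=1, workspace=1 << 30):
    """Parse an ONNX file and build a TensorRT engine (implicit-batch API)."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()            # TensorRT 5/6 style network
    parser = trt.OnnxParser(network, TRT_LOGGER)

    builder.max_batch_size = max_batch_size
    builder.max_workspace_size = workspace

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # Print parser errors (unsupported ops, bad axes, ...) and bail out.
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None
    return builder.build_cuda_engine(network)

engine = build_engine("pfe.onnx")
if engine is not None:
    with open("pfe.trt", "wb") as f:
        f.write(engine.serialize())
```

Note that the Autoware PointPillars node expects the ONNX files themselves and builds its own engines, so the TensorRT step above is only needed if you want to verify the conversion outside Autoware.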


Hi,

I realized it’s not as straightforward as you described. I trained a PointPillars model from https://github.com/traveller59/second.pytorch and managed to export PFE and RPN ONNX models after modifying the original Python implementation to fit the C++ code from Autoware. I had to inspect the models from https://github.com/k0suke-murakami/kitti_pretrained_point_pillars to get the right input/output format.
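For reference, the export step ends up looking roughly like the sketch below. Everything in it is an assumption on my side: the build_network helper and the voxel_feature_extractor / rpn attribute names come from second.pytorch, the shapes are example KITTI-style values, and in practice the PFE forward() has to be rewritten so that it accepts the tensors the Autoware C++ node actually binds.

```python
# Rough sketch only -- helper names, attribute names and all shapes are
# assumptions and have to match your config and the Autoware C++ node.
import torch
from google.protobuf import text_format
from second.protos import pipeline_pb2
from second.pytorch.train import build_network   # assumed helper from second.pytorch

config = pipeline_pb2.TrainEvalPipelineConfig()
with open("pointpillars.config") as f:
    text_format.Merge(f.read(), config)

net = build_network(config.model.second).cuda().eval()
net.load_state_dict(torch.load("voxelnet-XXXX.tckpt"))   # your trained checkpoint

# RPN: takes the scattered pseudo-image, (batch, channels, H, W).
# 64 channels and a 496x432 grid are example values from a KITTI config.
rpn_input = torch.randn(1, 64, 496, 432).cuda()
torch.onnx.export(net.rpn, rpn_input, "rpn.onnx")

# PFE: the stock PillarFeatureNet.forward() takes (features, num_points, coors),
# while the Autoware node binds separate pillar_x/pillar_y/... buffers, so
# forward() is rewritten to match those bindings before export. The call below
# uses the stock signature purely to illustrate the export step.
max_pillars, max_points = 12000, 100
pfe_inputs = (torch.randn(max_pillars, max_points, 4).cuda(),   # raw point features
              torch.ones(max_pillars).int().cuda(),             # points per pillar
              torch.zeros(max_pillars, 4).int().cuda())         # pillar coordinates
torch.onnx.export(net.voxel_feature_extractor, pfe_inputs, "pfe.onnx")
```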

But even then the models imported into Autoware do not behave properly, and I suspect there are differences between the C++ implementation and the Python code I used for training.

Could you please share some more details on how Autoware's pretrained ONNX models are generated? That would be extremely helpful for many people. Thanks!

Hi, the repo is here.
Hope this will help.

Kindly provide scripts to convert trained PyTorch PointPillars models to ONNX format.

For further questions, please follow our support guidelines.

Thanks for using Autoware and for your question. However, we ask that you please ask questions at the ROS Answers website following our support guidelines. Please pay particular attention to the information we ask you to provide.

Discourse is for news and general interest discussion. ROS Answers provides a forum which can be filtered by tags to make sure the relevant people can find and/or answer the question, and not overload everyone with hundreds of posts.


I am able to convert the pretrained models (pfe.onnx and rpn.onnx) to TensorRT, but I am not able to convert our own models.

ONNX IR version: 0.0.4
Opset version: 9
Producer name: pytorch
Producer version: 1.1
Domain:
Model version: 0
Doc string:

While parsing node number 16 [Squeeze -> “175”]:
ERROR: /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/release/5.0/parsers/onnxOpenSource/builtin_op_importers.cpp:1570 In function importSqueeze:
[8] Assertion failed: axis != BATCH_DIM
ERROR: failed to parse onnx file

Please advise on how to address this issue. Thanks in advance.
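For what it's worth, that assertion means the graph contains a Squeeze node (the one producing output "175" in your log) whose axis is 0, i.e. the batch dimension, which the implicit-batch ONNX parser in TensorRT 5 rejects. Below is a small sketch, assuming the standard onnx Python package and pfe.onnx as an example file name, that lists every Squeeze node and its axes so the corresponding .squeeze() call in the PyTorch model can be given an explicit non-zero axis before re-exporting.

```python
import onnx

# Dump every Squeeze node and the axes it squeezes; an axis of 0 is the one
# the TensorRT 5 ONNX parser refuses (BATCH_DIM).
model = onnx.load("pfe.onnx")          # or rpn.onnx, whichever fails to parse
for node in model.graph.node:
    if node.op_type == "Squeeze":
        axes = [list(attr.ints) for attr in node.attribute if attr.name == "axes"]
        print(node.name or node.output[0], "squeeze axes:", axes)
```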

Thanks for using Autoware and for your question. However, we ask that you please ask questions at the ROS Answers website following our support guidelines. Please pay particular attention to the information we ask you to provide.

Discourse is for news and general interest discussion. ROS Answers provides a forum which can be filtered by tags to make sure the relevant people can find and/or answer the question, and not overload everyone with hundreds of posts.

Can I ask how you exported the PFE and RPN ONNX models from the ckpt files? Would you mind sharing the code with me?

Thanks for using Autoware and for your question. However, we ask that you please ask questions at the ROS Answers website following our support guidelines. Please pay particular attention to the information we ask you to provide.

Discourse is for news and general interest discussion. ROS Answers provides a forum which can be filtered by tags to make sure the relevant people can find and/or answer the question, and not overload everyone with hundreds of posts.