
Build Autoware 1.11 and the point_pillars package

Hello,

  1. I want to build Autoware 1.11 from source to test lidar_point_pillars, but when I run ./colcon_release I get the following error. I tested with both TensorRT 5.0.2 and 5.1.2, and CUDA 10.0.
    command: ./colcon_release >normal.log 2>err.log
    results:
    https://gist.github.com/kargarisaac/3d85e38d3586fc2d80aea37b11ee5e00
    https://gist.github.com/kargarisaac/a878470d8235d08b661d75ac62f9ee57
    command: colcon build --packages-up-to lidar_point_pillars >normal2.log 2>err2.log
    results:
    https://gist.github.com/kargarisaac/523910fab5f21ee5cfd2f94c6f712263
    https://gist.github.com/kargarisaac/5637d533fd523a0ed353cabfab33a8c0

  2. Where should I download the pre-trained ONNX files?

  3. Can I use this package (lidar_point_pillars) separately, without building the whole of Autoware?

  4. What is the pre-trained model for? Cars only, or all classes?

  5. I also used the Docker image with CUDA, but cannot find CUDA in it, and the point_pillars node doesn’t work in Docker. What should I do to use it in Docker?


Update:

  • Operating system and version:

    • Ubuntu 16.04
    • gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.11)
    • cmake version 3.5.1
  • Autoware installation type:

    • From source
  • Autoware version or commit hash

    • 1.11
    • git rev-parse HEAD -> 6a7d1b9f66fd353eb5c6ad8df942c433fff8e2a1
  • ROS distribution and version:

    • kinetic
  • ROS installation type:

    • sudo apt-get install ros-kinetic-desktop-full
  • GPU model

    • 0 Quadro M4000
    • 1 Tesla K40c
  • Drivers

    • Driver Version: 410.104
  • CUDA version

    • 10.0
  • CUDNN version

    • cat /usr/include/cudnn.h | grep CUDNN_MAJOR -A 2
      #define CUDNN_MAJOR 7
      #define CUDNN_MINOR 4
      #define CUDNN_PATCHLEVEL 2
      #define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)
  • TensorRT

    • 5.0.2 and 5.1.2 using .deb file

1, Not quite sure how to answer. Maybe check this issue?
Can you post an issue on GitHub with detailed information?
Can you post more information here, as @amc-nu suggested?

Please provide the results of colcon build --packages-up-to lidar_point_pillars.

2, Can you check the README file? It is available here.
Please note that the model is licensed under BY-NC-SA 3.0.

3, Yes. Again, colcon build --packages-up-to lidar_point_pillars.

4, Cars only.

5, I guess one way to do it is to install CUDA in the Docker environment.

Thanks

@kargarisaac
Complementing @Kosuke_MURAKAMI’s answers.

First of all, we don’t know your environment; please read the Support Guidelines carefully.

If you file an issue, we need the following information (as described in the issue template and guidelines):

  • Operating system and version:
    • ( OS and version (e.g. Ubuntu 16.04, MacOS 10.14, Windows 10 build 1817) )
    • gcc version, cmake version
  • Autoware installation type:
    • (How did you install Autoware? From source, from binaries, Docker, etc. Link to a guide if you followed one.)
  • Autoware version or commit hash
    • ( If from binaries or docker, give the version. If from source, give the output of git rev-parse HEAD or the repos file you use )
  • ROS distribution and version:
    • ( State the name of the ROS distribution you are using, and if applicable a patch version )
  • ROS installation type:
    • ( How did you install ROS? From source, from binaries, Docker, etc. Link to a guide if you followed one.)

In this case we also need to know your:

  • GPU model
  • Drivers
  • CUDA version
  • CUDNN version
  • TensorRT

Docker images do not include TensorRT due to its licensing. For this reason, the PointPillars node is not included in the CUDA-enabled images.

Finally, when sharing your logs, instead of pasting the whole text, please upload the specific log file or use gists.


@Kosuke_MURAKAMI @amc-nu
I updated the question.
Thank you very much for your answers and help.

Checking your environment and your log files, it seems you have a conflict between the TensorRT versions installed. Remove TensorRT and reinstall only one version.
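
A quick way to spot such a conflict, sketched below (the package name patterns are typical for the .deb install and may differ on your system):

```shell
# List every installed TensorRT / nvinfer package; two different versions
# showing up at once indicates the conflict described above
dpkg -l | grep -Ei 'tensorrt|nvinfer' || echo "no TensorRT packages found"

# If multiple versions appear, remove them all and reinstall a single one
# (commented out on purpose; run by hand after checking the list above):
#   sudo apt-get purge 'libnvinfer*' 'tensorrt*'
```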

Other useful resources:
https://devtalk.nvidia.com/default/topic/1044316/tensorrt/tensorrt-5-0-2-6-onnx-run-error/
https://devtalk.nvidia.com/default/topic/1044318/version-of-nvonnxparser-used-to-built-samples-can-t-build-samples-/

1 Like

Thank you. You were right. There were several versions installed by different users, so I decided to use Docker. I pulled the 1.11 Docker image and installed TensorRT inside it, then built Autoware successfully. I installed TensorRT 5.1.2 and CUDA 10.1 (I did this because when I tried to install TensorRT 5.0.2 with CUDA 9 and cuDNN 7.3.1, I got the results described in https://devtalk.nvidia.com/default/topic/1049509/problem-in-installing-tensorrt-in-docker/?offset=1#5326363; I don’t know the reason. I then updated everything: cuDNN to 7.5 and CUDA to 10.1). Now when I run the launch file, or run point_pillars from the Runtime Manager and set the paths to the ONNX files, it gives me:

[lidar_point_pillars-2] process has died [pid 11162, exit code -11, cmd /home/autoware/Autoware/ros/install/lidar_point_pillars/lib/lidar_point_pillars/lidar_point_pillars /points_raw:=/points_raw __name:=lidar_point_pillars __log:=/home/autoware/.ros/log/7bd6862c-56e6-11e9-9563-f0d5bff055ff/lidar_point_pillars-2.log].
log file: /home/autoware/.ros/log/7bd6862c-56e6-11e9-9563-f0d5bff055ff/lidar_point_pillars-2*.log

I cannot find the log files mentioned in the error, either. When I run ls in the /home/autoware/.ros/log/7bd6862c-56e6-11e9-9563-f0d5bff055ff/ directory I see:
master.log roslaunch-isaac-Lenovo-ideapad-Y700-15ISK-11130.log rosout-1-stdout.log rosout.log

I also changed my system:

  • Operating system and version:
    • Ubuntu 16.04
    • gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
    • cmake version 3.5.1
  • Autoware installation type:
    • Build from source inside docker (autoware/autoware:1.11.0-kinetic-cuda)
  • Autoware version or commit hash
    • 1.11.0
  • ROS distribution and version:
    • kinetic
  • ROS installation type:
    • sudo apt-get install ros-kinetic-desktop-full
  • GPU model
    • GeForce GTX 960M
  • Drivers
    • Driver Version: 396.54
  • CUDA version
    • 10.1
  • CUDNN version
    • cat /usr/include/cudnn.h | grep CUDNN_MAJOR -A 2
      #define CUDNN_MAJOR 7
      #define CUDNN_MINOR 5
      #define CUDNN_PATCHLEVEL 0
      #define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)
  • TensorRT
    • 5.1.2 using .deb file

CUDA 10.1

Autoware currently doesn’t support CUDA 10.1. Switch to 9.0 or 10.0.

CUDA 10.1 and Driver Version: 396.54

You have an incompatible setup.
https://docs.nvidia.com/deploy/cuda-compatibility/index.html
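
One way to see the mismatch on your own machine, sketched here with guards so it degrades gracefully on hosts without the NVIDIA tools (the minimum-driver figures come from the compatibility table linked above):

```shell
# CUDA 10.1 requires driver >= 418.39 and CUDA 10.0 requires >= 410.48,
# so driver 396.54 cannot run either; check what is actually installed
command -v nvidia-smi >/dev/null \
  && nvidia-smi --query-gpu=driver_version --format=csv,noheader \
  || echo "nvidia-smi not found"
command -v nvcc >/dev/null \
  && nvcc --version | grep -i release \
  || echo "nvcc not found"
```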

CUDNN version

When switching driver versions, be careful of the cuDNN version.
https://docs.nvidia.com/deeplearning/sdk/cudnn-support-matrix/index.html

If you still get an error, enable output to screen in the PointPillars launch file, to get the error shown on the screen.
http://wiki.ros.org/roslaunch/XML/node
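
For reference, the change might look roughly like this in the launch file (the node and package names here are illustrative; use the ones in the actual lidar_point_pillars launch file):

```xml
<launch>
  <!-- output="screen" routes the node's stdout/stderr to the terminal
       instead of a log file, so the crash message becomes visible -->
  <node pkg="lidar_point_pillars" type="lidar_point_pillars"
        name="lidar_point_pillars" output="screen"/>
</launch>
```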

Finally, development in the provided 1.11 Docker image is not recommended.
Instead, modify Autoware’s Dockerfile to also include TensorRT when building the image.
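
A rough sketch of such a Dockerfile addition (the .deb filename is a placeholder; because of TensorRT’s license you must download the package from NVIDIA yourself and COPY it into the build context):

```dockerfile
# Placeholder filename: substitute the TensorRT repo .deb you downloaded from NVIDIA
COPY nv-tensorrt-repo.deb /tmp/
RUN dpkg -i /tmp/nv-tensorrt-repo.deb && \
    apt-get update && \
    apt-get install -y tensorrt && \
    rm -rf /var/lib/apt/lists/*
```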


Thank you for your help. I finally built it in Docker, using CUDA 9.0 and TensorRT 5.0.2. When I use the pre-trained models, it can only detect the car itself, not others. I use the sample_moriyama_150324.bag file. Where can I get the files that you used (https://github.com/autowarefoundation/autoware/pull/2029)? I also used KITTI data from http://www.cvlibs.net/datasets/kitti/raw_data.php and created a rosbag from it (kitti_2011_09_26_drive_0005_synced.bag), but the results are the same.
Do you have any suggestions?

@kargarisaac Glad you solved the problem.

Just like Kosuke mentioned before:

This model only supports car detection.

I mean it only detects the car itself, not the other cars around it.
[screenshot]

I see the following errors and warnings when I print the logs to the screen (sample_moriyama_150324.bag):

[ERROR] [1554461493.214623761, 1427157930.855918749]: “base_link” passed to lookupTransform argument target_frame does not exist.
[ WARN] [1554461478.934492458, 1427157916.572421234]: TF to MSG: Quaternion Not Properly Normalized

For the KITTI data it’s OK (no errors), but I don’t see anything yet.

[ERROR] [1554461493.214623761, 1427157930.855918749]: “base_link” passed to lookupTransform argument target_frame does not exist.
[ WARN] [1554461478.934492458, 1427157916.572421234]: TF to MSG: Quaternion Not Properly Normalized

Please check that your rosbag has base_link.
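
One way to verify this, sketched with guards so the commands are skipped on machines without ROS (the bag name is the one mentioned above):

```shell
# The /tf data in the bag (or published by a node) must contain base_link
# for lookupTransform to succeed; both commands need a ROS install
command -v rosbag >/dev/null \
  && rosbag info sample_moriyama_150324.bag \
  || echo "rosbag not found"
# While the bag is playing, dump the live TF tree to frames.pdf:
command -v rosrun >/dev/null \
  && rosrun tf view_frames \
  || echo "rosrun not found"
```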

Tested on kitti_2011_09_26_drive_0005_synced.bag.
Please check the parameters.
[screenshots]


Here is the log file with the outputs:

I can see the bounding boxes from the lidar_euclidean_cluster package, but not from point_pillars.

I ran
sudo apt-get install ros-kinetic-rviz

and now it works. Thank you for your help @Kosuke_MURAKAMI @amc-nu

Hi,
I’m new to Autoware, and I’m not sure how to build it inside Docker.
I have installed TensorRT inside a Docker container and committed it to a new image name; now I’m stuck on how to build Autoware so that I can run PointPillars inside the Autoware Docker.

@chowkamlee81

Thanks for your question. However, we ask that you please ask questions on http://answers.ros.org following our support guidelines: http://wiki.ros.org/Support

ROS Discourse is for news and general interest discussions. ROS Answers provides a forum which can be filtered by tags to make sure the relevant people can find and/or answer the question, and not overload everyone with hundreds of posts.

When you ask your question please make sure to include enough information to reproduce your problem and tag it with autoware so that it’s easy for people to find.