I am curious about people's experiences using radar sensors for obstacle detection (using point clouds in particular). I note that radar sensors are not particularly common in the ROS world, so I'm wondering what sensors people have used and what kind of success they have had.
I have some experience integrating a radar sensor, but not with using point clouds. What about the following? I don't think they use ROS for the product, but a radar sensor is used.
This has been a topic I’ve been thinking about privately for a while, but your comment on ROS Answers prompted me to open a public ticket for additional discussion within the navigation community here: https://github.com/ros-planning/navigation2/issues/1617
I don’t think the best use is to just use the point cloud, though you certainly could with sufficient and effective filtering. I think the best use is as a method of detecting dynamic obstacles in the scene without needing to do deep learning inferencing that would take up a great deal of CPU/GPU power (and expensive hardware). You could then input those into a costmap via point clouds, but it would be more effective to track those dynamic obstacles and input them into the costmap as moving blobs, using a planner that can take time-varying obstacles into account, or the usual “vector-space”-like representations common in autonomous driving.
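To make the "moving blobs" idea concrete, here is a minimal sketch of projecting a tracked obstacle forward in time so a time-aware planner could cost against its predicted positions. The class name, the constant-velocity motion model, and the uncertainty-growth term are all illustrative assumptions, not an existing nav2 API:

```python
# Sketch: project a radar-tracked dynamic obstacle forward in time.
# Constant-velocity model and all parameter values are assumptions.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackedObstacle:
    x: float       # position (m) in the costmap frame
    y: float
    vx: float      # estimated velocity (m/s), e.g. from radar Doppler
    vy: float
    radius: float  # inflated footprint radius (m)

def predict_footprints(obs: TrackedObstacle, horizon: float,
                       dt: float) -> List[Tuple[float, float, float]]:
    """Return (x, y, radius) at each future timestep; the radius grows
    with time to reflect increasing prediction uncertainty."""
    growth = 0.1  # extra footprint metres per second of prediction (assumed)
    out = []
    t = 0.0
    while t <= horizon:
        out.append((obs.x + obs.vx * t,
                    obs.y + obs.vy * t,
                    obs.radius + growth * t))
        t += dt
    return out

# Example: a pedestrian-sized blob moving at 1 m/s along +x
blobs = predict_footprints(TrackedObstacle(2.0, 0.0, 1.0, 0.0, 0.3),
                           horizon=2.0, dt=0.5)
```

Each predicted footprint could then be rasterized into the costmap layer that corresponds to its timestep.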
In my experience, statically mounted radars excel at all of the above. I have, very briefly, mounted a radar on a moving base, and I saw a bunch of noise that was not there before as it and the obstacles moved in the scene. That may be an issue with the specific TI chipset I was using at the time, or a more general issue with radars. I’m sure there are ways to filter it out, but my simple first-order test methods were insufficient and I moved on to other topics at the time.
I think now would be a great time to readdress this if there is sufficient interest from the community and I’d love to collaborate on a project with you @aaron_Fulton to introduce that capability into the navigation stack.
Automotive-grade RADAR sensors tend to spit out processed data over the CAN bus connection, rather than raw points or similar. The processed data is typically detected objects and their speeds, as I recall (I’m not the engineer who was playing with the RADAR sensors, unfortunately).
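For readers who haven't worked with this kind of output: processed CAN data usually means fixed-layout frames you unpack into per-object signals. Here is a sketch of decoding one such frame; the byte layout below is invented for illustration (real automotive radars each document their own signal packing, typically in a DBC file):

```python
# Sketch: unpack one "detected object" CAN frame into range/speed/angle.
# The 8-byte layout here is a made-up example, not any vendor's format.

import struct

def decode_object_frame(data: bytes) -> dict:
    """Assumed layout: uint8 object id, uint16 range (cm),
    int16 relative speed (cm/s), int16 azimuth (0.1 deg), uint8 RCS."""
    obj_id, rng_cm, speed_cms, az_ddeg, rcs = struct.unpack("<BHhhB", data)
    return {
        "id": obj_id,
        "range_m": rng_cm / 100.0,
        "rel_speed_mps": speed_cms / 100.0,
        "azimuth_deg": az_ddeg / 10.0,
        "rcs": rcs,
    }

# Build a fake frame and decode it: 12.5 m away, closing at 1.5 m/s
frame = struct.pack("<BHhhB", 7, 1250, -150, -45, 20)
obj = decode_object_frame(frame)
```

In a ROS driver the raw frame bytes would come from the CAN interface (e.g. as `can_msgs/Frame` payloads) rather than being packed by hand.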
You might want to look up the http://www.smokebot.eu/index.html project.
They did some astounding laser/radar/thermal integration for SLAM.
I can’t seem to find a good video on that aspect online right now, but I know they recorded wonderful demonstrations.
They developed their own sensors in this context afaik.
A while back, I worked in indoor robotics with an INRAS Radarbook http://www.inras.at/en/products/radarbook.html .
The upside of this one was its high configurability, although I really only touched the basics back then.
If you’re interested, here is a video contrasting a specific mode of the radar with a SICK LMS 200 scanner:
I have a little experience from testing a small automotive radar a few years ago. Some variation of this https://developer.bosch.com/documents/723993/728200/GPRv1.0+English.pdf/c63aec16-9efe-2520-fd18-bacc54203240
As gbiggs said, these types of radar usually output preprocessed data and not the raw data. This radar output tracked targets along with some data for each tracked target, such as relative speed, angle, reflectivity and so on.
At that time I didn’t have a really good outdoor mobile robot that could go at high speeds, and that type of radar is neither suited nor needed for indoor use.
My main takeaway is that you have to do a lot of filtering of the data yourself afterwards, e.g. to filter out noisy targets and targets of low reflectivity. Correlating the radar data with lidar data could help with this tracking.
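The kind of post-filtering described above can be sketched very simply: drop targets below a reflectivity (RCS) threshold, and only trust targets that persist across several consecutive frames. The thresholds, field names, and frame structure here are assumptions for illustration:

```python
# Sketch: filter noisy radar targets by reflectivity and persistence.
# Field names ('id', 'rcs') and thresholds are illustrative assumptions.

def filter_targets(frames, min_rcs=5.0, min_hits=3):
    """frames: list of frames; each frame is a list of target dicts with
    a tracker-assigned 'id' and an 'rcs' value. Returns the ids of
    targets seen with sufficient reflectivity in enough frames."""
    hits = {}
    for frame in frames:
        for target in frame:
            if target["rcs"] >= min_rcs:
                hits[target["id"]] = hits.get(target["id"], 0) + 1
    return {tid for tid, n in hits.items() if n >= min_hits}

frames = [
    [{"id": 1, "rcs": 9.0}, {"id": 2, "rcs": 2.0}],
    [{"id": 1, "rcs": 8.5}, {"id": 2, "rcs": 2.5}],
    [{"id": 1, "rcs": 9.2}, {"id": 3, "rcs": 7.0}],
]
stable = filter_targets(frames)  # only target 1 passes both tests
```

Cross-checking the surviving targets against lidar returns, as suggested above, would be a natural next stage.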
There wasn’t any driver for the radar at the time (and it still doesn’t seem like one exists), so I had to create my own driver to read and process the CAN data, but I never really optimized it for speed or robustness.
Anyway, I’ll do a little video dump of some tests I did with the radar in case anyone is interested. I still have the raw data (ROS can_msgs/Frame), so if another driver came into existence you could probably recreate the results.
The visualization is a bit different between the videos depending on how old they are. As far as I remember, the arrow points away from or towards the sensor depending on relative velocity, or becomes a sphere if there is no relative velocity. Some of the videos have text markers with the various data of the target.
Especially in this indoor reflectivity test you can see how the tracked target on the rotating robot is quite unstable at times, so the rest of the tests are outdoors.
I have quite some experience working with radar + ROS. Typically a radar does not provide a point cloud, and afaik ROS does not have a standard message for radar readings. There was a good one from Automotive Stuff, though I could not find it any more.

As @gbiggs said, it provides a number of points (usually up to 64 points per frame, depending on the radar model). It’s often the case that one big object (like a car) might have 2–6 points assigned to it, while smaller and non-metal objects might have one point. So it’s the engineer’s job to decide whether points belong to the same object or not. Point clustering algorithms can be used for that.

Usually a radar provides object velocity, which is considered to be the main strength of radar, but that’s not always the case. I have seen sneaky manufacturers claim that a radar has velocity detection when in practice it doesn’t, so be careful about that. Good radars also provide tracking, so you can tell that certain points in two different frames belong to the same object. That is valuable too.

Continental and Bosch are the top players when it comes to automotive radars. One downside is that they are not going to talk to you unless you are willing to buy 100+ units. Feel free to ask if you have more questions.
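The point-grouping step mentioned above (deciding whether several radar points belong to one object) can be done with a simple Euclidean clustering pass; here is a minimal single-linkage sketch, with an illustrative distance threshold:

```python
# Sketch: group radar detections into objects by Euclidean distance.
# Single-linkage clustering; the max_gap threshold is an assumption.

import math

def cluster_points(points, max_gap=1.0):
    """points: list of (x, y) detections. Returns a list of clusters,
    where each point is within max_gap of some other point in its cluster."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster = [seed]
        frontier = [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= max_gap]
            for j in near:
                unvisited.remove(j)
                cluster.append(j)
                frontier.append(j)
        clusters.append([points[k] for k in cluster])
    return clusters

# Two close points on one "car", plus one lone point far away
pts = [(10.0, 0.0), (10.5, 0.3), (3.0, 5.0)]
groups = cluster_points(pts)  # -> 2 clusters
```

With tracked velocities available, the velocity vector could be added as an extra clustering dimension so that two objects passing close to each other are not merged.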
There are some non-automotive options (or at least ones you can buy without ordering 100+ units from an OEM).
The two that come to mind: TI has a chipset and some eval kits, and there’s a company called Omnipresence. The TI chipset will not do the clustering for you, but I’m not sure about Omnipresence.
I’d love to see more radar work, I think it could have a big impact.
I agree with @smac that there is a lot of potential to do more with radar and ROS. Radar is a key ingredient in automotive navigation systems, so it surprised me that there was nothing much in the ROS space.
Some colleagues and I got excited about a year ago about some of the new high frequency radar chipsets that were becoming available and decided to form a start-up focusing on this technology (http://radariq.io). One of the key areas we identified that could benefit from this technology was in the robotics/navigation space, so I’m pleased to see there is interest in the community. One of our missions is to make radar plug-and-play and easy to use. As has been alluded to, there is a lot that goes on under the hood to make a radar sensor work well. Radar data can be noisy in general and does require cleaning up.
To date, we have been focusing on developing a quality raw point cloud (with all the filtering done on-module), as point clouds are what we have most often seen based on our research. And ROS already has a PointCloud2 message for accepting the data!
Phase II of our development is focused on presenting “objects” rather than point clouds (which will have vector information attached). It sounds like this could be of great interest. One of the first things to do might be to define a message format for such data. Thanks for your interest @smac, and we’d certainly be interested in collaborating with you on a project to incorporate radar into the navigation stack for ROS.
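As a strawman for that message-format discussion, here is a hypothetical sketch of the fields a tracked-object radar message might carry, collecting what has come up in this thread (position, velocity vector, track id, reflectivity). This is not an existing ROS message; the names are placeholders, modelled here as plain dataclasses:

```python
# Hypothetical radar object message layout, as a discussion strawman.
# None of these types exist in ROS; all names are invented.

from dataclasses import dataclass, field
from typing import List

@dataclass
class RadarObject:            # one tracked object
    track_id: int             # stable id across frames (from on-sensor tracking)
    x: float                  # position (m) in the sensor frame
    y: float
    vx: float                 # velocity vector (m/s)
    vy: float
    rcs: float                # radar cross-section / reflectivity

@dataclass
class RadarObjectArray:       # one message per radar frame
    stamp_ns: int             # acquisition time
    frame_id: str             # TF frame of the sensor
    objects: List[RadarObject] = field(default_factory=list)

msg = RadarObjectArray(
    stamp_ns=0, frame_id="radar_link",
    objects=[RadarObject(track_id=1, x=5.0, y=0.2,
                         vx=-1.0, vy=0.0, rcs=12.5)])
```

An equivalent `.msg` definition would presumably split this into a header plus an array of object sub-messages, which is the pattern existing sensor messages follow.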