Standard messages for marine radars

I would like to propose a standard message format for marine radars.

I have written a driver for the Halo series of marine radars, as well as an rqt plugin to monitor the radar and control its parameters.

I’m also working on tools to process the raw radar data into costmaps and to track targets.

It seems to me that we should be able to apply that work to other radars.

Here’s what I have so far as a starting point: GitHub - CCOMJHC/marine_sensor_msgs: This package defines message types for common marine sensors.

Does anyone have experience with a different brand of marine radar? I would like to know how to adapt the current message to marine radars in general.

Wouldn’t GitHub - ros-perception/radar_msgs: A set of standard messages for RADARs in ROS be sufficient?

I had seen that package, but it seems to be designed for targets that have already been extracted from the raw data. The message I’m proposing is for the lower-level returns from which targets could be extracted and then encoded with those messages.

Would radar_msgs/RadarReturn not be sufficient?

If you need to represent multiple returns per measurement, could sensor_msgs/MultiEchoLaserScan be a match?

The RadarReturn structure is way too big to be efficient in this use case. Five floats per “pixel” vs one byte is significant.
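To put rough numbers on that gap, here's a back-of-envelope comparison. The spoke and sample counts below are made-up values for illustration, not Halo specs; only the 20-bytes-per-return figure follows from RadarReturn's five float32 fields.

```python
FLOAT32 = 4  # bytes per float32

# radar_msgs/RadarReturn carries five float32 fields per return.
radar_return_bytes = 5 * FLOAT32   # 20 bytes per return

# A raw spoke encoded as one intensity byte per sample.
raw_sample_bytes = 1

samples_per_spoke = 512            # assumed spoke length
spokes_per_rev = 2048              # assumed angular resolution

per_rev_raw = spokes_per_rev * samples_per_spoke * raw_sample_bytes
per_rev_returns = spokes_per_rev * samples_per_spoke * radar_return_bytes

print(per_rev_raw, per_rev_returns)    # 1048576 20971520
print(per_rev_returns // per_rev_raw)  # 20x more data per revolution
```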

As for MultiEchoLaserScan, it would be more compact and could probably work, but it has the following in the comments:

# If you have another ranging device with different behavior (e.g. a sonar
# array), please find or create a different message, since applications
# will make fairly laser-specific assumptions about this data

So yeah, there are similar messages already, but none that seem to fit well enough for our purposes.

ROS messages generally sacrifice optimality for generality. I guess no message defined with byte-based intensity would make it into a standard. Using a float is the way intensities are represented in other standard messages like LaserScan. This is to be “future-proof” and have one definition that works for both integer-based intensities and float-based ones.

So now you’re two floats against five =) Not sure what the bitrate of this sensor is, but maybe even the suboptimal representation would be good enough for normal use.

You can try jumping into this older discussion to ping the people who were discussing the radar_msgs standardization: Radar Messages Standard.

ROS messages generally sacrifice optimality for generality. I guess no message defined with byte-based intensity would make it into a standard.

An exception to this is the camera image message (sensor_msgs/Image.msg), which has a uint8[] data used to store data from different encodings.

Roland’s radar messages, as well as some sonar messages that we’ve been working on, come closer to the data volumes of camera imagery, where a 2-5x factor is actually quite significant in bagfile size. Rather than a single return per beam (like sensor_msgs/LaserScan.msg expects) we get an array of intensities per beam.


A quick calculation from a bag file I have shows around 16 megabits/sec. That could vary depending on the radar’s range setting, which affects rotation speed.
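As a sanity check on that figure, plausible parameters land in the same ballpark. All of the numbers below are assumptions chosen for illustration, not documented Halo specs:

```python
# Assumed radar parameters, not specs for any particular unit.
spokes_per_rev = 2048
samples_per_spoke = 1024
bytes_per_sample = 1
revs_per_sec = 1.0  # 60 rpm

bits_per_sec = spokes_per_rev * samples_per_spoke * bytes_per_sample * 8 * revs_per_sec
print(f"{bits_per_sec / 1e6:.1f} Mbit/s")  # 16.8 Mbit/s
```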

Maybe I could test with a single float instead of a byte and see if the message compresses as well.
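A quick way to try that without touching the driver is to compress the same synthetic spoke in both encodings. The mostly-zero spoke below is an assumption meant to mimic open-water returns, so real data may behave differently:

```python
import struct
import zlib

# A mostly-empty spoke with a few strong returns.
samples = [0] * 1024
for i in (100, 101, 102, 400, 401, 700):
    samples[i] = 200

# Encode as one uint8 per sample, and as one float32 per sample.
as_bytes = bytes(samples)
as_floats = struct.pack(f"<{len(samples)}f", *[s / 255.0 for s in samples])

print(len(zlib.compress(as_bytes)), len(zlib.compress(as_floats)))
```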

I wonder if there isn’t an operational consideration here too. Raw radar messages are likely to be transmitted over a telemetry link to provide a radar overlay for operators. There are practical limitations to what can be pushed over a radio telemetry link, and often the worst conditions for radio links are the ones where having raw radar data is most important for vehicle safety. Keeping these messages relatively small is certainly advantageous. That said, I suppose one could create a lighter message type for telemetry if necessary.


For context, when working with maritime radars (the radars I’m referring to are the Quantum radar and the Wartsila RS-24) there are some structural patterns we can assume. The maritime radars I have worked with generally return a bitmap along the current scan angle. They also have a relatively slow scan speed, so it may be advantageous to publish a “RadarSpoke” or “RadarScanLine” message which only gives the bitmap along a certain angle. What @rolker has seems to cover the basic use cases much better than radar_msgs.

That being said there are a bunch of fancy features which maritime radars come with (target tracking, weather settings, etc.) which are manufacturer specific. I don’t know if the “standard” message should support those.

The Wartsila RS-24 uses the ASTERIX standard for essentially transmitting images.


Thanks for adding to the discussion Arjo. The radar I’ve used, the Halo series from Simrad, seems similar to the Wartsila unit you’ve used so it’s useful to have your perspective.

Here’s the one I’m using: HALO20+ Radar | Simrad USA

So I dug around for the Asterix protocol and found this:

That will be helpful in making the message applicable to more hardware.

The current radar messages are mostly focused on sparse radar returns, so each element is much more self-contained. If you’re starting to look at scanning radars with streaming data, though, the structure is valuable in multiple ways. First, if we know the structure we can represent the data more compactly; second, the structure is also valuable for processing the data. (If two readings are known to be next to each other, you can make inferences that aren’t possible if they’re not adjacent. Otherwise you have to compute that they’re adjacent instead of knowing it from the position in the structured return.)

To that end, since a marine radar’s data is structurally very close to MultiEchoLaserScan, that is likely where we should look most closely. It is a horizontally scanning ranger with sequential readings rotating in a line. You’ve noted that this is actually more compact than the proposed message because it takes advantage of knowing that the angles of the rays advance sequentially from a start angle by a fixed increment, and don’t need to be called out for every range measurement.

There’s also significant additional semantic information that’s valuable in processing the scans in sequence and order: knowing that they were sampled in sequence means we can actually get a pretty good estimate of the timing of any individual sample ray, more precise than the timestamp of the sector, when we do a projection.

The caution not to reuse LaserScan directly is good, as there are semantic differences, but that doesn’t mean the datatypes themselves have to have different structures. I would recommend mirroring it as closely as possible.

Create a RadarEcho equivalent to LaserEcho, telling you about what you find in any given beam, and then a RadarScan, or extend/adapt the RadarSector message to do that. To get the real value from these messages you want the shared metadata grouped in the higher-level messages, as is done in MultiEchoLaserScan.
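To make the grouping concrete, here is a sketch in plain Python (not actual .msg definitions) of that layout. The RadarEcho/RadarScan names and fields mirror MultiEchoLaserScan but are only illustrative, not an agreed standard:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RadarEcho:
    # Intensities along one beam (the radar analogue of LaserEcho).
    intensities: List[float] = field(default_factory=list)

@dataclass
class RadarScan:
    # Shared metadata grouped at the scan level, as in MultiEchoLaserScan.
    angle_min: float = 0.0
    angle_max: float = 0.0
    angle_increment: float = 0.0
    time_increment: float = 0.0
    scan_time: float = 0.0
    range_min: float = 0.0
    range_max: float = 0.0
    echoes: List[RadarEcho] = field(default_factory=list)

scan = RadarScan(angle_increment=0.01, echoes=[RadarEcho([0.1, 0.9])])
print(len(scan.echoes[0].intensities))  # 2
```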

The closer the message is to the same structure, the more easily we can also make copies/extensions of the processing pipelines, such as laser_geometry with both its projection and high-fidelity projection capabilities.


Thanks for your input Tully, it’s much appreciated.

I like the idea of easily adapting existing pipelines so I’m taking another look at sensor_msgs/MultiEchoLaserScan.

I’ll first point out that the only reason for how I split up the metadata between RadarSector and RadarScanline is that it mimics the structure of the data from the particular radar I was using so I have no reason to stick to this layout to make this a standard message.

That radar sends UDP packets covering a handful of spokes (maybe 16 or 32, covering about 1 or 2 degrees per packet, if I remember correctly). A few dozen packets would be sent for a complete revolution of the radar. The RadarSector message represents one such packet in my case. By continuously displaying those RadarSector messages on an operator’s display as they arrive, you get an animated radar sweeping effect as the radar spins at 60 rpm or less. Until we can trust that the robot correctly understands the radar, this is an important display to have for the human operators.

What this means is that for a complete scan, multiple RadarSector messages are needed and I don’t have an overarching message to represent that. I could be wrong, but I suspect that a MultiEchoLaserScan message would contain a complete scan?

Assuming we use the exact same structure as MultiEchoLaserScan, I’ll list some of the considerations that occur to me as I adapt it to the radar with which I have experience, the Simrad Halo.

Header timestamp: “timestamp in the header is the acquisition time of the first ray in the scan.”
The radar I’m using doesn’t know anything about time so the time a packet is received is what we have to work with. A driver could estimate the time of the first ray, but it would be a guess.

angle_min, angle_max, angle_increment: The Halo reports an angle per spoke, but a driver could calculate min, max and increment from those since they are regularly spaced (I hope!).
time_increment, scan_time: I’m not sure what the difference between the two is, but I suspect a driver could calculate both from the incoming data.
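For the angle fields, a driver could recover them from the per-spoke angles along these lines. The sample angles are invented, and the regular-spacing assumption is checked rather than trusted:

```python
import math

# Hypothetical per-spoke angles reported by the radar, in degrees.
spoke_angles = [math.radians(a) for a in (10.0, 10.5, 11.0, 11.5, 12.0)]

angle_min = spoke_angles[0]
angle_max = spoke_angles[-1]
angle_increment = (angle_max - angle_min) / (len(spoke_angles) - 1)

# Verify the spokes really are regularly spaced before trusting the fit.
for a, b in zip(spoke_angles, spoke_angles[1:]):
    assert math.isclose(b - a, angle_increment, rel_tol=1e-6)

print(round(math.degrees(angle_increment), 6))  # 0.5
```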

ranges and intensities: Am I correct to assume only hits are recorded for laser scans? This differs from the data we get from the Halo radar which is the full timeseries along a spoke, regularly spaced. Of course, in open water on a calm day, most of those returns are zero, so only recording the returns above a threshold could be more efficient but we might miss some of the structural info. What I mean is that if we allow sparse data, we make it harder to tell which pixels are neighbors or not. One option is to leave ranges empty and always report all the intensities.
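To illustrate the trade-off, here's a toy spoke in both forms; the range resolution and threshold are made up. The sparse form must spell out a range per hit, and adjacency is no longer implicit:

```python
# Dense: every sample, adjacency implicit in the index.
range_resolution = 2.0  # metres per sample (assumed)
dense = [0, 0, 0, 87, 90, 0, 0, 0, 0, 0, 0, 0, 140, 0, 0, 0]

# Sparse: only returns above a threshold, with explicit ranges.
threshold = 50
sparse = [(i * range_resolution, v) for i, v in enumerate(dense) if v > threshold]

print(sparse)  # [(6.0, 87), (8.0, 90), (24.0, 140)]
```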

Let me think about this for a bit… Other insights are appreciated, of course!

Some RADARs used for airborne applications use PTP to timestamp the acquisition time of the start of the first data capture on the sensor.

In absence of PTP, or some other accurate time synchronization clock source, we have used the arrival time off the wire (less accurate) on some interface into the chip (i.e. SOC) processing the sensor data, minus a tuned acquisition delay time to arrive at a more accurate acquisition time.
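In code, that fallback is just a subtraction; the delay value below is a placeholder that would have to be tuned per setup:

```python
def estimate_acquisition_stamp(receipt_time_s: float, tuned_delay_s: float) -> float:
    """Approximate acquisition time from wire-arrival time (no PTP)."""
    return receipt_time_s - tuned_delay_s

# Hypothetical numbers: packet received at t, with a 12 ms tuned delay.
stamp = estimate_acquisition_stamp(1700000000.250, 0.012)
print(stamp < 1700000000.250)  # True
```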


For those who may be less familiar with marine radars, here’s a video where I replay radar data as well as images from BEN, our robot boat. Here I was experimenting with generating a costmap from the radar data.


Now that I’m back from Japan, I’m back to working on this. (Speaking of Japan, it was nice meeting some of you in person at ROSCon and/or IROS!)

So, I’m trying to mimic the MultiEchoLaserScan structure and I’m struggling with deciding how much to diverge.

The first issue is the difference between LaserEcho, which is a collection of reflections, and the timeseries data returned from a marine radar. The data returned from a marine radar is a full raster line where the samples are equally spaced in time or range. This makes it convenient for visualization, as the data can be loaded into texture memory and efficiently plotted using OpenGL. I do realize that an operator display should be secondary compared to using the data to extract the information necessary for autonomous navigation.

One idea I have to deal with the different data representations is to support both the full timeseries and reporting only returns with intensities above a threshold. A single message type with arrays for both ranges and intensities could support both, where ranges would be empty when the intensities represent the full timeseries. That would make reading the message more complicated, since the reader would need to determine the data layout based on the presence or absence of range values.
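A consumer of such a dual-layout message might branch like this; the function, field names, and the empty-ranges convention are all hypothetical:

```python
from typing import List, Tuple

def iter_returns(ranges: List[float], intensities: List[float],
                 range_increment: float) -> List[Tuple[float, float]]:
    if not ranges:
        # Dense layout: the range of each sample is implied by its index.
        return [(i * range_increment, v) for i, v in enumerate(intensities)]
    # Sparse layout: one explicit range per intensity.
    return list(zip(ranges, intensities))

print(iter_returns([], [0.0, 0.5, 0.9], 2.0))      # dense spoke
print(iter_returns([6.0, 24.0], [0.7, 1.0], 2.0))  # sparse spoke
```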

An alternative is to have two different but very similar message types.

If both full and sparse representations are supported, I could imagine a clever radar driver selecting on the fly which one to use depending on which is more efficient at the moment. That would mean consumers of the data would need to expect that. It’s not clear to me if it’s worth the extra complexity.

The next issue I have is the idea of reusing the laser_geometry pipeline. I really like the idea of using the TF tree to better position each ray, but I’m not sure if converting the data to a pointcloud is the right path for further processing. If you think of the samples radiating out from the radar as curved, rectangular-ish shapes, the height of those rectangles is constant at the range resolution while the width increases as the range increases. A single point per sample doesn’t seem to capture the area a sample represents. Should a sample be represented by more points in a pointcloud as range increases? If the idea is to use the pointcloud to update a costmap or for visualization, I feel like we would get better results with the radar data itself rather than a pointcloud representation.
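The geometry behind that concern is simple to quantify: the cross-range width of a sample cell is roughly range times the angular increment, while the along-range depth stays at the range resolution. The numbers here are made up:

```python
import math

angle_increment = math.radians(0.5)  # assumed spoke spacing
range_resolution = 2.0               # assumed metres per sample

for r in (100.0, 1000.0, 5000.0):
    width = r * angle_increment  # arc length subtended by one spoke
    print(f"range {r:6.0f} m: cell ~{width:5.1f} m wide x {range_resolution} m deep")
```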

The more I think about this, the more I feel like we should keep it simple and only support the full timeseries.

I’ll keep on working on a version of the message that’s loosely based on MultiEchoLaserScan and will report back when I have something to share.

In the meantime, I’d welcome any more comments and insights!

time_increment is the time between the measurements of two consecutive rays; scan_time is the time between consecutive whole scans. In the simple case, time_increment * num_of_rays = scan_time. There are more complicated setups, though. The Sick LMS-151 has a 270° FOV and a laser projector rotating at constant speed through a full circle. So there is a 90° dead space where the lidar is not measuring (it physically doesn’t output anything in this direction). In this case, scan_time = 4/3 * time_increment * num_of_rays.
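Plugging numbers into that LMS-151 case (the ray count and rotation rate below are assumptions for illustration, not quoted from the datasheet) shows where the 4/3 factor comes from:

```python
import math

fov_deg = 270.0
num_rays = 541        # assumed: 0.5 degree resolution over 270 degrees
rotation_hz = 50.0    # assumed rotation rate

scan_time = 1.0 / rotation_hz
# Rays are only emitted during 270/360 = 3/4 of each rotation.
time_increment = scan_time * (fov_deg / 360.0) / num_rays

# scan_time = 4/3 * time_increment * num_of_rays, as described above.
assert math.isclose(scan_time, 4.0 / 3.0 * time_increment * num_rays)
print(time_increment)
```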

With less capable lidars without time sync, we do what Gordon described - use receipt time as the stamp, manually estimate the transport delay and subtract it. With PTP-capable lidars, you should definitely use the synchronized timestamp of the first measurement.

Some time ago, I’ve started creating a message type similar to LaserScan, but for 3D lidars. The project can be found here: multilayer_laser_scan/msg at master · peci1/multilayer_laser_scan · GitHub . After several iterations, I’ve concluded that the best way for variable number of output modalities would be a combination of a few fixed fields containing the info that every laser scanner has to produce (ranges, intensities) and putting there a PointCloud2-like structure with PointFields to keep other data (e.g. with Ouster lidars you could put there reflectivity, ambient lighting, second reflection etc.). This way, there can be some general-purpose packages like laser_geometry that can do the general stuff with each type of scan (using just the fixed fields), and then you could write specialized packages for processing scans that contain a specific modality.
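A rough Python rendering of that "fixed fields plus flexible extras" idea (names hypothetical, not the actual multilayer_laser_scan definition):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FlexibleScan:
    # Fixed fields every scanner produces; generic tools rely only on these.
    ranges: List[float] = field(default_factory=list)
    intensities: List[float] = field(default_factory=list)
    # Extra per-sample modalities keyed by name (reflectivity, ambient,
    # second return, ...), analogous to PointCloud2's PointFields.
    extras: Dict[str, List[float]] = field(default_factory=dict)

scan = FlexibleScan(ranges=[1.0, 2.0], intensities=[0.5, 0.9],
                    extras={"reflectivity": [0.1, 0.2]})
print(sorted(scan.extras))  # ['reflectivity']
```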

I’m not sure whether the PointCloud2-like structure would be a good fit for the timeseries data (how many samples there are in a timeseries? tens? thousands?). I think lower tens of point fields would be a reasonable number, while having thousands of fields might be suboptimal (however, it’s just a gut feeling).

After looking at the time components again, I figured they probably meant what you described. Thanks for confirming!

Very nice. What brand of radar is this?

It’s the Simrad Halo 20+.
