I created a ROS 2 driver to work with Daheng computer vision cameras via their Galaxy library, to utilise USB3 Vision. I haven't published it, as I'm unsure how to include their libraries, but the source code is available here.
One of the issues I'm now facing is that sensor_msgs::msg::Image doesn't let me encode how the camera sensor was set up at the time of capture (e.g. exposure settings) or when the image was triggered.
I'm trying to understand whether a) image.msg can be modernised, or b) a new image_advanced.msg could be created.
What are people's thoughts on how to include not just the image data but details of how the image was captured? And how does one go about proposing changes to sensor_msgs?
Time of capture should go into header.stamp. The other parameters have to go in their own message type. There's almost no chance of changing sensor_msgs/Image in such a substantial way. So you should start by creating your own message type that will hold all of these parameters. The exposure settings would have their own topic then, but any client can create a synchronized subscriber to get both the image and its metadata (the same way camera_info is used with images).
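To make that concrete, a custom metadata message might look roughly like this (field names are illustrative, not an existing interface; giving it the same header.stamp as the matching Image is what makes the synchronized subscriber work):

```
# CaptureMetadata.msg (hypothetical custom message)
std_msgs/Header header    # stamp matches the Image header.stamp
float32 exposure_time     # seconds
float32 gain              # units per the driver, e.g. dB
float32[] balance_ratio   # per-channel white balance ratios
uint64 trigger_count      # hardware trigger counter, for cross-camera sync
```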
@peci1 time of capture can mean a few things, depending on how you look at it. I'm triggering stereo cameras, and there is a slight lag between the two, and again between the trigger and capture on the host computer.
I’ll look to raise an issue shortly.
All these computer vision cameras use a Bayer (i.e. raw) format, with no JPEG encoding. Often it's too CPU-intensive to demosaic and compress at time of capture.
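For a sense of where the CPU time goes: demosaicing touches every pixel. A toy nearest-neighbour version for an RGGB pattern (illustrative only; real pipelines use vectorised bilinear or better methods, e.g. via OpenCV) might look like:

```python
# Minimal sketch: nearest-neighbour demosaic of an RGGB Bayer mosaic.
# Each 2x2 Bayer cell holds one R, two G, and one B sample; all four
# output pixels of the cell reuse those samples.

def demosaic_rggb(mosaic):
    """mosaic: 2D list of raw sensor values, RGGB pattern, even dims.
    Returns a 2D list of (r, g, b) tuples."""
    h, w = len(mosaic), len(mosaic[0])
    rgb = [[None] * w for _ in range(h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2
            b = mosaic[y + 1][x + 1]
            for dy in (0, 1):
                for dx in (0, 1):
                    rgb[y + dy][x + dx] = (r, g, b)
    return rgb

raw = [[10, 20],
       [20, 30]]
print(demosaic_rggb(raw)[0][0])  # → (10, 20.0, 30)
```

Even this crude version does per-pixel work; a good-quality demosaic plus compression on multi-megapixel streams is where the host CPU cost comes from.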
I really want a message format that can hold the raw image, a few timing details (such as trigger time or a trigger count for synchronisation), relevant metadata, and maybe EXIF-style tags too, e.g. lon/lat and camera orientation.
For what it is worth, we can definitely make changes to the core messages if it makes sense. That said, we want to make sure that anything we add to core messages is broadly applicable, because adding fields to core messages imposes network overhead on everyone.
With that in mind, though, some of this information looks like it could possibly be put into the CameraInfo message. If there is enough community buy-in that this additional metadata is broadly useful, then we can consider putting it into the core message.
For the time being, though, I think @peci1’s advice is right, and you should probably create a custom message and use a synchronized subscriber to get the Image, CameraInfo, and your new CustomMessage together.
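For anyone unfamiliar with the synchronized-subscriber approach: conceptually, message_filters' ApproximateTimeSynchronizer pairs messages whose stamps fall within a small slop window. A stdlib-only sketch of the idea (not the actual ROS API; stamps are floats here rather than builtin_interfaces/Time) is:

```python
# Conceptual sketch of approximate-time pairing, as done when
# synchronizing Image + CameraInfo + custom metadata topics.

def pair_by_stamp(images, infos, slop=0.01):
    """Greedily match each image stamp to the closest info stamp
    within `slop` seconds. Returns a list of (img_stamp, info_stamp)."""
    pairs = []
    remaining = list(infos)
    for img in images:
        best = min(remaining, key=lambda t: abs(t - img), default=None)
        if best is not None and abs(best - img) <= slop:
            pairs.append((img, best))
            remaining.remove(best)
    return pairs

print(pair_by_stamp([0.000, 0.100], [0.003, 0.104, 0.500]))
# → [(0.0, 0.003), (0.1, 0.104)]
```

The real synchronizer does this with queues and callbacks, but the matching criterion is the same: stamps close enough together are treated as one capture event.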
@clalancette @peci1 there are really three types of data:

- generic configuration data for the sensor or camera (e.g. acquisition frame rate, exposure mode, auto gain min, auto gain max, auto white balance roi|lamp house, expected gray value)
- image capture data (e.g. exposure time, gain, balance ratio)
- camera-vendor-specific data (e.g. trigger mode, trigger source, line source)
I'd also like to see some sort of EXIF values for an image - I'd like to store against each image its lon/lat and camera orientation.
As a minimum, however, to assist with processing raw Bayer data, I'd suggest storing exposure time, gain, and balance ratio against each image.
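To illustrate why those fields matter for processing: with a (roughly) linear sensor response, exposure time and gain let you normalise raw intensities across frames captured with different settings. A sketch, assuming gain is expressed in dB as many camera vendors do:

```python
# Sketch: normalising raw intensities to a common exposure/gain so frames
# with different capture settings become comparable. Assumes a linear
# sensor response and gain in dB (20*log10 voltage-gain convention).

def normalise(raw_value, exposure_s, gain_db):
    """Return the value scaled to 1 s exposure at 0 dB gain."""
    linear_gain = 10 ** (gain_db / 20.0)
    return raw_value / (exposure_s * linear_gain)

# The same scene point captured at 10 ms and at 20 ms (twice the signal)
# should normalise to the same value.
a = normalise(200.0, 0.010, 0.0)
b = normalise(400.0, 0.020, 0.0)
print(a == b)  # → True
```

Without the per-frame exposure/gain/balance values stored alongside the raw Bayer data, this kind of radiometric correction is impossible after the fact.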
Actually, in my experience too there have been problems because we needed more metadata transported with the images, due to auto-focus and other automated information from ML/AI cameras. I'd support adding a few fields to include that information, but I agree that CameraInfo is the place. More and more cameras are not going to be fixed-focus and fixed-parameter moving forward – cell phone cameras and similar are becoming quite ubiquitous.
I capture data for playback now and again with ROS 2 bags. I rarely include camera info, as it's static. Not sure what the history has been that led to this need to resynchronise messages? I'm more interested in synchronising stereo image capture from multiple nodes, and shortly I will be adding more cameras.
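For hardware-triggered stereo, matching frames by a shared trigger count can be more robust than matching by timestamp, since both cameras see the same trigger pulse. A sketch, assuming both drivers publish the trigger counter alongside each frame (the counter field is hypothetical, not something existing drivers provide):

```python
# Sketch: pairing stereo frames from two camera nodes by a shared
# hardware trigger count rather than by timestamp. Frames whose counter
# has no counterpart (e.g. a dropped frame) are discarded.

def match_by_trigger(left, right):
    """left/right: dicts of trigger_count -> frame.
    Returns matched (left, right) pairs sorted by trigger count."""
    common = sorted(set(left) & set(right))
    return [(left[n], right[n]) for n in common]

left = {1: "L1", 2: "L2", 4: "L4"}    # left camera dropped frame 3
right = {2: "R2", 3: "R3", 4: "R4"}   # right camera dropped frame 1
print(match_by_trigger(left, right))  # → [('L2', 'R2'), ('L4', 'R4')]
```

This is one reason a trigger count belongs in any per-image metadata message: it gives an exact correspondence between cameras, independent of host-side receive timing.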