Image Projection Library - Open Source Release

Hello everyone,

I want to announce the open-source release of image_projection. The package lets you easily configure projections from multiple calibrated camera sources. A demo launch configuration is available.

You can

  • Rectify a distorted image
  • Create a cylindrical projection (equirectangular, Mercator, …) from 360° camera data captured by multiple cameras
  • Create a perspective (pinhole) projection from fisheye camera data (a plain-OpenCV sketch of this remapping follows the list)
  • Convert non-linear fisheye data to an ideal fisheye lens model (“tru-theta”)
  • Define a virtual pan-tilt camera
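
To give a feel for the kind of remapping this involves, here is a minimal sketch of the pinhole-from-fisheye case using plain OpenCV. This is not the image_projection API; the intrinsics, distortion coefficients and file name are made-up placeholder values.

```python
# Sketch only: produce a pinhole view from a calibrated fisheye image with plain
# OpenCV. image_projection configures this kind of projection (and more) for you.
import cv2
import numpy as np

K = np.array([[285.0, 0.0, 320.0],            # assumed fisheye intrinsics
              [0.0, 285.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.01, -0.005, 0.002, -0.001])   # assumed equidistant distortion coefficients

fisheye_img = cv2.imread("fisheye.png")       # placeholder input image
h, w = fisheye_img.shape[:2]

# Choose a pinhole camera matrix for the virtual view (same size, reduced FOV).
P = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, D, (w, h), np.eye(3), balance=0.0)

# Precompute the remap tables once, then reproject every frame cheaply.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), P, (w, h), cv2.CV_16SC2)
pinhole_img = cv2.remap(fisheye_img, map1, map2, interpolation=cv2.INTER_LINEAR)
```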

This software has been used extensively by Team Hector. We use the compact $50 Insta360 Air 360° camera simultaneously as a virtual pan-tilt camera and as a 360° object-detection camera.

License: MIT

I am looking forward to your feedback.

Best wishes,
Martin Oehler

12 Likes

This is cool! How is the latency on the Insta360 Air? Is it good enough for teleoperation? I’m really interested in the virtual pan-tilt camera. I had some OK results with rviz_textured_sphere before (my blog post on this is here), but wouldn’t mind trying something more efficient 🙂

We’ve been using the Insta360 Air for a few years at Team Hector.
So far, we haven’t had any latency issues with it when teleoperating.
We’re using it as our main operator camera (with the package Martin just released) and as far as I know, there is no noticeable latency from the camera itself.
Network latency from the robot to the operator station is usually a bigger issue depending on the connection quality.

1 Like

Hey msadowski, thanks for your interest. The Insta360 Air can be connected via USB, so latency is low. Projection adds a few milliseconds depending on your desired output resolution. You can test this for your configuration with the provided bag files. But as Stefan said, most latency will be due to (wireless) communication.

I actually saw your blog post before and quite liked your camera setup. I also tested rviz_textured_sphere. By default, you cannot move the rviz camera center freely, so you will always have some kind of distortion, as can also be seen in your video. This can be fixed with rviz_camera_stream. A second issue is that the plugin assumes an ideal linear (“rectified”) fisheye image, with both fisheye images having the same camera center. This is never the case with real cameras. image_projection does not have these limitations. I also implemented an ideal fisheye projection plugin to produce corrected input for rviz_textured_sphere.
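
For readers unfamiliar with the term: an ideal (“tru-theta”, equidistant) fisheye simply means that a pixel’s radial distance from the principal point is proportional to the angle of its viewing ray from the optical axis, r = f·θ; real lenses deviate from this by a polynomial in θ. A minimal sketch of that mapping with assumed calibration names (fx, fy, cx, cy), not image_projection code:

```python
import numpy as np

def equidistant_pixel_to_ray(u, v, fx, fy, cx, cy):
    """Map a pixel of an ideal equidistant fisheye image to a unit viewing ray."""
    mx, my = (u - cx) / fx, (v - cy) / fy
    r = np.hypot(mx, my)   # radial distance in normalized coordinates
    if r < 1e-9:
        return np.array([0.0, 0.0, 1.0])
    theta = r              # equidistant model: normalized radius equals the ray angle
    s = np.sin(theta) / r
    return np.array([mx * s, my * s, np.cos(theta)])
```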

@Martin-Oehler this sounds amazing! I will try to schedule some play time and give this a shot; it could be a nice follow-up blog post to my previous one!

1 Like

Hello,
Could this package also work with the newer Insta360 One X? Or other brands like Ricoh Theta?
Many thanks 😃

Any camera will work as long as you can stream the images to ROS.

Before we switched to the Insta360 Air, we used a Ricoh Theta. It has a very small baseline (achieved with mirrors; amazing engineering), which leads to fewer stitching artifacts, but it has to be turned on manually.
Currently, I am working on a setup with the Insta360 Pro2, which should offer a much higher resolution.

Edit: I forgot one restriction. The source camera(s) should be able to output raw sensor data, e.g. a perspective or fisheye image. Already-processed data, such as a stitched panorama, is not supported as input right now.
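
To expand on “any camera will work as long as you can stream the images to ROS”: here is a minimal sketch of publishing frames from a UVC camera, assuming ROS 1 with rospy and cv_bridge. The topic name and device index are placeholders; in practice an existing driver such as usb_cam is the better choice.

```python
#!/usr/bin/env python
# Minimal sketch: stream frames from a UVC camera into ROS so downstream
# projection nodes can consume them.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node("simple_camera_publisher")
pub = rospy.Publisher("camera/image_raw", Image, queue_size=1)
bridge = CvBridge()
cap = cv2.VideoCapture(0)   # placeholder device index

rate = rospy.Rate(30)
while not rospy.is_shutdown():
    ok, frame = cap.read()
    if ok:
        msg = bridge.cv2_to_imgmsg(frame, encoding="bgr8")
        msg.header.stamp = rospy.Time.now()
        pub.publish(msg)
    rate.sleep()
```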

Could this package somehow be used to combine a 2D color image and a depth image?

The package supports only 2D input data right now. How would you combine 2D color and depth? By blending the depth data into the 2D image? What is your use case?

1 Like

Depth cameras have lenses just like 2D cameras, so if it works for a 2D camera it might also work for a depth camera. I was thinking of projecting both 2D and depth knowing their extrinsic relationship, then doing the reverse if we wanted to show a colorized depth image. This package seems right up that alley, I think.

So you want to reproject the depth image into the color camera in order to generate colored point clouds. The standard way to do this is depth_image_proc/register. However, the standard ROS image processing stack doesn’t support fisheye cameras, or even large field-of-view cameras. I tried adding a distortion model for large FOV cameras (Add equidistant distortion model by mintar · Pull Request #358 · ros-perception/vision_opencv · GitHub), but the repo seems to be largely unmaintained. So if you have a fisheye camera, you’ll definitely need something like image_projection. Such a cool project, thanks!
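
For readers wondering what “registering” the depth image means here: each valid depth pixel is back-projected to a 3D point, transformed into the color camera frame using the known extrinsics, and projected into the color image. A numpy sketch of that idea for plain pinhole cameras follows; this is not depth_image_proc’s code, and it ignores lens distortion and occlusion. K_depth, K_color, R and t are assumed calibration values; depth is in meters.

```python
import numpy as np

def register_depth_to_color(depth, K_depth, K_color, R, t, color_shape):
    """Reproject a depth image (meters) into the color camera's image plane."""
    h_d, w_d = depth.shape
    h_c, w_c = color_shape
    registered = np.zeros((h_c, w_c), dtype=depth.dtype)

    # Back-project every valid depth pixel to a 3D point in the depth camera frame.
    u, v = np.meshgrid(np.arange(w_d), np.arange(h_d))
    x = (u - K_depth[0, 2]) / K_depth[0, 0] * depth
    y = (v - K_depth[1, 2]) / K_depth[1, 1] * depth
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    pts = pts[depth.reshape(-1) > 0]           # drop invalid (zero) readings

    # Transform into the color camera frame and project with its intrinsics.
    pts_c = pts @ R.T + t
    z = pts_c[:, 2]
    pts_c, z = pts_c[z > 0], z[z > 0]
    u_c = np.round(pts_c[:, 0] / z * K_color[0, 0] + K_color[0, 2]).astype(int)
    v_c = np.round(pts_c[:, 1] / z * K_color[1, 1] + K_color[1, 2]).astype(int)

    # Keep points that land inside the color image; occlusion handling is omitted.
    inside = (u_c >= 0) & (u_c < w_c) & (v_c >= 0) & (v_c < h_c)
    registered[v_c[inside], u_c[inside]] = z[inside]
    return registered
```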

2 Likes

If your goal is to color (depth) clouds from fisheye images, I implemented color_cloud_from_image for this purpose. It builds upon the same kalibr calibration back-end as image_projection. However, the project has not been documented yet.

1 Like

Just to follow up on this topic, I took @Martin_Guenther’s advice and used a combination of what depth_image_proc provides along with image_proc, and finally added the missing piece, xyzrgb_radial, to combine the depth image and the color camera image.

I am consistently amazed at what open-source software can do if you know how to fit the pieces together.

3 Likes