I am creating this post to gather ideas, and possibly build a package with your help, for calibrating a non-overlapping multi-camera rig together with a 3D lidar. So far my thinking has been to follow the approach used for single lidar-camera calibration, but extended to a camera rig.
My idea is to use classic photogrammetric resection via the collinearity equations, which require at least 4 points with known coordinates to recover a camera's extrinsic parameters. By doing this for every camera and solving one system with all the equations together, my guess is that we could finally calibrate the sensors.
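To make the resection idea concrete, here is a minimal sketch of the collinearity residuals for a single camera. The pose parameterization (omega-phi-kappa angles plus projection-center coordinates) and all names are my own assumptions for illustration, not from an existing package:

```python
import numpy as np

def collinearity_residual(params, points_world, points_image, f):
    """Residuals of the collinearity equations for one camera.

    params = [omega, phi, kappa, X0, Y0, Z0]: rotation angles and
    projection center (a hypothetical parameterization).
    points_world: (N, 3) control-point coordinates (e.g. from the lidar).
    points_image: (N, 2) measured image coordinates.
    f: focal length (same units as the image coordinates).
    """
    om, ph, ka, X0, Y0, Z0 = params
    # Rotation matrix from the omega-phi-kappa angles
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(om), -np.sin(om)],
                   [0, np.sin(om),  np.cos(om)]])
    Ry = np.array([[ np.cos(ph), 0, np.sin(ph)],
                   [0, 1, 0],
                   [-np.sin(ph), 0, np.cos(ph)]])
    Rz = np.array([[np.cos(ka), -np.sin(ka), 0],
                   [np.sin(ka),  np.cos(ka), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx
    # Transform control points into the camera frame
    pc = (points_world - np.array([X0, Y0, Z0])) @ R.T
    # Collinearity equations: projected minus observed image coordinates
    x = f * pc[:, 0] / pc[:, 2]
    y = f * pc[:, 1] / pc[:, 2]
    return np.concatenate([x - points_image[:, 0],
                           y - points_image[:, 1]])
```

Each control point contributes two equations, so with 6 unknowns per camera, 4 well-distributed points give an overdetermined system that a least-squares solver can refine.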
So the main parts of this approach would be the following:
[1] Target detection by lidar.
[2] Target detection by camera (one at a time).
[3] Extract the point coordinates from the target.
[4] Gather the observations for each camera.
[5] Least-squares adjustment.
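The steps above could be tied together roughly like this: once steps [1]-[4] have produced per-camera correspondences between lidar-frame target points and their image detections, step [5] becomes one joint least-squares problem over all camera poses. A minimal sketch, assuming an axis-angle pose parameterization and a shared focal length (all names are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

def project(pose, pts_world, f):
    """Pinhole projection of lidar-frame points into one camera.

    pose = [rx, ry, rz, tx, ty, tz]: axis-angle rotation + translation
    (an assumed parameterization, not from an existing package).
    """
    rvec, t = pose[:3], pose[3:]
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        # Rodrigues' formula
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K
    pc = pts_world @ R.T + t
    return f * pc[:, :2] / pc[:, 2:3]

def residuals(x, observations, f):
    """Stack reprojection residuals over all cameras in the rig.

    observations: list of (lidar_pts, image_pts) pairs, one per camera.
    x: concatenated 6-vector poses, one per camera.
    """
    res = []
    for i, (pts_w, pts_img) in enumerate(observations):
        pose = x[6 * i:6 * i + 6]
        res.append((project(pose, pts_w, f) - pts_img).ravel())
    return np.concatenate(res)

# Usage sketch:
# observations = [(lidar_pts_cam0, img_pts_cam0), (lidar_pts_cam1, img_pts_cam1)]
# sol = least_squares(residuals, np.zeros(6 * len(observations)),
#                     args=(observations, f))
```

Because the cameras do not overlap, each camera is tied to the others only through the common lidar frame, so every camera needs its own view of a target with lidar-measured coordinates.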
Is this for ROS 1 or for ROS 2? The post says Melodic, but I just wanted to clarify (I don't think now is a good time to be creating new Melodic packages). If you are seriously considering undertaking this effort, it would be great if it could become a shared, common ROS 2 library.
If you want, I can introduce you to the OpenCV team. My understanding is that OpenCV 5, which will be released early next year, will have an extensive rework of the calibration tools, including multi-camera calibration. I am not sure whether it will include cross-sensor-modality (camera x lidar) calibration. You may also want to talk to the team over at @tangramvision, as I think they are building some tooling in that direction. Along those lines, @Luxonis-Brandon may have an interest in helping.
Sorry about the delay in responding! I just realized that I must have messed up my notification settings on here, as I haven't been getting any tags, etc.
And yes, I think this is right up @tangramvision's alley, as far as I understand. I also believe there is a project using OAK-D for this now; I'll see if I can find it.