I am creating this post to gather ideas and, possibly with your help, build a package for calibrating a non-overlapping multi-camera rig together with a 3D lidar. So far my thinking has been to do what is done for lidar-to-single-camera calibration, but this time for a camera rig.
My idea is to use classic photogrammetric resection via the collinearity equations, which need at least three non-collinear points with known coordinates (four in practice, to resolve the ambiguity) to recover the extrinsic parameters of a camera. By writing these equations for every camera and solving the resulting system jointly, my guess is that we could finally calibrate all the sensors.
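For concreteness, here is the standard textbook form of the collinearity equations (the notation is the usual photogrammetric one, not something specific to this problem): for a camera with principal point $(x_0, y_0)$, focal length $f$, perspective center $(X_c, Y_c, Z_c)$ and rotation matrix $R = [r_{ij}]$, an object point $(X, Y, Z)$ projects to

$$
x = x_0 - f\,\frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}{r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}, \qquad
y = y_0 - f\,\frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}{r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}.
$$

Each observed point contributes two equations, and each camera pose has six unknowns, which is where the three-to-four-point minimum comes from.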
So the main parts of this approach would be the following (rough sketches for steps 1, 2 and 5 are given after the list):
1. Target detection by the lidar.
2. Target detection by each camera, one at a time.
3. Extraction of the point coordinates from the target.
4. Gathering of the observations for each camera.
5. Least-squares adjustment over all cameras.
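For step 1, a minimal sketch of how the lidar side could work, assuming a planar calibration board and the Open3D library (the function name `detect_board_plane` and all thresholds are placeholders, not from any existing package):

```python
import numpy as np
import open3d as o3d

def detect_board_plane(pcd: o3d.geometry.PointCloud):
    """RANSAC-fit the dominant plane in a (pre-cropped) point cloud and
    return the plane model (a, b, c, d) plus the points lying on the board."""
    plane, inliers = pcd.segment_plane(distance_threshold=0.02,  # ~2 cm noise band
                                       ransac_n=3,
                                       num_iterations=1000)
    board = pcd.select_by_index(inliers)
    return plane, np.asarray(board.points)
```

In practice you would first crop the cloud to a region of interest around the target, so that the dominant plane is actually the board and not a wall or the floor.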
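For step 2, the same kind of sketch on the camera side, assuming a checkerboard target and OpenCV (the pattern size and the sub-pixel refinement criteria are placeholder values):

```python
import cv2

def detect_board_corners(gray, pattern=(7, 5)):
    """Find the inner checkerboard corners in one grayscale image and refine
    them to sub-pixel accuracy; returns None when the board is not visible."""
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    return cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
```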
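And for step 5, a minimal sketch of the joint least-squares adjustment, assuming steps 3 and 4 have produced, for each camera `i`, matched lidar-frame 3D points `pts3d[i]`, image detections `pts2d[i]`, and known intrinsics `Ks[i]` (all names here are hypothetical; lens distortion is ignored and the initialization is deliberately crude):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, pts, K):
    """Project lidar-frame points into one camera whose pose is encoded as
    params = [rx, ry, rz, tx, ty, tz] (rotation vector + translation)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    cam = pts @ R.T + params[3:]        # lidar frame -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]       # perspective division
    return uv @ K[:2, :2].T + K[:2, 2]  # focal lengths + principal point

def residuals(x, pts3d, pts2d, Ks):
    """Stack the reprojection residuals of every camera into one vector,
    so all lidar-to-camera extrinsics are adjusted in a single system."""
    res = []
    for i, (P, uv, K) in enumerate(zip(pts3d, pts2d, Ks)):
        res.append((project(x[6 * i:6 * i + 6], P, K) - uv).ravel())
    return np.concatenate(res)

def calibrate(pts3d, pts2d, Ks, x0=None):
    """Solve for one 6-DoF lidar-to-camera pose per camera."""
    n = len(Ks)
    if x0 is None:
        x0 = np.zeros(6 * n)
        x0[5::6] = 1.0  # crude init; a per-camera PnP solution would be better
    sol = least_squares(residuals, x0, args=(pts3d, pts2d, Ks), method="lm")
    return sol.x.reshape(n, 6)
```

Once each camera's pose in the lidar frame is known, the camera-to-camera extrinsics follow by composition, which is what would make this work even with no overlap between the cameras.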
Any ideas?