For me the greatest promise of the T265 was the plug’n’play aspect. I was hoping I could plug it into my robot and get reliable feedback from it that would let me detect wheel slip, especially on uneven or slippery terrain.
I’m really looking forward to seeing what happens with it in the future, and I really hope Intel doesn’t abandon it.
Interesting! Did you set the publish_odom_tf parameter to true in the driver and manage to get a solid tree? Would you be able to share your settings, tf tree, etc.? I spent close to a week just on this and I’d love to learn how to set it up properly!
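For context, this is roughly the kind of setup I was experimenting with — just a sketch, and the launch file name and argument passing are from my setup, so they may differ between driver versions:

```shell
# Sketch: start the realsense2_camera driver with odom tf publishing enabled.
# rs_t265.launch and publish_odom_tf existed in the versions I tried, but
# check your driver release -- names have changed between versions.
roslaunch realsense2_camera rs_t265.launch publish_odom_tf:=true
```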
Yes, this looks good for a system where you use just the camera. How would you integrate it on your robot when it comes to tf structure?
Where would you connect the camera to base_link? Because the only way I can think of that won’t violate the REP-105 is if you have a camera_link -> base_link transform, which is quite backwards from what I’m used to.
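To make the “backwards” option concrete, the sort of thing I mean is a static transform hung off the camera’s pose frame — a sketch only; camera_pose_frame is the driver’s default frame name in my install, and the offsets are placeholders for your actual mounting:

```shell
# Sketch: since the driver owns odom -> camera_pose_frame, base_link ends up
# attached *under* the camera frame. The x y z yaw pitch roll values are
# camera-to-base offsets for your robot (placeholders here).
rosrun tf2_ros static_transform_publisher 0 0 -0.2 0 0 0 camera_pose_frame base_link
```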
@allenh1, any more information you can give on that? We’re looking at VSLAM integrations with Nav2, so that sounds like something we should consider (assuming it’s a full SLAM with loop closure).
I glanced over the paper, but they didn’t really compare it with anything recently published like Kimera / T265 / etc. (or ORB-SLAM, OKVIS, etc.) to get an actual impression of how it lines up.
Basalt looks really nice! I’ve had some good results using VINS Fusion on the T265. Often, the results are better than the internal visual SLAM on the T265, but sometimes worse. Could be my parametrization though. I didn’t do a quantitative evaluation.
I hope someone has the time / inclination to do some quantitative comparisons of the options on the same hardware; it would be interesting to see, over a variety of situations and datasets, whether any of them stick out.
Isn’t that basically what the KITTI odometry benchmark tries to show? http://www.cvlibs.net/datasets/kitti/eval_odometry.php
Although I realize there is a difference between feeding V(I)O into a robot system that has a separate SLAM method doing the loop closures and letting the V(I)O(SLAM) do its own loop closures. I don’t know for sure, but I think all the algorithms in the KITTI ranking do their own loop closures.
Also, I don’t actually know whether the KITTI dataset has IMU data well synced to the camera data, so maybe some of the results would be better with newer cameras that have rigidly coupled, hardware-synced IMUs?
The bag files (.bag) are very large; even with compression they didn’t shrink much. They are 300 MB+ for just 8 seconds. How can I store the tracking video in less space?
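In case it helps: most of that size is probably the raw fisheye streams rather than the odometry. One option is to record only the topics you need, and a compressed image topic instead of the raw one — a sketch, assuming default realsense2_camera T265 topic names and that the compressed image_transport plugin is installed:

```shell
# Sketch: record only odometry plus a compressed image topic instead of
# the raw fisheye streams. Topic names are from a default realsense2_camera
# T265 setup and may differ on your system.
rosbag record --lz4 \
  /camera/odom/sample \
  /camera/fisheye1/image_raw/compressed
```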
Thanks for reporting your experiences, @vasanth_reddy1! This kind of hands-on information from actual users is really valuable and hard to get from manufacturers’ websites.
I’ve seen that too on a colleague’s brand-new (2020) laptop. On the other hand, on my old 2015 laptop I don’t have any problems whatsoever. Might be related to which USB chipset you’re using.