RealSense T265 - hands-on review


I’ve made a short blog post describing my experience using the Intel RealSense T265 tracking camera with a wheeled mobile robot, which I think some of you could find useful:

TL;DR of the post: I really like the idea behind this sensor, but I found it quite difficult to get a ‘proper’ REP-105-compliant setup with it. In my opinion, at this stage it can be a decent unit for R&D, but I’d say it’s not there yet for commercial applications.

If you have any tips on using the T265, or notice anything I’m doing wrong in my setup, I would really appreciate your feedback!


Nice article.

I was really impressed with the T265. Having a commercial off-the-shelf device that does visual odometry says something about how much VSLAM is improving.

It worked pretty well in my experiments, but then I had to ask myself: if I have wheel odometry and an IMU, do I need the T265?

I came to the conclusion that the T265 is an amazing device that is not really useful in many practical cases.

The fact that it is “just” visual odometry and I cannot reuse maps makes it less attractive than it could be.

But I think it is great for non-wheeled robots like drones and hand-held devices.


Thank you for this. The coordinate convention makes my head spin, hopefully this will help.

While impressive, so far I see the T265 as better suited for integration with an external VIO/VSLAM system, which is kind of ironic. It is really amazing how well it works out of the box, but I can’t quite figure out a good way to translate that into a production asset.


For me the greatest promise of the T265 was the plug’n’play aspect. I was hoping I could plug it into my robot and receive good feedback from it that would allow me to detect wheel slip, especially on uneven or slippery terrain.

I’m really looking forward to seeing what happens to it in the future, and I really hope Intel doesn’t abandon it.
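For what it’s worth, the slip-detection idea can be sketched without any ROS plumbing: over a short window, compare the path length reported by wheel odometry against the path length reported by the T265, and flag slip when the wheels claim much more motion than the visual odometry sees. A minimal sketch (the threshold and minimum-travel values are made up for illustration, not tuned numbers):

```python
import math

def detect_wheel_slip(wheel_poses, vo_poses, ratio_threshold=1.5, min_travel=0.05):
    """Compare 2D path length from wheel odometry vs. visual odometry.

    wheel_poses, vo_poses: lists of (x, y) positions sampled over the same
    time window. Returns True if the wheels report significantly more travel
    than the visual odometry, which suggests the wheels are slipping.
    """
    def path_length(poses):
        return sum(
            math.hypot(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(poses, poses[1:])
        )

    wheel_dist = path_length(wheel_poses)
    vo_dist = path_length(vo_poses)

    # Ignore windows where the robot barely moved: the ratio is noisy there.
    if wheel_dist < min_travel:
        return False
    return wheel_dist > ratio_threshold * max(vo_dist, 1e-6)

# Wheels claim 1.0 m of travel while the camera only saw ~0.2 m -> slip.
wheels = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
vo = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]
print(detect_wheel_slip(wheels, vo))  # True
```

In a real setup you would feed this from synchronized odometry messages and tune the threshold against the noise of both sources.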

I don’t get your problem with the TFs. For me, the T265 publishes TFs in compliance with REP-105.

Interesting! Did you set the publish_odom_tf parameter to true in the driver and manage to get a solid tree? Would you be able to share your settings, TF tree, etc.? I spent close to a week just on this and I’d love to learn how to set it up properly!

I have just used the default settings :wink:

Yes, this looks good for a system where you use just the camera. How would you integrate it on your robot when it comes to tf structure?

Where would you connect the camera to base_link? Because the only way I can think of that won’t violate the REP-105 is if you have a camera_link -> base_link transform, which is quite backwards from what I’m used to.

If you want your standard odom --> base_link --> camera_link tree, you have to handle TF yourself or use the robot_localization package.
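If you do handle it yourself, the math is just composing the camera's reported pose with your fixed base-to-camera extrinsic. A hedged sketch of that composition with plain numpy (the example poses and the 0.2 m camera offset are illustrative assumptions, not values from the driver):

```python
import numpy as np

def make_tf(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Pose the T265 reports: camera_link expressed in the odom frame.
# Illustration: camera moved 1 m forward, no rotation.
T_odom_camera = make_tf(np.eye(3), [1.0, 0.0, 0.0])

# Fixed extrinsic from your URDF: camera mounted 0.2 m ahead of base_link.
T_base_camera = make_tf(np.eye(3), [0.2, 0.0, 0.0])

# What you actually want to broadcast as odom -> base_link:
T_odom_base = T_odom_camera @ np.linalg.inv(T_base_camera)

print(T_odom_base[:3, 3])  # base_link sits at x = 0.8 in odom
```

A small node can do this on every camera odometry message and broadcast odom -> base_link, leaving base_link -> camera_link to the static URDF transform, which keeps the tree in the usual REP-105 direction.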

Can we take pictures with the Intel T265 tracking camera? I am interested in getting pictures along with the tracking records.

Yes, you can. It is a monochrome fisheye stereo camera, and you can access both stereo streams.




I’ve had some pretty amazing results with a SLAM implementation called Basalt. There’s a wrapper for ROS 1 and for ROS 2.


Basalt looks super cool! I’ll actually feature it in Weekly Robotics!

@allenh1 any more information you can give on that? We’re looking at VSLAM integrations with Nav2, so that sounds like something we should consider (assuming it’s a full SLAM with loop closure).

I glanced over the paper, but they didn’t really compare it with any recently published systems like Kimera / T265 / etc. (ORB-SLAM, OKVIS, etc.) to get an actual impression of how it lines up.

Basalt looks really nice! I’ve had some good results using VINS Fusion on the T265. Often, the results are better than the internal visual SLAM on the T265, but sometimes worse. Could be my parametrization though. I didn’t do a quantitative evaluation.


I hope someone has the time / inclination to do some quantitative comparisons of the options on the same hardware; it would be interesting to see, over a variety of situations and datasets, if any stick out.


Isn’t that basically what the KITTI odometry benchmark tries to show?
Although I realize there is a difference between feeding V(I)O into a robot system that has a separate SLAM method doing loop closures, and letting the V(I)O/SLAM system do its own loop closures. I don’t know for sure, but I think all the algorithms in the KITTI ranking are doing their own loop closures.

Also, I don’t actually know if the KITTI dataset has IMU data with a good sync to their camera data, so maybe some of the results will be better with newer cameras with rigidly coupled and hardware synced IMUs?

Isn’t that basically what the KITTI odometry benchmark tries to show?

Yes, I believe Basalt has an executable for running on KITTI.

The bag files (.bag) are very large. Even using compression didn’t reduce the size much. They are 300 MB+ for just 8 seconds. How can I store the tracking video in less space?
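One thing that usually helps: the raw fisheye image streams dominate the bag size, so if you only need the tracking output, record just the pose/IMU topics. Something along these lines (the topic names below are the common realsense2_camera defaults for the T265 and may differ on your setup):

```shell
# Record only the lightweight tracking output, not the fisheye images.
# --lz4 enables per-chunk compression; -O names the output file.
rosbag record --lz4 -O t265_tracking.bag \
    /camera/odom/sample \
    /camera/accel/sample \
    /camera/gyro/sample
```

If you do need the video, recording the compressed image topics published via image_transport instead of image_raw, or throttling the frame rate before recording, cuts the size dramatically compared to raw monochrome frames.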