I hope this message finds everyone well. I am currently working on a project involving a Turtlebot and I am particularly interested in implementing vision-based navigation capabilities. As I am utilizing ROS2 for this project, I am in search of any existing packages or resources that could assist in achieving vision navigation with a Turtlebot.
Could anyone please advise if there are any ROS2 packages available that support vision navigation specifically for Turtlebots? Additionally, if there are any tutorials, documentation, or advice on integrating such a system, I would greatly appreciate your guidance.
Furthermore, if anyone has experience with custom implementations or workarounds to achieve vision navigation with Turtlebots in ROS2, your insights would be incredibly valuable.
Thank you in advance for your assistance and for sharing your expertise. I am looking forward to being an active participant in this community and contributing wherever I can.
If you will excuse my honesty, the question is so vague as to be unanswerable.
You say your goal is “vision-based navigation”. I am a NOOB to ROS 2, and have seen that folks are using “vision with depth” (RGB-D, e.g. a Kinect camera) on the TurtleBot3, or “stereo vision for depth” (e.g. an Oak-D-Lite) for mapping and localization on the TurtleBot4 Lite. One approach for point-cloud mapping and localization is the ROS / ROS 2 package RTAB-Map. (The typical onboard processor for a TurtleBot3 or TurtleBot4 is heavily taxed by the large volume of visual data, so I am not setting my expectations high going into vSLAM on my Raspberry Pi 5 powered robot.)
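For reference, here is roughly how I understand RTAB-Map gets brought up on an RGB-D camera in ROS 2. This is an untested sketch: the rtabmap_launch package and its rtabmap.launch.py come from the rtabmap_ros documentation for recent releases (older releases bundled everything into the rtabmap_ros package), and the camera topic names are assumptions you would replace with whatever your driver actually publishes.

```python
# Untested sketch: bring up RTAB-Map RGB-D SLAM in ROS 2 by including the
# launch file shipped with rtabmap_launch. Launch-argument and topic names
# are taken from the rtabmap_ros docs / my camera driver and may differ on
# your install -- treat them as placeholders.
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():
    rtabmap_launch_dir = get_package_share_directory('rtabmap_launch')
    return LaunchDescription([
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(
                os.path.join(rtabmap_launch_dir, 'launch', 'rtabmap.launch.py')),
            launch_arguments={
                'rgb_topic': '/camera/color/image_raw',          # assumed driver topic
                'depth_topic': '/camera/depth/image_raw',        # assumed driver topic
                'camera_info_topic': '/camera/color/camera_info',
                'frame_id': 'base_link',
                'approx_sync': 'true',                           # RGB and depth not hw-synced
            }.items()),
    ])
```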
I have not heard of a purely vision-based navigation package as yet. I believe most folks are using Nav2 for navigation in ROS 2, and Nav2 works with whatever mapping and localization source is available.
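That is the nice part: once something (lidar SLAM, vSLAM, whatever) is publishing the map and localization, sending goals to Nav2 looks the same regardless of the sensor. A minimal sketch using the nav2_simple_commander API that ships with Nav2 on Humble; the goal pose below is just a placeholder.

```python
# Minimal sketch (assumes Nav2 is already running and the robot is localized):
# send one navigation goal through the nav2_simple_commander API.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator


def main():
    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()            # blocks until Nav2 lifecycle nodes are active

    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = 1.0                 # placeholder goal, in the map frame
    goal.pose.position.y = 0.5
    goal.pose.orientation.w = 1.0

    navigator.goToPose(goal)
    while not navigator.isTaskComplete():
        pass                                   # could inspect navigator.getFeedback() here

    print(navigator.getResult())               # SUCCEEDED / CANCELED / FAILED
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```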
You will probably need to do a bunch of reading to be able to ask more specific questions. Perhaps start by searching the tags on robotics.stackexchange.com.
Now that I reflect a little more: I believe that the DonkeyCar's neural-net-based “vision to driving commands” approach might be classified as a very limited form of visual navigation (one that does not involve localization or mapping).
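To make that concrete, a “vision to driving commands” node in ROS 2 would look something like the sketch below: no map, no localization, just an image in and a Twist out. The predict_steering function and the camera topic name are hypothetical placeholders, not anything DonkeyCar actually ships.

```python
# Rough illustration of "vision -> driving commands" with no map or
# localization: subscribe to camera frames, run some learned policy
# (predict_steering is a hypothetical placeholder), publish cmd_vel.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist


def predict_steering(image_msg: Image) -> float:
    """Placeholder for a learned policy (e.g. a small CNN); returns rad/s."""
    return 0.0


class VisionDriver(Node):
    def __init__(self):
        super().__init__('vision_driver')
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)
        # Camera topic name is an assumption; TurtleBot3 camera drivers differ.
        self.create_subscription(Image, '/camera/image_raw', self.on_image, 10)

    def on_image(self, msg: Image):
        cmd = Twist()
        cmd.linear.x = 0.1                      # constant forward speed
        cmd.angular.z = predict_steering(msg)   # steer from the image
        self.cmd_pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(VisionDriver())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```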
Interesting to read about your project, as I am the owner of an original 2011 TurtleBot (Ubuntu 14 / ROS 1 Electric) and am now undertaking its migration to Ubuntu 22.04 / ROS 2 Humble and Gazebo (gz-sim) Harmonic. I too have an Oak-D-Lite that I plan to use for ordinary RGB vision and hopefully depth / pointcloud functions.
However, I have a question about your reference to the TurtleBot4, whose documentation and GitHub code I have studied. Perhaps I overlooked it, but I find no reference to the TurtleBot4 using the camera's depth features to perform SLAM or navigation. As I read it, only /odom from the wheels and /scan data from the RPLidar are used to localize the robot. The original TurtleBot uses a Microsoft Kinect, which publishes depth images and PointCloud2 messages; the depthimage_to_laserscan package subscribes to the depth image and publishes /scan for the SLAM & Nav2 packages. https://wiki.ros.org/depthimage_to_laserscan/
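For what it's worth, here is a sketch of that depth-image-to-/scan bridge as a ROS 2 launch file. The executable and parameter names are from my reading of the ROS 2 port of depthimage_to_laserscan, so verify them against your installed version, and remap the depth topics to whatever your camera driver publishes.

```python
# Sketch of the depth-image -> /scan bridge described above, for ROS 2.
# Node executable, topic, and parameter names follow my reading of the
# ROS 2 port of depthimage_to_laserscan and may differ by release.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='depthimage_to_laserscan',
            executable='depthimage_to_laserscan_node',
            name='depthimage_to_laserscan',
            remappings=[
                ('depth', '/camera/depth/image_raw'),            # assumed driver topic
                ('depth_camera_info', '/camera/depth/camera_info'),
            ],
            parameters=[{
                'range_min': 0.45,                  # metres; Kinect-ish minimum range
                'range_max': 8.0,                   # metres
                'scan_height': 10,                  # pixel rows used for the fake scan
                'output_frame': 'camera_depth_frame',
            }],
        ),
    ])
```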
I would appreciate it if you could point to where in the GitHub repositories the TurtleBot4 processes the Oak-D-Lite's camera depth data.
@RobotDreams
Sorry for asking such a vague question.
I am currently working on a project to create photorealistic 3D environments using a smartphone camera and UE5. In this environment I can use ROS 2 to run the turtlebot3 waffle. However, I am struggling to find quantitative metrics that can be used to effectively evaluate this environment.
I decided to evaluate it based on whether I can run the turtlebot3 waffle from camera images only.
Therefore, my question is: Is there any package or other way to achieve vision navigation using Turtlebot in ROS2?
Is there anything else that can be quantitatively evaluated using ROS2 and the turtlebot3 waffle?
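For context, the kind of metric I have in mind would be something like logging the position error between the robot's estimated pose (/odom or AMCL) and the simulator's ground truth. A rough sketch of what I mean; the /ground_truth_pose topic name is hypothetical, standing in for whatever my UE5 bridge ends up publishing.

```python
# Rough sketch of one quantitative metric: running RMSE between the
# estimated pose (/odom) and a ground-truth pose from the simulator.
# The /ground_truth_pose topic name is hypothetical -- substitute the
# topic your UE5 / ROS 2 bridge actually publishes.
import math

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Odometry


class PoseErrorLogger(Node):
    def __init__(self):
        super().__init__('pose_error_logger')
        self.truth = None
        self.sq_errors = []
        self.create_subscription(PoseStamped, '/ground_truth_pose', self.on_truth, 10)
        self.create_subscription(Odometry, '/odom', self.on_odom, 10)

    def on_truth(self, msg: PoseStamped):
        self.truth = msg.pose.position

    def on_odom(self, msg: Odometry):
        if self.truth is None:
            return
        p = msg.pose.pose.position
        err = math.hypot(p.x - self.truth.x, p.y - self.truth.y)
        self.sq_errors.append(err * err)
        rmse = math.sqrt(sum(self.sq_errors) / len(self.sq_errors))
        self.get_logger().info(f'position error {err:.3f} m, RMSE {rmse:.3f} m')


def main():
    rclpy.init()
    rclpy.spin(PoseErrorLogger())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```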