My fork adds:

- Turtlebot 3 support
- Updated Gazebo interaction to work with an increased real-time factor
- ROS 2 support, making it compatible with the latest middleware and tooling
- A WIP port to newer Gazebo versions, including Fortress and Harmonic
One open issue with the port: on environment reset, the Turtlebot gets removed and needs to be added again. When using `msg.reset.model_only = True`, the Turtlebot model stays, but the environment doesn’t seem to reset properly, showing different behaviour than Gazebo Classic.
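For reference, here’s a minimal sketch of how the reset can be triggered through the ros_gz bridge. The world name `default` and the bridged service path are assumptions, so adjust them for your setup:

```python
# Minimal sketch: request a world reset through the ros_gz bridge.
# Assumes the world is named "default" and that the ControlWorld
# service has been bridged; adjust the names for your setup.
import rclpy
from rclpy.node import Node
from ros_gz_interfaces.srv import ControlWorld


def reset_world(node: Node, model_only: bool = True) -> bool:
    client = node.create_client(ControlWorld, '/world/default/control')
    if not client.wait_for_service(timeout_sec=5.0):
        node.get_logger().error('world control service not available')
        return False
    req = ControlWorld.Request()
    # model_only=True keeps the models (e.g. the Turtlebot) but resets
    # their poses; this is where newer Gazebo behaves differently
    # from Classic in my testing.
    req.world_control.reset.model_only = model_only
    req.world_control.reset.all = not model_only
    future = client.call_async(req)
    rclpy.spin_until_future_complete(node, future)
    return future.result() is not None and future.result().success


def main():
    rclpy.init()
    node = Node('world_resetter')
    reset_world(node, model_only=True)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```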
@Gi_T This is really interesting work! I have a somewhat related project where I’m training a neural network to perform navigation using a single camera, aiming to improve upon encoder/IMU-based odometry. The goal is to use low-cost hobby robot components and a single ESP32-based camera.
Deep reinforcement learning (DRL) for navigation often relies on LiDAR, which increases costs. Do you think it’s possible to extend DRL-based navigation using only a monocular camera? Monocular SLAM results seem a bit noisy (GitHub - weixr18/Pi-SLAM: Implementing full visual SLAM on the Raspberry Pi 4B). It seems doable, as these depth estimation models are getting really good; latency might be the bottleneck. I’m still a little new to DRL techniques in general.
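As a rough illustration of the depth-model route, here’s a hedged sketch that converts a relative depth map from one of the public MiDaS models into a LiDAR-like “pseudo scan”. The middle-row band sampling and the inverse-depth scaling are illustrative assumptions, not a calibrated pipeline:

```python
# Sketch: turn a monocular depth estimate into a pseudo laser scan, so a
# LiDAR-style DRL observation could be fed from a single camera.
# Uses the public MiDaS models from torch.hub; everything downstream of
# the model (band sampling, range scaling) is an illustrative assumption.
import numpy as np
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform


def pseudo_scan(rgb: np.ndarray, num_beams: int = 64) -> np.ndarray:
    """rgb: HxWx3 uint8 image -> num_beams relative 'ranges' across the FoV."""
    with torch.no_grad():
        pred = midas(transform(rgb))  # relative inverse depth, (1, h, w)
        depth = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=rgb.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze().numpy()
    band = depth[depth.shape[0] // 2]  # sample the middle image row
    cols = np.linspace(0, band.shape[0] - 1, num_beams).astype(int)
    inv = band[cols]
    # MiDaS predicts inverse relative depth, so invert to get something
    # range-like; larger values are farther away (relative scale only).
    return 1.0 / np.maximum(inv, 1e-6)
```

Note that MiDaS depth is relative, not metric, so the policy would have to be trained (or fine-tuned) on this representation rather than on raw LiDAR ranges.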
That’s looking pretty good! Would be great to see some longer videos of it running.
I’ve always wondered how well these methods generalize to different sensors, say with different min/max range, FoV, and frequency. Would it need retraining for a lidar with twice the range?
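One thing that might partially sidestep retraining is remapping the new sensor’s scan to the spec the policy was trained on before it reaches the network. A rough sketch (the beam count and max range here are just example values):

```python
# Sketch: adapt a scan from a different lidar to the sensor spec a policy
# was trained on (beam count, max range). The defaults are assumptions,
# and this obviously can't recover returns beyond the training max range.
import numpy as np


def adapt_scan(ranges, src_max_range, train_beams=360, train_max_range=3.5):
    r = np.asarray(ranges, dtype=np.float32)
    r[~np.isfinite(r)] = src_max_range  # lidars report inf/nan for no return
    # Clip to the training sensor's max range and normalize to [0, 1],
    # matching what the training pipeline presumably did.
    r = np.clip(r, 0.0, train_max_range) / train_max_range
    # Resample the beam count to what the network expects.
    idx = np.linspace(0, len(r) - 1, train_beams).astype(int)
    return r[idx]
```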
A colleague of mine did his dissertation on the same topic for mapless navigation, though that one was based on integrating Markov chains with DWA, if I recall correctly.
> relies on LiDAR, which increases costs
Arguably, cheap 2D LiDARs are now at price points comparable to cameras, no? There’s a decent selection under $100 from Slamtec, Robotis, LDRobot, etc.
Form factor and cost are still not where they need to be, even compared to the Slamtec-type 2D LiDARs. Think of something like the (late) Anki Cozmo. I’m trying to keep the total price under $200 for the entire robot. The challenge (naive as it may be) would be to use a single camera, like the XIAO ESP32S3 Sense (about $14 retail), as the only sensor, except for perhaps a simple cliff-detection sensor along the base. With good encoders/motors and an IMU you can get good-enough odometry, as shown here: https://www.youtube.com/watch?v=7Wbw5yRmgS8&t=2787s, but what if we could improve on this with a pre-trained localization model? For now I’m experimenting with ORB-SLAM, but I think it could be very interesting to compare its robustness against a trained navigation model. That being said, the work of Boris Sofman/Anki is really brilliant given the constraints they had to work with at the time.