Frustrations With the State of Hobby ROS 2 Robotics

Perhaps ROS 2 robotics is just too complicated a subject, too diversified a domain, for a hobbyist. After 40 years of building robots, I have dedicated the last two years to building a ROS 2 robot that would let me leverage community-developed ROS software to achieve my robot dreams, primarily a fully autonomous, 24/7 home robot aware of the objects and people in its environment.

I successfully built a ROS 2 Humble robot (first Foxy, then Galactic) from the ground up, based on the GoPiGo3 platform, starting with the hardware interface/control node and progressing to mapping and localization with a LIDAR and slam_toolbox.

Doing this alone, I became blocked on developing an EKF node to fuse IMU and wheel encoder data to improve the odometry accuracy. This hurdle increased my desire to “follow” folks rather than be a “blind leader of a group of one.”
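(For context, a minimal sketch of that fusion step, assuming the robot_localization package’s ekf_node rather than a from-scratch EKF; the topic names, frames, and which fields each sensor contributes are illustrative assumptions for a planar differential-drive base, not anyone’s actual setup:)

```python
# Minimal sketch, not a working setup: launch robot_localization's ekf_node
# to fuse wheel odometry and IMU data. Topic names, frames, and the chosen
# fields are illustrative assumptions for a small differential-drive base.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='robot_localization',
            executable='ekf_node',
            name='ekf_filter_node',
            output='screen',
            parameters=[{
                'frequency': 30.0,
                'two_d_mode': True,            # planar robot
                'odom_frame': 'odom',
                'base_link_frame': 'base_link',
                'world_frame': 'odom',
                # Wheel-encoder odometry: fuse forward velocity and yaw rate.
                'odom0': '/odom',
                'odom0_config': [False, False, False,
                                 False, False, False,
                                 True,  False, False,
                                 False, False, True,
                                 False, False, False],
                # IMU: fuse yaw orientation and yaw rate.
                'imu0': '/imu/data',
                'imu0_config': [False, False, False,
                                False, False, True,
                                False, False, False,
                                False, False, True,
                                False, False, False],
            }],
        ),
    ])
```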

During that project I actively participated in the Create3 simulator beta and the Oak-D-Lite Kickstarter program, so when the TurtleBot 4 Lite was released I grabbed one of the first units available.

I was really excited by the TB4Lite design, but discovered:

  • I was not a part of a flood of new owners as I had expected
  • The TB4 software had not integrated the Oak-D-Lite sensor for mapping/localization/navigation

and as my particular TB4Lite was damaged in shipping, it was returned.

At the end of last year, I decided to purchase a Create3 to form the platform for my next robot. Having an Oak-D-Lite sensor already, my plan for this robot is to investigate what is possible with only the Create3 sensors and the Oak-D-Lite stereo depth sensor. Choosing not to include a LIDAR in my robot architecture has again set me on a slightly different path than most ROS hobby robots, but seeing that there is a depthai-ros package and a launch file to tie the Oak camera to RTAB-Map, I figured I would not be totally alone on this path.

While the support teams for both the Create3 and the Oak-D-Lite are really superb, the diversity of sensors, user installations, languages, ROS versions, package versions, DDS types, data and power connections, underlying operating systems, user sophistication, and user goals seems to me just a chaotic mess. I am again feeling left to navigate alone with no one to follow.

Most of my questions on robotics.stackexchange.com have gone unanswered. My issue reports on GitHub sit for weeks or months without resolution. The community seems too diverse to find another hobbyist who might have seen and solved my particular challenges. There is lots of great activity going on, but the hurdles are very frustrating at this point.

17 Likes

I feel for you. I bought a Create 3 a while ago and just started to get back into it. It does seem like there has been quite a bit of work in getting the Create 3 docs updated and created, and in general they are pretty good. But, like you, I feel there is a hole in support.
I’m assuming you have been to the Create 3 forums on GitHub, and there are some folks who have posted some examples out there. However, considering the latest events at iRobot and the restructuring, including pausing the educational portion of iRobot, I would assume it is not going to get better. Hopefully some folks can pick up the slack.

1 Like

Surprised the Oak-D didn’t meet your needs. Have you considered the Gemini 2? There’s a new ROS 2 wrapper update: GitHub - orbbec/OrbbecSDK_ROS2: OrbbecSDK ROS2 wrapper

1 Like

I did not say that. Every depthai_sdk example I tried with the camera worked perfectly, even on the Raspberry Pi 5 with PiOS Bookworm.

I am just now starting to investigate the Humble binaries of the depthai-ros code, some of which appears to “work straight out of the box” and some of which “crashes with cryptic error code -11”, with indications that a new release is in the works. It is a complex, versatile implementation with pages of parameters.

I have no complaints about either the Create3 or the Oak-D-Lite products, and no complaints about iRobot or Luxonis support. On the contrary, I am elated with the products and the support.

My frustration is that ROS is such a simple architecture to learn, and yet such a complex environment in which to build a robot.

I share a similar sentiment. Being on the (semi?) professional side, I often encounter frustrations as the advertised capabilities and stability frequently diverge from reality. When working with limited resources, I would suggest trying to define your end goal, and when stuck, attempt to compromise accordingly.

Navigation 2 is one of the better packages, if not the best, but in my experience it relies heavily on the lidar/raster-map paradigm. Omitting that is a brave effort, and I applaud you. Combine this with a passive stereoscopic camera, and I can understand why you’re facing challenges.

For odometry, using the linear velocities from the wheel encoders and the orientation from the IMU is an okay approximation based on my experience. If you have something to correct the map->odom frame semi-frequently, it should work.
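A minimal sketch of that dead-reckoning approximation, integrating the wheel-reported forward velocity along the IMU yaw; the topic names and message types are illustrative assumptions, not anyone’s actual setup:

```python
# Minimal dead-reckoning sketch: forward velocity from the wheel encoders,
# heading from the IMU. Topic names and frames are illustrative assumptions.
import math

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import TwistStamped
from sensor_msgs.msg import Imu
from nav_msgs.msg import Odometry


class WheelImuOdom(Node):
    def __init__(self):
        super().__init__('wheel_imu_odom')
        self.x = self.y = self.yaw = 0.0
        self.v = 0.0
        self.last_time = None
        self.create_subscription(TwistStamped, 'wheel_vel', self.on_wheels, 10)
        self.create_subscription(Imu, 'imu/data', self.on_imu, 10)
        self.pub = self.create_publisher(Odometry, 'odom_fused', 10)

    def on_wheels(self, msg):
        # Forward speed reported by the wheel encoders (m/s).
        self.v = msg.twist.linear.x

    def on_imu(self, msg):
        # Yaw from the IMU orientation quaternion (planar robot assumed).
        q = msg.orientation
        self.yaw = math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                              1.0 - 2.0 * (q.y * q.y + q.z * q.z))
        now = self.get_clock().now()
        if self.last_time is not None:
            dt = (now - self.last_time).nanoseconds * 1e-9
            self.x += self.v * math.cos(self.yaw) * dt
            self.y += self.v * math.sin(self.yaw) * dt
        self.last_time = now

        odom = Odometry()
        odom.header.stamp = now.to_msg()
        odom.header.frame_id = 'odom'
        odom.child_frame_id = 'base_link'
        odom.pose.pose.position.x = self.x
        odom.pose.pose.position.y = self.y
        odom.pose.pose.orientation = q
        odom.twist.twist.linear.x = self.v
        self.pub.publish(odom)


def main():
    rclpy.init()
    rclpy.spin(WheelImuOdom())


if __name__ == '__main__':
    main()
```

Something like AMCL or RTAB-Map’s loop closures would then provide the semi-frequent map->odom correction mentioned above.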

I would suggest trying to find like-minded people either on Discord or in real-life meetups and sharing the love of robotics with them to keep you motivated. Asking complex questions on forums is often like shouting into the void and waiting for the echo of “I have the same problem, have you solved it?”

3 Likes

First of all, thank you for sharing your experience in such a detailed and honest way! After encountering my share of frustrations learning ROS 2 and messing up my relationship, I am coming to the conclusion that coding in solitude is a pretty terrible experience! I want to develop robots for a better world (helping the health care system primarily), but the solitude of the coding experience is the hardest part! I therefore propose that we could team up and organize some kind of coding retreat! I am in Portugal at the moment and it’s a very nice place to be, with affordable living! If anyone thinks as I do, please let me know! Thank you!

5 Likes

GitHub - linorobot/linorobot2: Autonomous mobile robots (2WD, 4WD, Mecanum Drive) supports the OAK-D

There is a community channel for Linorobot here: Personal Robotics

1 Like

Your experience sounds familiar to me. I’m a hobbyist and I also want to build a fully autonomous robot. Over the last few years I built a ROS robot prototype (Raspberry Pi, triangular base with omni-wheels, 3D-printed parts, a mast, a rechargeable battery, and all kinds of sensors including LiDAR, camera, ultrasound, and even a mic array). GitHub - makerspet/kiddo: Kiddo companion robot
Then I burned out and, long story short, realized something:

  • a project like that is too complicated for a solo hobbyist, no matter how skillful the hobbyist is. More specifically, the problem is the amount of time the project demands.

So I had to rethink this project if I were to continue. I decided to make the design as simple as possible. I ditched the omni-wheels, mast, camera, mic array, Raspberry Pi (replaced with an ESP32 running micro-ROS), and even the rechargeable battery. I kept the motors and the LiDAR, dumbed down the body, and dropped the bumper/cliff sensors.
Here is the resulting design after the remake. GitHub - makerspet/makerspet_loki: Maker's Pet Loki - a 200mm 3D-printed DIY pet robot compatible with Kaia.ai robotics software platform
The simplification has helped a lot. That said, even after the simplification I find this project still requires a huge (!) amount of time and effort. But at least the bot now runs, maps, and self-drives.

3 Likes

I have also been working on such a project. Currently I’m stuck trying to build ROS 2 on a Raspberry Pi 5, with the intent of using the Pi with AI to navigate and find objects to pick up, and Arduinos as subscribers to handle the sensor and motor operations.
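As a sketch of that split (the topic name, message type, and fixed forward command are illustrative assumptions, not the poster’s design), the Pi-side “brain” node could publish commands that micro-ROS subscribers on the Arduinos act on:

```python
# Minimal sketch of the Pi-side "brain" node: it publishes velocity commands
# on a topic that micro-ROS subscribers on the Arduinos could listen to.
# Topic name, message type, and the fixed command are illustrative assumptions.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


class PiBrain(Node):
    def __init__(self):
        super().__init__('pi_brain')
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.tick)

    def tick(self):
        # Placeholder for the AI/navigation decision; here, just creep forward.
        cmd = Twist()
        cmd.linear.x = 0.05
        self.pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(PiBrain())


if __name__ == '__main__':
    main()
```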

I totally empathize with your burn out realization.

2 Likes

GitHub - linorobot/linorobot2: Autonomous mobile robots (2WD, 4WD, Mecanum Drive) works with microROS on Microcontrollers (Teensy, ESP32, Pi-Pico)

I think it’s a really good starting point personally.

For now this fork has ESP32/Pico support: GitHub - hippo5329/linorobot2_hardware: A fork of the linorobot/linorobot2_hardware project which builds firmware for esp32 to control mobile robots based on micro-ROS with various sensors support. Changes are gradually being merged upstream.

There is a community channel for Linorobot here: Personal Robotics

Frustration Update

On A Create3 Base.

My config: (image)

Current forced distraction: the Pi 5 running at 50% CPU causes a 100% CPU bottleneck in the Create3 base due to the “ROS 2 default middleware implementations”. I am investigating twin Fast DDS Discovery Servers on the Pi 5.
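A sketch of that direction, assuming the Fast DDS CLI’s discovery tool; the server IDs, addresses, and ports are illustrative assumptions. Two discovery servers run on the Pi 5, and any nodes launched after the environment variable is set would contact the servers instead of using multicast discovery:

```python
# Minimal sketch: start two Fast DDS Discovery Servers on the Pi 5 and point
# nodes launched afterwards at them via ROS_DISCOVERY_SERVER.
# Server IDs, addresses, and ports are illustrative assumptions.
from launch import LaunchDescription
from launch.actions import ExecuteProcess, SetEnvironmentVariable


def generate_launch_description():
    return LaunchDescription([
        # Discovery server 0 and server 1, listening on different ports.
        ExecuteProcess(cmd=['fastdds', 'discovery', '-i', '0',
                            '-l', '0.0.0.0', '-p', '11811']),
        ExecuteProcess(cmd=['fastdds', 'discovery', '-i', '1',
                            '-l', '0.0.0.0', '-p', '11888']),
        # Clients list the servers by ID position, separated by semicolons.
        # Nodes added to this launch description after this action inherit it.
        SetEnvironmentVariable('ROS_DISCOVERY_SERVER',
                               '127.0.0.1:11811;127.0.0.1:11888'),
    ])
```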


I moved from a Pi 4 atop a GoPiGo3 robot base to the Create3 with the thought that building atop a native ROS 2 robot base would allow me to work on my ROS 2 behavior/functionality nodes without having to solve “platform issues”.

I really appreciate this reply. We were chatting in the Discord about students quite frankly biting off more than they can chew. Robotics isn’t easy, and a lot of people simply don’t know what they don’t know and have trouble getting started. They get really frustrated and overwhelmed once they realize robotics requires learning a lot of things all at once.

In the Discord I proposed we put together one of these skill trees for ROS that we could point students to, to give them a better sense of the steps that ultimately lead to building a robot. I plan to put together a draft skill tree and post it to Discourse.

I would be interested in hearing what everyone in this thread thinks are the skills that belong on a ROS skill tree. I pasted the Dev Board skill tree below to give everyone a sense of what a completed skill tree looks like.

12 Likes

Hi Kat,

I think this is a great idea! My suggestion would be to create personas for the different types of people involved. I think it can be intimidating to see all the skills needed to build a robot. Perhaps the skills could be aligned to the role they play (e.g., like classes in tabletop or video game RPGs).

For example, someone from a CS or software engineering background could focus on the firmware (#freertos, #zephyr, arduino, mcu, etc.). But the same person may alternatively prefer to work on the CI/CD pipeline and tooling necessary to build and deploy (docker, #jenkins, cmake, #devcontainers, etc.).

A mech eng or physics student could be focused on doing the CAD work for the chassis and other parts of the robot, but they could equally be focused on doing the simulation work to experiment with different designs.

Let me know how I can help!

Regards,
Rob

1 Like

That’s a really good point @robwoolley. I was simply walking through the ROS tutorials, but there are a lot of precursors to even get to the ROS tutorials (Python, C++, basic Linux commands, etc.).

I am not going to be able to get all of this done myself, but if we had half a dozen people each making one chart I could see that working out well.

1 Like

May I suggest the creation of a separate threaded post for that dialog, PLEASE.

My frustration with the state of ROS 2 robotics is from the standpoint of a degreed software engineer with over 40 years of commercial software and hardware experience and seven years of building robots from the ground up to the ROS layer.

Yes, there are challenges learning below the ROS layer, but only when folks can rely on the ROS layer to be stable, reliable, and transparent, with supported, native ROS robots, will they be able to work above the ROS layer and above platform issues.

Case in point: requiring me to understand UDP transport with segmented networks and packet management using multiple Discovery Servers just to keep my robot from crashing when I launch RTAB-Map is not a weakness in my learning; it is an issue with ROS.

1 Like

Sure, but can we give it a few days? I’m meeting with a few people to re-boot the EdWG next week. We can fold this discussion into a broader discussion about educational material.

Case in point: requiring me to understand UDP transport with segmented networks and packet management using multiple Discovery Servers just to keep my robot from crashing when I launch RTAB-Map is not a weakness in my learning; it is an issue with ROS.

Well, I would say that’s an issue with software complexity overall. If it is an actual issue then you should file an issue. We’re also well aware that DDS is sub-optimal for a lot of simple use cases, especially over WiFi; that’s why Jazzy is focused on bringing a Zenoh RMW to the table. If there’s one thing that I wish everyone knew, it’s that we’re usually aware of problems; the difficulty is finding the human resources to fix them. With a limited number of maintainers, addressing issues like these can take a long time.

4 Likes

I love the one about releasing the smoke from the board!

2 Likes

This is a great idea. PUT IT ON THE MAIN ROS PAGE!!!

A suggestion would be to add a horizontal band (like the arrow that showed how advanced each topic is, from the example @Katherine_Scott had provided) demarcating a place on the skill tree, showing the reader that “you have learned just enough, and now the scale of your projects actually warrants the use of ROS”.

That is, you actually have multiple sensors and motors, and you actually need to relay data for computation, decision-making, or both, which does call for the benefits of a middleware such as ROS.

Most of the time (at least where I am from), beginners such as myself start to use ROS just to use ROS, and that turns a simple problem that would take 2 days into a 2-month problem, because now you are busier with project structure and builds than with the project itself.
Complexity with none of the benefits (as a beginner you have no idea what you need). As a beginner, this leads to quick burnout.

4 Likes

I do not want to dilute the intention of this thread, but do start an educational thread where more ideas can be shared about educational material and about organizing content into a more ergonomic approach for someone who is just starting out.

1 Like

Went ahead and made this, hopefully this should suffice.

2 Likes