Why don't we use ROS?

Well, we here at Ubiquity Robotics base all our robots on ROS. Our robots will, it seems, be less expensive than the pulurobotics robots and have greater capability on almost all fronts.

Why did we choose ROS? Well, it's simply a case of enabling a great user experience and fast development for the user. Before we built ROS-based robots we built monolithic designs like the pulurobotics robot. We built an application called party-bot that looks around the room and uses face detection to find people to serve drinks to (see the video on our website https://ubiquityrobotics.com/#applications). With the monolithic design we had a team of 5 people working for several days to get a first prototype. Then it took us ages to tune it, and it never really worked very well.

With the ROS design, it took 1 person 6.5 hours to get to the first prototype. We tuned it in 1.5 hours using the many ROS tools, and it quickly worked well.

Yes, ROS has a steep learning curve - that's something we as a community do need to solve. Yes, it takes a long time to make all the parts of your robot work properly with ROS - but that's a result of the fact that ROS forces you to think properly about the architecture of your robot.

To claim, though, that ROS is too heavyweight just ain't so! We too run our entire ROS stack on a Raspberry Pi 3. In fact, you can download our ROS image for the Raspberry Pi 3 here: https://ubiquityrobotics.com/#software. Our software stack is great for running robots, but also great for just hooking up, say, a camera or other peripherals. We get terrific performance with ROS on a Raspberry Pi 3, but the best bit is that if the performance isn't enough, it's easy to make use of additional compute infrastructure, because you can make it all work together with ROS.

To claim that shoving everything onto one central processor solves all problems with timing ain't so either, nor is it particularly true that you must implement heavyweight timing-synchronization mechanisms for every peripheral.

Personally, I love ROS just like I love my car. Now, if I were more familiar with a horse and buggy, it might seem a lot easier to use and a reasonable choice, because it's true that cars have their own set of problems. However, in the end the advantages of my car over more traditional modes of transport are just overwhelming, and any problems that currently exist with cars are being worked on by a vast worldwide group of talented engineers. I don't see that kind of engineering happening with the horse-and-buggy approach.

7 Likes

I think that the author has missed an important benefit of ROS for developers of robot applications: the tools.

The author is building a robot with only concern for the hardware and the low-level embedded software that drives it. This is a use case that ROS is not particularly well suited for. They can be considered correct in arguing that ROS is not for them. (ROS 2 is, or perhaps will be, a different story.)

But for someone building an application on top of a robot platform, ROS offers something very important: an integrated tool chain with many different tools that can help you design, introspect and debug your software. This may not be relevant when building the hardware-driving embedded software (or it might be, depending on how you work), but it is very relevant when you are dealing with complex data flows and data-processing or planning algorithms that need debugging but aren’t easy to debug when the output is a wall of numbers. Having tools like rosbag and rviz and rqt_* available can massively reduce the difficulty in developing robot software.

I’m not with a company, but a comment I have heard several times from people who are is that this is the value they get from ROS. They don’t use it because there are lots of existing nodes available (another comment is often that they don’t want those nodes because they are not reliable enough to go into a product). For them, the value is in the improvements in the robot development process provided by having an integration framework with a wide range of tools available that already work. I would be interested in hearing from the company people here what the most important value proposition of ROS is for them.

8 Likes

Hi, long time lurker here :slight_smile:

I think this remark of Chris Albertson about low-level hardware control is spot on.

Allow me to shamelessly plug a project which addresses this issue :slight_smile:
I've been participating in another open-source project called Machinekit, which fills the gap of low-level hardware control.
Machinekit is a realtime motion/IO control stack, forked from LinuxCNC a few years ago. It can be used in more applications than CNC control, though.

The gem in Machinekit and LinuxCNC is the HAL, the Hardware Abstraction Layer. Developing a system is done by configuring it instead of programming/compiling: one basically wires components together and puts their functions on an execution thread.
The realtime thread typically has a cycle time of 1 ms, but can be faster or slower depending on hardware.
This means that interfacing with a DC motor, or a stepper motor does not change the system other than choosing a different component and hardware.
One can change the running realtime system on the fly by adding/removing components and (re)wiring them.
There are also C/C++/Python APIs, so one can interface with the realtime HAL from a (userland) application.
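For readers who haven't seen HAL, the "wire components together, put their functions on a thread" idea can be sketched in plain Python. This is only an illustration of the concept - the classes and names below are made up for the sketch, not the real Machinekit API:

```python
# Plain-Python sketch of the HAL idea (NOT the real Machinekit API):
# components expose pins, pins are wired together via signals, and the
# components' update functions are placed on a thread that runs each cycle.

class Signal:
    """A wire connecting an output pin to input pins."""
    def __init__(self):
        self.value = 0.0

class PGain:
    """Toy component: out = gain * in (stands in for a PID, filter, ...)."""
    def __init__(self, inp, out, gain):
        self.inp, self.out, self.gain = inp, out, gain
    def update(self):
        self.out.value = self.gain * self.inp.value

class StepGen:
    """Toy step generator: integrates a velocity command into a position."""
    def __init__(self, cmd, period):
        self.cmd, self.period, self.position = cmd, period, 0.0
    def update(self):
        self.position += self.cmd.value * self.period

# "Wiring": connect components with signals, then put their update
# functions on a 1 ms thread - the moral equivalent of `net` and `addf`.
vel_cmd, vel_scaled = Signal(), Signal()
gain = PGain(vel_cmd, vel_scaled, gain=2.0)
stepgen = StepGen(vel_scaled, period=0.001)
thread = [gain.update, stepgen.update]

vel_cmd.value = 0.5          # commanded velocity
for _ in range(1000):        # simulate one second of 1 kHz cycles
    for fn in thread:        # the thread runs each function in order
        fn()
```

Swapping the toy `StepGen` for a PWM or encoder component would change nothing else in the "configuration", which is exactly the point made above.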

Machinekit runs on Linux platforms, typically Debian, on ARM/x86/x64 hardware. Examples would be a Beaglebone Black where the PRUs do the realtime tasks, a PC with a Mesanet PCI FPGA card and daughterboard, or a DE0-Nano SoC which includes the FPGA. The Mesanet firmware (which is very stable) also runs on this, making re-use of the industrial Mesanet daughterboards possible.

So instead of doing path planning in realtime, we can do "offline" planning with ROS (ROS does the path planning) and take this trajectory as input.
An example (prototype, proof of concept) is here: the trajectory is put into a HAL ringbuffer by a ROS node written in Python. Machinekit HAL components then read from the ringbuffer, interpolate the segments and get the motors moving.
video:

We would love to one day have a generic ros_control node to interface with the HAL layer.

2 Likes

I've heard of Machinekit a couple of times, but never seen people outside the industrial community use it. Has anybody used Machinekit to drive, say, a mobile robot's wheels, including odometry and IMU integration?

1 Like

In my time managing a team that used ROS for a large European research project, and my time with ROS at home, I think the hardest part of ROS is the steep curve imposed by the dependency and build system - make, cmake, gcc, et al.

I offered a tutorial on ROS once to a couple of Java developers, and they stopped speaking to me, thinking I'd take them away from their murky Maven builds and drown them in makefiles! :joy:

While the existing tutorials make it easy to set up from apt and get going, the hard part comes when you have to build topics with covariance, time synchronization and all the other good stuff to make your robot work. For example, I remember struggling to ENU-transform my IMU for robot_localization, and I just couldn't find a good description anywhere :slight_smile: not even on Answers.
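For anyone hitting the same wall: robot_localization expects ENU (REP-103 mandates it), while many IMUs report in NED. Under the assumption that your IMU really is NED (check the datasheet!), the core of the conversion can be sketched in plain Python - the helper names here are my own, not from any ROS package:

```python
import math

def ned_vector_to_enu(v):
    """NED -> ENU for a vector: swap north/east, flip down to up.
    (N, E, D) becomes (E, N, U)."""
    n, e, d = v
    return (e, n, -d)

def ned_yaw_to_enu(yaw_ned):
    """NED yaw is measured from north, clockwise; ENU yaw is measured
    from east, counter-clockwise. Result normalised to (-pi, pi]."""
    yaw = math.pi / 2.0 - yaw_ned
    return math.atan2(math.sin(yaw), math.cos(yaw))
```

So a robot facing north has NED yaw 0 but ENU yaw pi/2; apply the same axis swap to angular velocities and linear accelerations before feeding them to robot_localization.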

Still, I'm a big believer in ROS, and I'm experimenting with ROS 2, where many of the QoS limitations are addressed. I think it's the only way of prototyping robots today, despite the learning curve. Building monolithic applications is so 2008 - but still a good option if you want to be like Gollum from The Lord of the Rings :wink:

1 Like

Not that I'm aware of. This would typically be a setup where one would configure Machinekit to drive motors (with or without encoder feedback) and read sensor data, then get the motor feedback and sensor data back to ROS via topics.
Depending on realtime constraints/hardware, one could write a component for fast control (in MK, not in ROS) and publish info back to ROS.

1 Like

Driving the motors is done by dedicated micro-controllers, at least in the systems I work with. Simple ones take velocities and perform PID control; better ones do Model-Predictive Control (MPC). IMUs are also often directly attached, as they have been cheap enough for a while now to be integrated right on the board. Of course, for MPC and for IMU integration some configuration is necessary, but not a lot.
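The velocity-PID idea mentioned above can be sketched in a few lines of plain Python. The gains and the toy first-order motor model are illustrative assumptions, not values from any real controller:

```python
# Minimal sketch of the velocity-PID loop such a motor micro-controller
# runs: given a target velocity and a measured velocity, produce a command.

class VelocityPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Toy first-order plant standing in for the motor: dv/dt = 10*(u - v).
pid = VelocityPID(kp=2.0, ki=5.0, kd=0.0, dt=0.001)
velocity = 0.0
for _ in range(5000):                      # 5 s of a 1 kHz control loop
    command = pid.update(1.0, velocity)    # target: 1.0 rad/s
    velocity += (command - velocity) * 0.001 * 10.0
```

After a few seconds the simulated velocity settles at the 1.0 rad/s target; the integral term is what holds the command against the plant's drag at steady state.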

I’m not sure what role MachineKit would play in such a setup. Would you use MachineKit to generate the software running on the micro-controller?

1 Like

What about more complex systems, e.g., whole-body motion as in humanoid robots? In order to achieve this, the motors need to be synchronized (I assume); this would require lots of configuration, I guess?

Machinekit would either have software step generation (with an RT-PREEMPT kernel), or, depending on the platform, have the hardware take care of this: a PRU (microprocessor) on a Beaglebone, or the FPGA on a Mesanet PCI card / DE0-Nano SoC.

As long as you have the motors driven from the same board, this is not really a big thing, because you'd have the interpolation done by a HAL component. The HAL component then sets the velocities, etc., while the movement (rough interpolation) is planned by ROS.
Synchronized moves from different boards are not available (not yet, anyway; execution of the HAL function thread needs to be driven from an external source, like a hardware interrupt from an FPGA. There's been a bit of experimentation with an NTP server in the past, but I don't know the details).
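To make the "rough interpolation planned by ROS, fine interpolation in a HAL component" idea concrete, here is a minimal Python sketch. The function and the 100 ms / 1 ms numbers are illustrative assumptions, not a real Machinekit API:

```python
# ROS supplies sparse waypoints (here one every 100 ms); the realtime
# side linearly interpolates joint positions at every 1 ms cycle.

def interpolate_segment(p_start, p_end, seg_time, dt):
    """Yield intermediate joint positions from p_start to p_end."""
    steps = int(round(seg_time / dt))
    for i in range(1, steps + 1):
        t = i / steps
        yield tuple(a + (b - a) * t for a, b in zip(p_start, p_end))

# Coarse trajectory from the planner: (joint1, joint2) waypoints.
waypoints = [(0.0, 0.0), (0.1, 0.2), (0.3, 0.1)]

fine = []   # the 1 kHz setpoint stream the servo loop would consume
for start, end in zip(waypoints, waypoints[1:]):
    fine.extend(interpolate_segment(start, end, seg_time=0.1, dt=0.001))
```

Two 100 ms segments at 1 kHz yield 200 fine setpoints; a real component would use blended (e.g. trapezoidal) profiles rather than straight lines, but the division of labor is the same.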

1 Like

I'm not sure what you mean by "step generation". The motors we use are driven by Pulse-Width Modulation (PWM), and most micro-controllers for this purpose generate that in hardware from a defined level.

Given that the exact API for setting the PWM is MCU-specific, I wonder how "generic" Machinekit would be. Also, given that the code for this is largely trivial and consists of writing an input value to the right output register, I wonder what benefit is to be gained from Machinekit.

1 Like

That's an example of a component for a step generator for a stepper motor. You'd set a velocity or position and stop caring about generating steps (let the hardware deal with that), or use PWM, or get a velocity/position from an encoder, for that matter. As for the benefits:

  • re-use of a configuration (read: a control system) on different hardware
  • use of components (filters, PID, etc.) to configure your realtime control behaviour
  • defined interaction between realtime and non-realtime
1 Like

@davecrawley could you suggest a way to interface ROS with an Arduino, so that I can make a robot of my own?

Hi @davecrawley ,
I like your answer because you think exactly as I do. I'm a computer engineer, and I have followed ROS since its first versions, but I was skeptical. I can write software very well, I wanted to build my own framework for my robot, I did not want someone doing this work for me… that was my opinion.
I spent two or more years writing my own software for an autonomous ground robot, and it worked … more or less.

There was a big problem with my work: I WAS NOT MAKING ROBOTICS, I was making Computer Science!

I figured it out, so I decided to move to ROS, and in less than a month I reached the same level of work I had done in two years; the next month my robot reached a level not even remotely comparable to 3 months before.

So why do I use ROS? Because I want to do ROBOTICS. I want to concentrate on robotics tasks; I do not want to spend time writing a wonderful TCP/UDP protocol, creating awesome message structures, thinking about amazing software infrastructures, or creating debugging tools.
I want to study Computer Vision algorithms, Artificial Intelligence paradigms, Intelligent Navigation Behavior. This is what ROS allows: WORKING ON ROBOTICS.

So, to reply to the question of this thread - "Why don't we use ROS?" - "Because you like to reinvent the wheel."

Walter

2 Likes

I think that is kind of a harsh answer. In general you are right: ROS enables you to do a lot of tasks faster, but there are also a lot of cases where ROS isn't the solution (many of them are mentioned above).
Furthermore, a robot consists of many more parts than just software running on a computer that is able to run ROS. Especially in combination with low-level software that is best run on microcontrollers, ROS still has many deficits (or you don't want the overhead of ROS on a bare-metal controller).

Well, I agree with you: ROS is not made for microcontrollers (waiting for ROS 2…), but low-level modules normally have their own firmware and a "simple" communication protocol easily "translatable" to a standard ROS topic. I think you can agree with me if I say that it is easier to write a node as a device driver if you have a standard way to interface it to the rest of the robot framework.

With that statement you are ignoring a whole lot of realtime scenarios. But yeah, for many cases it will be easier to write a node as a device driver.

The only thing I want to say overall is that neither ROS nor self-written monolithic software is the one and only solution.
There are enough cases to justify your own software stack, but if you want your system used by others you will need a ROS interface.

In the case of a high-DOF robot, like, say, Boston Dynamics' "Atlas" humanoid, one could use Machinekit to synchronize the two to three dozen motors. What MK is good at is running many axes in synchronization to hit trajectory points in "n-space". It abstracts the differences between types of motors.

In a typical application of MK the user cares about tolerances of 0.0005 inches. Think about a 5-axis milling machine that is making something like a turbine fan blade: it is cutting very smooth compound curves.

What MK adds to a robot is the ability to synchronize movements to the millisecond or better. ROS is not good at this at all.

Here is an application. Let's say we want to make a mobile robot play "dodge ball" with a group of humans. The humans have baseballs and throw them as fast as they can at the robot; the robot has to avoid being hit. ROS would fail at this - message passing is simply too slow. You need a controller that can work at the sub-millisecond level.

The reason MK can work so fast is that it does not use message passing. There is a control loop that runs off a timer; let's say it is a 1 kHz timer that causes an interrupt. The output control voltage to every motor is calculated inside that loop. It's a fast servo controller.

To interface with MK, ROS would compute a trajectory through space and specify some velocities to be hit at points along that path. MK would then do the microsecond- or millisecond-level calculations to force the robot onto the specified path and velocity.

The current method is that the servo control loop is closed inside ROS and corrections are sent as messages to the motor controller. This is too slow. Using MK would close the servo loop inside the 1 kHz loop (or a 10 kHz loop if you had a fast enough computer).

The ROS approach is different: the motors have an interface to accept commands and ROS can send commands, but the control loop is still closed inside ROS.

When I say control loop, I mean the one that is looking at the error in the 6-DOF arm's end position. ROS looks at this error and then sends a message to each motor controller; Machinekit does this calculation inside a loop that runs 1000 or more times per second.

A good division of labor might be to have ROS send a message like (go to x,y,z with orientation a,b,c), and ROS might update this 10 or 20 times per second. MK then has direct control of all 6 motors in the arm and updates the motor control signals 1000 times per second. MK is not just a dumb motor controller: it can compute the kinematics and plan a way to correct error in the trajectory.
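That division of labor can be sketched as two nested loops in plain Python - a slow "ROS" side that updates the goal a few times per second, and a fast 1 kHz "MK" side that closes the loop toward the latest goal. All names, rates and limits here are toy illustrations, not real MK code:

```python
MAX_STEP = 0.005                 # max actuator movement per 1 ms tick

goals = [1.0, 2.0, 1.5]          # successive positions commanded by "ROS"
position = 0.0

for goal in goals:               # slow side: each goal held for 0.5 s
    for _ in range(500):         # fast side: 500 ticks of a 1 kHz loop
        error = goal - position
        # clamp to the actuator's per-tick limit (the "servo" part);
        # a real system would run kinematics here, not a scalar clamp
        position += max(-MAX_STEP, min(MAX_STEP, error))
```

The fast loop absorbs the coarse, infrequent goal changes and turns them into smooth, rate-limited motion, which is exactly what the message-passing path through ROS is too slow to do on its own.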

Again, the difference between using 6 motor controllers plus ROS vs. using MK is that MK can do kinematics.

Long story short, Machinekit would replace the "Arduino part" in current setups - namely, everything that relates to low-level motion control, such as servo loops.

As for distributed setups with "smart" motor drives, you should use neither ROS nor any other best-effort system. For this part, I would suggest using CAN, FlexRay or any other fieldbus designed for this task.

How would Machinekit come into play in this scenario? Well, one can add a driver (a well-defined software component in Machinekit) for the XYZ bus system, proprietary protocol, or whatever is necessary to control the drives.

Why would you do this? It would make replacing entry-level PWM-based RC servos with industrial-grade closed-loop servos controlled via a fieldbus as easy as pie. Think about a small prototype or educational robot vs. a big industrial robot out in the field: imagine both driven by the same HAL layer without having to rewrite the whole low-level system.

A friend of mine put his custom motor-controller boards on a revived industrial robot. He considered both LinuxCNC and ROS, and decided on ROS in the end, as that way he could pull off more high-level demos with MoveIt. His low-level control is through a ros_control layer.

Btw I’m one of the maintainers of ros_control :slight_smile: feel free to reach out by email.