There are people who are not interested in using ROS for their robots.
Here is one story that I found on the Internet!
Good one! Of course all-in-one software can easily be better than one
distributed into many parts all living on their own.
The point of ROS here is agility. If one can afford to spend a lot of time
making custom software there’s a high chance it will be better in many
ways. If they keep growing their software stack they’ll eventually reinvent
parts of ROS. I’ve seen too many projects go that way; it’s sad to see so much of smart people’s time wasted.
That’s a really interesting article with some well-founded arguments against ROS. I had this kind of discussion (a little bit different, because we are targeting modular robots) just last Friday.
When I started to take a deeper look into robotics I was strictly against ROS in the beginning, because my experience was: it is hard to learn, hard to integrate into an existing non-ROS software stack (the custom build system makes integration difficult, and you need to start a roscore), and I couldn’t get it working on any system other than Ubuntu. Furthermore (and this is an issue I still see today), most software written for ROS integrates it so tightly that you can’t use the software without ROS, even if it is just a hardware driver. For example, we have a Sick TIM5xx laser scanner. The driver is quite well written, but it took a lot of work to use it without ROS.
In the case of a system that comes from a single vendor, where you have control over all the parts, as in your case, I think it is totally valid to say you don’t need ROS. We can also all agree that ROS introduces a lot of problems of its own (some of which I mentioned above).
But the whole point of using ROS is that you want modularity and agility, and that you can rely on a whole lot of ready-to-use components (take, for example, the very powerful TF2 components). Furthermore (as stated in your article) it introduces standard messages for certain kinds of data. That means I can simply replace a Sick laser scanner with a Hokuyo, start the matching driver, and the system still works.
I don’t know what kind of robot system you build, but in case other people want to integrate it into their own environment ROS might be a huge advantage.
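To make the "standard messages" point concrete: any code written against the fields of `sensor_msgs/LaserScan` works no matter which vendor's driver filled them in. Here is a toy sketch in plain Python (the dataclass mirrors the real message's field names, but this runs without ROS; the `nearest_obstacle` helper is made up for illustration):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple
import math

# Toy stand-in for sensor_msgs/LaserScan: the field names mirror the real
# ROS message, but this is plain Python so it runs without ROS installed.
@dataclass
class LaserScan:
    angle_min: float        # start angle of the sweep [rad]
    angle_increment: float  # angular distance between measurements [rad]
    range_min: float        # minimum valid range [m]
    range_max: float        # maximum valid range [m]
    ranges: List[float]     # measured distances [m]

def nearest_obstacle(scan: LaserScan) -> Optional[Tuple[float, float]]:
    """Return (angle, distance) of the closest valid return.

    This only touches the standard message fields, so it behaves
    identically whether a Sick or a Hokuyo driver produced the scan.
    """
    best = None
    for i, r in enumerate(scan.ranges):
        if scan.range_min <= r <= scan.range_max:
            if best is None or r < best[1]:
                best = (scan.angle_min + i * scan.angle_increment, r)
    return best

scan = LaserScan(angle_min=-math.pi / 2, angle_increment=math.pi / 2,
                 range_min=0.05, range_max=10.0,
                 ranges=[2.0, 0.8, 3.5])
print(nearest_obstacle(scan))  # → (0.0, 0.8)
```

Swapping scanners then really is just a matter of launching a different driver node; consumers like this never change.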
Amen! That story mirrors my own experience. Neither approach is absolutely better or worse; each is inherently better at solving different types of problems. Some realtime problems are just silly to try to handle abstracted away from the source of the events. But ROS should not have a problem treating a low-level integrated robot as a module, by way of introduction to the “ROS way”.
What ROS lacks is a simple clear path (development, tutorial, community outreach) to go from individual low level integrated sumo bots, RC, Arduino firmata, Vex, FIRST, etc to a useful ROS dev environment that handles swarm communication, coordination, and configuration based modular code update on Arduino devices in the field that are not running ROS.
As others here, I agree with a lot of what the article says about sensor-data integration in ROS, and also about the benefits of using micro-controllers directly. We’re actually working on making that work better together with ROS.
That said, I know that most compute and sensor platforms are far from being the major contributor to the BoM that Pulu makes them out to be. At least in our platforms, while they are a significant part, they are far from the major cost driver. Similarly, sensor-data integration, while important, is far from being the main software cost driver. I’ve heard similar things from other vendors.
However, I know of several low-cost robot platforms which experienced a very hard surprise once they tried to tackle safety, which is a must for operation in human environments with loads. Now these platforms cost several times their original target.
Looking at Pulu’s current site, I would also guess that they still have a lot to do in this regard. Their sensors are not safety-rated, and their chassis presents serious crush and tear risks. This kind of work is easy to underestimate.
The OTHER problem with ROS is that it only addresses what I call the “middle layer”. It is not good for low-level hardware control; it leaves users on their own for that. It is also not good for the higher-level decision making that I would call AI. So you end up building bridges at both ends.
The other huge complaint is the complexity of the dependencies and the build system. It is such that only “experts” are able to port ROS to new platforms, even if the platform is very much like Linux, say BSD UNIX or macOS. It is basically a house of cards: one tiny problem and nothing works. In theory, a distributed system that depends on message passing should not be so tightly coupled. I should be able to build a ROS node using some OS I just invented, plug it into a years-old ROS system, and it should “just work”. Every last bit of ROS should be able to be built independently, and the only interface should be the text of the messages passed. It should not all need to be built in the same workspace.
Perhaps it is a cultural thing, and we could correct it by holding a frequent “plug fest”, where people bring devices built using diverse software code bases and test for interoperability. We see this in other areas; why not in robotics?
In my opinion the root cause is over-coupling. The entire idea of a “ROS workspace” is likely the cause of this.
That said, of course I continue to use ROS, but it is far harder and slower to use than it should be.
A workspace is really nothing more than a directory containing a number of CMake projects. You should be able to just
cd $pkg_dir && mkdir build && cd build && cmake .. && make. There is no requirement for a shared workspace; it’s just a convenience.
Can you give an example of the over-coupling you mention? Big software projects tend to be complex, but if there is some low-hanging fruit here, it deserves attention.
Interesting article, indeed … also, I agree with most of what people wrote so far. However, I have to disagree with one statement “ROS has a steep learning curve / is difficult to learn”. That’s simply wrong from my point of view. Let me tell you why.
I am working at the university for quite some time now (probably too long). I have watched numerous students come and go. In general, we try to inspire young people to study robotics by giving them the opportunity to play with real (physical) robots and ultimately implement and integrate their very own small project(s).
I have never ever had a student that was not able to get familiar with ROS. I am not saying catkin workspaces are the holy grail. What I like to point out is that the ROS community and tutorials are something that is “already out there” and that is updated regularly — we, as the “teachers” don’t need to provide all that.
Moreover, most student projects require a basic software stack that exceeds what can be done by a single student in one term, e.g., writing a camera grabber/driver, image conversion and depth processing, state machine design, you name it. Once a student is familiar with how ROS works, e.g., with respect to workspace layout, compiling/building, and deployment, he or she can push the project forward really fast.
Of course, we could have come up with our own workspace strategy and used existing build tools like plain CMake (without the catkin CMake logic), but then diversity kicks in. Some students prefer C/C++; okay, no problem using ROS. Some prefer Python (more and more students actually prefer Python over C/C++); okay again, they can use the same framework, integrate easily, and, most importantly, interface easily with each other’s work.
To make a point here, IMHO ROS is extremely useful in the education domain.
Well, is “the ROS way” the right way? Probably not (IDK), because, as some have already said, we need to tackle the problems at a lower level. Do we make the situation worse by teaching them “the wrong way”? IDK. But as long as it works, and ROS does work at that level, I am happy about it. Maybe this post does not address the original topic (ROS for industrial applications), but I wanted to share these thoughts with you.
Well we here at Ubiquity Robotics base all our robots around ROS. Our robots will, it seems, be less expensive than the pulurobotics robots and on almost all fronts have greater capability.
Why did we choose ROS? Well, it’s simply a case of enabling a great user experience and development speed for the user. Before we built ROS-based robots we built monolithic designs like the pulurobotics robot. We built an application called party-bot that looks around the room and uses face detection to find people to serve drinks to (see the video on our website https://ubiquityrobotics.com/#applications). With the monolithic design we had a team of 5 people working on it for several days to get a first prototype. Then it took us ages to tune it and it never really worked very well.
With the ROS design, it took 1 person 6.5 hours to get to the first prototype. We tuned it in 1.5 hours using the many ROS tools and it worked well fast.
Yes ROS has a steep learning curve - that’s something we as a community do need to solve. Yes it takes a long time to make all the parts of your robot work properly with ROS - that’s something that is a result of the fact that ROS forces you to think properly about the architecture of your robot.
To claim though that ROS is too heavyweight just ain’t so! We too run our entire ROS stack on a Raspberry Pi 3. In fact you can download our ROS image for the Raspberry Pi 3 here: https://ubiquityrobotics.com/#software. Our software stack is great for running robots, but also great for just hooking up, say, a camera or other peripherals. We get terrific performance with ROS on a Raspberry Pi 3, but the best bit is that if the performance isn’t enough it’s easy to make use of additional compute infrastructure, because you can make it all work together with ROS.
To claim that shoving everything onto one central processor solves all problems with timing ain’t so either, nor is it particularly true that you must implement heavyweight timing-synchronization mechanisms for every peripheral.
Personally I love ROS, just like I love my car. Now if I were more familiar with a horse and buggy, it might seem a lot easier to use and a reasonable choice, because it’s true that cars have their own set of problems. In the end, however, the advantages of my car over more traditional modes of transport are just overwhelming, and any problems that currently exist with cars are being worked on by a vast worldwide group of talented engineers. I don’t see that kind of engineering happening with the horse-and-buggy approach.
I think that the author has missed an important benefit of ROS for developers of robot applications: the tools.
The author is building a robot with only concern for the hardware and the low-level embedded software that drives it. This is a use case that ROS is not particularly well suited for. They can be considered correct in arguing that ROS is not for them. (ROS 2 is, or perhaps will be, a different story.)
But for someone building an application on top of a robot platform, ROS offers something very important: an integrated tool chain with many different tools that can help you design, introspect and debug your software. This may not be relevant when building the hardware-driving embedded software (or it might be, depending on how you work), but it is very relevant when you are dealing with complex data flows and data-processing or planning algorithms that need debugging but aren’t easy to debug when the output is a wall of numbers. Having tools like rosbag and rviz and rqt_* available can massively reduce the difficulty in developing robot software.
I’m not with a company, but a comment I have heard several times from people who are is that this is the value they get from ROS. They don’t use it because there are lots of existing nodes available (another comment is often that they don’t want those nodes because they are not reliable enough to go into a product). For them, the value is in the improvements in the robot development process provided by having an integration framework with a wide range of tools available that already work. I would be interested in hearing from the company people here what the most important value proposition of ROS is for them.
Hi, long time lurker here
I think this remark of Chris Albertson about low level hardware control is spot on.
Allow me to shamelessly plug a project which addresses this issue
I’ve been participating in another open-source project called Machinekit, which fills the gap of low-level hardware control.
Machinekit is a realtime motion/IO control stack, forked from LinuxCNC a few years ago. It can be used in more applications than CNC control, though.
The gem in Machinekit and LinuxCNC is the HAL layer, the Hardware Abstraction layer. Developing a system is done by configuring it, instead of programming/compiling. One basically wires components together and puts these functions on an execution thread.
The realtime thread typically has a cycle time of 1ms, but can be faster/slower depending on hardware.
This means that interfacing with a DC motor, or a stepper motor does not change the system other than choosing a different component and hardware.
One can change the running realtime system on the fly by adding/removing components and (re)wiring them.
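To give a feel for what that “wiring by configuration” looks like, here is a minimal HAL sketch. It follows the syntax of LinuxCNC/Machinekit HAL files, but the component and pin names (taken from the stock pid and pwmgen components) are illustrative and may differ slightly between versions:

```
# Create a 1 ms realtime thread and load two stock components.
loadrt threads name1=servo-thread period1=1000000
loadrt pid num_chan=1
loadrt pwmgen output_type=1

# Attach the component functions to the thread, in execution order.
addf pid.0.do-pid-calcs servo-thread
addf pwmgen.update servo-thread

# "Wire" pins together with named signals instead of writing code.
net vel-cmd      pid.0.command
net vel-feedback pid.0.feedback
net motor-effort pid.0.output => pwmgen.0.value

# Tuning is configuration too.
setp pid.0.Pgain 50
setp pid.0.Igain 5

start
```

Swapping a DC motor for a stepper would mean loading a different component and re-netting the signals; none of this is compiled code.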
There are also C/C++/Python APIs, so one can interface with the realtime HAL from a (userland) application.
Machinekit runs on Linux platforms, typically Debian, with ARM/x86/x64 hardware. Examples would be a Beaglebone Black, where the PRUs do the realtime tasks; a PC with a Mesanet PCI FPGA card and daughterboard; or a De0 Nano SoC, which includes the FPGA. The Mesanet firmware (which is very stable) also runs on this, making reuse of the industrial Mesanet daughterboards possible.
So instead of doing path planning in realtime, we can do offline planning with ROS (ROS does the path planning) and take the resulting trajectory as input.
An example (a prototype, proof of concept) is here: the trajectory is put into a HAL ringbuffer by a ROS node written in Python. Machinekit HAL components then read from the ringbuffer, interpolate the segments, and get the motors moving.
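In plain Python, the hand-off idea looks roughly like this (no Machinekit here; all names are illustrative, not the actual ringbuffer API):

```python
from collections import deque

# Plain-Python sketch of the hand-off described above: a planner (the ROS
# side) pushes coarse trajectory segments into a ring buffer, and a consumer
# (the HAL side) pops them and linearly interpolates one fine setpoint per
# realtime cycle.
ring = deque()  # stands in for the shared HAL ringbuffer

def plan(segments):
    """ROS side: enqueue (duration_s, target_position) segments."""
    for seg in segments:
        ring.append(seg)

def run_cycles(start_pos, cycle_s=0.001):
    """HAL side: drain the buffer, emitting one setpoint per cycle."""
    pos = start_pos
    setpoints = []
    while ring:
        duration, target = ring.popleft()
        steps = max(1, round(duration / cycle_s))
        delta = (target - pos) / steps
        for _ in range(steps):
            pos += delta
            setpoints.append(pos)
    return setpoints

plan([(0.002, 1.0), (0.002, 0.0)])
out = run_cycles(0.0)
print(out)  # → [0.5, 1.0, 0.5, 0.0]
```

The point of the real ringbuffer is exactly this decoupling: the planner can run slowly and non-deterministically, while the interpolating side keeps its 1 ms cycle.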
We would love to one day have a generic ros_control node to interface with the HAL layer.
I’ve heard of Machinekit a couple of times, but I’ve never seen people outside the industrial community use it. Has anybody used Machinekit to drive, say, a mobile robot’s wheels, including odometry and IMU integration?
In my time managing a team that used ROS for a large European research project and my time with ROS at home, I think the hardest part of ROS is the steep curve offered by the dependency and build system - make, cmake, gcc, et al…
I offered a tutorial on ROS once to a couple of Java developers and they stopped speaking to me thinking I’d take them away from their murky maven builds and drown them in make files!
While the existing tutorials make it easy to set up from apt and get going, the hard part comes when you have to build topics with covariance, time synchronization, and all the other good stuff needed to make your robot work. For example, I remember struggling to ENU-transform my IMU for robot_localization, and I just couldn’t find a good description anywhere, even on Answers.
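For what it’s worth, the axis-remapping core of that ENU transform fits in a few lines of plain Python. This sketch only covers vectors and yaw; a real driver must also transform the orientation quaternion and covariances, and the exact convention depends on the IMU (the helper names are made up for illustration):

```python
import math

# robot_localization, per REP-103, expects ENU (x east, y north, z up).
# Many IMUs report NED (x north, y east, z down), so the data must be
# remapped before being published.

def ned_to_enu_vector(x_north, y_east, z_down):
    """Map a NED vector (e.g. an acceleration) into ENU axes."""
    return (y_east, x_north, -z_down)

def ned_to_enu_yaw(yaw_ned):
    """Compass yaw (clockwise from north) -> ENU yaw (CCW from east)."""
    return math.remainder(math.pi / 2 - yaw_ned, 2 * math.pi)

# A north-pointing acceleration with gravity reported as "down" in NED:
print(ned_to_enu_vector(1.0, 0.0, -9.81))  # → (0.0, 1.0, 9.81)
# Facing north becomes +pi/2 in ENU:
print(round(ned_to_enu_yaw(0.0), 4))       # → 1.5708
```

Getting these signs wrong is the classic reason a robot_localization setup drives off in the wrong direction, which is probably why it felt so hard to debug.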
Still, I’m a big believer in ROS and am experimenting with ROS 2, where many of the QoS limitations are addressed. I think it’s the only way of prototyping robots today, despite the learning curve. Building monolithic applications is so 2008, but still a good option if you want to be like Gollum from The Lord of the Rings.
Not that I’m aware of. This would typically be a setup where one would configure Machinekit to drive motors (with or without encoder feedback) and read sensor data, then get the motor feedback and sensor data back to ROS via topics.
Depending on realtime constraints / hardware one could write a component for fast control (in MK, not in ROS), and publish info back to ROS.
Driving the motors is done by dedicated micro-controllers, at least in the systems I work with. Simple ones take velocities and perform PID control; better ones do Model-Predictive Control (MPC). IMUs are also often directly attached, as they have been cheap enough for a while now to just be integrated on the board. Of course, for MPC and for IMU integration some configuration is necessary, but not a lot.
I’m not sure what role MachineKit would play in such a setup. Would you use MachineKit to generate the software running on the micro-controller?
What about more complex systems, e.g, whole body motion as in humanoid robots? In order to achieve this, the motors need to be synchronized (I assume), this would require lots of configuration I guess?
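For readers unfamiliar with the split being discussed: the velocity-PID loop such a micro-controller runs is genuinely small. A toy sketch in Python (gains and names are made up for illustration; a real MCU would typically do this in C at a fixed cycle time):

```python
# Toy velocity-PID loop as it might run each control cycle on a
# micro-controller: commanded velocity in, motor effort out.

class VelocityPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, cmd_vel, meas_vel):
        """One control cycle: return the motor effort."""
        error = cmd_vel - meas_vel
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = VelocityPID(kp=2.0, ki=0.5, kd=0.0, dt=0.01)
effort = pid.step(cmd_vel=1.0, meas_vel=0.0)
print(effort)  # first-cycle effort for a 1 m/s step command
```

The framework question above is then really about everything around this loop: synchronization across joints, configuration, and getting commands in and feedback out.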
Machinekit would either do software step generation (with an RT-PREEMPT kernel) or, depending on the platform, have hardware take care of this: the PRU (a microprocessor) on a Beaglebone, or the FPGA on a Mesanet PCI card / De0 Nano SoC.
As long as you have the motors driven from the same board this is not really a big thing, because you’d have the interpolation done by a HAL component. The HAL component then sets the velocities, etc., while the movement (the rough interpolation) is planned by ROS.
Synchronized moves from different boards are not available (not yet, anyway; execution of the HAL function thread needs to be triggered from an external source, like a hardware interrupt from an FPGA. There’s been a bit of experimentation with an NTP server in the past, but I don’t know the details).
I’m not sure what you mean by “step generation”. The motors we use are driven by Pulse-Width Modulation (PWM), and most micro-controllers for this purpose do that in hardware based on a defined level.
Given that the exact API for setting the PWM is MCU-specific, I wonder how “generic” MachineKit would be. Also, given that the code for this is largely trivial and consists of writing an input value to the right output register, I wonder what benefit is to be gained from MachineKit.
That’s an example of a component for a step generator for a stepper motor. You’d set the velocity or position and stop caring about generating steps (let the hardware deal with that), or use PWM, or, for that matter, get a velocity/position from an encoder.
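As a concrete illustration of the “largely trivial” PWM mapping mentioned a couple of posts up, here is a toy sketch. The register write is MCU-specific, so a return value stands in for it; all names and the 8-bit duty range are made up for illustration:

```python
# Map a signed velocity command to a PWM duty cycle plus a direction flag,
# the kind of code that usually lives on the motor micro-controller.

PWM_MAX = 255  # 8-bit duty register, a common case

def velocity_to_pwm(cmd, max_vel):
    """Clamp a signed velocity command and split it into (duty, forward)."""
    cmd = max(-max_vel, min(max_vel, cmd))
    duty = int(abs(cmd) / max_vel * PWM_MAX)
    return duty, cmd >= 0

duty, forward = velocity_to_pwm(-0.5, max_vel=1.0)
print(duty, forward)  # → 127 False
```

The debate above is essentially whether wrapping this ten-line mapping in a generic abstraction layer pays for itself once you have many motors and boards to keep consistent.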