What do you want to see in an educational ROS platform?

We’ve started developing a tabletop / ground ROS robot for use in educational environments. We’d like to hear from educators who really want to teach ROS-based robotics but don’t feel they can, because currently available robots are either too expensive, too complicated, or too [insert your adjective here].

We’d also like to hear from educators who are already teaching ROS robotics but want to broaden what they do or find alternative platforms.

If you are an educator who offers courses and would like to do so with ROS, please respond to this thread with the one thing you’d like to see in an educational platform for ROS. If enough people express interest, we will do a couple of group calls so we can hear what people would like and make trade-off decisions. Then we will do our best to build what the group wants, with open source repos to go along with it!

Please respond with the one thing you’d like to see in a ROS robot platform oriented toward education.

David

P.S. Thanks to Tully and Kat for suggesting this!

I’m not using ROS in education yet, but I was looking for a platform like this a while back. I was coming at it from a hobbyist perspective, so a low price was a must (TurtleBot looks good, but it’s a bit too expensive if someone wanted to buy it just to get into ROS).

Some of the platforms I looked at didn’t have wheel encoders, but I think they are a very good thing to have if you are teaching about odometry.

Here are some things I was looking for in the robot platform:

  • Aforementioned wheel encoders
  • IMU
  • Onboard computer running ROS (extra points if this is optional — lots of people already have embedded computers)
  • Ease of integration of custom sensors (consideration for space, robot payload)
  • Some built-in DC-DC converters to power external sensors would be great (3.3, 5 and 12 V would be superb)

Functionality-wise, I wanted to be able to build a full ROS stack on the platform, starting with the drivers, through ros_control, and ending with navigation and localization. I was hoping that with a platform like this I could guide students through all levels of software design for a robot.
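To illustrate the encoders-for-odometry point above: the core of a differential-drive odometry node is just a small dead-reckoning update. The following is a minimal sketch in plain Python (function and parameter names are illustrative, not from any ROS package); `d_left` / `d_right` would come from encoder tick deltas on a real robot.

```python
import math

def update_odometry(x, y, theta, d_left, d_right, wheel_base):
    """Advance a 2D pose estimate (x, y, theta) from incremental
    wheel travel. d_left / d_right are the distances (m) each wheel
    moved since the last update, typically derived from encoder ticks.
    """
    d_center = (d_left + d_right) / 2.0        # forward travel of the base
    d_theta = (d_right - d_left) / wheel_base  # change in heading (rad)
    # Integrate using the midpoint heading for a slightly better estimate.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

# Driving straight 1 m: heading is unchanged and x advances by 1.
x, y, theta = update_odometry(0.0, 0.0, 0.0, 1.0, 1.0, wheel_base=0.2)
```

In a ROS stack this math usually lives inside the base driver or a `ros_control` hardware interface, which then publishes the pose as `nav_msgs/Odometry` and a tf transform.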

Sorry it’s not a single thing, but I thought some of these observations could be useful for you.

To my mind, the perfect educational resource is a GitHub repo with a whole bunch of small working examples. Each of them compiles, runs, has rich comments, and illustrates a particular piece of ROS functionality.
E.g.
example 1: how to use tf2
example 2: how to use can_bridge
example 3: how to use pointcloud and access individual points
example 4: you name it…
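As a flavor of what one such example could look like, here is a sketch for the point-cloud case. A `sensor_msgs/PointCloud2` message stores points as a packed byte buffer; assuming the common layout of little-endian float32 x, y, z at offsets 0, 4, 8 of each point record, the points can be unpacked with the standard `struct` module (in practice you would use `sensor_msgs.point_cloud2.read_points`, but seeing the raw layout is instructive):

```python
import struct

def unpack_xyz(data, point_step, num_points):
    """Yield (x, y, z) tuples from a packed point buffer, assuming
    little-endian float32 x, y, z at the start of each point record."""
    for i in range(num_points):
        yield struct.unpack_from('<fff', data, i * point_step)

# Build a fake 2-point cloud with 16-byte point records
# (3 x float32 plus 4 bytes of padding, a common PointCloud2 layout).
buf = b''.join(struct.pack('<fffxxxx', *p)
               for p in [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])
points = list(unpack_xyz(buf, point_step=16, num_points=2))
```

Each example in such a repo could be this size: a few dozen commented lines that do exactly one thing.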

Awesome list! I agree price is the key thing that is missing here.

How about if we designed it so that ROS topics were shared between a workstation computer and the robot itself, and the heavy lifting was done on the workstation? Do you think that would be workable? This is one strategy to keep costs down for the robot, and it’s also a great example of the power of ROS.

This reminds me a bit of vector_ros: https://github.com/betab0t/vector_ros. I can see value in that for learning some high-level concepts.

The reason I was initially looking at low-level concepts is that I think there is a niche in this area. At the time, I couldn’t find any tutorials on how to create a robot from scratch with ROS (I remember it took me quite a while to figure out how ros_control works).

If you were to share the topics with the user’s workstation, would a user be able to add their own sensors to the robot? To me that is the single best thing about ROS; projects like vector_ros are very interesting, but because you are not able to extend the robot, the educational value might be a bit limited.

That is definitely possible and workable, given decent WiFi. Network setup is nontrivial for complete noobs, I guess, but once that’s taken care of, running code from a workstation removes the need to sync code from the workstation to the robot.

Running hardware interfaces of course needs to happen on the robot.

Two places to look at are the SV-ROS GitHub, which uses a Neato Botvac as a ROS platform, and the ROS By Example books by Patrick Goebel, available from Lulu. It is possible to build a ROS mock TurtleBot for $300 using a Raspberry Pi and a Botvac that has a lidar.

In my experience with newcomers (or in some cases even research labs that have been using ROS for years), network issues tend to cause a lot of grief. I think the idea of having some of the logic run on the workstation is good, but it would require very thorough, beginner-friendly tutorials on how to get your network just right.
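For reference, the classic ROS 1 recipe such a tutorial would cover is just two environment variables on each machine; topics then flow transparently between robot and workstation. The IP addresses below are placeholders for your own network:

```shell
# On the robot (runs roscore), assuming its address is 10.0.0.2:
export ROS_MASTER_URI=http://10.0.0.2:11311
export ROS_IP=10.0.0.2

# On the workstation, assuming its address is 10.0.0.3:
export ROS_MASTER_URI=http://10.0.0.2:11311   # point at the robot's master
export ROS_IP=10.0.0.3
```

The usual beginner pitfalls are forgetting `ROS_IP` (or `ROS_HOSTNAME`), so nodes advertise an unresolvable hostname, and firewalls blocking the dynamically assigned node ports.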

Yes, network provisioning is usually messy.

We made it significantly less messy at Ubiquity Robotics by building PiFi. On bootup it scans all the available networks, then boots into AP mode with a unique network name (something like ubiquityrobotXXXX, where XXXX is the last 4 digits of the MAC address).

You can connect to the robot in AP mode; when you do, it presents a list of available networks, and you can elect to connect to one of those or just stay in AP mode.

It works well, although educational institutions sometimes have problems with new WiFi networks and also don’t always make it easy to connect to the available infrastructure networks. Our solution is pretty slick and I can’t think of a better one, but I am all ears for suggestions.
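The unique-SSID scheme described above is simple enough to sketch in a few lines. This is an illustrative toy, not PiFi’s actual implementation, and the helper name is made up:

```python
def ap_ssid(mac, prefix="ubiquityrobot"):
    """Build a unique AP-mode SSID from the last 4 hex digits of a
    MAC address, mirroring the naming scheme described above."""
    digits = mac.replace(":", "").replace("-", "").lower()
    return prefix + digits[-4:]

ssid = ap_ssid("b8:27:eb:12:34:56")  # -> "ubiquityrobot3456"
```

Deriving the name from the MAC guarantees that two robots in the same classroom fall back to distinguishable access points without any per-unit configuration.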

I have used the turtlebot3 for a while, and for what it’s intended for (an introduction to ROS), it doesn’t really justify its price.

In the case of the turtlebot3, the heavy lifting, such as SLAM and map navigation, is done on a workstation. But I found this rather limiting.

Ideally the onboard computer can function as a WiFi hotspot, possibly with a 4G/LTE dongle so it can connect to the internet if it needs to. In that case any computer could connect to the robot’s network, which would minimize the network infrastructure needed.

With the waffle-pi, beginners can run the examples, launch the ROS nodes, and configure it with the available setup. But that’s pretty much the extent of what they can do. If they want to level up eventually, they will seek better sensors and a better onboard computer.

It’s hard to scale or upgrade the turtlebot without replacing the core components (the Raspberry Pi 3 Model B, the Dynamixel motors, etc.). When I plug the OpenCR into an Intel NUC, it doesn’t immediately work out of the box without some configuration and re-testing of the Arduino code. This is something most beginners are not aware of.

I also noticed people have used more powerful computers such as the Jetson TX2 to run processing onboard, because tracking or depth cameras such as the RealSense / ZED are too heavy for a Raspberry Pi 3, and yet they are quite popular among robotics researchers.

Perhaps something like the PULP platform or the OpenMV camera is a good alternative considering the costs.

On the software side, it’s not clear how to migrate to ROS 2 while keeping the software stack the same, assuming we still want to run the same SLAM / navigation packages. Though I understand it really depends on whether the dependencies have been upgraded.

In short, I would suggest that there should be guidance on such an “upgrade” path and on scalability issues, as those who start to learn robotics are probably here to invest their time and skills in the long run.

My apologies, I’m not too familiar with the tb3, but I’m trying to understand why SLAM (I assume against an rplidar) qualifies as a heavyweight algorithm. What’s being used as the default SLAM algorithm for the tb3?

The default setup (as per the documentation) is to run the SLAM algorithm and navigation on a remote computer. Running gmapping (the default) or hector with an rplidar A3 on the Raspberry Pi 3 Model B is fine, and I have tested that. But I think cartographer is heavier. The other default option is to run frontier_exploration.

Building the DWA local planner and map_server on the rpi3 itself requires pcl_ros to be installed as a dependency, which is unnecessary in most cases. So up until now I have never run the DWA local planner + map_server onboard.

Then, once I set up a RealSense T265 as a tracking camera to improve odometry, the RealSense nodelets used up about 50% of memory on average. Occasionally the RealSense camera manager crashes if I run SLAM onboard.

If I just use a 3S LiPo 5000 mAh battery (already larger than the default), the RealSense fisheye camera nodes do not stream their images; I assume there’s not enough power from the OpenCR. These camera images are fine if I use a battery with larger capacity and higher voltage, such as a 6S.

For sure, I would need a better computer and more battery cells + capacity to run something like RTAB-Map with a D435 + T265, for example.

My impression is that the rpi3 can only really be utilised to run the turtlebot bringup, or at most gmapping. I am in the process of migrating to the rpi4, at least for its USB 3, as RealSense cameras work best with USB 3.

At Stanford, one researcher ran a compute-intensive TurtleBot 2 with extra 18 V battery packs powering an Nvidia-equipped gaming laptop. Autonomous runs of up to 1 hour were obtained.

For what it is worth, I successfully ran cartographer on a Pine64 attached to a Turtlebot 2. The caveat was that I had to run it in 2D mode; in 3D mode, it was too heavyweight. You can see my short presentation from ROSCon 2017 about it here.

Interesting, thanks for sharing your talk!

The TurtleBot 2 does seem to have a better setup from what I’ve seen. It doesn’t have waffle plates, but they are not really necessary.

It comes with an Astra camera or an ASUS Xtion Pro Live if it’s bought from Clearpath Robotics. The dimensions are also better, while the turtlebot3 requires additional plates and plate supports.

I’ve been teaching with Turtlebot 2 for several years now. In terms of hardware, it is hard to beat:

  • The size is good. It is small enough to be safe and portable, but big enough to operate on human-scale problems like office delivery, tour guide etc.
  • RGBD sensors provide a nice bang for the buck. We can do 2d-slam, 3d-slam, computer vision etc.
  • Using a laptop for computation makes it much easier to use in the classroom. Trying to get networking set up correctly and keep it working for multiple robots is a pain. It is much easier to just write some code on a laptop and plug it in.

There are a few downsides:

  • It would be nice if it were cheaper (though honestly, I don’t think the price is unreasonable).
  • The ROS Turtlebot packages are not very easy for novices to make sense of. TB2 is relatively easy for beginners to use, but it is hard for beginners to modify. It would be nice to see an educational platform that serves as a clear, well-documented, example of how to set up a robot with ROS support.
  • As far as I can tell, TB2 is going away. It’s not clear if the Kobuki base is even being manufactured anymore.

I’m at the point where I need to re-equip our robotics classroom, and I’m at a loss. It looks like Turtlebot 3 is the default choice at this point, but it doesn’t have any of the pros I list above.

So… to answer your question. The platform I’m looking for is something that looks a lot like TB2, with nice clean ROS and ROS2 packages.

At Ubiquity Robotics we have been running platforms off the RPi for a couple of years. We have a simplified navigation node called move_basic that is suitable for student learning and runs on low-powered CPUs.

I’m going to suggest a few things to make this discussion more productive.

  1. Please make it clear whether you are an educator, and if so, what your student body looks like. Not all students are the same, and it helps us to understand the different constituencies of users.

  2. If you have a request or an idea, please frame it in terms of the subject matter you want to teach, not the hardware. Hardware changes from quarter to quarter; fundamentals change more slowly.

I’m not an educator.

However, while we’d like to consider robotics as an abstraction the way software is, it’s not the same: hardware is also an important subject.

A junior engineer could quickly drop a robot platform that doesn’t do all the things they see in videos of the latest research.

What they can’t see is the effort and the workarounds needed to make an algorithm work in a certain environment with a certain hardware setup.

If I were to educate someone, be it a student or an engineer, I would emphasize:

  • Environments a robot should operate in. This covers perception, mapping, navigation, and kinematics (if we’re considering movement more complicated than a two-wheeled differential-drive robot)
  • Some mechanical / electronics foundation (enough to get something working)
  • OS & networking fundamentals, and why Python / C++ are the common programming languages. Anyone can use other languages, but these two are the most common.
  • Upgrade / scalability problems
  • Algorithms
  • Software management
  • User Interface
  • Security (this would be the advanced subject)
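On the kinematics point: for the two-wheeled differential-drive case, the entire inverse kinematics (the math a base driver uses to turn a `geometry_msgs/Twist` command into wheel speeds) fits in a few lines. This is an illustrative sketch with made-up names and parameters:

```python
def twist_to_wheel_speeds(v, omega, wheel_base, wheel_radius):
    """Convert a body twist (v in m/s, omega in rad/s) into left and
    right wheel angular velocities (rad/s) for a differential drive."""
    v_left = v - omega * wheel_base / 2.0   # linear speed of left wheel
    v_right = v + omega * wheel_base / 2.0  # linear speed of right wheel
    return v_left / wheel_radius, v_right / wheel_radius

# Pure rotation in place: the wheels spin equally in opposite directions.
wl, wr = twist_to_wheel_speeds(0.0, 1.0, wheel_base=0.2, wheel_radius=0.05)
```

Even this tiny example already ties together several of the subjects in the list: geometry, hardware parameters that must be measured, and the software interface between a planner and the motors.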

The JPL Mars rover is probably also a good reference as an educational platform.