What do you want to see in an educational ROS platform?

Hi,

I am working with David on this project. After reading this thread, I have a couple of questions for everyone:

  • How many of you would use a robot like this to teach how to make a ROS robot (ros_control, tf setup, etc.) versus how to program applications that run on ROS robots?

  • Is LiDAR or RGBD a hard requirement for a robot like this to be useful for you? What if it was an optional accessory?

Rohan Agrawal
Ubiquity Robotics

I teach an undergraduate ROS-based robotics course for 20-25 students per year. To answer your questions:

90% - teaching how to program a robot that already has ROS support.
10% - learning to make a ROS robot.

RGBD or LiDAR should at least be a well-supported option.


As a high school educator, my shopping list starts with being able to build from things I already have. I’m just starting on my ROS journey, so feel free to rubbish my ideas.
Reusing components I already have, e.g.:

  • an onboard laptop for the camera, etc.
  • starting with a dedicated Arduino controller for wheels/encoding
  • a phone for GPS, etc.
  • Wii controllers or Kinect cameras
  • an Arduino dedicated to sonar rangefinding

Partly this is to slowly build up the available components and capacity over time.


I’m an educator, and teach robotics to undergrads and grad students, from a wide variety of backgrounds. Up until now, the best bet for me was a Turtlebot 2. As several people have said, it would be nice if it was cheaper, but it’s hard to see how to do that without losing functionality.

The recurring problem that we have with robots in a university environment is networking. Our networks are heavily firewalled and not ROS-friendly. The wired and wireless infrastructure are owned by two different groups on campus. If we’re going to use wireless to and from the robots, we need to be able to do it from 20 robots, all in the same room, all at the same time. We’re (technically) not allowed to spin up our own wireless networks without going through the university IT folks.

For a specific ask, I’d like a smaller, cheaper version of the Turtlebot 2, I guess.

– Bill


Hello @wdsmart, you may want to look at ROS 2 to address several of these aspects. DDS is much more flexible and can be used in combination with VPNs, VXLANs and even SSH tunneling to bypass network limitations. Of course, these techniques will impact communication latency.
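As one small, concrete illustration of the ROS 2 side of this (a hedged sketch, not any particular robot’s software): a knob that already helps in a shared lab is ROS_DOMAIN_ID, which keeps groups of ROS 2 nodes from discovering each other on the same network. The heartbeat node and topic name below are hypothetical; it assumes ROS 2 with rclpy installed.

```python
# Minimal ROS 2 publisher used to sanity-check discovery between one robot and
# one laptop. Run both sides with the same ROS_DOMAIN_ID (e.g.
# `export ROS_DOMAIN_ID=7`) so that 20 robots in the same room do not see each
# other's topics.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class Heartbeat(Node):
    def __init__(self):
        super().__init__('heartbeat')
        self.pub = self.create_publisher(String, 'heartbeat', 10)
        self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = 'robot alive'
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = Heartbeat()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

On the laptop, `ros2 topic echo /heartbeat` with the same domain ID confirms the link; the VPN/tunnel techniques mentioned above sit below this layer and don’t change the node code.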

@wdsmart @Nathan_Sprague

Thanks for your input.

So I think both of you have said you want a better and cheaper Turtlebot 2. To avoid doing what Henry Ford would have described as looking for a “faster horse”, perhaps we could figure out what you are teaching with your Turtlebot 2s. Are you teaching:

  1. Localization
  2. Navigation
  3. Obstacle avoidance
  4. Object detection and recognition

Provided the robot can do those things and provides a reasonable platform for didactics, does it matter how it does it?


I spend a lot of time on 1 and 2. Somewhat less time on 3 and 4. I would also add mapping to the list.

“How it does it” does matter, in the sense that some designs will work better than others in an educational environment. For example, both @wdsmart and I mentioned the challenges that result from requiring network connectivity.

At the top of this thread you mentioned that you’ve already started developing a ROS robot for education. At this point, you might get more useful feedback if you describe what you have in mind. I’m definitely curious! I would love to see a wider range of well-supported low-cost robots that work with ROS.


ROS 2 does address many of the problems that we’ve historically had with ROS, but the central problem, that we’re working in a networking environment we don’t control, is still there. There’s nothing inherent to ROS that will make this easier, unfortunately. However, keeping this in mind when designing the whole system, and including as many components as possible that let us avoid interacting with the university infrastructure, can lessen the pain.

Maybe the specific ask includes “the robot should support ROS 2”?

– Bill

In classes, it’s mostly 1, 2, and 3, with only a bit of 4. Since we’re usually teaching robotics in the context of the traditional algorithms, having a sensor package that approximates a LIDAR is useful. The ability to add our own sensors (probably simple switches, temperature sensors, etc.) would be a big plus, and would make it more widely useful in more classes. A battery life of at least a couple of hours, too.
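To make the “add our own sensors” ask concrete: in ROS 2 that usually just means one small publisher node per sensor. A hedged sketch, assuming rclpy and sensor_msgs are available; the node name, topic name and the read_probe() helper are placeholders for whatever hardware you actually wire up.

```python
# Sketch of wrapping a simple classroom sensor (here a temperature probe) as a
# ROS 2 node. read_probe() is a placeholder for whatever bus the sensor uses
# (GPIO, I2C, serial from an Arduino, ...).
import random

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Temperature


def read_probe() -> float:
    # Placeholder: replace with the real driver call for your hardware.
    return 20.0 + random.random()


class TemperatureNode(Node):
    def __init__(self):
        super().__init__('temperature_sensor')
        self.pub = self.create_publisher(Temperature, 'temperature', 10)
        self.create_timer(0.5, self.publish_reading)

    def publish_reading(self):
        msg = Temperature()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'base_link'
        msg.temperature = read_probe()
        msg.variance = 0.0  # variance unknown
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(TemperatureNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```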

– Bill

Well, there are a couple of different approaches here. (Yes, I am developing a couple of different possible robots.)

Both robots would be differential drive, made of aerospace grade aluminum, support ROS 2, and be a lot of fun to work with. They would both be about the size of an open hand. Both would have several hours of endurance.

Option 1) is a robot with a LIDAR-like sensor, an onboard computer (so fewer networking hassles, though still nowhere near as nice as working on a laptop), roughly 10 kg of payload, a camera, and a short-range obstacle avoidance sensor. Weight would be about 5 kg with batteries. Cost would be around $500, possibly quite a bit more, and you might wind up paying extra for keyboards, screens, etc. if you truly want to operate the robot without touching the network at all.

Option 2) is much more minimalist. It’s a robot that pushes as much as possible to off-board compute and keeps sensors to a minimum, but it does achieve localization with an onboard camera and, together with your off-board compute, lets you do other cool things like object recognition. Payload is likely to be a few hundred grams, maybe a kilo. Weight will be much lower than Option 1 (maybe 1/5th to 1/10th), because you don’t need hefty batteries to run all that electronics. Cost will likely be below $100, and could be even lower.

Personally I am with Elon Musk in thinking the future of localization lies with cameras rather than LIDARs, hence the interest in doing it that way, and the algorithms will all be open source. This approach has the advantage of keeping cost way down.
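As one hedged example of what camera-based localization can be built on (not a claim about which algorithms we will ship): fiducial markers are a common low-cost route, and detecting them is only a few lines of OpenCV. This sketch assumes the opencv-contrib-python package; the exact aruco API differs between OpenCV versions.

```python
# Rough sketch: detect ArUco fiducial markers in a camera frame. Marker corners
# in the image, combined with known marker positions in the room and the camera
# intrinsics, are one common way to localize a low-cost robot without a LIDAR.
import cv2

# Pre-OpenCV-4.7 style API; newer versions use cv2.aruco.ArucoDetector instead.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)  # robot camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
    if ids is not None:
        # corners give pixel locations; with camera intrinsics you could go on
        # to estimate marker poses and feed them into a localization filter.
        print('visible markers:', ids.flatten().tolist())
cap.release()
```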

Obviously I have the mindset of producing the lowest cost robot possible that does interesting things. From my point of view, both robots:

  1. Localize
  2. Navigate
  3. Avoid obstacles
  4. Do object detection and recognition

How they do those things is very different, and that’s what leads to the improved price for one of the options; in fact, the architecture of that option may give better performance than the heavier choice on many, if not all, of the above tasks. Obviously we can put in tools to make network provisioning easier, or require no external network at all by having the robot be its own access point. Running the robot as its own access point is how I currently do all my demos, because I too never want to deal with connecting to an unfamiliar network. Tools might help, but there is no tool for dealing with obstinate university network administrators, if that is the root-cause problem.

Now you tell me - what do you think? Both approaches are possible - and we want to make robots that people actually use and can learn how to do things with.

I have been teaching ROS for a couple of years now and run multiple trainings per year. I have had lots of different students, from school level up to Ph.D. and industry, as part of the ROSIN project: https://rosin-project.eu/

From my experience, the audience is not the deciding factor. The question is more which topics you are going to focus on. I mainly run three different ROS trainings:

  1. Navigation with mobile robots
  2. Manipulation with industrial robots
  3. Aerial robotics with flying systems

These use cases need different hardware setups, but the ROS basics are the same for all of these trainings. All of my trainings include:

  1. ROS Filesystem
  2. ROS Communication (Topics, Services, Actions)
  3. static/dynamic transforms using TF (see the sketch after this list)
  4. image processing (e.g. AR Tag detection)
  5. Gazebo simulation (optional)
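For item 3, a static transform is a nice self-contained exercise. Below is a minimal rclpy sketch; the frame names and offsets are just example values for a camera bolted onto base_link.

```python
# Sketch of a static transform broadcaster: publish where the camera sits
# relative to base_link once (latched), so TF can chain it with odometry etc.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import TransformStamped
from tf2_ros import StaticTransformBroadcaster


class CameraFrame(Node):
    def __init__(self):
        super().__init__('camera_static_tf')
        self.broadcaster = StaticTransformBroadcaster(self)
        t = TransformStamped()
        t.header.stamp = self.get_clock().now().to_msg()
        t.header.frame_id = 'base_link'
        t.child_frame_id = 'camera_link'
        t.transform.translation.x = 0.05   # 5 cm forward of base_link (example)
        t.transform.translation.z = 0.10   # 10 cm up (example)
        t.transform.rotation.w = 1.0       # identity rotation
        self.broadcaster.sendTransform(t)


def main():
    rclpy.init()
    rclpy.spin(CameraFrame())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```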

For these topics, and also for going further into navigation, the TurtleBot3 is, from my experience, not too bad. It includes all the parts mentioned in the first post:

  • Aforementioned wheel encoders
  • IMU
  • Onboard computer running ROS (extra points for this to be an option; lots of people already have embedded computers)
  • Ease of integration of custom sensors (consideration for space, robot payload)
  • Some built-in DC-DC converters would be great to power external sensors (3.3 V, 5 V and 12 V would be superb)

We added a webcam to the basic setup. If you already have some spare parts available, I recommend talking to your local reseller about a modified TurtleBot kit so you can supply those parts yourself. For example, if you already have a Raspberry Pi, you don’t need to include one in your order. It is also possible to 3D print all parts of the robot frame; the layout is open source!

BUT! I have also had a lot of problems with the hardware from Robotis, mainly the OpenCR board and the Dynamixel motors. The firmware for the OpenCR board used to cause problems, and the Dynamixels kept switching off and going into an error mode that you can’t clear yourself. Even uploading new firmware to the motors does not work.

But price-wise I can still recommend it. From my experience, $550 for the TB3 Burger, a mobile robot including wheel odometry, an onboard computer, LiDAR, IMU, a power distribution board… is still a good price. We used self-customized robots before that. That way we could in fact be a bit cheaper, but it was so much overhead to get everything working as expected, and you end up tweaking your hardware for years instead of focusing on the educational part and improving the learning materials.
