Is Turtlebot the right platform for us?

I’m sorry, Pito, but I don’t have the OpenCR board yet. I believe that it is programmed like an Arduino, in the same languages, but it uses an M0 ARM chip which means there may be some incompatibilities.

I have used several of the “Teensy 3.x” boards, which are also ARM-based and programmed with the Arduino IDE. They worked fine, though not every Arduino library was compatible. Most were, thanks to the fine people who created the board; I believe all of the official libraries were compatible (at the source level), but some unofficial ones weren’t.

I think it comes down to the programmers who wrote the core code for the OpenCR.

Unless the OpenCR board gives us something that the Teensy 3.x doesn’t, I would have preferred that they went with the Teensy. It is very small and still does more than an Arduino, packing a lot into a breadboard-friendly dual-inline package. It is also inexpensive.

I will wait and see before final judgment.

I am about to start experimenting with a UDOO x86 Ultra. This is a quad-core x86 board (faster than the Joule) with a built-in Arduino. I should be able to use it as both the main processor and the sensor board for a robot.

I haven’t had a chance to play with my own mBot yet.

Good luck with the course. I think that you’re right in using just the mBot. And you might be able to get enough mBots for the entire class.

Jay

Hi Jay

Thanks for the info… I will investigate OpenCR further. Also thanks for the references to the Teensy 3.x and the UDOO x86 Ultra. Will check them out!

Pito Salas
Brandeis Computer Science
Feldberg 131

Hi there (name?)

Thanks for lots of good insights… Comments inline:

Seems like with a budget that can barely afford two small robots, you’re asking for too much here.

I would suggest using more leverage (e.g. a bigger community which already has a lot of resources, like a Turtlebot 2), or reducing the scope. If you make a ‘robot’ with an Arduino and a servo, I’m sorry, but that’s not really a robotics course; it’s a course on servo actuation. It has to be a bit more complex than that.

I assume you mean the ROS community vs. the Arduino one? Yes, I totally agree; that’s why I am starting to participate here! Do you consider Turtlebot3 a good candidate? That’s the one I was digging deeper into. (I thought the 2 was discontinued?)

I highly recommend demonstrating kinematic chains (e.g. arms), different drive types, different types of sensors, and different kinds of algorithms that can navigate, move, and orchestrate the robot.

Gotcha.

Just keep in mind, please, that a robot is a SYSTEM as much as an entire laptop is a system. This also means that a microcontroller inside a mouse is not a laptop (and a microcontroller plus a servo is what is being proposed here).

Of course: I am well aware of that :slight_smile:

Given also that the students learn from the teacher, I recommend getting a lot more knowledgeable real fast before they come along.

Working on it. But I don’t entirely agree with your model: I think the teacher creates the environment that allows students to learn and teach themselves. Also, note that this is more of an independent-study course.

So, you need a super community supported robot/system (you HAVE to use ROS here, given the above constraint). I’ve answered that for you.

I think ROS is amazing, I agree, especially because of the community. (The software architecture of ROS itself seems to me a little overly complex but that’s just my first impression. I know I have to get deeper.)

Also, don’t bullshit around with anything that’s not a full computer. ROS works best on Ubuntu 16.04/14.04, not on an ARM device. You won’t want to be constrained by performance when picking and choosing modules; it will cost you far more in time and grief than the price difference between a Raspberry Pi and a $700-800 Intel NUC/mini-ITX form-factor computer.

Good advice. I heard elsewhere also that Pi was underpowered for the job. Intel Joule?

With a platform like this, you don’t have to worry about where to go next. Software on a proper computer will take you to the sky and the limit of research in robotics (mostly). Given that you’re teaching introductory ROS, a platform like this could easily give you four semesters of course material, and with thought, two to four more.

At least give the students a fighting chance. If you get the kids (if they’re graduate level) doing ‘research’ or ‘learning’ on an mBot, which is nothing like the robots used in industry or academia, you aren’t setting them up for a future or helping them by showing them whatever you’re coding on an mBot.

My thought (maybe wrong) was this: a simple, successful experience in which they confront face to face the fact that a robot interacts with the real world and won’t just do what you tell it; it won’t even drive in a straight line. Do you not buy that?

Thanks much. So, in summary, I’d welcome your opinion on the Turtlebot3 with the Joule. And of course I am keen to continue the discussion and learn more from you!

Thanks,

Pito

Perhaps if you first prepared a relatively simple project for the students by having them use a specific part of ROS to do what you wanted.

For example, the suggestion of having the robot roll in a simple square. Though I might suggest doing line or maze following. Both of these seem more fun for students than rolling in a square which seems more like a homework problem.

Yes, I agree that all of the above are homework problems, but people react better when there is some fun and just a little competition involved, IMHO.

If you teach the general concepts of ROS and then concentrate on the parts of ROS that the students would need in order to achieve the goal, ROS might work out.
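To make the square exercise concrete, here is a minimal sketch of the open-loop command schedule involved. It is pure Python with made-up speeds, not Turtlebot code; on a real ROS robot each step would be published as a `geometry_msgs/Twist` on `cmd_vel` for the given duration.

```python
# Open-loop plan for driving a 1 m square: alternate straight segments
# and 90-degree turns. Speeds and side length are illustrative values.
import math

LINEAR_SPEED = 0.2   # m/s (assumed)
ANGULAR_SPEED = 0.5  # rad/s (assumed)
SIDE_LENGTH = 1.0    # m

def square_schedule(sides=4):
    """Return (linear_vel, angular_vel, duration) tuples for a square."""
    steps = []
    for _ in range(sides):
        # drive one side straight, then rotate 90 degrees in place
        steps.append((LINEAR_SPEED, 0.0, SIDE_LENGTH / LINEAR_SPEED))
        steps.append((0.0, ANGULAR_SPEED, (math.pi / 2) / ANGULAR_SPEED))
    return steps

schedule = square_schedule()
total_time = sum(duration for _, _, duration in schedule)
```

The pedagogical hook is that this plan is open-loop: wheel slip and timing error accumulate, so the real square won’t quite close, which is exactly what motivates sensors and feedback later in the course.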

After I do the basics to learn ROS with the Waffle as ordered, I will attempt to replace the Joule with a NUC and give the Joule to my preordered Burger. After this, I will keep those machines for demos and build something more complex and fun.

Hi Jay

Perhaps if you first prepared a relatively simple project for the students by having them use a specific part of ROS to do what you wanted.

I assume you feel that doing something like that on an Arduino robot is not the best use of time. The second project I was designing has the students create a map of a small part of the building and then, using the ROS simulator, train the simulated robot to find its way from my office to an office down the hall.

For example, the suggestion of having the robot roll in a simple square. Though I might suggest doing line or maze following. Both of these seem more fun for students than rolling in a square which seems more like a homework problem.

A maze is much more fun; good idea. I thought that rather than line following, making them think about 2D geometry as they try to drive a straight line, using some kind of sensor to find the distance to walls, might be a little more challenging and interesting. Line following seems too easy, or am I thinking about this wrong?
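The wall-distance idea reduces to a small feedback loop, which is part of what makes it a good exercise. Here is a hypothetical sketch assuming a single side-facing range sensor and a hand-tuned gain; all names and numbers are illustrative, and on a real robot the reading would come from a sensor topic and the output would become an angular velocity command.

```python
# Minimal proportional wall-follower: steer to hold a target distance
# from a wall using one side-facing range sensor.
KP = 1.5            # proportional gain, hand-tuned (assumed value)
TARGET_DIST = 0.30  # desired wall distance in metres (assumed value)

def steering_command(side_distance):
    """Positive output steers away from the wall, negative toward it."""
    error = side_distance - TARGET_DIST
    return -KP * error  # too far from the wall -> negative -> turn toward it

# Crude simulation: start 0.5 m from the wall and let the steering
# command nudge the distance toward the target each step.
dist = 0.50
for _ in range(50):
    dist += 0.1 * steering_command(dist)  # simple integration step
```

After the loop, `dist` has converged close to `TARGET_DIST`; the interesting classroom discussion is what happens when the gain is too high (oscillation) or the sensor is noisy.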

If you teach the general concepts of ROS and then concentrate on the parts of ROS that the students would need in order to achieve the goal, ROS might work out.

After I do the basics to learn ROS with the Waffle as ordered, I will attempt to replace the Joule with a NUC and give the Joule to my preordered Burger. After this, I will keep those machines for demos and build something more complex and fun.

Great idea for a progression! Thanks and keep any suggestions coming!

Pito

Well, I’d like to see you personally get it done in 3 months with starting with 0 ROS or robotics knowledge :wink:

If you’re going to ever do distance sensors, you have 3 options:

  1. The best: laser range finder
  2. Close second: 3d cameras (kinect, intel euclid, intel realsense)
  3. Last resort: proximity

Given your price range, I’d try to go for #2 if it fits the budget, and wouldn’t bother with #3 except as a proof of concept, or if you want people to feel the pain while learning. Actually, a combination of them all would be a good way to demonstrate that your algorithms are theoretically bounded by the quality of your sensor data =)
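That last point can be demonstrated numerically: estimate a fixed distance from repeated noisy readings and compare the residual error of a low-noise (laser-like) sensor against a high-noise (cheap proximity-like) one. The noise figures below are invented for the demo, not specs of any real device.

```python
# Illustration of "algorithms are bounded by sensor quality": average
# many noisy readings of a fixed 2.0 m distance and compare the error
# for a low-noise sensor vs a high-noise one. Noise levels are made up.
import random

random.seed(42)     # deterministic demo
TRUE_DIST = 2.0     # metres

def estimate(noise_std, n=1000):
    """Best estimate from n readings corrupted by Gaussian noise."""
    readings = [random.gauss(TRUE_DIST, noise_std) for _ in range(n)]
    return sum(readings) / n

laser_err = abs(estimate(0.01) - TRUE_DIST)  # ~1 cm noise (assumed)
prox_err = abs(estimate(0.20) - TRUE_DIST)   # ~20 cm noise (assumed)
```

Even the best averaging algorithm can’t recover precision the sensor never delivered; the residual error scales with the sensor’s noise floor.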

Pito, design the course and what you want to show first; the robot you need will be decided by that. If you are unsure of what the course needs or how to go about that, let’s take this offline or to another thread. I’ve built many a robot, and I think I can help you come to a decision on what you need to show the students.

Working with point clouds for distance measurement using laser or 3d cameras is a big challenge in 3 months along with getting the robot moving.

A proximity sensor for wall following is more doable in that period.

I disagree. There are many platforms out there where it’s one launch file (i.e. one command typed at the terminal) to bring up a screen showing what point data the robot is seeing, and consequently what data you can inspect from the terminal.

Given that, you could instruct the students to do things with the data using Python or C++. In fact, it is exactly the same amount of work in ROS to bring up a proximity sensor; ROS doesn’t care, it’s all point-cloud or laser-scan data…

You aren’t asking them to write an algorithm that extrapolates the depth from the images, it’s given already.
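As a sketch of how uniform that data really is: whatever sensor produced it, a scan arrives as an array of ranges, and student code just indexes into it. The layout below (one reading per degree, index 0 straight ahead) is an assumption for the demo, not the actual ROS `LaserScan` message format.

```python
# Find the closest obstacle in a forward cone from a fake
# LaserScan-style ranges list (assumed layout: one reading per degree,
# index 0 straight ahead, values in metres).
import math

ranges = [3.0] * 360           # open space everywhere...
ranges[350:360] = [0.8] * 10   # ...except an obstacle right of front
ranges[0:10] = [0.9] * 10      # ...and another just left of front

def min_front_distance(ranges, half_cone_deg=15):
    """Smallest valid range within +/- half_cone_deg of straight ahead,
    wrapping around the end of the array."""
    front = ranges[-half_cone_deg:] + ranges[:half_cone_deg + 1]
    valid = [r for r in front if math.isfinite(r) and r > 0.0]
    return min(valid)
```

The same function works whether `ranges` came from a 360-reading laser or a one-element proximity sensor, which is the interchangeability being argued for above.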

The reason I’m not a fan of the square is that it is a highly intellectual exercise. My suggestions will encourage competition. I don’t think that you need to say anything.

Yes, you can do these things with an Arduino-based robot. But I believe they will learn more using ROS.

One of the most important things is that you understand ROS and the algorithms for your projects.

I will be installing Linux on a machine tomorrow so that I can start learning ROS in simulation soon. I have books and the web, but I learn best when I can write code!

Thanks everyone for this thread, I learned a lot. I will continue providing small updates from time to time on my progress and experience in another topic here.

This is a strong way of going about it. Getting your hands dirty inside a simulator can go far with ROS. I always shy away from telling people to go straight to hardware unless they have algorithms already written in sim that they want to see happen in real life too.

I plan on using the simulator first if only because I don’t have the hardware yet. And if I learn enough (and the shipping date slips) I might build my current target robot. I will, of course, practice by simulating it first.

Is it possible to use the simulator to simulate a target-rich environment, er… I mean a room crowded with fragile people and furniture?

Interesting comment because I’m wrestling with that decision. I just finished a project that used simulation. Now that it’s wrapped up I don’t know whether to go back to my hardware robot or work in simulation. I like seeing stuff happen but the simulation is easier to work with.

There are always intricacies when porting to a real robot after working in sim, depending on the data, algorithm, or problem being solved. But I find that is best dealt with by allocating enough time in your project for the switch from sim to hardware.

Hardware just gets so tedious at times. You have to plug the robot in, ssh into it, make sure it boots, make sure ubuntu is acting fine, perhaps make it switch networks etc etc. Sim is like alright, roslaunch gazebo and my robot system is up and functional, back to writing my program/testing my program.

Testing things on a real robot is time-consuming as well: you have to wait for real actuation, and perhaps your robot powers off or the network goes down, etc.

Anyways, I’m a hard advocate for that. We’ve kind of gotten off topic here and OP said he was satisfied with the responses and wants to open a new thread. So, that being said. Good chat folks. I’m out.


Hi Routiful

Some questions:

  1. Why does the Waffle need two sensors? Are there any differences in their roles?
  2. Can I install two more Dynamixel actuators to boost the speed and payload of the Turtlebot3?
  3. I read somewhere that there are only three RS-485 ports on the OpenCR. How many Dynamixels can the OpenCR control?

Thanks in advance!

Cheers

Hello @ian :slight_smile:

Nice to see you again!

  1. Yes, the two sensors have similar functions (though each has its advantages and disadvantages). Some users prefer the camera and others prefer the LiDAR, so you can use both, or whichever one suits your purpose.

  2. Of course. You can attach more Dynamixels to the TB3.

  3. Dynamixels use TTL and RS-485 communication and support daisy-chaining, so you can connect more than twenty motors.

Thanks
Darby

Thanks Darby.

If I buy the Burger, what cable do I need to daisy-chain two more XL430 Dynamixels?

The advertised payload for the Burger is 15 kg. If I add two more, will the payload go to 30 kg?


Hi @ian :slight_smile:

You can buy the cable from the ROBOTIS store (America or International).

In my opinion, a TB3 Burger with four Dynamixels can’t carry double the payload. How about purchasing several Waffle plates? Then you can build a larger base for more payload, like the Waffle.

Thanks
Darby