MyzharBot - Autonomous Tracked Robot

The robot in the photo is MyzharBot, a tracked autonomous robot powered by ROS.
MyzharBot is a project that I started many years ago to keep working on robotics after I left the university research community to work in industry. MyzharBot has now reached its fourth version… and it is still evolving.
What I want to do with MyzharBot is to keep studying autonomous navigation algorithms, mainly based on Computer Vision, the research field that I love the most together with Machine Learning and Artificial Intelligence.
The robot is powered by the Nvidia Jetson TX1, which allows it to analyze in real time the information coming from the Stereolabs ZED stereo vision sensor.
The motors are controlled by the uNav, a small motor control board developed by Officine Robotiche, an Italian non-profit association that promotes robotics and of which I am one of the founders.
The previous version, MyzharBot v3, was one of the first robots to use an Nvidia Jetson board (the TK1) and to document it on the web, so Nvidia added me to the “Jetson Champ” group and invited me to the GTC 2015 and GTC 2016 conferences to demonstrate the use of their boards at their booth during the exhibition time slots.

I know that MyzharBot is nothing compared to the amazing robots developed by the major research centers around the world, but I’m really proud of my work: I cannot dedicate all my time to it, only a few hours on weekends (and a lot of nights!!!), and despite this it works and is a huge hit every time it is shown publicly.

For the fast evolution of the robot over the last few years I must thank Mauro Soligo, Raffaello Bonghi and all the members of Officine Robotiche who collaborated on the electronics, mechanics and software development… a big thank you also to the Nvidia Embedded group, which supported me with their amazing boards and their constant presence over the last 18 months.

More information on the project is available on its website: http://myzharbot.robot-home.it

PS: I’m Walter Lucetti, a computer engineer born way back in 1977 :slight_smile:


Hi, nice robot :slight_smile:

Have you made any comparisons between the depth image from the ZED sensor and structured-light scanners like the Kinect or Asus Xtion in semi-well-lit indoor conditions?

I’m planning to build a 1/3-scale autonomous version of a warehouse truck for Toyota Material Handling during an upcoming summer internship. Right now I’m trying to figure out which sensor to use for loading and unloading operations, and the ZED camera seems like it could be a good option :slight_smile:


Hi samlam,

if you plan to use your robot exclusively indoors, then the Asus Xtion is the best choice. I use stereo vision because I want to go outdoors, but the precision of a Kinect-like RGB-D sensor is surely higher.

The ZED camera is an amazing sensor: Stereolabs wrote a really good SDK that allows you to take advantage of stereo vision without having to fight with the stereo calibration process, which is always the real problem of stereo vision.
The depth map generated by the ZED is really similar to an RGB-D depth map and, furthermore, it is HD (more than 640x480).
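
Just to give you an idea of how simple it is, here is a minimal sketch of grabbing a depth map with the ZED SDK C++ API (assuming a recent SDK release; namespaces and enum names have changed between SDK versions, so check the docs for yours):

```cpp
// Minimal ZED depth-grab sketch (assumes a ZED SDK 3.x-style C++ API).
#include <sl/Camera.hpp>
#include <cstdio>

int main() {
    sl::Camera zed;

    sl::InitParameters init;
    init.depth_mode = sl::DEPTH_MODE::ULTRA;  // highest-quality depth
    init.coordinate_units = sl::UNIT::METER;  // depth values in meters

    if (zed.open(init) != sl::ERROR_CODE::SUCCESS) {
        std::fprintf(stderr, "Cannot open the ZED camera\n");
        return 1;
    }

    sl::Mat depth;
    if (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        // Retrieve the depth map (32-bit float, one depth value per pixel)
        zed.retrieveMeasure(depth, sl::MEASURE::DEPTH);

        float d = 0.f;
        depth.getValue(depth.getWidth() / 2, depth.getHeight() / 2, &d);
        std::printf("Depth at image center: %.2f m\n", d);
    }

    zed.close();
    return 0;
}
```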

The limitation is that it is still a stereo vision sensor: if your environment is not highly textured, the stability of the depth measurements is not very high.
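
In low-texture scenes you can let the SDK reject the least reliable pixels and check validity before using a measurement. A sketch, again assuming a recent SDK (the threshold value here is just an example, tune it for your environment):

```cpp
// Sketch: reject unreliable depth pixels in low-texture scenes
// (assumes a ZED SDK 3.x-style API).
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    if (zed.open() != sl::ERROR_CODE::SUCCESS) return 1;

    sl::RuntimeParameters rt;
    rt.confidence_threshold = 50;  // lower = stricter: fewer but more reliable pixels

    sl::Mat depth;
    if (zed.grab(rt) == sl::ERROR_CODE::SUCCESS) {
        zed.retrieveMeasure(depth, sl::MEASURE::DEPTH);

        float d = 0.f;
        depth.getValue(depth.getWidth() / 2, depth.getHeight() / 2, &d);
        if (sl::isValidMeasure(d)) {
            // This pixel survived the confidence filter: safe to use.
        }
    }

    zed.close();
    return 0;
}
```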

Take into consideration that I’m planning to add a DepthSense DS325 to the robot (I already own one). I’d like to use it indoors to generate a better environment map, and when there is not enough light for the ZED to retrieve RGB information and, therefore, depth information.

Walt
