Following on from this thread, we’re going to try a semi-regular “ROS glamour shots” thread in ROS Projects. What is a glamour shot? It’s a screen capture, tutorial, video, or image of whatever you are working on in ROS or Gazebo. We all do this in our private work forums, but we should do it out in the open more.
There are four rules:
It must be something you or your team made.
It must use ROS or Gazebo.
You need to include at least a one-sentence description.
Nothing is too big or too small. We want everything from homebrew bots to fleets of autonomous cars.
I feel like I shouldn’t repost the two things I’ve recently posted (one in the attached thread and another here), so instead I present that one time we drove a robot in VR back in grad school (4 years ago, I think).
The area is digitized using a Kinect with ElasticFusion plus a home-grown mesh simplification algorithm, the VR rendering was done through Blender, and AMCL localizes the robot in this environment against an existing occupancy grid map.
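For anyone curious about the AMCL side: localizing against an existing map mostly comes down to running map_server and amcl and giving the particle filter an initial pose guess. Here’s a minimal sketch of that last step, assuming the navigation stack’s default /initialpose topic and map frame (this is illustrative, not our original code):

```python
#!/usr/bin/env python
# Illustrative helper: seed AMCL with an initial pose guess so it can
# converge against a pre-built occupancy grid map. Topic and frame names
# are the ROS navigation stack defaults, not necessarily our setup.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from tf.transformations import quaternion_from_euler

def publish_initial_pose(x, y, yaw):
    pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped,
                          queue_size=1, latch=True)
    msg = PoseWithCovarianceStamped()
    msg.header.frame_id = 'map'
    msg.header.stamp = rospy.Time.now()
    msg.pose.pose.position.x = x
    msg.pose.pose.position.y = y
    q = quaternion_from_euler(0.0, 0.0, yaw)
    msg.pose.pose.orientation.x = q[0]
    msg.pose.pose.orientation.y = q[1]
    msg.pose.pose.orientation.z = q[2]
    msg.pose.pose.orientation.w = q[3]
    # Loose covariance on x, y, yaw; AMCL's particle filter tightens it.
    msg.pose.covariance[0] = 0.25   # x variance
    msg.pose.covariance[7] = 0.25   # y variance
    msg.pose.covariance[35] = 0.07  # yaw variance
    pub.publish(msg)

if __name__ == '__main__':
    rospy.init_node('set_initial_pose')
    publish_initial_pose(0.0, 0.0, 0.0)
    rospy.sleep(1.0)  # give the latched message time to go out
```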
We did some videos of our robots with different use cases (well, their usefulness is arguable) just for fun with my intern this year (and also to teach them a bit of ROS).
I might post them on a regular basis; I want to see more cool robots!
Just to begin with something basic, we did the good old “I’m not a robot” with a Dobot:
The catch here: we didn’t use a camera for recognition. We hooked our control software up to a small ROS node that sends keyboard commands (1-9 and Enter) to the robot driver, each triggering a prerecorded Cartesian trajectory. We just had to launch it: the robot presses Submit, we press the correct buttons to pass the CAPTCHA, then press Enter so the robot can run the last trajectories and submit the CAPTCHA. The robot is only semi-autonomous, but it was fun to make!
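The node itself is tiny. Here’s a sketch of the idea; the topic name /dobot/trajectory_cmd and the string commands are made up for illustration, and the driver-side mapping from command to prerecorded trajectory is not shown:

```python
#!/usr/bin/env python
# Minimal sketch of the semi-autonomous CAPTCHA setup described above.
# The topic name (/dobot/trajectory_cmd) and the command strings are
# assumptions; the real driver replays a prerecorded Cartesian
# trajectory for each command it receives.
import sys
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node('captcha_keyboard_bridge')
    pub = rospy.Publisher('/dobot/trajectory_cmd', String, queue_size=1)
    rospy.loginfo('Press 1-9 to replay a trajectory, Enter to submit, q to quit.')
    while not rospy.is_shutdown():
        key = sys.stdin.readline().strip()
        if key == 'q':
            break
        if key == '':                # bare Enter -> final "submit" trajectory
            pub.publish(String(data='submit'))
        elif key in '123456789':     # digit -> press the matching CAPTCHA tile
            pub.publish(String(data='tile_%s' % key))

if __name__ == '__main__':
    main()
```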
Slammer, a robot I’m building as a learning platform for ROS.
It is additionally serving as the development/test robot for a new STM32-based version of uNav from Officine Robotiche.
It is equipped with an RPLidar A1, a RealSense R200, a BNO055 IMU, and a current/voltage monitor, and is powered by a Jetson Nano.
Erwhi Hedgehog is currently the smallest (120x120 mm) ROS robot that achieves autonomous navigation and vision tasks.
This is the main repo for the project: https://github.com/gbr1/erwhi-hedgehog
It is completely open source and open hardware.
It works with ROS, Gazebo and, if you need it, AWS RoboMaker.
ROS Kinetic running on an i.MX6 ARM Cortex-A9 quad core; the main sensor is an RPLidar A2 (the image still shows the XTion). Odometry is corrected with a BNO055 IMU. It is powered by two batteries in parallel. I/O is handled by additional AVR microcontrollers.
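One common way to “correct odometry with an IMU” is to keep the wheel odometry’s x/y but overwrite its drift-prone heading with the IMU’s absolute orientation. Here’s a rough sketch of that idea; the topic names are assumptions, and a full setup would typically use the robot_localization EKF instead of a hand-rolled node like this:

```python
#!/usr/bin/env python
# Sketch: trust the wheels for position, trust the BNO055 for heading.
# Topic names (/imu/data, /wheel_odom, /odom) are assumptions.
import rospy
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Imu

class YawCorrector(object):
    def __init__(self):
        self.last_imu_orientation = None
        rospy.Subscriber('/imu/data', Imu, self.imu_cb)
        rospy.Subscriber('/wheel_odom', Odometry, self.odom_cb)
        self.pub = rospy.Publisher('/odom', Odometry, queue_size=10)

    def imu_cb(self, msg):
        # Cache the most recent absolute orientation from the IMU.
        self.last_imu_orientation = msg.orientation

    def odom_cb(self, msg):
        if self.last_imu_orientation is not None:
            # Replace the wheel-odometry heading with the IMU's.
            msg.pose.pose.orientation = self.last_imu_orientation
        self.pub.publish(msg)

if __name__ == '__main__':
    rospy.init_node('imu_yaw_corrector')
    YawCorrector()
    rospy.spin()
```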
@safijari thank you for your interest in Erwhi Hedgehog!
I’m working on the “how to buy” options and figuring out what would work best for a student like me (e.g. opening an online shop, selling on Tindie, etc.).
Please feel free to write me an email (my address is giovannididio.bruno@gmail.com) with the list of parts you need.
A first kit of the robot (without the Sengi carrier board and other “robotic parts”) was released by UP Board/AAEON:
In fact, the demos I did for this kit are the same as Erwhi’s, and the Intel IoT “commercial” features Erwhi.
ROS Melodic. About 350 kg. 4 m x 4 m x 1 m. SDF complete with launch files at Flower Spider.
It’s fully pneumatic with PWM solenoid actuators, so it may be a step too far (!) to make it walk without some mechanical changes.
I’m currently working on:
Modelling pneumatic cylinder forces in Gazebo (see the sketch below)
Experimenting with gaits
I’ll then move on to a motion plan and controller.
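On the pneumatic modelling point: one crude way to get cylinder forces into Gazebo without writing a plugin is to turn the PWM duty cycle into an average pressure, multiply by piston area, and push the resulting force onto the prismatic joint through the stock /gazebo/apply_joint_effort service. Everything below (joint name, bore, supply pressure) is a made-up illustration, not the real machine’s numbers:

```python
#!/usr/bin/env python
# Rough sketch of a pneumatic cylinder in Gazebo: PWM duty cycle ->
# effective pressure -> force (pressure x piston area) applied to a
# prismatic joint via the standard /gazebo/apply_joint_effort service.
import math
import rospy
from gazebo_msgs.srv import ApplyJointEffort

SUPPLY_PRESSURE = 6.0e5   # Pa (~6 bar), assumed
BORE_DIAMETER   = 0.05    # m, assumed
PISTON_AREA     = math.pi * (BORE_DIAMETER / 2.0) ** 2

def apply_cylinder_force(duty_cycle, dt=0.01):
    """Apply one control period's worth of cylinder force for a PWM duty cycle."""
    rospy.wait_for_service('/gazebo/apply_joint_effort')
    apply_effort = rospy.ServiceProxy('/gazebo/apply_joint_effort',
                                      ApplyJointEffort)
    # Crude first-order model: average pressure scales with duty cycle.
    force = duty_cycle * SUPPLY_PRESSURE * PISTON_AREA
    apply_effort(joint_name='front_left_cylinder',  # hypothetical joint name
                 effort=force,
                 start_time=rospy.Time(0),          # 0 = apply immediately
                 duration=rospy.Duration(dt))

if __name__ == '__main__':
    rospy.init_node('pneumatic_force_demo')
    apply_cylinder_force(0.5)   # 50% duty -> roughly half of full force
```

A proper model would add valve dynamics and per-chamber pressure state, but something this simple is enough to see the legs react in simulation.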
It’s an art project, and when it walks it’ll be covered in floral suits which are slowly growing: