The summer has started, which means all the Bc. and MSc. theses should already be defended!
It is known that the ROS community builds a lot on top of academic research, and these theses might be the next building blocks. Who knows? Despite the very weak position academia has in OSRA, let’s show there are great things happening out there!
Whether you’re a student or a thesis supervisor, let’s share the interesting works using ROS or Gazebo in this thread. I’ll kickstart with those from our department.
The problem Václav tackled is the fact that GNSS receivers have only very basic support in mobile-robotics simulators. The currently available simulated sensors use a very unfaithful error model for the estimated positions (usually just additive Gaussian noise or a small random walk). If you’ve ever worked with a real GNSS receiver, you know this is far from what you get in reality.
One of the most important factors influencing the computed position and its covariance is the visibility of the satellites. It affects both the DOP (dilution of precision), which is determined by the geometry of the visible parts of the constellations, and the convergence of the position estimation algorithm.
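To give a feel for why visibility matters so much, DOP falls straight out of the satellite geometry. A minimal sketch (pure NumPy; the satellite directions are made up for illustration, not from the thesis):

```python
import numpy as np

def gdop(sat_dirs):
    """Geometric DOP from unit line-of-sight vectors (receiver -> satellite).

    Each row of the geometry matrix is [ux, uy, uz, 1]; GDOP is
    sqrt(trace((A^T A)^-1)). Fewer or worse-placed visible satellites
    inflate the DOP and thus the position covariance.
    """
    A = np.hstack([sat_dirs, np.ones((len(sat_dirs), 1))])
    return np.sqrt(np.trace(np.linalg.inv(A.T @ A)))

# Example: blocking one of five satellites worsens the geometry.
dirs = np.array([[0.0, 0.0, 1.0],
                 [0.7, 0.0, 0.714],
                 [-0.7, 0.0, 0.714],
                 [0.0, 0.7, 0.714],
                 [0.0, -0.7, 0.714]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(gdop(dirs))       # all five satellites visible
print(gdop(dirs[:-1]))  # one blocked, e.g. by a building -> larger GDOP
```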
Václav has implemented a raytracing algorithm that detects which satellites are hidden behind scene objects (buildings, vegetation, etc.) and computes a full-fledged RINEX observation log containing only the unblocked signals, or with some signals degraded (e.g., if they passed through a tree). The algorithm can even simulate cycle slips for phase measurements when a satellite disappears for a while and then reappears. You can use your favorite RINEX processing tool to get the position and covariance estimates.
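The visibility test itself boils down to raycasting against the scene. A toy stand-in using a single axis-aligned box instead of the full Gazebo scene (names and geometry are illustrative, not from the thesis code):

```python
import numpy as np

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does the ray origin + t*direction (t >= 0) hit the box?

    A hypothetical stand-in for the scene raytracing: the real check runs
    against the full scene geometry, not a single box.
    """
    inv = 1.0 / direction  # assumes no exactly-zero components
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return t_far >= max(t_near, 0.0)

# Keep only satellites whose line of sight is unobstructed.
receiver = np.array([0.0, 0.0, 1.5])
building = (np.array([5.0, -10.0, 0.0]), np.array([15.0, 10.0, 30.0]))
sat_dirs = {"G01": np.array([0.9, 0.1, 0.42]),   # low, behind the building
            "G07": np.array([-0.5, 0.2, 0.84])}  # high, opposite side
visible = {sid: d for sid, d in sat_dirs.items()
           if not ray_hits_aabb(receiver, d, *building)}
print(sorted(visible))  # ['G07']
```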
Our future direction is making the simulation model more complete (including atmospheric effects), implementing the position estimation part in Gazebo/ROS using the standard interfaces, and making the plugin actually usable by the community (the current version is not yet production-quality software).
In the last few years, many algorithms for robust control of quadrupeds on difficult terrain have started to appear. If you wonder whether to use a Model-Predictive-Control-based algorithm, a Reinforcement-Learning-based one, or something in between, this thesis is a guide to the pros and cons of the various approaches. And it also provides implementations of all of them.
The algorithms are compared on a difficult obstacle course to expose their strong and weak points. Some of the algorithms have both blind and perceptive variants. And it is easy to add your own implementations to the comparison.
Our future direction is looking deeper into context-based controller switching and making it easier to use robot models other than ANYmal D. We already have partial support for Spot. How will @lnotspotl’s Spot simulation compare to the one @ggrigoris is surely writing about in a few weeks?
Tomáš’s thesis tries to fill the large simulation gap Boston Dynamics left after announcing the real Spot robot. Up until now, there hasn’t been any practical simulation of Spot. Tomáš has connected the bits and pieces, fixed the nonsensical inertia values in the models from Clearpath/heuristicus, and connected the Spot Arm model to the robot. With MoveIt and MoveIt Servo integrated for control, it is now possible to plan and execute complex tasks like pick-and-place from visual sensors with whole-body motion.
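To give an idea of what the MoveIt side looks like, here is a minimal ROS 1 moveit_commander sketch; the "arm" group name, the "body" frame, and the target pose are placeholders I picked, not the actual configuration from the thesis:

```python
#!/usr/bin/env python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

# Standard moveit_commander boilerplate; group/frame names are placeholders.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("spot_arm_demo")
arm = moveit_commander.MoveGroupCommander("arm")

# A made-up reachable pose in front of the robot.
target = PoseStamped()
target.header.frame_id = "body"
target.pose.position.x = 0.7
target.pose.position.z = 0.4
target.pose.orientation.w = 1.0

arm.set_pose_target(target)
success = arm.go(wait=True)  # plan and execute in one call
arm.stop()
arm.clear_pose_targets()
rospy.loginfo("Reached target: %s", success)
```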
Our goal is to implement as much of the spot_ros interface as possible, and extend it where it’s missing some features we’d like to use. The thesis has been a great start in this direction, but we’re not there yet.
Our long-term cooperation with rescuers has shown one significant drawback of ground robots. As soon as they grow big enough to be actually useful, the rescuers start being scared of them. Imagine a 300 kg mini tank running over your leg, or even worse.
To tackle this problem, Bohumil has constructed and experimentally evaluated a combination of multiple radio-based sensors that can detect the distance and position of a rescuer relative to the robot, even in complete darkness or otherwise impaired visibility.
Ultra-wideband (UWB) radios are great for distance measurements, and with multiple units on the robot, they can also estimate the relative position of a transmitter. A directional array of Bluetooth antennas cannot tell much about distance but provides quite precise relative angle information.
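A toy sketch of why the two are complementary: UWB pins the rescuer to a circle around the robot, and the antenna array picks the point on it (the function name and frame convention are mine, for illustration only):

```python
import math

def relative_position(uwb_range_m, ble_bearing_rad):
    """Combine a UWB range with a Bluetooth-array bearing into a 2-D
    position of the rescuer in the robot frame (x forward, y left).

    Range alone gives a circle; bearing alone gives a ray;
    together they give a point.
    """
    x = uwb_range_m * math.cos(ble_bearing_rad)
    y = uwb_range_m * math.sin(ble_bearing_rad)
    return x, y

# Rescuer 4.2 m away, 30 degrees to the robot's left:
print(relative_position(4.2, math.radians(30)))  # ~(3.64, 2.10)
```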
As a result, there is a practical, low-cost (hundreds to low thousands of €) COTS sensor suite that allows the robot to demonstrate that it knows about the humans around it and that it won’t run into them, no matter what happens. The UWB can also serve as a simple emergency stop when a human gets too close to the robot; these short-range measurements are extremely reliable.
Our future direction is to do more thorough testing in (literally) corner cases (i.e. when the human hides behind a corner) and other demanding situations.
Hi @peci1, very interesting topics! I’m still working on my thesis (the defense is only in August), so I can’t share much yet, but my topic is “evaluating state-of-the-art lidar SLAM in challenging indoor environments with glass and specular surfaces”.
Basically, I’m investigating the typical failure cases of algorithms like SLAM Toolbox when there is transparent glass or a reflective mirror around the robot. Transparent glass is usually only detected at very shallow angles, so the map shows gaps where the glass is located. With mirrors, it’s worse: you get both the gap and a mirror image of the reflected scene. This messes up mapping, localization, and path planning.
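To picture the mirror artifact: the ghost points are just the real scene reflected across the mirror plane, so if you knew the plane you could fold them back. A toy sketch (not part of my evaluation pipeline; in practice, estimating that plane is the hard part):

```python
import numpy as np

def unmirror(point, plane_point, plane_normal):
    """Reflect a ghost lidar return back across a known mirror plane.

    The sensor reports the reflected object as if it sat *behind* the
    mirror; knowing the plane lets you fold the ghost back.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(point - plane_point, n)
    return point - 2.0 * d * n

# Mirror plane x = 2 (normal +x); a real object at x = 1 shows up at x = 3.
ghost = np.array([3.0, 0.5, 0.0])
print(unmirror(ghost, np.array([2.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))
# -> [1.  0.5 0. ]
```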
There have been many specialized solutions involving plenty of pre-calibration, specific environment conditions, or intensive parameter tuning for specific cases. These approaches typically crumble when you don’t know the environment in advance, or when a minor change introduces errors.
So I’m studying algorithms that can deal with as many variables as possible! Anything that could mess up the LiDAR: windows, glass doors, glass walls, glass railings, mirrors, their frameless variants, even metallic and highly reflective elevator doors. Add curved glass to the mix and the difficulty skyrockets.
I’m pretty excited to share my findings, but it’s going to take some time. For now, this is all I can share. Hope to see more people posting their works here!
Spot Explores Simulated Disasters With Robotic Ears
Overview
My master thesis was titled “Sound Source Tracking as a Heuristic for Frontier Exploration in Search and Rescue using a Quadruped Mobile Robot.”
This was a full systems integration project, including mapping, behavioral modelling, localization, perception, control, and hardware development. It was designed around the Spot quadruped robot developed by Boston Dynamics.
I developed a system that enabled Spot to explore previously unmapped environments while tracking the sounds of human speech. The direction of the incoming sound was used to intelligently decide where to search next. Much like human search parties, this system used the human voice as a compass, building a map of its environment as it searched for the target source.
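In spirit, the heuristic ranks candidate frontiers by how well they align with the estimated direction of arrival of the voice. A simplified sketch (the scoring form and names are illustrative, not my exact implementation):

```python
import math

def score_frontier(robot_xy, frontier_xy, doa_rad):
    """Score a frontier by alignment with the sound's direction of
    arrival (DOA) in the robot frame; higher is better.

    The real system also weighs in mapping and path costs.
    """
    dx = frontier_xy[0] - robot_xy[0]
    dy = frontier_xy[1] - robot_xy[1]
    bearing = math.atan2(dy, dx)
    # Wrap the angular difference to [-pi, pi] before taking magnitude.
    misalignment = abs(math.atan2(math.sin(bearing - doa_rad),
                                  math.cos(bearing - doa_rad)))
    return -misalignment

frontiers = [(5.0, 0.0), (0.0, 5.0), (-5.0, 0.0)]
doa = math.radians(10)  # voice heard slightly left of straight ahead
best = max(frontiers, key=lambda f: score_frontier((0.0, 0.0), f, doa))
print(best)  # (5.0, 0.0)
```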
Experimentation was performed in ideal lab environments, semi-structured simulated urban disaster scenarios, and unstructured forest terrain. I performed field trials at the Kingston Fire and Rescue Training Facility and got to participate in the 2023 NSERC Canadian Robotics Network field trials. My results showed that sound source tracking is a capable heuristic, demonstrating a viable concept: autonomous disaster robots could be improved by giving them the ability to listen for people who need help.
*Special thanks to Ingenuity Labs and my PI, Professor Joshua Marshall.*