GSoC and Intern Projects for Summer 2020

Hi All,

Summer is coming to a close, and that means our GSoC students and summer interns are wrapping up their projects. This year we had seven GSoC students and eight in-house interns split between our Bay Area and Singapore offices. Due to COVID-19, all of our interns worked more or less remotely this year, and despite that they all did a bang-up job. I wanted to briefly summarize the work completed this year and introduce you to this year’s batch of freshly minted open source contributors. Most of this year’s GSoC work focused on Ignition Gazebo, and each of the students has posted a summary of their work to the Gazebo Community forum; I have listed quick project summaries below. Here are a few key highlights.

The Open Robotics interns for both the Singapore and Mountain View offices were also largely remote this year. We had eight wonderful interns who contributed a number of improvements to ROS, Ignition, the ROS build farm, and other projects at Open Robotics. Open Robotics would like to thank Kevin Ma, WK Wong, Fred Shao, Brandon Ong, Maanasa Kotha, Audrow Nash, Pedro Pena, and Rafi Abdullah for both their contributions and their efforts in the face of very difficult circumstances. A few of these students have opted to post brief summaries of their projects and experiences in the comments below.

Finally, I would like to welcome our first ever Google Season of Docs student An Thai Le to the community! An will be working to improve the documentation for Ignition Gazebo.

8 Likes

Are the Ignition libraries in RViz planned to become the default in RViz2? I’d love to see that be the new visualizer in Galactic!

@Sarathkrishnan did an amazing job this summer putting together the Ignition RViz project, going beyond our expectations. The new project is written from scratch on top of Ignition libraries and doesn’t change the existing RViz2 codebase. Both projects can exist side-by-side.

We welcome the community to try it out, provide feedback and contribute! It currently depends on Ignition Dome (to be released at the end of this month) and ROS Foxy. It can certainly be released into Galactic if there’s enough interest.

2 Likes

Is there any documentation about setup and use you can point to? Any plans to fully replace rviz2 with it in a future distro release?

The wiki is the entry point: Home · gazebosim/gz-rviz Wiki · GitHub

No plans.

1 Like

Hi ROS community!

I’m Kevin Ma, and I have just completed my 3.5-month-long internship at Open Robotics, Singapore. Here, I would like to present highlights of my work over this period. My primary task was to set up a demo environment in simulation to showcase the traffic control of heterogeneous robot fleets across multiple levels in a building with the Robotics Middleware Framework (RMF). The task inherently required creating lift cabins, along with doors, as part of the existing world generation pipeline. Further, a plugin had to be developed to control the lift in simulation and interface with RMF. The last piece of the puzzle was to connect the navigation graphs on different levels so the planner could generate feasible plans across levels. The outcome of my efforts now allows the community to design custom multi-level worlds and visualize robots sharing lifts as they travel between different levels!

The demo world I created features a three-level imaginary clinic building, with three different fleets of robots performing unique tasks. This world is hosted in the rmf_demos repository. The robots in the building travel across different levels via lifts that are automatically generated from 2D annotations of floor plans; this feature is implemented in the traffic_editor repository. I also helped develop Gazebo and Ignition plugins to control the behavior of lifts within these simulators. The plugins listen for rmf_lift_msgs::LiftRequest messages published by the RMF Smart Fleet Adapters and move the lift cabins to the desired levels. The lift cabins may be configured with different sets of cabin doors that serve different levels, as specified in the traffic_editor GUI, and the lift plugin ensures the right set of doors opens when the lift reaches its destination.
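To make the message flow concrete, here is a minimal sketch of the listening side of such a plugin, written as a standalone ROS 2 node rather than actual plugin code. The topic name, lift name, and floor-height table are hypothetical placeholders; only the rmf_lift_msgs LiftRequest type (with its lift_name and destination_floor fields) comes from the project itself.

```cpp
// Hedged sketch of the lift plugin's listening side (illustrative only).
#include <map>
#include <memory>
#include <string>

#include <rclcpp/rclcpp.hpp>
#include <rmf_lift_msgs/msg/lift_request.hpp>

class LiftController : public rclcpp::Node
{
public:
  LiftController()
  : Node("lift_controller_example"),
    // Hypothetical mapping from floor names to cabin elevations in meters.
    floor_heights_{{"L1", 0.0}, {"L2", 4.0}, {"L3", 8.0}}
  {
    sub_ = create_subscription<rmf_lift_msgs::msg::LiftRequest>(
      "lift_requests", 10,
      [this](rmf_lift_msgs::msg::LiftRequest::SharedPtr msg) {
        // Only react to requests addressed to the lift this node controls.
        if (msg->lift_name != "main_lift") {
          return;
        }
        auto it = floor_heights_.find(msg->destination_floor);
        if (it == floor_heights_.end()) {
          RCLCPP_WARN(get_logger(), "unknown floor: %s",
            msg->destination_floor.c_str());
          return;
        }
        // The real plugin would command the cabin joint toward this height
        // and open the matching set of cabin doors on arrival.
        RCLCPP_INFO(get_logger(), "moving cabin to %s (%.1f m)",
          msg->destination_floor.c_str(), it->second);
      });
  }

private:
  std::map<std::string, double> floor_heights_;
  rclcpp::Subscription<rmf_lift_msgs::msg::LiftRequest>::SharedPtr sub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<LiftController>());
  rclcpp::shutdown();
  return 0;
}
```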

Summary of contributions:

Here are some pictures & videos of the demo:



Click here to check the video!!

As a side topic, I’d like to briefly introduce myself. My name is Kevin Ma, currently studying Robotics at Singapore University of Technology and Design as an undergraduate student. I plan to pursue graduate studies after completion of my course, but I’m looking out for research or internship opportunities as well! Feel free to contact me via any of the following:

Thank you!

7 Likes

Hey there! Brandon here.

This was my second time working with Open Robotics, and I’ve greatly enjoyed the culture and the wonderful work done over here. Over the summer I had the privilege of jumping around many different projects and supporting the different efforts (and of course contributing to the open-source ecosystem.)

I was lucky enough to get some of my work released alongside v1.0.0 of RMF, and to also work on a middleware implementation for RMW! So I can say that I finally have a much deeper understanding of the inner workings of ROS2, as well as its super amazing type support and interface generation system.

As most of my experience (both in independent projects and in my previous summer with Open Robotics) was at the higher levels of the abstraction stack, this summer was a very big change of pace for me: I had to pick up many things I had taken for granted when working in those higher layers.

Unfortunately, I do not have a lot of interesting pictures to show, since I’ve been mostly doing the non-flashy work of handling data and working with middlewares :frowning:

Build Tooling for Simulation Builds: pit_crew, traffic_editor, and model_downloader

Repo


I built build-automation tools to help Open Robotics generate simulation worlds more seamlessly using traffic_editor.

Specifically, I developed the pit_crew library to allow models to be downloaded from Ignition Fuel into a non-Ignition Gazebo model directory and to sanitise them, ensuring they don’t break any sims. I then folded this into the sim build pipeline, so everything works super seamlessly.

Lots of sanity checks!

Additionally, I helped develop a couple of nifty command-line tools with pit_crew folded in (like a script for grabbing assets after traffic_editor and associated packages are installed via .deb files).

I also overhauled how thumbnails are stored and managed, messing around with the ament resource index to allow thumbnails to be found no matter how they are installed (either from source or from a .deb).
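For anyone unfamiliar with the ament resource index, the lookup pattern involved is roughly the following sketch; the resource type name "thumbnail_resources" is a hypothetical placeholder, not necessarily what traffic_editor registers.

```cpp
// Hedged sketch: locating installed asset directories through the ament
// resource index, which works the same whether a package was installed
// from source or from a .deb.
#include <iostream>
#include <map>
#include <string>

#include <ament_index_cpp/get_resources.hpp>

int main()
{
  // Every package that installed a marker file under
  // share/ament_index/resource_index/thumbnail_resources/<pkg>
  // shows up here, mapped to its install prefix.
  std::map<std::string, std::string> pkgs =
    ament_index_cpp::get_resources("thumbnail_resources");

  for (const auto & [pkg_name, prefix_path] : pkgs) {
    // Conventionally, assets live under <prefix>/share/<pkg>/...
    std::cout << pkg_name << " -> "
              << prefix_path << "/share/" << pkg_name << "/thumbnails\n";
  }
  return 0;
}
```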

RMF Fleet Adapter Python Bindings

Repo

I wrote Python bindings for the C++ robot fleet adapter API from rmf_core, managed memory for them, and wrote unit tests and documentation. I was lucky to be able to get this done in time for the RMF v1.0.0 release, so I’m very happy about that!

The bindings are packaged as a hybrid ament package, combining both C++ and Python. (It’s built using ament_python, but the setup.py calls CMake to generate the bindings.)
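As a rough illustration of what such bindings involve, here is what a pybind11 binding of a stand-in class might look like (assuming pybind11 as the binding layer; the class and module names are invented for the example, not the real rmf_core API):

```cpp
// Hedged sketch of a pybind11 binding for an invented stand-in class.
#include <memory>
#include <string>

#include <pybind11/pybind11.h>

namespace py = pybind11;

// Stand-in for a C++ class from rmf_core; not the real API.
class FleetAdapter
{
public:
  explicit FleetAdapter(std::string fleet_name)
  : fleet_name_(std::move(fleet_name)) {}

  const std::string & fleet_name() const {return fleet_name_;}

private:
  std::string fleet_name_;
};

PYBIND11_MODULE(fleet_adapter_example, m)
{
  // A shared_ptr holder keeps Python and C++ lifetimes in sync, which is
  // the kind of memory management the bindings had to get right.
  py::class_<FleetAdapter, std::shared_ptr<FleetAdapter>>(m, "FleetAdapter")
    .def(py::init<std::string>(), py::arg("fleet_name"))
    .def_property_readonly("fleet_name", &FleetAdapter::fleet_name);
}
```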

I also created an example implementation for MiR100 robots using those bindings. (That was a little bit difficult to write blind though, because I didn’t have access to a robot to test it on.)

rmw_zenoh

Repo

Writing the middleware implementation for the rmw interface was a veritable boss battle of a project. We wanted to explore an alternative middleware (Zenoh) to improve upon the current scaling constraints with DDS middlewares (particularly with regards to node discovery.)

It was a bit of a challenge because of the lack of documentation for rmw, and because of how spread out the code-base is for the existing implementations (I had to search through something like 7 different repos, with mostly uncommented code.)

My original task was to get a quick prototype of a working rmw implementation that could at least publish strings, so that Morgan could compare the performance of Zenoh against some of the other middlewares like FastRTPS or CycloneDDS, but then the feature-set kept growing.

By the end of the summer, I was able to get the following done/implemented (a simplified sketch of the publish path follows the list):

  • Folded in type-support code generation
  • Low level memory management
  • Data serialisation and deserialisation
  • Pubsub
  • Service servers and clients
  • And message queues for pubsub and services with varying depth
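For readers curious what an rmw entry point looks like, below is a heavily simplified sketch of a publish path. The rmw_publish signature is the real rmw API, but the Zenoh call, the publisher struct, and the serialization helper are placeholders, not the actual zenoh-c or type-support APIs.

```cpp
// Hedged sketch of rmw_publish in a custom middleware implementation.
#include <rmw/error_handling.h>
#include <rmw/rmw.h>

// Implementation-specific state stashed in publisher->data at creation time.
struct CustomPublisher
{
  void * zenoh_session;       // placeholder for a Zenoh session handle
  const void * type_support;  // message type support used for serialization
};

// Placeholders for helpers the real implementation would provide.
size_t serialize_with_typesupport(
  const void * ros_message, const void * type_support,
  unsigned char * buffer, size_t buffer_size);
int zenoh_session_put(
  void * session, const char * key,
  const unsigned char * payload, size_t len);

extern "C" rmw_ret_t rmw_publish(
  const rmw_publisher_t * publisher,
  const void * ros_message,
  rmw_publisher_allocation_t * allocation)
{
  (void)allocation;  // unused in this sketch

  auto * impl = static_cast<CustomPublisher *>(publisher->data);

  // Serialize the ROS message into a flat buffer via the type support...
  unsigned char buffer[4096];
  const size_t len = serialize_with_typesupport(
    ros_message, impl->type_support, buffer, sizeof(buffer));
  if (len == 0) {
    RMW_SET_ERROR_MSG("failed to serialize message");
    return RMW_RET_ERROR;
  }

  // ...then hand the bytes to the transport, keyed by the ROS topic name.
  if (zenoh_session_put(
      impl->zenoh_session, publisher->topic_name, buffer, len) != 0)
  {
    RMW_SET_ERROR_MSG("zenoh put failed");
    return RMW_RET_ERROR;
  }
  return RMW_RET_OK;
}
```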

If Zenoh shows promise, this work would go a long way towards a fully fleshed-out rmw implementation for ROS2 that is suitable for extremely large environments with many nodes, which would be a significant contribution to the open-source ecosystem. So fingers crossed!

I was also able to do a show and tell on how rmw and type support work. This was recorded, and the slides can be found here, in case anyone needs a great resource to guide development of new rmw implementations.

Sum Up

I’d like to thank Aaron, Grey, Geoffrey, and Morgan for all the support they’ve rendered me during this summer. It’s always refreshing to be working in an environment with so much collaboration, and I don’t think the work from home arrangements hindered us too much.

As always, I will be looking forward to seeing what great exploits Open Robotics has in store for the open-source ecosystem, and I would very much relish the possibility of working with everyone again (either in Singapore or in HQ.)

So this is not a goodbye, but a see you somewhere on the net, and maybe sometime soon :wink:

CH3EERS!

@methylDragon

GitHub LinkedIn
methylDragon@gmail.com

9 Likes

What exactly is the point / what are the advantages of the ignition rviz clone?
From the information available it looks like a clone with (currently) fewer features and a different UI that would only split the developer and user base (assuming that plugins are not compatible).
It is, of course, a great project for learning but I have trouble seeing the value added for the ROS community as it addresses the same problems in the same way.
Maybe someone here could clarify this for me?

I understand your concern. I’d argue that variety is a good thing though. And I think we’re far from a situation where people are having to maintain 2 separate sets of plugins for both visualizers.

This is just a proof of concept that we’re offering the community in case anyone is interested. If anything, it could start a conversation about how RViz2 itself can be improved.

I think @Sarathkrishnan made a good summary on his post, but I’ll just highlight a couple of features.

  • It’s based on QtQuick / QML, which provides more flexibility and a modern look and feel. You get native support for material design and touch interfaces. I should point out that others have been looking into bringing QML support to RViz2, which is great!
  • Ignition Rendering’s abstraction allows using rendering engines other than Ogre 1. For example, you could use Ogre 2 right now, which has support for Physically Based Rendering (PBR).

I recommend bringing Ignition RViz questions to the original post in order not to take over this GSoC post. I realize not everyone may be willing to create a Gazebo Community account, so I think it’s also valid to start a new post just for that here on ROS Discourse in case people are interested.

2 Likes

I’ll also just add that we (myself and other maintainers at Open Robotics) have been hoping to one day consolidate the rendering logic between rviz and gazebo/ignition for years, just to make it so that we have less code to maintain and so that improvements from either community can be had in both.

A good example of this is the switch from Ogre 1 to Ogre 2 that @chapulina mentioned. We need to do this at some point for rviz so we can support newer graphics APIs like Metal and Vulkan, and if we were using ignition rendering it might make for a smoother transition.

There are lots of technical reasons this is hard to do, or even perhaps not worth it, but this project was an interesting one: it was a good learning experience and, at the same time, it gave us a lot more insight into how difficult this might be if we ever had the time/resources to do it.

6 Likes

Hi there, I’m WK Wong, and I have recently completed my summer internship with Open Robotics. It was an enjoyable experience working on interesting tasks that were not only cool, but also a great opportunity to contribute back to the open source community. The internship was spent on a variety of tasks, ranging from debugging low-level hardware issues to creating high-level software packages. The project I undertook was testing and utilizing various GPU-accelerated algorithms and libraries, so as to push the limits of the OVC4.

As the OVC4 would be using the Xavier NX module, the Jetson Xavier NX was used in conjunction with Raspberry Pi v2 cameras (IMX219) in different FoV (field of view) configurations and setups. (The wide-angle fisheye camera uses an adapter that replaces the original lens with one that has a horizontal FoV of 220 degrees.) The setups are as follows:

  • Monocular Normal/Wide angle fisheye camera
  • Stereo Normal/Wide angle fisheye camera

Below are the highlights of the tasks done:

ROS package wrapper for the CUDA Visual Library by RPG

The vilib visual library consists of GPU-optimized algorithms from the University of Zurich and ETH, so it was interesting to test it out on the Jetson Xavier NX. However, as the library was relatively new and there was no ROS wrapper written for it, I decided to create a ROS package that acts as a wrapper. This enables individuals to access the library through ROS nodes; the nodes available are FAST feature detection and Lucas-Kanade (LK) tracking of objects using the FAST features.
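As a rough picture of the wrapper pattern (not the actual vilib_ros code), a node like the following converts incoming images with cv_bridge and hands them to the GPU library; detect_fast_gpu() is a placeholder for the vilib detector API, which differs in practice.

```cpp
// Hedged sketch of a ROS wrapper node around a GPU feature detector.
#include <vector>

#include <cv_bridge/cv_bridge.h>
#include <opencv2/core.hpp>
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

// Placeholder for the vilib FAST detector; the real API differs.
std::vector<cv::Point2f> detect_fast_gpu(const cv::Mat & gray);

void imageCallback(const sensor_msgs::ImageConstPtr & msg)
{
  // Convert the ROS image to an OpenCV matrix the GPU library can consume.
  cv::Mat gray = cv_bridge::toCvCopy(msg, "mono8")->image;

  std::vector<cv::Point2f> keypoints = detect_fast_gpu(gray);
  ROS_INFO("detected %zu FAST features", keypoints.size());
  // A real wrapper would publish these on a feature topic here.
}

int main(int argc, char ** argv)
{
  ros::init(argc, argv, "vilib_ros_example");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("image_raw", 1, imageCallback);
  ros::spin();
  return 0;
}
```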

Wide angle fisheye lens calibration & rectification with omnidirectional camera model

As calibrating the wide-angle fisheye camera was a tricky affair (the fisheye model of OpenCV resulted in weird reprojections), it was found that the way to calibrate and rectify this type of lens was to use the omnidirectional camera model instead. However, the camera calibration node did not include this camera model, as it is part of the OpenCV contrib library. Hence, I created a few ROS packages, such as masker_util and omni_proc_ros, to facilitate wide-angle fisheye camera calibration.
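For context, the omnidirectional (Mei) model lives in OpenCV’s ccalib contrib module; a bare-bones calibration and rectification pass might look like the following sketch (illustrative only, not the omni_proc_ros code, and it assumes checkerboard correspondences were already collected):

```cpp
// Hedged sketch of omnidirectional calibration with the OpenCV contrib module.
#include <vector>

#include <opencv2/ccalib/omnidir.hpp>
#include <opencv2/core.hpp>

// Object/image points use double precision (Vec3d/Vec2d), which is what
// cv::omnidir::calibrate expects.
cv::Mat calibrateAndRectify(
  const std::vector<std::vector<cv::Vec3d>> & object_points,
  const std::vector<std::vector<cv::Vec2d>> & image_points,
  const cv::Size & image_size,
  const cv::Mat & distorted)
{
  cv::Mat K, xi, D;  // intrinsics, the Mei-model xi parameter, distortion
  std::vector<cv::Mat> rvecs, tvecs;
  const cv::TermCriteria criteria(
    cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 200, 1e-8);

  // Fit the omnidirectional camera model to the correspondences.
  const double rms = cv::omnidir::calibrate(
    object_points, image_points, image_size,
    K, xi, D, rvecs, tvecs, 0, criteria);
  (void)rms;  // reprojection error, worth checking before trusting the fit

  // Rectify to a perspective view; cylindrical or longitude-latitude flags
  // often suit very wide FoV lenses better.
  cv::Mat rectified;
  cv::omnidir::undistortImage(
    distorted, rectified, K, D, xi, cv::omnidir::RECTIFY_PERSPECTIVE);
  return rectified;
}
```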

Performance & Comparison of the various image processing algorithms in gpu_stereo_image_proc

The gpu_stereo_image_proc ROS package was used to test the performance of the following stereo matching algorithms, both CPU- and GPU-based, on the Jetson Xavier NX (a minimal CPU baseline is sketched after the list):

  • Stereo Block Matching (SBM) [CPU]
  • Semi-global Block Matching (SGBM) [CPU]
  • Fixstars Semi-global Matching (libSGM) [GPU]
  • VisionWorks (libVX) [GPU]
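As a point of reference for what these algorithms compute, here is a minimal CPU baseline using OpenCV’s plain StereoBM; gpu_stereo_image_proc wraps its own implementations rather than this exact call.

```cpp
// Hedged illustration of stereo block matching (SBM) on rectified gray images.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

cv::Mat computeDisparitySBM(const cv::Mat & left_gray, const cv::Mat & right_gray)
{
  // 128 disparity levels, 21-pixel matching block: typical starting values.
  cv::Ptr<cv::StereoBM> matcher = cv::StereoBM::create(128, 21);

  cv::Mat disparity16;
  matcher->compute(left_gray, right_gray, disparity16);

  // StereoBM returns fixed-point disparities (scaled by 16); convert to float.
  cv::Mat disparity;
  disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);
  return disparity;
}
```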

Pose estimation of Objects with PointClouds and Euclidean Cluster Extraction

The point cloud data is obtained through stereo_image_proc and passed through various filters (voxel grid, statistical outlier removal) before Euclidean cluster extraction is used to estimate the poses of the objects detected within the PCL data. The node then publishes the centroid of each cluster and the cuboid(s) bounding the points in that cluster.
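A condensed sketch of that pipeline with PCL might look like this (illustrative; the parameter values are typical defaults, not necessarily the ones used in neven_ros):

```cpp
// Hedged sketch of the filter + cluster pipeline described above.
#include <vector>

#include <Eigen/Dense>
#include <pcl/common/centroid.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

std::vector<Eigen::Vector4f, Eigen::aligned_allocator<Eigen::Vector4f>>
clusterCentroids(const pcl::PointCloud<pcl::PointXYZ>::Ptr & input)
{
  // Downsample with a voxel grid to keep clustering tractable.
  pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(input);
  voxel.setLeafSize(0.01f, 0.01f, 0.01f);
  voxel.filter(*downsampled);

  // Remove speckle from stereo matching with statistical outlier removal.
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
  sor.setInputCloud(downsampled);
  sor.setMeanK(50);
  sor.setStddevMulThresh(1.0);
  sor.filter(*filtered);

  // Group the remaining points into objects via Euclidean cluster extraction.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(filtered);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.05);  // a 5 cm gap closes a cluster
  ec.setMinClusterSize(100);
  ec.setSearchMethod(tree);
  ec.setInputCloud(filtered);
  ec.extract(clusters);

  // Publish-ready pose estimate: the centroid of each cluster.
  std::vector<Eigen::Vector4f, Eigen::aligned_allocator<Eigen::Vector4f>> centroids;
  for (const auto & indices : clusters) {
    Eigen::Vector4f centroid;
    pcl::compute3DCentroid(*filtered, indices, centroid);
    centroids.push_back(centroid);
  }
  return centroids;
}
```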

In summary, this internship was a great opportunity to acquire a variety of skills and knowledge, from understanding the theoretical concepts of image matching to obtain disparity depth maps to improving my technical skills and programming know-how.

To summarize, the tasks that were accomplished are as follows:

  • Wrote a ROS package wrapper for the ETH/University of Zurich vilib Visual Library: vilib_ros.
  • Wrote a package to perform calibration and rectification of wide angle fisheye lens with the omnidirectional camera model: omni_proc_ros.
  • Compared and tested the performance of various image processing algorithms with a stereo camera setup: gpu_stereo_image_proc.
  • Created a ROS package to determine the pose of each PointCloud cluster detected via Euclidean clustering: neven_ros.

I would like to express my deepest appreciation to Luca and Aaron for all the help and support they have given me while I was here; it really helped when I was facing various issues or trying to improve the way certain things are implemented.

Feel free to check out the documentation of the project here.

WK Wong
Blog | Github | LinkedIn

4 Likes

I understand your concern. I’d argue that variety is a good thing though. And I think we’re far from a situation where people are having to maintain 2 separate sets of plugins for both visualizers.

This is just a proof of concept that we’re offering the community in case anyone is interested. If anything, it could start a conversation about how RViz2 itself can be improved.

I admit I had the same concerns as @StefanFabian about splitting the dev/user base. Variety is good and it seems like this was a helpful exercise, but no one is keen on updating all the existing Rviz plugins to a new framework. It would be great if this effort had a roadmap to join with rviz2 somehow.

just […] make it so that we have less code to maintain

I’m sure that’s what everyone with an Rviz plugin is thinking :slight_smile:

I guess I am at the other end of the spectrum when it comes to Ignition RViz. I first want to start off by saying thank you for the open-source contribution, and I am excited about the new capability this provides. It was asked in a previous comment what this provides over RViz. First, it is built on Qt Quick rather than Qt Widgets, which has better support for tablets and smartphones, opening up new avenues for improving user experience with the same tools. Second, near-realistic rendering with Ogre2 and OptiX. Third, direct access to sensor simulation: there is no need to run both RViz and Gazebo, because Ignition RViz comes with Ignition Rendering/Sensors. In addition, since Ignition GUI leverages Qt Quick, you could use these tools to create industrial Human Machine Interfaces (HMIs) with touch screens. I have already begun replacing several visualization tools that once relied on RViz with Ignition GUI/Rendering, and I will continue to migrate them over to the Ignition Robotics Software.

4 Likes

I really like that we’re starting to use Gazebo’s rendering engine for RViz, but what you describe, @Levi-Armstrong, sounds like it violates separation of concerns.

I may be misunderstanding the statement. The Ignition Robotics Software provides the separation of concerns, but I do not think a top-level application needs to follow this principle.

Awesome work on rmw_zenoh, Brandon! Where can we find your show & tell recording?

Greetings! I’m Fred, and I’m one of the summer interns at Open Robotics (SG). I would like to share with you my project on crowd simulation over the Robotics Middleware Framework (RMF).

Introduction

The purpose of my project is to add some “trouble” to the multi-robot system to make the simulation scenarios more realistic, as handling complex scenarios can be a huge boost to system robustness. Crowd simulation is itself quite a huge topic (a good explanation can be found here), and luckily Menge provides an open-source framework that solves the path planning and collision avoidance problems using a finite state machine (FSM). My contribution to this project was creating an easy-to-use integration of Menge into the RMF demos and handling the actor-related plugins to visualise the simulation.

Integration of Menge

Menge requires two kinds of configuration files to perform crowd simulation: a navmesh file (.nav) and FSM config files (behavior.xml, scene.xml).

  1. Navmesh file

The whole crowd simulation is established on a connection graph, and the navmesh provides a geometry format that defines this connection graph using convex polygons. However, creating the navmesh for each scenario geometrically can be heavy work without a general method. I was inspired by the robot lanes defined in traffic-editor and proposed a feasible way to generate the navmesh from “human lanes” with a certain width, which calculates the convex polygon vertices from a set of connected rectangles (see the sketch after this list). The “human lane” is integrated into the traffic-editor GUI to make it easier to construct the navmesh.

  2. FSM config files

Each human is abstracted as a circle moving within the navmesh defined above. The FSM config files define how a human will transition from one goal to another globally, and how they will react if a collision is about to happen locally. To make the configuration user-friendly, a panel is integrated into the traffic-editor GUI.
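As promised above, here is a small sketch of the lane-to-rectangle step behind the navmesh generation; the traffic-editor implementation additionally merges the rectangles of connected lanes into convex polygons.

```cpp
// Hedged sketch: expand a lane segment into its rectangle corners.
#include <array>
#include <cmath>

struct Point2D { double x; double y; };

// Expand a lane segment from p0 to p1 with the given width into the four
// corner vertices of its rectangle, ordered around the rectangle.
// Assumes p0 != p1 (a zero-length lane has no direction).
std::array<Point2D, 4> laneToRectangle(
  const Point2D & p0, const Point2D & p1, double width)
{
  const double dx = p1.x - p0.x;
  const double dy = p1.y - p0.y;
  const double len = std::hypot(dx, dy);

  // Unit normal perpendicular to the lane direction.
  const double nx = -dy / len;
  const double ny = dx / len;
  const double half = width / 2.0;

  return {
    Point2D{p0.x + nx * half, p0.y + ny * half},
    Point2D{p0.x - nx * half, p0.y - ny * half},
    Point2D{p1.x - nx * half, p1.y - ny * half},
    Point2D{p1.x + nx * half, p1.y + ny * half},
  };
}
```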

The whole process of configuring the crowd simulation is:

You can find this integration in:

Actor plugins for Gazebo and Ignition Gazebo

The actor plugins are responsible for visualising the simulation results from Menge in Gazebo and Ignition Gazebo.

  1. Modification of Menge

In order to make the humans react to the robots in the scenario, we modified the Menge library a bit with the idea of an “external agent”. While the plugin updates the positions of “internal agents” from Menge to Gazebo, it also updates the positions of “external agents” from Gazebo to Menge, so that the robots can be involved in the simulation.

  2. Animation manipulation
    In Gazebo 11 and Ignition Gazebo, an actor can be spawned with multiple animations. An animation switch is added for each actor so that the actor is “looking at their phone” when not moving and switches back to “walking” when heading to a target.

You can find related documents in:

You can find one of the demo videos here. In this video, the robot system is requested to make a delivery among the crowds. The humans avoid collisions with the robots (from the Menge crowd simulation), and a robot triggers an emergency stop if there are humans in front of it within a certain range (from the RMF robot action).

3 Likes