Development topics for Aerial Robotics - Indoor Navigation

Hey,
As a part of ROS-Aerial, we’re starting an indoor navigation subcommittee. Let’s use this thread to start ideating on existing projects that could be relevant for us to work on, as well as any new packages that could help push out an open-source autonomy stack for indoor navigation.

Prior research landscape mapping of autonomy stacks:

Informal discussion Discord thread: Discord

GitHub Kanban page to track development tickets:


Thanks Mayank for starting this thread!

I guess my closest experience with this is trying to connect Nav2 to the Crazyflie and its Multiranger deck, which I partly succeeded at (and presented at ROSCon last year). At least there were some lessons learned from that:

  • There is currently no great solution for indoor 3D navigation. The UAV needs to act like a ground robot, which limits its capabilities.
  • I had a lot of trouble making Nav2 work with sparse sensing (see the sketch after this list). Of course, a ‘lidar’ with only four range sensors is pushing the limits, but there are solutions that haven’t been ported to ROS that can do SLAM with this. Also, indoor navigating quadcopters aren’t big and are perhaps forced to carry less-than-ideal ranging sensors.
  • Visual odometry (either visual SLAM or a downward-facing camera for optical flow) is not well integrated in the ROS framework (at least as far as I know), or at least there are no default packages for it. It would be nice to get it to the same level as wheel-odometry solutions.
  • Simulation realism: aerial robot dynamics could be better (but perhaps a separate subcommittee is needed for this).
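
To make the sparse-sensing point concrete, here is a minimal sketch of the kind of glue node this takes: republishing four single-beam ToF ranges as a very coarse sensor_msgs/LaserScan so Nav2’s costmap can consume them. The /multiranger/* topic names and the base_link frame are assumptions for illustration, not Crazyflie or Nav2 defaults:

```python
# Sketch: republish four single-beam ToF ranges as a sparse LaserScan so
# Nav2's costmap can treat them like a (very coarse) planar lidar.
# Topic names and frame_id are assumptions, not Crazyflie/Nav2 defaults.
import math

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan, Range

SECTORS = {'front': 0, 'left': 1, 'back': 2, 'right': 3}  # beams 90° apart


class SparseRangesToScan(Node):
    def __init__(self):
        super().__init__('sparse_ranges_to_scan')
        self.ranges = [float('inf')] * len(SECTORS)
        for name, idx in SECTORS.items():
            self.create_subscription(
                Range, f'/multiranger/{name}',
                lambda msg, i=idx: self._store(i, msg), 10)
        self.pub = self.create_publisher(LaserScan, '/scan', 10)
        self.create_timer(0.1, self._publish)  # publish at 10 Hz

    def _store(self, idx, msg):
        self.ranges[idx] = msg.range

    def _publish(self):
        scan = LaserScan()
        scan.header.stamp = self.get_clock().now().to_msg()
        scan.header.frame_id = 'base_link'
        scan.angle_min = 0.0
        scan.angle_max = 1.5 * math.pi      # four beams, 90° apart
        scan.angle_increment = 0.5 * math.pi
        scan.range_min = 0.02
        scan.range_max = 4.0                # VL53L1x-class ToF limit, roughly
        scan.ranges = list(self.ranges)
        self.pub.publish(scan)


def main():
    rclpy.init()
    rclpy.spin(SparseRangesToScan())
```

With only four beams the resulting costmap stays extremely sparse, which is exactly the limitation described above.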

Don’t take this as me judging the current frameworks in place, don’t get me wrong. I’m just indicating some pain points I faced trying to ‘force’ a ROS package to work on a platform it wasn’t designed for, which created some nice insights. Also, I’m sure that just one summer project working on this doesn’t make me an expert on indoor navigation of UAVs, but I like making demos and playing around with it :)

These were the GitHub projects I used:


Yes, VIO is a topic that I’m interested in as well. As far as I know, several implementations relied on sensors that could do on-board VIO, like the Intel RealSense T265 tracking camera (now discontinued), and consumed that output directly in the autonomy ROS packages. I have not really seen packages for taking raw sensor data and running VIO on the host side for aerial use cases.

One approach could be to look into the NVIDIA Isaac ROS packages, like the one for visual SLAM, and see whether it already works in 3D or can be extended for aerial use cases.

https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_visual_slam/index.html
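
Whatever backend ends up being used, host-side VIO needs roughly the same plumbing: buffer the high-rate IMU stream and hand each new image to the estimator together with all IMU samples seen since the previous frame. A minimal sketch, where the topic names and the vio_backend callback are placeholders, not any particular package’s API:

```python
# Sketch of the host-side plumbing a VIO backend needs: collect every
# high-rate IMU sample that arrives between consecutive camera frames
# and hand the batch to the estimator together with the new image.
# Topic names and the vio_backend callback are placeholders.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, Imu


class VioInputBuffer(Node):
    def __init__(self, vio_backend):
        super().__init__('vio_input_buffer')
        self.vio_backend = vio_backend  # callable(image_msg, imu_batch)
        self.imu_buffer = []
        self.create_subscription(Imu, '/imu', self.on_imu, 200)
        self.create_subscription(Image, '/camera/image', self.on_image, 10)

    def on_imu(self, msg):
        self.imu_buffer.append(msg)

    def on_image(self, msg):
        # Hand over all IMU samples seen since the previous frame; a real
        # estimator preintegrates this batch between the two images.
        batch, self.imu_buffer = self.imu_buffer, []
        self.vio_backend(msg, batch)


def main():
    rclpy.init()
    # Dummy backend: just report how many IMU samples accompanied a frame.
    rclpy.spin(VioInputBuffer(lambda img, imu: print(len(imu), 'IMU msgs')))
```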

Another concern I have is how to tackle varying hardware/resource requirements, since autonomy companion computers can vary a lot in capability.

One potential way could be to figure out a simulation environment usable for such investigations, which would let us run at least proof-of-concepts of such packages on laptops/PCs and then later figure out how they perform on specific hardware.

On the hardware-specific front, there have been some interesting recent implementations of VIO/SLAM with Luxonis OAK-D cameras that I’ve come across.
https://docs.luxonis.com/en/latest/pages/slam_oak/

There are also a bunch of cool examples of in-camera compute for localization that people have pulled off.
https://docs.luxonis.com/en/latest/pages/oak_on_drones/

From working with custom and prebuilt (like the T265) sensors for VIO, I observed the following:

  • The T265 has some issues with velocity updates above 7–8 m altitude, but it provides some good features out of the box, like a simple implementation of place recognition, which can correct pose drift. It is also a stereo-camera-based tracking system that provides depth for features while the camera is static. (Sadly discontinued.)

  • I used the OAK-D Pro with some VIO algorithms. The issue I faced with the OAK was IMU message arrival consistency, which can reduce the robustness of VIO algorithms (though maybe I still have to test it properly, since it has an extensive custom API, which requires more testing time).

  • From my experience with custom VIO sensors:

    • VIO works best if the camera is hardware-synced with the IMU.
    • On the other hand, sampling the IMU at a higher rate, in the 200–500 Hz range, with software sync is good too (see the sketch after this list).
    • Also, the cost of the initial setup can be as low as the cost of a global-shutter camera with a wide FOV, since the autopilot already has an IMU.
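
For the software-sync case, the usual trick is to resample the 200–500 Hz IMU stream onto each camera frame’s timestamp by linear interpolation between the two surrounding samples. A standalone sketch, with the data layout assumed purely for illustration:

```python
# Sketch of "software sync": with no hardware trigger, resample the
# 200-500 Hz IMU stream onto a camera frame's timestamp by linear
# interpolation between the two surrounding IMU samples.
# Timestamps in seconds; the data layout is assumed for illustration.
import numpy as np


def imu_at(t_frame, imu_t, imu_gyro):
    """imu_t: (N,) sorted timestamps; imu_gyro: (N, 3) angular rates."""
    i = np.clip(np.searchsorted(imu_t, t_frame), 1, len(imu_t) - 1)
    # Interpolation weight between samples i-1 and i.
    w = (t_frame - imu_t[i - 1]) / (imu_t[i] - imu_t[i - 1])
    return (1.0 - w) * imu_gyro[i - 1] + w * imu_gyro[i]


# Example: IMU at 400 Hz, camera frame timestamped between two samples.
t = np.arange(0.0, 1.0, 1.0 / 400.0)
gyro = np.random.randn(len(t), 3) * 0.01
print(imu_at(0.50125, t, gyro))
```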

From the perspective of performance vs compute:

  • open_vins performs best in comparison to other SOTA packages like vins_fusion, and it also gets continuous support and updates from the developers.
  • open_vins can run on a single core of an RPi 5 at 20 fps at 640p, since (based on my limited understanding) it uses KLT feature tracking and a Multi-State Constraint Kalman Filter (MSCKF), while vins_fusion uses bundle-adjustment optimization, which is more CPU-intensive (see the KLT sketch after this list).
  • But vins_fusion provides some good features that might be necessary sometimes, like robust VIO initialization using SfM, while open_vins requires suitable IMU excitation.
  • Both have multi-camera support, either in a stereo configuration or as multiple monocular cameras.
  • Both can estimate the sensor intrinsics and extrinsics online, provided the offline calibration is at least okay, if not great.
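
As a rough illustration of why the KLT front-end keeps open_vins cheap: corners are detected once and then only tracked frame-to-frame, instead of being re-detected and matched every frame. A generic OpenCV sketch of that detect-then-track loop (not open_vins’s actual code):

```python
# Minimal KLT (Lucas-Kanade) detect-then-track loop, the kind of cheap
# front-end described above: detect corners once, then only track them
# frame-to-frame. Generic OpenCV sketch, not open_vins's actual code.
import cv2

cap = cv2.VideoCapture(0)  # any camera or video file
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=10)
while True:
    ok, frame = cap.read()
    if not ok or prev_pts is None or len(prev_pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade optical flow on the existing corners.
    pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    prev_pts = pts[status.ravel() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    # A real front-end re-detects corners here when too few survive.
```

The MSCKF back-end then runs on top of tracks like these, which is much lighter than re-optimizing a bundle of keyframes every step.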

From a simulation perspective, I have only used RotorS extensively. In my opinion, it provides an excellent platform to start with for indoor navigation. I’m unsure whether it will be great for VIO-type work, though.

I agree that 3D navigation is a mess right now; there is some excellent open-source software out there, but the issue comes down to integration and testing, IMO.

Agilicious provides good software for controlling quadrotors at high speed, but there is nothing regarding avoidance/local planning. They generate time-optimal trajectories and feed them to the controller (also, software access is limited, since it is not openly available to everyone).
Then there is the Kumar Robotics software, which provides everything needed, or at least they claim to integrate everything from global planning to local planning, localization and control.
Then there is teach-repeat-plan from the HKUST Aerial Robotics Group, which also integrates everything.

However, reproducing these setups without the constant support of the original software developers can be a tremendous task.

Currently, for the things I do in terms of 3D navigation, I am integrating different open-source components with some custom-built ones, since a big issue comes down to compute cost, performance and form factor, which requires designing the architecture with all of those in mind.


I’m not sure if we should be concerned about that. I was thinking of it as a ros-navigation-style approach if possible (general purpose).
We can specify what kinds of sensors and data are required, and those can be utilized on the backend (a hypothetical sketch of such a contract is below).
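
To illustrate what such a contract could look like, here is a hypothetical sketch in the spirit of how Nav2 declares its expected inputs; every name in it is invented for illustration:

```python
# Hypothetical sketch of a declared sensor "contract" for a general-
# purpose indoor-nav stack, in the spirit of Nav2 declaring its expected
# inputs: the stack names the interfaces it consumes, and the backend
# (real sensor or simulation) provides them. All names are invented.
from dataclasses import dataclass


@dataclass(frozen=True)
class RequiredInput:
    topic: str          # where the backend must publish
    msg_type: str       # ROS 2 message interface
    min_rate_hz: float  # slowest acceptable publish rate


INDOOR_NAV_INPUTS = [
    RequiredInput('/scan', 'sensor_msgs/msg/LaserScan', 5.0),
    RequiredInput('/odom', 'nav_msgs/msg/Odometry', 30.0),
    RequiredInput('/imu', 'sensor_msgs/msg/Imu', 100.0),
]
```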

Yes, that could be a good start. But for that we also need input from the other subcommittees (mainly outdoor nav, I guess).

I just wanted to add that we have a page of autonomy stacks for aerial robotics in the aerial robotics landscape:

However, we do have to try to focus on packages that have been ported to ROS 2 (since ROS 1 only has one more year to go), and unfortunately I think the Kumar Robotics software has not been ported yet.

Aerostack2 has been mentioned quite a lot; it was presented at our meetings last year and recently at the Gazebo Sim community meeting. However, has anybody used its indoor navigation part yet?

By the way, should we add a sensor section to the hardware page of the aerial robotics landscape? I’m seeing some good tips here, but we don’t have a list of these anywhere.

Not yet. I am planning to test it around this summer with the CF2.1 (as they have used in their example).

Seems like a good idea.


@gajena I’ve added the packages and hardware you’ve recommended to the robotics landscape here:

There seem to be replacements for the Intel RealSense T265 these days; I haven’t tried them out myself, but I’ve noted them in the list.


A sensor section, with a focus on ROS 2-supported sensors for aerial applications, would be a useful addition.


This work from @srmainwaring 's presentation today looked very relevant to try and test out!


A similar scenario of GPS-denied navigation, though a bit more complicated, that I have seen is in forest regions with canopies and thick vegetation, which is an outdoor scenario. VIO could be used for navigation in such cases, but it is prone to a lot of trouble due to intensity variations in outdoor scenes.

The approaches for these kinds of navigation tasks are also very interesting to look into. I apologize if this sounds irrelevant here, but I found this problem to be closer to this thread; the 3D Navigation thread was talking more about GPS-based navigation.

A list of ROS-compatible indoor navigation packages is available here: aerial_robotic_landscape/docs/aerial_autonomy_stacks.md at main · ROS-Aerial/aerial_robotic_landscape · GitHub

Feel free to add your own packages if you’re working on anything new and exciting!