This project walks through the steps for developing a freespace segmentation model using several NVIDIA tools, including Isaac Sim, Isaac ROS, and the TAO Toolkit. Synthetic data is used to bootstrap training, given the lack of datasets available for the task, followed by transfer learning with the TAO Toolkit. Finally, the model is fine-tuned with a few real-world images and deployed with Isaac ROS.
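As an illustration of the fine-tuning stage, here is a minimal Python sketch of one possible way to assemble a training list that keeps the full synthetic set and adds a small sample of hand-labeled real images. The directory layout, file paths, and sample size are hypothetical and not the project's actual structure; whether to keep synthetic images in the fine-tuning set at all is a design choice.

```python
from pathlib import Path
import random

# Hypothetical directory layout (not the project's actual structure):
# synthetic images rendered in Isaac Sim, plus a small set of hand-labeled real images.
SYNTHETIC_DIR = Path("data/synthetic/images")
REAL_DIR = Path("data/real/images")

def build_finetune_list(num_real: int, seed: int = 0) -> list[Path]:
    """Combine the full synthetic set with a small sample of real images for fine-tuning."""
    synthetic = sorted(SYNTHETIC_DIR.glob("*.png"))
    real = sorted(REAL_DIR.glob("*.png"))
    random.Random(seed).shuffle(real)
    return synthetic + real[:num_real]

# How many real images are needed is something to experiment with
# (see the discussion below on outdoor environments).
train_images = build_finetune_list(num_real=50)
print(f"{len(train_images)} images in the fine-tuning set")
```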
Many thanks for your excellent work on this project! What is your expectation around using this approach in an outdoor environment, like a small garden, a lawn, or a farm plot?
Thank you for your interest in the project. We have tested only in indoor scenes; however, if you have real-world data collected in an outdoor environment to fine-tune the model, we believe you should be able to get it to work. You may have to experiment with the number of real-world images, though.
In addition to the reply above: for an outdoor environment such as a lawn, garden, or farm, freespace builds in the semantic understanding of what is walkable or drivable. Implicit in this is that the robot has legs or wheels, and is not airborne or in water.
The expectation would be that the network needs to be fine-tuned on real data, where the freespace labels correspond to the robot's capability, that is, what it can walk on or drive over. For example, one robot can walk over a garden hose, while a different robot cannot drive over it.
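To make this concrete, here is a minimal sketch of how the same semantic mask could be collapsed into different binary freespace masks depending on robot capability. The class names, IDs, and capability table are hypothetical and purely for illustration; they are not the project's actual label map.

```python
import numpy as np

# Hypothetical class IDs from a semantic segmentation mask (not the project's label map).
CLASS_IDS = {"lawn": 0, "path": 1, "hose": 2, "flower_bed": 3, "fence": 4}

# Which classes count as freespace depends on the robot's capability:
# a legged robot may step over a hose that a small wheeled robot cannot drive over.
FREESPACE_BY_ROBOT = {
    "legged": {"lawn", "path", "hose"},
    "wheeled": {"lawn", "path"},
}

def freespace_mask(semantic_mask: np.ndarray, robot: str) -> np.ndarray:
    """Collapse a multi-class semantic mask into a binary freespace mask (1 = traversable)."""
    traversable_ids = {CLASS_IDS[name] for name in FREESPACE_BY_ROBOT[robot]}
    return np.isin(semantic_mask, list(traversable_ids)).astype(np.uint8)

# Example: a tiny 2x3 mask containing lawn (0), hose (2), and fence (4).
mask = np.array([[0, 0, 2],
                 [0, 4, 2]])
print(freespace_mask(mask, "legged"))   # hose pixels counted as freespace
print(freespace_mask(mask, "wheeled"))  # hose pixels excluded
```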
Many thanks for your detailed answer. What you are saying makes very good sense and is a good approach to the problem. I was planning on using something like semantic edge detection to identify the boundary of lawn vs. road, etc., but what you describe is a better approach, although it does require data acquisition and labeling, which to some degree I was hoping to avoid.