
Avoiding Small Obstacles on the Ground


Hello, I am working on a differential-drive robot that has to navigate through a very unstructured environment. One of the biggest challenges we face now is avoiding obstacles less than 3 cm (~1 inch) in height. These obstacles are:

1. Power cables.
2. Small supporting structures of furniture.

As none of the depth sensors we have tried to date is capable of measuring heights accurately in this range, my suggestion was to use an image-processing-based approach to detect obstacles of this nature. What sort of methodology would be best for substantially accurate detection of such obstacles?


Hi @RajithaH,

According to ROS community support guidelines, this question would be better asked on ROS Answers.

Please do not post questions on ROS Discourse; they should go to ROS Answers.

ROS Discourse is for news and general-interest discussions. ROS Answers provides a forum which can be filtered by tags to make sure the relevant people are included without overloading everyone.


I actually think this topic fits here. It is more of a general discussion than a question with a single answer.

On the topic:

From my experience, the 3D sensors on the market (Asus Xtion, Orbbec Astra, Kinect) should be just about accurate enough to go down to 0.3 cm.

I would say the main problem is the calibration of the sensors / the sensor pose, and I have run into cases where the ground is actually not flat / horizontal.

On the other hand, it is most likely easier to make the robot simply able to drive over such small obstacles. You will most likely want that anyway, e.g. for door sills.
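A minimal sketch of the ground-filtering idea mentioned above, assuming a depth camera that yields an N×3 point cloud in metres (the function name and thresholds here are illustrative, not from this thread; a real ROS pipeline would more likely use PCL's RANSAC plane segmentation):

```python
import numpy as np

def fit_ground_plane(points, n_iters=200, threshold=0.005, seed=0):
    """RANSAC fit of a plane to an Nx3 point cloud (metres).

    Returns (normal, d) of the best plane n.x + d = 0 and a boolean
    inlier mask; points within `threshold` of the plane count as
    ground, the rest are obstacle candidates.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:              # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        mask = dist < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model, best_mask
```

Points outside the inlier mask but only a few centimetres above the plane would then be low-obstacle candidates. Re-fitting the plane per frame also partly sidesteps the "ground is not flat / horizontal" issue.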


Sorry, I guess I was being overzealous…

Adding to the discussion:

Indeed, I can detect most objects that are 3 cm in height using the usual
sensors. And, as @AlexReimann stated, the biggest issue I had was properly
filtering the ground, as many sensors have a distortion in their measurements,
and that error grows with distance. I would not recommend
the Orbbecs for this task, as they suffer from a drift problem that would
invalidate the calibration after a couple of hours.

But if you want to go under 3 cm, that would be a bit harder… Maybe
installing the sensor very close to the ground, looking forward?
Or, if you want to go with image processing, it would be relatively easy to
detect objects using color segmentation, for example.


Small correction: I guess you meant 3 cm (as per the original question), not 0.3 cm, right?


Hi @RajithaH,
I agree with @AlexReimann's answer on the topic. We have had good experiences with structured light (e.g. Asus Xtion) and time-of-flight (e.g. Kinect v2) regarding the precision for mapping rough terrain. Here's a video of one experiment, and our mapping software is available here.


Nope, 0.3 cm. I might be stretching it a bit with that, but I did some tests with the Asus Xtion before where I got it to reliably detect obstacles of around ~1 cm without trying too hard (calibration video, repo or more up-to-date repo; it basically comes from this paper).
But this actually turned out to be infeasible, because our sensor is mounted quite high and the uneven ground (at least in our office) tilts the robot so much that the variance we get from the ground exceeds the sensor accuracy :’(

With the Orbbec you probably have to look into the IR image calibration and see if you need additional calibration (and if you can figure out a way to do that, which is probably non-trivial).
I heard that some of the Intel guys are working on doing obstacle segmentation from ground by vision / with their 3D sensor. Maybe contact them.


I felt this was an open-ended discussion, better suited to a discussion forum than an issue with a particular ROS system. Regards


Just an idea: you could try to establish a known colour or texture of the floor using computer vision, then avoid anything that doesn’t look like the floor. That’s what I do when I walk cautiously.
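One way to sketch this "looks like the floor" idea (my own toy construction, not from the thread): learn a coarse colour histogram from an image strip assumed to be floor, e.g. the strip directly in front of the robot, and score every pixel by how common its colour is in that histogram:

```python
import numpy as np

def floor_likelihood(img, floor_patch, bins=16):
    """Score pixels of `img` by how common their colour is in a patch
    assumed to contain only floor. Both inputs are uint8 RGB arrays.
    Returns a per-pixel score in [0, 1]; low = probably not floor.
    """
    ref = floor_patch.reshape(-1, 3).astype(int) * bins // 256
    hist = np.zeros((bins, bins, bins))
    np.add.at(hist, (ref[:, 0], ref[:, 1], ref[:, 2]), 1)
    hist /= hist.max()                   # normalise to [0, 1]
    q = img.reshape(-1, 3).astype(int) * bins // 256
    return hist[q[:, 0], q[:, 1], q[:, 2]].reshape(img.shape[:2])
```

Thresholding the score then gives a "not floor" mask without any training data beyond the current frame, at the cost of assuming the reference strip really is floor.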

Probably the most robust floor detector for a “very unstructured environment” would be a CNN trained to segment floor vs. not-floor. If you have the budget, one way to get a training dataset is to have a large set of images manually labelled via Amazon Mechanical Turk.

Here are a couple of slides showing existing deep learning segmentation approaches and software that I think look promising.

Deep learning segmentation from Daniel Snider

Here’s a list of deep learning segmentation publications:
Another list of deep learning computer vision resources:


Consider also using the 3D information together with the image stream for the “not floor” detection. If you only consider points at floor height, that already filters out a lot of hard-to-classify stuff.
You might even be able to come up with something that does not use a machine-learning black box. I would look into frequency analysis of the images to differentiate between regular floor patterns and obstacles.
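The frequency-analysis idea could look something like this (a toy sketch of my own, not from the thread): score a greyscale patch by how concentrated its spectrum is. A regular floor pattern puts most of its energy into a few frequency peaks, while an obstacle spreads it out:

```python
import numpy as np

def spectral_peakiness(patch, k=8):
    """Fraction of (non-DC) spectral energy in the `k` strongest bins
    of a 2-D FFT. Close to 1 for a strongly periodic patch, small for
    irregular content."""
    f = np.abs(np.fft.fft2(patch - patch.mean())).ravel()
    f[0] = 0.0                         # drop the DC component
    f.sort()
    return f[-k:].sum() / (f.sum() + 1e-12)
```

A patch whose score drops well below the floor's baseline would then be flagged for a closer look, e.g. with the 3D data.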

It always depends on what you are doing, though; trying out machine learning for a research project is probably nice :stuck_out_tongue:


This was one of the approaches I had in mind. Thank you for sharing. I should try this :slight_smile: