Deep Neural Net Object Recognition Node with Monocular Camera - Package Announcement

We are pleased to announce our Deep Neural Net object detection node dnn_detect.

For robots to be useful servants, they need to be able to recognize objects so that actions on and with those objects can be programmed. For example, in our robotic cocktail waiter application, the robot has to be able to find people in the room to serve.

Because this is important functionality, we have developed a generalized deep neural net node that can recognize twenty common household objects using monocular camera data. The results look like this:
[Image: DNN detection results at the World Maker Faire]
[Image: DNN detection of a cat]

Our node uses the [Deep Neural Network module in OpenCV](https://docs.opencv.org/3.3.0/d2/d58/tutorial_table_of_content_dnn.html) to find a variety of objects using a pretrained model. The classes of the detected objects, their bounding boxes, and the confidence of each classification are published as a ROS topic. This allows the robot to interact with its environment in meaningful ways.
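As a sketch of how an application might consume these detections, here is a minimal example in plain Python. The `Detection` type and its field names are stand-ins for illustration, not the package's actual message definitions; check the dnn_detect repository for the real ones.

```python
from dataclasses import dataclass

# Stand-in for the detection message published by the node; the real
# message type and field names are defined in the dnn_detect package.
@dataclass
class Detection:
    class_name: str      # e.g. "person", "bottle", "cat"
    confidence: float    # classifier confidence in [0, 1]
    bbox: tuple          # (x_min, y_min, x_max, y_max) in pixels

def find_people(detections, min_confidence=0.5):
    """Return the bounding boxes of confidently detected people."""
    return [d.bbox for d in detections
            if d.class_name == "person" and d.confidence >= min_confidence]

# In a real node this list would arrive in a topic callback.
frame = [
    Detection("person", 0.91, (120, 40, 260, 400)),
    Detection("bottle", 0.33, (300, 200, 340, 310)),
]
print(find_people(frame))  # [(120, 40, 260, 400)]
```

In the cocktail-waiter example from the announcement, a callback like `find_people` is all that is needed to turn the published detections into serve targets.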

Have an issue, or an idea for improvement? Open an issue or PR on the GitHub repo.

This package will be part of the robots that we recently released via crowdfunding on Indiegogo.
https://indiegogo.com/projects/magni-build-your-robot-app-in-hours-not-months

The Ubiquity Robotics Team
https://ubiquityrobotics.com


Very interesting node, Dave!
I think this would be a very interesting subject for an interview on the ROS Developers podcast. Would you like to be interviewed on the podcast to explain how it works and how to use it?

Hello

In this command
roslaunch dnn_detect dnn_detect.launch camera:=/my_camera

what should the value of /my_camera be for a laptop's built-in USB camera?
How do I run it with the laptop's USB cam?

Anis,

You would need to run a camera node, such as usb_cam.

Then run this for the camera

rosrun usb_cam usb_cam_node

and this for dnn_detect

roslaunch dnn_detect dnn_detect.launch camera:=/usb_cam image:=image_raw

and look at the results with

rqt_image_view

If you have problems, please open an issue.

Jim

Great, it works out of the box now.
It would be good to update the wiki page documentation.
I just noticed that recognition is sometimes imperfect: I showed it a mobile phone and it said it was a bottle.

Yes, it reports both the object and the confidence that the object is what it says it is. It will continue reporting objects even when the confidence is quite low, frequently as low as 30%. If you want to improve specificity for your application, filter out objects that are reported with low confidence.
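That filtering can be as simple as thresholding on the reported confidence. A sketch, using plain `(class_name, confidence)` pairs as stand-ins for the fields the detector actually publishes:

```python
def filter_detections(detections, min_confidence=0.7):
    """Keep only detections whose reported confidence clears the threshold.

    `detections` is a list of (class_name, confidence) pairs, a stand-in
    for the fields published by the detector node.
    """
    return [(cls, conf) for cls, conf in detections if conf >= min_confidence]

# A low-confidence "bottle" (like the misidentified phone) is dropped.
raw = [("person", 0.88), ("bottle", 0.31), ("cat", 0.72)]
print(filter_detections(raw))  # [('person', 0.88), ('cat', 0.72)]
```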

It is a much better choice to have the algorithm report objects and be honest about how likely it thinks it is that they really are those objects, than to not report them at all.

Yes, absolutely. I love your podcast; I will follow up via email or phone.

Very nice work, Dave.
Did you consider using vision_msgs instead of new custom messages? That would make this and other detectors more interoperable; see also this discussion.

Sounds cool! We will look at it. We absolutely believe that we should try to follow standards laid down by others.

David

Hi, I want to modify the minimum confidence value. Should the command look like this?

roslaunch dnn_detect dnn_detect.launch camera:=/usb_cam image:=image_raw min_confidence:=0.7

based on section 1.1.4 of the dnn_detect wiki

Hi @francorzo,

Questions about the dnn_detect package should really be asked here: https://github.com/UbiquityRobotics/dnn_detect/issues/new

min_confidence is a parameter of the node. However, not all of the node’s parameters are exposed as arguments to the sample launch file. If you want to try different parameters, you should create your own launch file and add

<param name="min_confidence" value="0.7"/>

to the node. Hope this helps,
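For example, a minimal custom launch file wrapping the node might look like the following. This is a sketch: the node name, package, type, and topic remappings here are assumptions, so check the sample launch file shipped with dnn_detect for the actual values.

```xml
<launch>
  <!-- Hypothetical wrapper launch file; verify node pkg/type and the
       camera remapping against the sample launch file in the repo. -->
  <node name="dnn_detect" pkg="dnn_detect" type="dnn_detect" output="screen">
    <param name="min_confidence" value="0.7"/>
    <remap from="camera" to="/usb_cam"/>
  </node>
</launch>
```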

Jim

Thank you very much @jimv, I'll try that.