ROS + Kinect for autonomous grasping

I’m implementing a visual computing system on an anthropomorphic robot to automatically detect objects and grasp them, like this: https://www.youtube.com/watch?v=_xM9RQAbxEY

I want to develop a simulation using ROS. Should I use Gazebo for this as well?
Do you know of any material that could help me at the beginning of this journey?

Thank you in advance.

I would divide your problem into a few subproblems, as follows.

First, for object recognition: https://pjreddie.com/darknet/yolo/ . It will not give you a precise pose, but it will provide bounding boxes around the different objects in a cluttered scene.
ROS wrapper: https://github.com/kunle12/dn_object_detect .
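To give an idea of how you’d consume those detections on the ROS side, here is a minimal sketch of a subscriber node. The topic name and message type are assumptions (many YOLO wrappers define their own messages), so check the wrapper’s actual output with `rostopic list` / `rostopic info` first:

```python
#!/usr/bin/env python
# Minimal sketch of a detection consumer. Assumptions: the topic name and the
# use of vision_msgs/Detection2DArray as a stand-in message type -- the
# wrapper you use may publish its own message, so adapt accordingly.
import rospy
from vision_msgs.msg import Detection2DArray

def on_detections(msg):
    for det in msg.detections:
        box = det.bbox  # 2D bounding box: center (x, y) plus size_x, size_y
        rospy.loginfo("object at (%.0f, %.0f), %.0fx%.0f px",
                      box.center.x, box.center.y, box.size_x, box.size_y)

if __name__ == "__main__":
    rospy.init_node("grasp_target_listener")
    # Topic name is an assumption; remap it to the wrapper's real output.
    rospy.Subscriber("/detected_objects", Detection2DArray, on_detections)
    rospy.spin()
```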

From there you can move on to pose estimation.

You can find some pose estimation libraries here:
1- http://wiki.ros.org/tabletop_objects
2- http://wiki.ros.org/cob_object_detection : this one also comes with a ready-made Gazebo simulation (I haven’t tested it yet)
I’m sure there are many more available that I either don’t remember or am not aware of.
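Whichever library you end up with, the pose it gives you will be expressed in the camera’s optical frame, so you’ll usually need to transform it into the arm’s planning frame before grasping. Here is a minimal sketch using tf2; the frame names and the dummy pose are assumptions, use whatever your URDF actually defines:

```python
#!/usr/bin/env python
# Sketch: transform an object pose from the Kinect's optical frame into the
# robot's planning frame. Frame names are assumptions from a typical setup.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PoseStamped with tf2's transform()
from geometry_msgs.msg import PoseStamped

rospy.init_node("pose_to_base_frame")
tf_buffer = tf2_ros.Buffer()
tf2_ros.TransformListener(tf_buffer)
rospy.sleep(1.0)  # give the listener a moment to fill the buffer

pose = PoseStamped()
pose.header.frame_id = "camera_rgb_optical_frame"  # assumption: Kinect frame
pose.header.stamp = rospy.Time(0)                  # "latest available"
pose.pose.position.x = 0.1                         # dummy pose from estimator
pose.pose.orientation.w = 1.0

# Transform into the frame the planner works in (assumption: "base_link").
pose_in_base = tf_buffer.transform(pose, "base_link",
                                   timeout=rospy.Duration(1.0))
rospy.loginfo("object in base_link: %s", pose_in_base.pose.position)
```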

Currently I’m trying to develop my own object pose estimation, as I wasn’t satisfied with most of the available libraries, but that is a side project, so it will take a while to come to life.

MoveIt can be used for path and motion planning of the arm: http://moveit.ros.org/ .
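As a rough idea of what that looks like from Python, here is a minimal sketch using the `moveit_commander` interface; the planning group name "arm" and the target pose are assumptions that depend on your MoveIt config:

```python
#!/usr/bin/env python
# Sketch: move the end effector to a grasp pose with MoveIt.
# The planning group name ("arm") is an assumption from a typical config.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("simple_grasp_motion")

arm = moveit_commander.MoveGroupCommander("arm")

target = Pose()
target.position.x = 0.4   # dummy grasp pose, e.g. from the tf step above
target.position.z = 0.3
target.orientation.w = 1.0

arm.set_pose_target(target)
success = arm.go(wait=True)   # plan and execute
arm.stop()                    # ensure no residual movement
arm.clear_pose_targets()
rospy.loginfo("motion %s", "succeeded" if success else "failed")
```

For an actual grasp you’d follow this with a pre-grasp approach, gripper close, and retreat, which MoveIt’s pick-and-place pipeline can also handle for you.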

I also urge you to follow up on the work of the teams that participated in the Amazon Picking Challenge. Here you will find a list of those teams’ repos: https://github.com/amazon-picking-challenge .

This paper is also nice: https://pdfs.semanticscholar.org/57cb/dae58b8718db0209885a5fb170eaddd619b9.pdf .
I hope this helps you get started.


Hi Caio,
I don’t know if this already-working setup made by Shadow Robot could be useful for you. It uses a Kinect + UR5 + Shadow Hand.
It works online, and everything is already set up, including a Python API for grasping. Of course, you can modify it at will.

Cheers