
ROS + Kinect for autonomous grasp

I’m implementing a visual computing system on an anthropomorphic robot to automatically detect objects and grasp them. Like this:

I want to develop a simulation using ROS. Should I use Gazebo for this as well?
Do you know of any material that could help me at the beginning of this journey?

Thank you in advance.

I would divide your problem into subproblems as follows.

First, for object recognition (note: it will not give a precise pose, but it will provide you with bounding boxes around the different objects in a cluttered scene): .
ROS wrapper: .
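To give an idea of what a detector like that hands you, here is a hedged sketch of a typical first step: filter the detections by confidence and keep the best box for the object you want to grasp. The `(label, score, bbox)` tuple format is a stand-in I made up for illustration, not the actual message type of any particular ROS wrapper.

```python
# Sketch: filtering 2D detections from an object detector.
# The (label, score, bbox) tuples are a made-up stand-in for whatever
# message your detector's ROS wrapper actually publishes (a list of
# bounding boxes with class names and confidence scores).

def best_detection(detections, wanted_label, min_score=0.5):
    """Return the highest-confidence bounding box for `wanted_label`,
    or None if nothing passes the confidence threshold."""
    candidates = [
        (score, bbox)
        for label, score, bbox in detections
        if label == wanted_label and score >= min_score
    ]
    if not candidates:
        return None
    return max(candidates)[1]  # bbox of the best-scoring candidate

# Example: boxes are (x_min, y_min, x_max, y_max) in pixels.
detections = [
    ("cup", 0.91, (120, 80, 180, 160)),
    ("cup", 0.40, (300, 90, 350, 150)),   # below threshold, ignored
    ("bottle", 0.88, (200, 60, 240, 200)),
]
print(best_detection(detections, "cup"))  # -> (120, 80, 180, 160)
```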

From there you can start with the pose estimation.

You can find some pose estimation libraries here: 1-
2- : this one also has a ready-made Gazebo simulation (I didn’t test it yet).
There are many more that I am sure are available, but I either don’t remember them or am not aware of their existence.
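As a minimal example of going from a detection to a rough position (a hedged sketch, not taken from any of the libraries above): take the bounding-box centre pixel, read the Kinect depth there, and back-project it through the pinhole camera model using the intrinsics from your camera's `CameraInfo`. The intrinsic values below are placeholders, not real Kinect calibration.

```python
# Sketch: back-projecting a pixel + depth into a 3D point in the camera's
# optical frame using the pinhole model. fx, fy, cx, cy come from the
# camera's CameraInfo; the values used below are placeholders.

def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Return (X, Y, Z) in metres in the optical frame for pixel (u, v)
    observed at `depth` metres."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Centre of a hypothetical bounding box (120, 80, 180, 160), 0.8 m away:
u, v = (120 + 180) // 2, (80 + 160) // 2
point = pixel_to_point(u, v, depth=0.8, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(point)
```

This only gives a translation; a full 6-DoF grasp pose also needs an orientation, which is exactly what the pose estimation libraries above are for.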

Currently I am trying to develop my own object pose estimation, as I wasn’t satisfied with most of the available libraries, but that is a side project, so it will take some time to come to life.

MoveIt can be used for path and motion planning of the arm.
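Once you have a grasp pose, MoveIt's Python interface (`moveit_commander`) can plan and execute the arm motion. A minimal sketch, assuming a planning group named "arm" (use whatever group your SRDF defines); the `pregrasp_pose` helper, the standoff distance, and the target numbers are made up for illustration:

```python
# Sketch: planning to a grasp pose with MoveIt's Python interface.
# The "arm" group name and all numeric targets are assumptions.

def pregrasp_pose(x, y, z, standoff=0.10):
    """Offset the target upward by `standoff` metres so the gripper
    approaches the object instead of colliding with it (hypothetical
    helper; a real approach offset depends on your gripper and frame)."""
    return (x, y, z + standoff)

def main():
    # These imports require a sourced ROS + MoveIt workspace,
    # so they live inside main() in this sketch.
    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import Pose

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("grasp_demo")
    arm = moveit_commander.MoveGroupCommander("arm")  # group name is an assumption

    target = Pose()
    target.position.x, target.position.y, target.position.z = \
        pregrasp_pose(0.4, 0.0, 0.2)
    target.orientation.w = 1.0  # identity orientation; adjust for your gripper

    arm.set_pose_target(target)
    arm.go(wait=True)           # plan and execute
    arm.stop()
    arm.clear_pose_targets()

# Call main() from a running ROS environment (roscore + MoveIt loaded).
```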

I also urge you to follow up on the work of the teams that participated in the Amazon Picking Challenge. Here you will find a list of those teams’ repos: .

This paper is also nice: .
I hope this helps.


Hi Caio,
I don’t know if this already-working setup made by Shadow Robot could be useful for you. It uses a Kinect + UR5 + Shadow Hand.
It works online and everything is already set up, including a Python API for grasping. Of course, you can modify it at will.