
Part placement


#1

I’m curious about the correct process for getting the position at which to place the part on either tray for qual1.

From my understanding, you can use tf to get the current pose relative to the world frame:

(trans, rot) = self.tf_listener.lookupTransform('world', frame, rospy.Time(0))

Now I assumed I could simply offset trans using the values supplied in the order, i.e.

xyz: [0.1, -0.2, 0]

However, for the position offset:

xyz: [0.15, 0.15, 0]

The arm cannot find a goal state using MoveIt; it seems to be out of reach. Is it designed that way? In the end I’ve just shortened the position to bring it within reach.
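For concreteness, this is roughly what I’m doing (just a sketch; the frame name and node setup are filled in for illustration):

import rospy
import tf

rospy.init_node('part_placement')
tf_listener = tf.TransformListener()

# Latest transform from the world frame to the AGV load point frame.
tf_listener.waitForTransform('world', 'agv1_load_point_frame', rospy.Time(0), rospy.Duration(5.0))
(trans, rot) = tf_listener.lookupTransform('world', 'agv1_load_point_frame', rospy.Time(0))

# Naive offset: add the order's xyz values directly in world coordinates,
# ignoring the tray's orientation.
offset = [0.15, 0.15, 0]
goal_position = [trans[i] + offset[i] for i in range(3)]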

To be honest, I had a look at the parts tutorial and the figures seem to point to different placements than the ones I achieve at the moment anyway.


#2

Hi @cdrwolfe,

The positions specified in the order should be reachable by the arm. The issue might be that you are using the wrong reference frame (the TF frame of the kit tray is not necessarily the same as the agv{N}_load_point_frame - you can use the logical camera to get the kit tray TF frame).

Alternatively, the issue might be with how you are calculating the pose in the world frame. When calculating the goal pose in the world frame you must take the orientation of the tray into account so that you move “in the right direction”. You can use the transform method of a tf2_ros.Buffer() object to transform a pose from another frame into the world frame.
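For example, something along these lines (a sketch only; the kit tray frame name below is a placeholder, in practice you would use the frame reported by the logical camera):

import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PoseStamped with tf2's transform() method
from geometry_msgs.msg import PoseStamped

rospy.init_node('tray_goal_example')
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)
rospy.sleep(1.0)  # give the buffer a moment to fill

# Goal pose expressed in the kit tray frame, using the offsets from the order.
goal = PoseStamped()
goal.header.frame_id = 'kit_tray_frame'  # placeholder: use the frame from the logical camera
goal.header.stamp = rospy.Time(0)  # use the latest available transform
goal.pose.position.x = 0.15
goal.pose.position.y = 0.15
goal.pose.orientation.w = 1.0

# transform() accounts for the tray's orientation, so the offset is
# interpreted in the tray frame rather than directly in world coordinates.
goal_in_world = tf_buffer.transform(goal, 'world', rospy.Duration(1.0))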

Keep in mind that the poses of the parts on the tray are not evaluated for the first qualifier; they just need to be anywhere on the tray.

By the way, it looks like you are using the Python interface to the deprecated tf library. If this is working for you then that’s fine, but ARIAC does use tf2. If you run into any issues (e.g. this issue) then you may consider switching to the tf2 library. We’ve updated the documentation to clarify this.


#3

Thanks dhood,

Yes, I was using the AGV frame. However, this begs the question: why do you need a camera to find the kit tray frame in the first place?

It costs points to use a camera. What’s to stop anyone from simply adding a camera, getting the tray’s offset from the AGV frame, hard-coding the necessary transform, and then removing the camera later? Of course, I assume this is possible :).

The pose was calculated as per my last post, using lookupTransform with frame = ‘agv1_load_point_frame’.

I’ll upgrade to TF2, thanks for the tip.


#4

That’s a valid question – the distinction is not very clear in the first qualifier. The AGV frame specifies the point in the workcell where trays will be when the AGV is docked, but AGVs may or may not be there. In qual1 the AGVs don’t move, but in future tasks, the AGVs will move, and so the sensors can be used to detect when the AGV has returned with a new kit tray.

The pose I was referring to is the one that specifies where to move the arm to, i.e. what you do once you have looked up the pose of the AGV frame. This is where the transform method can come in handy.


#5

Hi dhood,

Thought I would quickly say thanks for putting in the pull request example for the TF2 transformation.

Onwards to Qual2 now :).

Cdr


#6

Hey, no problem! We’re looking to add it as a general TF2 tutorial but I’m glad that you found it in the meantime. Thanks for the message.