Robotic Humanoid Hand

I installed visp_auto_tracker, and the first results show that it is more stable than ar_track_alvar and consumes only 80% of my 4 cores.

https://www.youtube.com/watch?v=5xYyCqz5Ruc

The bad things are that, out of the box, it only recognizes this big QR code, only up to about 40 cm in front of the camera, and the detected pose is not correct. But the ViSP online videos show better results, so I hope I can get there too.

You can also check chilitags: https://github.com/chili-epfl/chilitags It is based on OpenCV and has quite good performance. I use it on the Nao robot.

In the first visp_auto_tracker test I had printed a bad QR code (not square, no clean borders). I printed a new one, but it still does not get detected well, even though it fills most of the page:
https://www.youtube.com/watch?v=2WJsd6y7dTo

Perhaps there are some more options I can find.

@OkanAsik thanks, that looks cool. You also used very small tags in your videos; what is the smallest size it can detect? Is a 1-2 cm border size big enough?

@OkanAsik I installed chilitags and ros_markers; without doing any configuration it runs very well. My CPU is running at 100%, but the detection is still good:
https://www.youtube.com/watch?v=5ecTl4JUoIU

Is there a way to use another style of markers in rviz? I want to glue 5 tags onto a cube, but then there are a lot of these markers in a tiny space, which makes them hard to see.
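As an illustrative sketch (assuming the detected cube's frame is available on tf; the frame name "detected_cube" and topic "cube_marker" are placeholders), one could publish a single visualization_msgs/Marker cube at the object pose instead of showing every per-tag marker:

```python
#!/usr/bin/env python
# Minimal sketch (ROS 1 / rospy): publish one CUBE marker at the pose of a
# detected object frame instead of one marker per tag. The frame name
# "detected_cube" and the topic "cube_marker" are placeholders.
import rospy
from visualization_msgs.msg import Marker

rospy.init_node('cube_marker_publisher')
pub = rospy.Publisher('cube_marker', Marker, queue_size=1)
rate = rospy.Rate(10)

while not rospy.is_shutdown():
    m = Marker()
    m.header.frame_id = 'detected_cube'   # rviz places the marker via tf
    m.header.stamp = rospy.Time.now()
    m.ns = 'cube'
    m.id = 0
    m.type = Marker.CUBE
    m.action = Marker.ADD
    m.pose.orientation.w = 1.0            # identity pose in the cube frame
    m.scale.x = m.scale.y = m.scale.z = 0.02   # 20 mm cube
    m.color.r, m.color.g, m.color.b, m.color.a = 0.1, 0.6, 0.9, 1.0
    pub.publish(m)
    rate.sleep()
```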

Good software, thanks for the tip.

After hours and hours of testing I put together a markers_configuration_sample.yml file which represents a cube with 5 tags. I don't think the config is really correct, but it works so far; if someone could help me here, that would be nice. Here is the cube with chilitags. The cube has a 20x20 mm side length and the black border of each tag is 13x13 mm.

My config of this cube for chilitags is:
distal_phalanx:
    - tag: 15
      size: 13
      translation: [0., 0., 0.]
      rotation: [0., 0., 0.]
    - tag: 3
      size: 13
      translation: [16.5, 0., -3.5]
      rotation: [0., -90., 0.]
    - tag: 4
      size: 13
      translation: [13.0, 0., 13.0]
      rotation: [0., -180., 0.]
    - tag: 2
      size: 13
      translation: [-3.5, 0., 16.5]
      rotation: [0., 90.0, 0.]
    - tag: 14
      size: 13
      translation: [0., -3.5, 16.5]
      rotation: [-90.0, 0., 0.]
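To sanity-check a config like this, a small numpy sketch can turn each translation/rotation into a transform and print where the tag centers end up in the object frame. It assumes millimetres, degrees applied in X-Y-Z order, and the tag origin at the corner given by translation; the exact conventions of chilitags/ros_markers may differ, so treat it only as a rough check that all tags lie on the faces of a 20 mm cube:

```python
# Rough sanity check of the cube config above (assumptions: translations in mm,
# rotations in degrees applied in X, Y, Z order, tag frame origin at the corner
# given by "translation", tag lying in its local x-y plane).
import numpy as np

TAG_SIZE = 13.0  # mm, black border of each tag

tags = {
    15: ([0.0,  0.0,  0.0],  [0.0,    0.0, 0.0]),
    3:  ([16.5, 0.0, -3.5],  [0.0,  -90.0, 0.0]),
    4:  ([13.0, 0.0, 13.0],  [0.0, -180.0, 0.0]),
    2:  ([-3.5, 0.0, 16.5],  [0.0,   90.0, 0.0]),
    14: ([0.0, -3.5, 16.5],  [-90.0,  0.0, 0.0]),
}

def rot_xyz(rx, ry, rz):
    """Rotation matrix from Euler angles in degrees, applied in X, Y, Z order."""
    rx, ry, rz = np.radians([rx, ry, rz])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    return Rz.dot(Ry).dot(Rx)

for tag_id, (t, r) in sorted(tags.items()):
    center_local = np.array([TAG_SIZE / 2, TAG_SIZE / 2, 0.0])  # tag center in its own frame
    center_obj = rot_xyz(*r).dot(center_local) + np.array(t)
    print("tag %2d center in object frame: %s mm" % (tag_id, np.round(center_obj, 1)))
```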

And here is a little video; consider that my CPU is running at 100% and that I use the chilitags default camera intrinsics, not the ones of the Kinect (next thing to do):
https://www.youtube.com/watch?v=YHrjVUnd9T0

I created a second cube and config; now I get two tf frames from chilitags.
https://www.youtube.com/watch?v=lwT_od_mcyg

How can I use these two tf frames from chilitags for my URDF model?

The URDF model consists of two bodies and one joint; like in the video, the two cubes also share one joint. Is there a tool or anything with which I can simply use the chilitags tf-frame info for the URDF tf frames? tf_echo to joint_state and then publish this to robot_state_publisher? Perhaps the MoveIt! people use QR tags on robot arms and do something similar to what I want?

I didn't find any tools for that, so I wrote a node which gets the angle between the two tf frames of the chilitags cubes and publishes it to /joint_states.
https://www.youtube.com/watch?v=O6Pre6iVmo8
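As an illustration of the idea, a minimal sketch of such a node could look like this (the frame names "proximal_cube" and "distal_cube", the joint name "finger_joint" and the choice of Euler axis are placeholders, not what the flobotics node actually uses):

```python
#!/usr/bin/env python
# Sketch of a node that turns the relative orientation of two chilitags cube
# frames into a joint angle and publishes it on /joint_states. Frame names
# ("proximal_cube", "distal_cube") and the joint name are placeholders.
import rospy
import tf
from sensor_msgs.msg import JointState

rospy.init_node('chilitags_to_joint_states')
listener = tf.TransformListener()
pub = rospy.Publisher('/joint_states', JointState, queue_size=10)
rate = rospy.Rate(30)

while not rospy.is_shutdown():
    try:
        # Relative transform between the two detected cubes.
        trans, rot = listener.lookupTransform('proximal_cube', 'distal_cube',
                                              rospy.Time(0))
    except (tf.LookupException, tf.ConnectivityException,
            tf.ExtrapolationException):
        rate.sleep()
        continue

    roll, pitch, yaw = tf.transformations.euler_from_quaternion(rot)
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = ['finger_joint']
    msg.position = [pitch]   # pick the Euler component that matches the joint axis
    pub.publish(msg)
    rate.sleep()
```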

The devel branch of iai_kinect2 now supports RGB-only streams without depth, which reduces my CPU load from 95% to 60%, but the chilitags detection is still flickering.

I bought a Logitech C525 webcam to lower my CPU load, and it does: running everything is now about 20% CPU load. I built a setup to train a neural network.
https://www.youtube.com/watch?v=AtFYajQjPYA

As input I have 2 force values, one for each wire rope (one wire rope pulls to the left and one pulls to the right), 2 velocity values for the servos, and the actual pose. The goal pose is then the target to reach.
If the computer first does path planning with OMPL or similar, the neural network could get its loss/costs from the program itself: supervised learning whereby the computer itself is the supervisor :slight_smile: The output should be two servo velocity values.
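As an illustration of this input/output layout, here is a minimal TensorFlow 1.x sketch (not the actual flobotics_tensorflow_controller code), assuming the state is packed into a 6-value vector and trained supervised against planner-generated target velocities:

```python
# Minimal sketch, TensorFlow 1.x style: 6 inputs (2 forces, 2 servo velocities,
# current angle, goal angle) -> 2 outputs (servo velocity commands), trained
# supervised against target velocities coming from a planner such as OMPL.
import tensorflow as tf

state = tf.placeholder(tf.float32, [None, 6], name='state')
target_vel = tf.placeholder(tf.float32, [None, 2], name='target_vel')

w1 = tf.Variable(tf.random_normal([6, 64], stddev=0.1))
b1 = tf.Variable(tf.zeros([64]))
hidden = tf.nn.relu(tf.matmul(state, w1) + b1)

w2 = tf.Variable(tf.random_normal([64, 2], stddev=0.1))
b2 = tf.Variable(tf.zeros([2]))
servo_vel = tf.matmul(hidden, w2) + b2      # predicted servo velocities

loss = tf.reduce_mean(tf.square(servo_vel - target_vel))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

# Training loop would feed batches of recorded (state, planner velocity) pairs:
# sess.run(train_op, feed_dict={state: batch_states, target_vel: batch_vels})
```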

Does someone have experience with something like that?

I wrote some TensorFlow code with ROS; perhaps someone is interested in helping or taking a look:
https://github.com/flobotics/flobotics_tensorflow_controller

Yes, it directly starts with a loss of 0 :slight_smile:

It's coming alive :slight_smile:

And it moves:
https://www.youtube.com/watch?v=opbjvah84ls

OK, the algorithm must be made better; for continuous actions A3C seems better suited, but first it needs to be set up correctly. Help is welcome.

Training on real hardware instead of, e.g., a Pong game is a little bit different. With real hardware an action takes some seconds, so getting enough samples to train on means running the hardware for weeks, doing mostly random things. That produces many, many gigabytes of data which I cannot store anymore :slight_smile:

So I wrote a game that uses the same sensor data but doesn't need the real hardware. I hope I can move on faster now. Here is the code; you can simply run it with “python flobotics_game.py” on the console, and with “tensorboard --logdir=/tmp/train” you can watch the training results in a web browser. Have fun, and help is always welcome.

EDIT: The game now has a Tkinter graphical window so you can watch it train or play it yourself.
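As a rough illustration (not the actual flobotics_game.py), such a hardware-free “game” can be as simple as an environment with the same state layout as the real setup, where step() replaces moving the real servos:

```python
# Sketch of a minimal hardware-free environment with the same state layout as
# the real finger setup: one angle and two wire-rope tensions. step() replaces
# actually moving the servos; numbers and dynamics here are made up.
import random

class FingerGame(object):
    def __init__(self):
        self.reset()

    def reset(self):
        self.angle = random.uniform(0, 180)        # current phalanx angle (deg)
        self.goal_angle = random.uniform(0, 180)   # angle the agent should reach
        self.tension1 = random.uniform(0, 10)      # wire rope 1 tension (arbitrary units)
        self.tension2 = random.uniform(0, 10)      # wire rope 2 tension
        return self._state()

    def _state(self):
        return [self.angle, self.goal_angle, self.tension1, self.tension2]

    def step(self, servo_vel1, servo_vel2):
        # Fake dynamics: opposing wire ropes rotate the phalanx and change tension.
        self.angle += 0.5 * (servo_vel1 - servo_vel2)
        self.tension1 = max(0.0, self.tension1 + 0.1 * servo_vel1)
        self.tension2 = max(0.0, self.tension2 + 0.1 * servo_vel2)
        reward = -abs(self.goal_angle - self.angle)   # closer to the goal = better
        done = abs(self.goal_angle - self.angle) < 1.0
        return self._state(), reward, done

game = FingerGame()
state = game.reset()
state, reward, done = game.step(1.0, -1.0)
print(state, reward, done)
```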

TensorFlow is really cool; you can use it like Lego. I have some working models which do their job after training only 1 hour on a CPU (no GPU). Here are some screenshots of 1 hour of training, after which the model does the job on its own for 1 hour.


I post these pictures here because there are not many out there, and at first I thought my model would not work because of the strange look of, e.g., the loss curve. But it does :slight_smile: All code is on GitHub.

After working for a while with these visual QR-tag detectors, I must say that they don't really work well with ROS. And after thinking about it for a while, I realized that they don't work with a trained neural network model either. The reason is that the neural network needs the angles of the phalanxes of a finger even in situations where no camera view is possible, or where the tags would be so big that they disturb the movements.

My solution to this problem is a 9-DOF sensor, the MPU-9150. The chip is only 4x4 mm in size and could easily be used on every phalanx. The smallest breakout board I have found so far is 16 mm x 21 mm x 1.5 mm, and I could print some adapters. Here is a picture with the first, big adapters.

Now I get the position of all phalanxes even in the dark of night, when a camera couldn't see anything anymore :slight_smile: And the neural network can work. I don't know of any finger or hand that is able to detect the positions of its own phalanxes.
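As an illustration of how one phalanx angle can be estimated from such a sensor, here is a minimal complementary-filter sketch; read_accel()/read_gyro() are placeholders for whatever driver delivers the MPU-9150 data, not real API calls:

```python
# Sketch of a complementary filter for one phalanx angle from a 9-DOF IMU.
# read_accel()/read_gyro() are placeholders for the real driver calls; they are
# assumed to return (ax, ay, az) in g and the angular rate around the joint
# axis in deg/s.
import math
import time

ALPHA = 0.98          # trust the gyro short-term, the accelerometer long-term
angle = 0.0           # estimated phalanx angle in degrees
last = time.time()

def read_accel():     # placeholder
    return 0.0, 0.0, 1.0

def read_gyro():      # placeholder
    return 0.0

while True:
    now = time.time()
    dt = now - last
    last = now

    ax, ay, az = read_accel()
    gyro_rate = read_gyro()

    # Tilt angle from gravity alone (valid when the finger is not accelerating).
    accel_angle = math.degrees(math.atan2(ay, az))

    # Complementary filter: integrate the gyro, slowly correct drift with accel.
    angle = ALPHA * (angle + gyro_rate * dt) + (1.0 - ALPHA) * accel_angle
    print("phalanx angle: %.1f deg" % angle)
    time.sleep(0.01)
```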

Hi @flobotics,

Sorry I'm late to Discourse and to this discussion; I was actually going to suggest exactly that.

At Centro “E. Piaggio”, we used to use an IMU-based glove on top of the Pisa/IIT SoftHand, because there was no way to add encoders, since there are no rotating shafts between the phalanxes in that hand. There is actually a publication about it, in case you want to read more:

Another related, commercial project is Perception Neuron:


On the other hand, I'm interested in the part where you used TensorFlow: how did you do the training, using the real hardware?

Did you write up that whole process somewhere?

Thanks

Hi @carlosjoserg, it's never too late for some good info :slight_smile: With TensorFlow I use the deep Q-network from DeepMind (the company whose AlphaGo played Go against Lee Sedol). I made it like a game.

In the image (which is mostly black and old) there are six arrays displayed; I overlaid them in this picture so it is easier for humans to see. The white strip shows the goal angle and the white box shows where the finger actually is.
The next array shows the force or tension on the first wire rope. Again, the white strip shows the goal tension and the white box shows the actual tension. The same goes for the next array, which shows the tension of wire rope number 2.

The tensorflow-ros code works one step at a time, which means it is not continuous (A3C seemed better for that). Anyway, after each step it checks the actual state, i.e. angle, tension1 and tension2, and gets its reward. Then it follows the “normal” TensorFlow way.
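As an illustration of that loop (placeholder functions, not the real controller code): read the state, pick an action epsilon-greedily from the Q-network, execute it, read the new state, compute the reward and store the transition for training:

```python
# Sketch of the one-step loop described above; read_state(), apply_action()
# and q_values() are placeholders for the real ROS and TensorFlow pieces.
import random
from collections import deque

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]   # servo velocity pairs
EPSILON = 0.1
replay = deque(maxlen=10000)

def read_state():                 # placeholder: angle/tensions and their goals
    return [0.0] * 6

def apply_action(action):         # placeholder: command the two servos one step
    pass

def q_values(state):              # placeholder: forward pass of the Q-network
    return [random.random() for _ in ACTIONS]

def reward(state):
    angle, goal_angle, t1, goal_t1, t2, goal_t2 = state
    return -(abs(goal_angle - angle) + abs(goal_t1 - t1) + abs(goal_t2 - t2))

for step in range(1000):
    state = read_state()
    if random.random() < EPSILON:                     # explore
        action = random.randrange(len(ACTIONS))
    else:                                             # exploit the Q-network
        q = q_values(state)
        action = q.index(max(q))
    apply_action(ACTIONS[action])
    next_state = read_state()
    replay.append((state, action, reward(next_state), next_state))
    # ...sample a minibatch from `replay` and run one Q-learning training step...
```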

For better reproducibility, I now use Dynamixel servos, which provide a load value, instead of the FlexiForce sensors.

For an easy start, you can check https://github.com/flobotics/flobotics_tensorflow_game/tree/master/pixel_hunter_game/10x2 It does nearly the same “game”, but it's faster (it does not need a week to train, just an hour on a CPU).

Hope that was informative.

SEARCHING FOR AN INVESTOR for the robotic humanoid hand: to build 5 fingers of different lengths (as a human has), add 38 (5x8-2) servos to control it like a human, or even better, and add 19 (5x4-1) smaller 9-DOF sensors (one connected to each phalanx). If someone is interested, please send a mail to info@flobotics-robotics.com. Thanks.

I made a picture of the one-finger hardware setup and a GIMP-manipulated picture of a five-finger hardware setup to show that hardware with 38 servos is not as big as one might think. They would easily fit into a small box, and the cabling could go around curves, so the box could be mounted anywhere. It is 38 servos because the thumb has one phalanx less than the other fingers.

Click on the pictures to see them in full size.

Original picture of the one-finger hardware setup:

Manipulated five-finger hardware setup:

The advantage of 38 servos is that you can move all fingers into every position a human can reach, and beyond. That means you can move every phalanx, e.g., 90 degrees to the opposite side of the hand; try that with your own hand :slight_smile:

Controlling all 38 servos is then done with artificial intelligence, as described here http://arxiv.org/pdf/1603.06348.pdf or here http://arxiv.org/abs/1603.02199

I created a 3D-printed base for all five fingers and put the wire ropes into small 1.8 mm tubes. The base should be much smaller, made of aluminium, and have screw threads, so that all the fingers can easily be fixed to the base with a small screw. Or perhaps even better, the base could be made out of a harder silicone, so that it is more like a human hand.
You can see that the fingers all have the same length, so the hand is not really like a human hand, but it can do all the moves.

Here I have put the fingers and the base into a silicone hand.

Now I need 38 servos :frowning:

That is how it should look. OK, with a nicer box and 38 servos in it, it would look much better, but this gives an idea of it.

You need to click on the picture to see it in full.

I made a new and smaller hand base with a hole to fix a small pipe into. As you can see, I used the double joint from the flobotics finger as a wrist joint, which would add 4 more servos to move the wrist joint, but it is possible.
I also cut the thumb's metacarpal “bone” and removed one phalanx.

It all fits into the silicone hand, except for the tube hole, which could be made smaller if everything were aluminium.

I made it out of aluminium with grub screws to easily fix the fingers. I am not a CNC expert, so a professionally made one would be nicer and perhaps even smaller.

For the thumb I needed to add the removed phalanx back, or it would not have had the freedom of a normal hand; e.g., it couldn't touch the other fingers' tips. I cut the last phalanx because the thumb has one phalanx less than the other fingers, but the number of joints is the same as in a normal finger.

The next part would be the box with the servos inside.