Robotic Humanoid hand

That is how it should look. OK, with a nicer box and 38 servos in it, it would look much better, but this gives you an idea of it.

You need to click on the picture to see it at full size.

I made a new, smaller hand-base with a hole to fix a small pipe into. As you can see, I used the double joint from the Flobotics finger as a wrist joint, which would add 4 more servos to move the wrist, but it's possible.
I also cut the thumb's metacarpal “bone” and removed one phalanx.

It all fits into the silicone hand, except for the tube hole, which could be made smaller if everything were aluminium.

I made it out of aluminium with grub screws so the fingers can be fixed easily. I am not a CNC expert, so a professionally made version would be nicer and perhaps even smaller.

For the thumb I needed to add the removed phalanx back, or it would not have had the freedom of a normal hand, e.g. it could not touch the other fingers' tips. I had cut the last phalanx because the thumb has one phalanx less than the other fingers, but the number of joints is the same as on a normal finger.

The next part would be the box with the servos inside.

The proto-prototype of the servo box turned out better than I thought. As always, with some CNC-made aluminium parts, the box could shrink to around 40 cm in height and 30 cm in width, and then be built into a nice electrical cabinet. In the picture you can see that it has 5 levels; on every level there will be 8 servos for one finger. Then you can use it on e.g. warehouse vehicles to grab stuff.

I got a hardware setup that powers 40 servos and uses only one USB2Dynamixel adapter. So I can address all 40 servos directly from one control software and only need to handle one USB port.

Here is a picture where you can see that every 6-port hub is connected to its own power supply, and the 6-port hubs are then interconnected with each other only via the data line of the 3-pin connector.

On every 6-port hub, four ports then remain available for servos.

Wow, that's how I thought it should be. Slightly nicer looks would be better, but it works. I put everything together inside a little box and put the robotic humanoid hand on top of it. The wire cables can be moved very easily. All 8 servos are recognized through one USB port.

Pinging motor IDs 1 through 25…
Found 8 motors - 8 AX-12 [1, 2, 3, 4, 5, 6, 7, 8], initialization complete.
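For reference, a minimal sketch of how such a scan could look with the Python dynamixel_sdk. The port name and baud rate are assumptions, not necessarily my exact setup:

```python
# Minimal sketch: scan one USB2Dynamixel port for AX-12 servos (Protocol 1.0).
# Assumptions: adapter at /dev/ttyUSB0, baud rate 1000000, IDs 1..25.
from dynamixel_sdk import PortHandler, PacketHandler, COMM_SUCCESS

PORT = "/dev/ttyUSB0"   # assumed device name
BAUD = 1000000          # assumed AX-12 baud rate
PROTOCOL = 1.0          # AX-12 uses Dynamixel Protocol 1.0

port = PortHandler(PORT)
packet = PacketHandler(PROTOCOL)

if not port.openPort() or not port.setBaudRate(BAUD):
    raise RuntimeError("Could not open " + PORT)

found = []
for dxl_id in range(1, 26):                       # ping motor IDs 1 through 25
    model, result, error = packet.ping(port, dxl_id)
    if result == COMM_SUCCESS:
        found.append(dxl_id)

print("Found %d motors: %s" % (len(found), found))
port.closePort()
```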


Nice things the finger could learn with AI: learn to move to a numpad key and press it, with the response/reward coming from the keyboard itself. Or I build a little box with buttons, switches and so on, connect it to an Arduino/RPi and get the response/reward for a convnet. Tips and help are always welcome.

Back to the roots. @samiam The AX-12A servo only sends a load value while it is moving; when you stop it, it sends a zero value. That's not very useful. So I again use the FlexiForce sensors to get the tension of every single wire cable. An AI model can then use this information to decide how to move the hand and could also “feel” if someone or something touches a finger.

Here is a picture of the new setup with FlexiForce sensors. The box is not getting bigger, which is positive, but more hardware is used.
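A FlexiForce sensor is basically a force-dependent resistor, so one way to read the eight cable tensions is an analog-to-digital converter on the Raspberry Pi. A minimal sketch, assuming an MCP3008 ADC on SPI bus 0 with the sensors wired as voltage dividers (the ADC choice and wiring are assumptions, not necessarily the exact hardware here):

```python
# Minimal sketch: read 8 FlexiForce channels through an MCP3008 ADC over SPI.
# Assumptions: MCP3008 on SPI bus 0, device 0; sensors wired as voltage
# dividers to channels 0..7. Values come back as 10-bit numbers (0..1023).
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                 # bus 0, chip select 0 (assumed wiring)
spi.max_speed_hz = 1350000

def read_channel(ch):
    """Read one MCP3008 channel (0..7) and return a 10-bit value."""
    r = spi.xfer2([1, (8 + ch) << 4, 0])
    return ((r[1] & 3) << 8) | r[2]

forces = [read_channel(ch) for ch in range(8)]
print("wire tensions:", forces)
spi.close()
```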


Here I move the finger only via manual control, that's why it is so slow and not in one smooth motion. You can move it as fast as the servo allows. Here you can see how it moves :slight_smile: Like I said, a little deep learning would be good; if someone knows of implementable code, please let me know.

https://youtu.be/9GWMH63Ey6I

Tips for deep learning would be nice. I can say that I have 8 force values, and 8 values (negative and positive) for the 8 servos to control their speed. I could also use a video stream of the finger as input.

As output, the best would be a regression output with 8 different values, one for each servo. But nearly every example neural net is a classification net, and then I would need almost endless outputs for every speed of every servo; also, more than one servo could be moving at the same time, so I would need even more classification outputs?
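On the regression question: a plain feed-forward net can output 8 continuous values directly, so no classification outputs are needed. A minimal sketch in PyTorch, assuming the input is the 8 measured forces plus the 8 want-to-have forces, and a tanh output scaled to the servo speed range (the layer sizes, the network name and the speed scaling are assumptions):

```python
# Minimal sketch: regression net that maps force readings to 8 servo speeds.
# Assumptions: input is 16 values (8 measured forces + 8 want-to-have forces),
# output is 8 speeds in [-1, 1] that get scaled to a signed speed range.
import torch
import torch.nn as nn

class ForceToSpeedNet(nn.Module):      # hypothetical name
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(16, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 8),
            nn.Tanh(),                 # one continuous value per servo, -1..1
        )

    def forward(self, x):
        return self.layers(x)

net = ForceToSpeedNet()
forces = torch.rand(1, 16)             # dummy input instead of real sensor data
speeds = net(forces) * 1023            # scale to signed speed units (assumed)
print(speeds)
```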

To put all this information into a picture, I thought about appending the info to the video stream of the finger. I could add two other “info-pictures” on top of the finger video stream with the information about what the finger should do (press button 1, press button 2, or release the finger from the button). This I would do with an image of a number.

The next piece of information is the force value (or wire tension value) from the force sensors and a want-to-have force value from the user. So I would simply use 16 lines of 1 pixel height and 1024 pixels in length (because there are 1024 possible force values). The first pixel line is the force I want to have on sensor 1, and pixel line 2 is the actual force value from that sensor.

I could then feed this video stream into a neural network.

So the neural net would get its rewards for matching force values and for the signal that comes from the keyboard when a button is pressed, and punishments if the force values are less than the want-to-have force values.
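A minimal sketch of how such a per-step reward could be computed (the function name, the normalization and the button-press bonus are assumptions and would need tuning):

```python
# Minimal sketch: reward from force matching plus a bonus for a pressed button.
# Assumptions: forces are lists of 8 values in 0..1023; button_pressed comes
# from the keyboard/Arduino; the weights are made up.
def reward(target_forces, actual_forces, button_pressed, should_press):
    # punish only when the measured wire tension is below the want-to-have value
    shortfall = sum(max(t - a, 0) for t, a in zip(target_forces, actual_forces))
    r = -shortfall / (8 * 1023.0)            # normalized to roughly -1..0
    # reward the keyboard signal when the commanded button is actually pressed
    if should_press and button_pressed:
        r += 1.0
    return r
```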

Here are some pictures that show it. The numbers indicate which button it should press, and X means it should release the button. The black pixels show the force values.

I wrote a ROS node which captures the webcam stream with OpenCV, and then I added two “images” above the webcam image. The first “image” is 50x50 pixels and shows the command. The second “image” is 32x512 pixels. I chose this because the webcam video is 640x480 pixels, so I can simply fit 16 arrays of 1024 pixels into it. Then the webcam image is the third image. I then publish this image stream and can simply view it in rviz.
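A minimal sketch of how this composition could look with OpenCV, NumPy and cv_bridge; the node name, topic name and drawing details are assumptions, and both info strips are padded to the full 640-pixel frame width so they can be stacked:

```python
# Minimal sketch: stack a command strip and a force strip on top of the
# 640x480 webcam frame, then publish the result as one ROS image.
# Assumptions: topic/node names and the exact drawing are made up.
import cv2
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

WIDTH = 640

def compose(frame, command, target_forces, actual_forces):
    # command strip: 50 px high, digit (or 'X') drawn in a 50x50 area at the left
    cmd = np.full((50, WIDTH, 3), 255, np.uint8)
    cv2.putText(cmd, str(command), (5, 45),
                cv2.FONT_HERSHEY_SIMPLEX, 1.8, (0, 0, 0), 3)

    # force strip: 16 one-pixel lines, force values 0..1023 scaled into 512 columns
    force = np.full((32, WIDTH, 3), 255, np.uint8)
    for i, (t, a) in enumerate(zip(target_forces, actual_forces)):
        force[2 * i, int(t) // 2] = (0, 0, 0)        # want-to-have force
        force[2 * i + 1, int(a) // 2] = (0, 0, 0)    # measured force
    return np.vstack([cmd, force, frame])

rospy.init_node("finger_state_image")                 # assumed node name
pub = rospy.Publisher("/finger/state_image", Image, queue_size=1)
bridge = CvBridge()
cap = cv2.VideoCapture(0)

while not rospy.is_shutdown():
    ok, frame = cap.read()
    if not ok:
        continue
    img = compose(frame, 1, [512] * 8, [300] * 8)     # dummy command and forces
    pub.publish(bridge.cv2_to_imgmsg(img, encoding="bgr8"))
```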

You can see the 16 black pixels above the webcam image and below the command image. But as you can see, they are only single pixels and hard to make out, so my question now is: would a convnet be able to “see” and tell these pixels apart, or are they too small?

I designed and printed a new FlexiForce adapter. With this one, the sensor stays fixed in the perfect position even if you move the cable. It also sits between two metal parts. Here you can see it.

I also built a new box to get everything in a fixed place and see how it would fit. Here is the front of the box.

And here is the back of the box with all the electronics. You just need to plug in one USB cable and one Ethernet cable for the Raspberry Pi and power it up.

With this setup it is absolutely easy to install the wire cables for one robotic finger through these Bowden cables. Plug in the cables, connect to rviz, and you are ready to go.

Hello all,
I started a Kickstarter project to build a ready-for-production prototype of the Humanoid Robotic Hand. I hope you can support me and spread the message, or pledge something on my Kickstarter page.

Thanks a lot

Woohoo, I got the honor of being the first backer :slight_smile: Your campaign seems ambitious but I wish you all the best! Keep up the good work.

Thank you @dbolkensteyn for your support. Have a nice day :slight_smile:

Looks like I fell asleep for a short while, but so did the robotics industry. I thought I could use rotation and linear motion to simply move a robotic finger; I tried it and it works nicely. For the linear motion I use linear drawer slides. The next step would be to build something to hold the servos that move it.

Here are some videos.

https://www.youtube.com/watch?v=CF0qYk1lGtM

https://www.youtube.com/watch?v=gGd50u3dBYI

Hi,
does anyone know of a cheap current-control servo? Not the servos that start at 250 euros or more.

Or does anyone know of “something” that could be used with a normal servo to turn it into a current-control servo? Some circuit board I connect in front of the servo, or something I don't know of?

Thanks in advance

Simpler is better. Setup for one finger.


Does anyone know of cheap VR gloves or other gloves to get haptic, force/touch, or joint-angle values? Thanks

This Bowden outer shell is even smaller. The inner diameter is 0.6 mm, the outer diameter 1.2 mm. My 0.5 mm steel wire fits perfectly. And the connection between the finger and the “rest” is no longer bigger (in diameter) than the finger itself.

The next thing is still finding a way for the finger to “sense” and “feel” by itself with force sensors or anything else, so I can train an ML model. Perhaps this looks good for simulation: Google AI Blog: Speeding Up Reinforcement Learning with a New Physics Simulation Engine.

New hardware control setup; as always, simpler is better. I use an aluminium T-profile to mount the N20 geared motors and the FlexiForce sensors together. The biggest thing now is the RPi4 with its HATs and the power supplies.

Next steps are building a simulation with it and then running the model on real hardware with real-time ROS 2.