Robotic Humanoid hand

To put all this information into a picture, I thought about appending the info to the video stream of the finger. I could add two additional "info pictures" on top of the finger video stream with the information about what the finger should do (press button 1, press button 2, or release the finger from the button). I would do this with an image of a number.
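As a minimal sketch of that overlay idea (assuming the frame is a 2D grid of grayscale pixels, 0 = black, 255 = white; the tiny 3x3 bitmaps and the function name `stamp_command` are my own placeholders, not from the original project):

```python
# Hypothetical command bitmaps: a real version would render readable digits.
COMMAND_BITMAPS = {
    "press_1": [[0, 255, 0], [255, 255, 0], [0, 255, 0]],      # crude "1"
    "press_2": [[255, 255, 255], [0, 255, 0], [255, 255, 255]],  # crude "2"
    "release": [[255, 0, 255], [0, 255, 0], [255, 0, 255]],      # crude "X"
}

def stamp_command(frame, command, top=0, left=0):
    """Overwrite a small region of the frame with the command bitmap,
    so the network sees the command inside the video stream itself."""
    bitmap = COMMAND_BITMAPS[command]
    for r, row in enumerate(bitmap):
        for c, px in enumerate(row):
            frame[top + r][left + c] = px
    return frame
```

The point of baking the command into the image is that the network then has a single input modality: everything it needs (camera view, command, forces) arrives as pixels.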

The next piece of information is the force value (or wire-tension value) from the force sensors and a want-to-have force value from the user. So I would simply use 16 lines of 1 pixel height and 1024 pixels in length (because there are 1024 possible force values). The first pixel line is the force I want to have on sensor 1, and pixel line 2 is the actual force value from that sensor, and so on for the remaining sensors.
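A sketch of how those 16 force lines could be built (assuming 8 sensors with paired target/actual rows; the post says "the black pixels show the force values", so I read each row as a black bar up to the 10-bit value, though a single black marker pixel would also fit the description):

```python
WIDTH = 1024  # one pixel per possible force value (0..1023)

def force_row(value):
    """Encode a 10-bit force value as a 1-pixel-high row:
    black (0) up to the value, white (255) beyond it."""
    assert 0 <= value < WIDTH
    return [0 if x <= value else 255 for x in range(WIDTH)]

def force_block(target_values, sensor_values):
    """Interleave the rows as described: line 1 = want-to-have force for
    sensor 1, line 2 = measured force for sensor 1, etc. With 8 sensors
    this yields the 16 lines mentioned above."""
    rows = []
    for target, actual in zip(target_values, sensor_values):
        rows.append(force_row(target))
        rows.append(force_row(actual))
    return rows
```

Pairing each target row directly above its measured row keeps the two values spatially adjacent, which should make the match/mismatch easy for a convolutional network to pick up.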

I could then feed this combined video stream into a neural network.

So the neural net would get rewards for matching force values and for the signal that comes from the keyboard when a button is pressed, and punishments if a measured force value is less than the want-to-have force value.
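The reward scheme just described might look roughly like this (the weights 1.0 and 0.1 and the function name `reward` are illustrative assumptions, not values from the original):

```python
def reward(target_forces, sensor_forces, button_signal):
    """Sketch of the described reward: a bonus when the keyboard reports
    the requested button press, a penalty for every sensor whose measured
    force falls short of its want-to-have value."""
    r = 0.0
    if button_signal:
        r += 1.0  # keyboard confirms the press (assumed weight)
    for target, actual in zip(target_forces, sensor_forces):
        if actual < target:
            r -= 0.1  # per-sensor under-tension penalty (assumed weight)
    return r
```

One design question this leaves open is whether the penalty should scale with how far the force misses the target, rather than being a fixed amount per sensor.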

Here are some pictures that show it. The numbers are the buttons it should press, and X means it should release the button. The black pixels show the force values.