Thanks very much, I like to share : )
About a year and something, along with writing my university project and learning ROS through the Udacity Robotics Nanodegree!
It’s from Udacity’s 3D Perception project, and I changed some topics and transforms to make it compatible. There are instructions on how to train and run it in the repo. Basically: train it in a simple Gazebo world with objects spawning in random orientations -> extract features (HSV color and surface-normal histograms) -> dump a training_set.sav file. Then train an SVM classifier with sklearn -> dump a model.sav. Then use this trained classifier to recognize the objects (after you preprocess and segment the point cloud).
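The train-and-dump step above can be sketched in a few lines of sklearn. This is only an illustration, not the repo's actual code: the histogram sizes and object labels are made up, and I substitute synthetic features for the real training_set.sav extracted from Gazebo, but the shape of the pipeline (features -> SVC -> pickled model.sav) is the one described in the post.

```python
import pickle
import numpy as np
from sklearn.svm import SVC

# Stand-in for the real training_set.sav: each sample is an HSV color
# histogram concatenated with a surface-normal histogram (bin counts
# here are illustrative, not the repo's actual ones).
rng = np.random.default_rng(0)

def fake_features(hue):
    color_hist = np.histogram(rng.normal(hue, 0.05, 200), bins=32, range=(0, 1))[0]
    normal_hist = np.histogram(rng.normal(0.0, 1.0, 200), bins=32, range=(-3, 3))[0]
    return np.concatenate([color_hist, normal_hist]).astype(float)

X = np.array([fake_features(0.2) for _ in range(20)] +
             [fake_features(0.8) for _ in range(20)])
y = np.array(['soap'] * 20 + ['book'] * 20)  # hypothetical object labels

clf = SVC(kernel='linear')  # a linear kernel is a common starting point
clf.fit(X, y)

with open('model.sav', 'wb') as f:   # same filename as in the post
    pickle.dump(clf, f)

with open('model.sav', 'rb') as f:   # the recognition node reloads it
    model = pickle.load(f)
print(model.predict([fake_features(0.2)])[0])
```

At runtime the recognition node would build the same kind of feature vector from each segmented point-cloud cluster and call `model.predict` on it.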
Yes, that's right, but it is somewhat expensive: ~300 with the DC motors with encoders.
This base had some problems and the motors were mediocre, but it did the job; it was also a little hard to get an accurate TF.
In hindsight, I would recommend a two-level wooden base spaced out with rods.
On the bottom level: the lidar (elevated above the electronics), battery, Arduino, etc…
(Below the bottom level: DC motor brackets and DC motors with encoders. The FIT0493 might be a better option than the Lynxmotion motors, whose encoders don't mount well…)
On the upper level: the Jetson Nano, arm, and DC-DC converters.
And finally a top elevated platform for the Kinect. (The explanation here is a little rushed.)
This should come cheap, ~30-50, plus 120-200 for the motors.
Good to see this. Nice work @MakeMe
Great work @MakeMe, I will definitely try this project. Thank you for sharing it with everyone.
Thank you! I would like to see your design. I should make better blueprints to ease the process! Email me if you have problems.
Good job, Panagiotis!
I would suggest, when using already implemented packages (e.g. m-explore and others), to just add a dependency to your packages, to avoid bloated repositories and for easier navigation and reading. Of course, this does not apply if you have altered them in any way, but even then you could still use a fork.
Keep it up, and we will keep an eye on you for more!
Thank you gstravinos, and for the advice!
Yes, it is a little bloated (my git and C++ could use a little refining),
but I'm not sure how to add a package dependency that is not officially supported in ROS (e.g. amcl, which btw I have not included; m-explore, I don't think it is either).
Or if you fork and change a repo, how do you then add it as a dependency? Some magic with CMake or package.xml?
I also had to delete their .git folders after I cloned them; otherwise, with nested gits, they appeared gray without files.
One simple option would be to make a dependencies file and list "these secondary" packages there, to reduce the bloat a little.
As you said, you can add dependencies in CMakeLists.txt and package.xml for all other packages. AMCL and m-explore are both available as ROS packages (https://github.com/ros-planning/navigation and https://github.com/hrnr/m-explore).
Some packages might not have a release, but most of the time they can be cloned and used normally.
The way I handle dependencies on forks is by mentioning them in the repo README. Maybe not the best way for users who blindly run "rosdep install", but reading the README is something everyone does when facing problems, so I guess it works.
Generally though, including dependencies in your repo is bad practice, not only because of the bloat, but also because you will have to update them too in future releases. By having a dependency on (or a fork of) the original package, you can always target a commit/tag, or just use the latest.
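For what it's worth, the declaration side of this is small. A sketch, assuming your package only needs the others at launch time (the package names are the ones discussed in the thread; exact tags depend on your package format version):

```xml
<!-- package.xml of your own package -->
<exec_depend>amcl</exec_depend>         <!-- released with the navigation stack -->
<exec_depend>explore_lite</exec_depend> <!-- one of the packages in the m-explore repo -->
```

And for forks or unreleased packages, a wstool/vcstool file in the repo can point users at the exact source instead of vendoring it (the filename and branch here are hypothetical):

```yaml
# deps.rosinstall -- fetched with: wstool merge deps.rosinstall && wstool update
- git:
    local-name: m-explore
    uri: https://github.com/hrnr/m-explore.git
    version: melodic-devel   # pin a branch, tag, or commit here
```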
I built a new, cheap, easily reproducible base which seems much better calibrated, and navigation is good,
with only 8mm-thick wood pieces (2 pcs x (20x31.8), and 1 pc x (10x20) for the Kinect base),
plus 8mm metal rods and nuts.
Equipment: a Black+Decker drill with an 8mm wood bit (for the rods and cables to pass through;
for cables, open up bigger holes), glue for securing the nuts (measure 10cm for the upper platform, 40cm for the Kinect platform), and finally strong double-sided tape.
For the lidar I cut 6 pieces of 5x7 to reach a height of 4.8cm and then double-taped the YDLIDAR…
I will push more detailed instructions to the git sometime.
Cool project! Would love to see some more specs about it!
Have you ever considered hijacking a cheap iRobot Roomba and integrating it into ROS?
These are getting into homes more often and provide a simple base with power, odom, bumper, IR(?).
My plan is to slap one of those solid-state lidars on top: https://www.kickstarter.com/projects/1697979147/lidar-for-everyone-hybo-ilidar/description (currently waiting for it to ship / overcome the beta-test stage).
And I have two of these: https://www.kickstarter.com/projects/1128055363/7bot-a-powerful-desktop-robot-arm-for-future-inven (also an Arduino Mega in control of the arm, with a custom serial interface). They promised some ROS drivers but never delivered them. The company went out of business. But there are a few cheap ready-made arms that I would love to see integrated into ROS.
This was a very promising candidate: https://www.kickstarter.com/projects/1383636492/the-smallest-servomotor-robotic-arm A little bit more expensive for a build under 1000$, but looking great. Sadly, it ended up being sort of a scam. The main devs left the company during crowdfunding, and then some sort of Chinese “mafia” robbed the business. Really some shady business going on here. Glad Kickstarter holds back the money during the first month after a successful campaign…
As far as AI goes, I had these boards in mind: https://openmv.io
There already exist quite a few fast follow-the-line race bots built with them.
What's the single-board computer (SBC) you are using with your robot?
Have you considered building this with ROS 2 (micro-ROS + Nav2 + MoveIt 2)?
I hadn't really thought of hijacking a Roomba; if one is lying around, it's probably worth modding it for educational purposes, but I wanted an easily reproducible solution.
This lidar is amazing at only 99!!! I wonder if you could rotate it with steppers or something to have a 100-dollar 3D lidar!!
Also, my arm cost 150-200, while this 350-dollar arm is much more precise and can do something useful. I want it :P
Yeah, I heard about the Gluon; I was excited for this 1000-dollar arm to be a reality.
My implementation in the project makes it easy to integrate any arm that has no feedback, just by changing the URDF.
This camera is nice too! What can it do with that processor?
@flo I am not very familiar with ROS 2 beyond the basics; maybe after my exams, if I have time, I will try to make a ROS 2 experimental branch to learn micro-ROS, Nav2, MoveIt 2, ros2_control, etc. :P
That's why I was thinking of a Roomba… A 600-series costs about 200$, and they should be available used quite globally. You can interact with them through a serial interface. There even exists software for them: https://github.com/youtalk/create_autonomy/tree/dashing-devel @youtalk is working on some ROS 2 ports and offered last year on GitHub to continue maintaining this package.
But I'm unsure how the power from the platform can be used to power an ARM SBC and a robot arm. The power would be drawn pretty fast. But with a very cheap mini-DIN 8 connector (under a dollar) and some wires soldered to a UART on any SBC/ARM board, you are good to go…
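On the serial side, the protocol is just byte opcodes from the iRobot Open Interface, so any language with a serial port can drive it. A rough sketch of building the packets (the pyserial wiring is left as comments, and you should check the OI spec for your model before trusting the exact opcodes and baud rate):

```python
import struct

# iRobot Open Interface (OI) command bytes, as documented in the
# Create/Roomba OI spec: 128 = START, 131 = SAFE, 137 = DRIVE.
START, SAFE, DRIVE = 128, 131, 137
STRAIGHT = 0x8000  # special radius value meaning "drive straight"

def drive_packet(velocity_mm_s, radius_mm=STRAIGHT):
    """DRIVE takes signed 16-bit big-endian velocity (mm/s) and radius (mm)."""
    # struct's 'h' would reject the special 0x8000 radius as a signed value,
    # so mask both fields into unsigned 16-bit big-endian instead.
    return struct.pack('>BHH', DRIVE,
                       velocity_mm_s & 0xFFFF, radius_mm & 0xFFFF)

# With pyserial (not imported here) you would open the mini-DIN port
# (115200 baud on 500/600-series) and send START, SAFE, then drive packets:
#   port.write(bytes([START, SAFE]))
#   port.write(drive_packet(200))       # 200 mm/s straight ahead
print(drive_packet(-200, 500).hex())
```

A ROS node would then map `cmd_vel` twists onto these velocity/radius pairs, which is essentially what the create_autonomy driver linked above does for you.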
Probably changing my setup to a wooden box and some cheap hoverboard motors… But this would then involve some beefier hardware… [future projects…]
We actually did that at my university. It was a bachelor thesis from a colleague utilizing a SICK TiM-series laser scanner… It was controlled with 2 Dynamixels and an Arduino Pro Mini with an additional BNO055 IMU. I might be able to ping him. But consider that you need a beefy GPU or cloud resources to process 3D laser scanner data, depending on your application. Also, it is really hard to utilize 3D data in such a field. Other research surrounding this topic at my university has been focusing on far more expensive Velodyne 3D scanners. It should be possible to do it with a cheaper setup, but if you want to develop things on top of 3D data, it is better to have a reliable system to work with.
I have 2… so, if you want, I can sell you one?
That's really cool… Do you share your code for this anywhere? The guy with the cheap 3D laser scanner is currently working on programming a self-built robot arm consisting of multiple Dynamixel servos. I think they don't provide feedback during motion, so maybe it could help us here… I'm sadly no expert in robot arms…
Quite a lot, if you ask me…! I have 2 versions of it. The first cannot do too much, but the more recent boards really continue to impress me… Check out their Twitter for some recent projects: https://twitter.com/openmvcam/status/1119709886810480640
Try it! It really improved recently with connectivity, and a lot is happening with Navigation2. It's really cool once you utilize what ROS 2 is providing. The documentation is lacking some parts, but they are improving there as well. Mostly with example projects like yours, it gets easier to learn what is there.
It is with object recognition (a node running on the PC); it lets go when it goes parallel xD
200 seems low-cost; what sensors does it use? Maybe an extra battery for powering extra things.
I hadn't thought about all-in-one wheel motors; do these brushless motors provide feedback somehow for PID and odometry?
Yes, 3D data is resource-consuming. So with the two servos you rotate it up and down? I was thinking like 50 Hz rotations all around xD
I just mean that in the ros_control hardware interface, if you pass into the read() function the commands from the write() function, i.e. assuming perfect execution with no feedback, this has the effect that the robot THINKS the arm is moving, whether you have connected an arm or not. Then you can make a node that subscribes to joint states (e.g. an arm clone of joint states) and transforms them in order to send them for execution, e.g. to a servo driver.
Yeah, I will try it with Ignition Gazebo too!
Some… It depends on the model. The git I posted earlier gives you quite an overview of what you can get from them. Speaking mostly about used robots at these price points.
You can hack the used ones and let the PID happen on the motor controller boards. You can get used/broken ones really, really cheap these days. Check out this git: https://github.com/EmanuelFeru/hoverboard-firmware-hack-FOC It's currently the most advanced, I would say. Here are some “visuals”: https://hackaday.com/2016/06/10/reverse-engineering-hoverboard-motor-drive/ https://hackaday.io/project/158256-hoverbot/log/146067-reprogramming-hoverboards-for-robots
Yeah… go for it ^^ Maybe cheaper to add 2-4 more to create some sort of solid-state 360° array…?
10/10, nice hack…! I need to test that some day, or at least share it…
Pick and Place server with 3D Perception and MoveIt Visual Tools
(trajectory lines + text) (P.S. I use the Next prompt button)
Like and sub, don't forget to hit the bell icon :P