The LDS-01 LIDAR that comes with the TurtleBot3 is fine for small indoor spaces, but when I try to navigate autonomously in larger indoor spaces it becomes unreliable.
It seems the LDS-01's range is not good enough, so my plan is to upgrade my TurtleBot with a better LIDAR.
What LIDAR would you suggest for a price up to $350?
Consider adding a camera. Landmark navigation works well over longer distances and in outdoor environments. After all, this is what humans do when driving a car: they look out the window, see that big green building, and know that is where to make the left turn. I think vision is best at a certain scale, and LIDAR is best for getting through a doorway without hitting the jambs.
There is another class of vision-based navigation that is different: converting a stereo 3D image into simulated LIDAR data. I am not talking about that. I mean recognizing landmarks and entering them into a list; then, when a landmark is re-recognized, the list is consulted. This can be robust if multiple landmarks are in sight and the location is determined by triangulation.
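To make the triangulation step concrete, here is a minimal sketch. It assumes the robot's heading is known (e.g. from a compass or odometry) so that measured bearings to two landmarks can be expressed in the world frame; real systems would also weigh in measurement noise and use more than two landmarks. The function name and interface are illustrative, not from any particular library.

```python
import math

def triangulate(lm1, bearing1, lm2, bearing2):
    """Estimate the robot's (x, y) position from two landmarks at known
    world positions and the world-frame bearings measured from the robot
    to each one. The robot lies on the ray through each landmark along
    its bearing; we intersect the two rays with Cramer's rule."""
    c1, s1 = math.cos(bearing1), math.sin(bearing1)
    c2, s2 = math.cos(bearing2), math.sin(bearing2)
    # Solve lm1 - t1*(c1, s1) == lm2 - t2*(c2, s2) for t1:
    #   [-c1  c2] [t1]   [lm2.x - lm1.x]
    #   [-s1  s2] [t2] = [lm2.y - lm1.y]
    det = -c1 * s2 + c2 * s1
    if abs(det) < 1e-9:
        raise ValueError("bearings nearly parallel: no reliable fix")
    dx, dy = lm2[0] - lm1[0], lm2[1] - lm1[1]
    t1 = (dx * s2 - c2 * dy) / det
    return (lm1[0] - t1 * c1, lm1[1] - t1 * s1)
```

Note that the fix degrades as the two bearings approach parallel, which is why having multiple landmarks in sight matters.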
The same camera data can be used by other algorithms for "visual odometry", possibly using optical flow.
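The core idea behind flow-based visual odometry is estimating how far the image content shifted between two frames. Real implementations work on 2D images (e.g. OpenCV's pyramidal Lucas-Kanade tracker); the toy sketch below shows the principle on a single 1D row of pixel intensities, using brute-force block matching, which is an illustration rather than anything production-grade.

```python
def estimate_shift(prev_row, curr_row, max_shift=3):
    """Find the integer pixel shift s that minimizes the mean squared
    error between prev_row[i] and curr_row[i + s]. A positive result
    means the scene content moved right between the two frames."""
    best_s, best_err = 0, float("inf")
    n = len(prev_row)
    for s in range(-max_shift, max_shift + 1):
        err, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:  # compare only the overlapping region
                err += (prev_row[i] - curr_row[j]) ** 2
                count += 1
        err /= count
        if err < best_err:
            best_s, best_err = s, err
    return best_s
```

Integrating such per-frame shifts over time (scaled by camera geometry and height above ground) gives an odometry estimate, which drifts, so it is usually fused with other sensors.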
Then, as the robot approaches a wall or door, the LIDAR data becomes useful again.
That said, a LIDAR upgrade is conceptually simpler.
Thanks for the reply.
I also plan to use an Intel R200 in combination with the LIDAR, but I plan to convert its 3D data into a 2D laser scan. With the LIDAR alone I also have the issue that the robot cannot recognize obstacles above the plane the LIDAR can detect.
By adding the R200 I hope it will detect these obstacles more reliably.
Do you have more information about using a 3D camera as you described above?
I still think my LDS-01 is the bottleneck of my robot. The RPLIDAR A2M8 looks like a very good LIDAR for its price:
the range is 3.5x better, the sample rate is 4.5x better, and the accuracy also looks better.
What do you think, is there a better LIDAR at a similar price?
Yes, that is a common "trick": you convert the 3D camera data so that it looks like LIDAR data, and then you don't have to change the SLAM software. But that is the ONLY advantage, not having to change the SLAM software.
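In ROS this conversion is typically done by the depthimage_to_laserscan package. The per-column math it performs is roughly the following sketch; the 60-degree horizontal field of view here is a made-up example value, not the R200's actual intrinsics, which you would read from the camera_info topic.

```python
import math

def depth_row_to_scan(depth_row, hfov_deg=60.0):
    """Convert one row of a depth image (each value is the perpendicular
    distance to the image plane, in meters) into LIDAR-style
    (angle, range) pairs, one per pixel column."""
    n = len(depth_row)
    hfov = math.radians(hfov_deg)
    fx = (n / 2.0) / math.tan(hfov / 2.0)  # focal length in pixels
    cx = (n - 1) / 2.0                     # optical center column
    scan = []
    for u, z in enumerate(depth_row):
        if z is None or z <= 0.0:
            scan.append((0.0, float("inf")))  # invalid depth: no return
            continue
        angle = math.atan2(u - cx, fx)  # beam angle from the optical axis
        rng = z / math.cos(angle)       # slant range along that beam
        scan.append((angle, rng))
    return scan
```

For a flat wall straight ahead, the center beam reports the wall distance and the edge beams report longer slant ranges, exactly as a spinning LIDAR would. The price, as noted, is that everything above and below the chosen row is discarded.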
There is another use of the vision camera, and the simplest explanation goes like this: the robot rolls into a kitchen, vision-based software recognizes the front of a stove, and the SLAM system says "I am in front of the stove." There is no point cloud in this method.
If you collapse the 3D image to 2D, you are also tossing out most of the information.