I think I would need a bit more context to help you, but I’ll try to reply anyway:
I googled a bit and I’m going to assume you want to climb a steel beam that looks like this:
And you use some kind of magnetic wheels that look like this:
- How to accomplish beam alignment of the robot? I can detect the beam but are there ways of teaching a robot its orientation to a 2D image, maybe detect the beam and give it a coordinate system?
I assume you are using an RGB camera, some kind of normal usb cam/mipi cam plugged into an Nvidia Jetson, which is all carried on top of your robot. I’m imagining something like this random robot I found:
If you calibrate your camera (you can use the ROS camera calibrator; it’s pretty easy, and if you have never done it before, you’ll learn how basically every camera intrinsics calibration works conceptually), you will be able to determine your focal length (f) among other parameters, like the image center coordinates (or you may already have these from the camera’s datasheet).
If you have a bounding box of your metal beam (I suppose that is the output of your detector), you can calculate the center of the bounding box (Bx, By), then the difference (Dx, Dy) between that center and the center of the image (Cx, Cy). With that you can calculate the angle from the camera to the steel beam:
angle_x = atan(Dx / f)
angle_y = atan(Dy / f)
If that is enough for you to drive your robot towards the steel beam, well, great!
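To make that concrete, here’s a minimal sketch of that bearing math in Python (the bounding-box format, image size and focal length are all made-up assumptions, not something from your setup):

```python
import math

def bearing_to_beam(bbox, image_center, focal_px):
    """Angles (radians) from the optical axis to the bounding-box center.

    bbox: (xmin, ymin, xmax, ymax) in pixels -- assumed detector output
    image_center: (Cx, Cy) from your intrinsics calibration
    focal_px: focal length f, in pixels
    """
    bx = (bbox[0] + bbox[2]) / 2.0  # (Bx, By): bounding-box center
    by = (bbox[1] + bbox[3]) / 2.0
    dx = bx - image_center[0]       # (Dx, Dy): offset from image center
    dy = by - image_center[1]
    return math.atan(dx / focal_px), math.atan(dy / focal_px)

# Beam detected dead-center of a 640x480 image -> both angles are 0
print(bearing_to_beam((300, 200, 340, 280), (320, 240), 600.0))
```

Whether a positive angle_x means “steer right” or “steer left” depends on your image/robot axis conventions, so check the signs on your own setup.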
You could go a step further if your detector also tells you the angle of the steel beam. I imagine the robot is on the floor, driving towards a steel beam that is standing “perfectly” vertical, so that it can then start climbing it; for that you want the robot looking perpendicular to the beam. So if the detector can tell you that you are looking at the beam at, say, 90 degrees, you know how to approach it so you end up facing the flat face.
You could also estimate how far the steel beam is without using other sensors. You could put the robot in front of the steel beam at a known angle and distance, say, 2 meters, and measure how wide the beam appears in pixels. The apparent width is inversely proportional to the distance, so you can use that pixel width to extrapolate how far the beam is.
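That extrapolation is just similar triangles (pinhole model). A tiny sketch, where the reference numbers are invented and would come from your one-time measurement:

```python
def beam_distance_m(width_px, ref_width_px=120.0, ref_distance_m=2.0):
    """Estimate distance to the beam from its apparent width.

    ref_width_px / ref_distance_m: what you measured once with the robot
    parked at a known distance (both default values here are made up).
    """
    return ref_distance_m * ref_width_px / width_px

print(beam_distance_m(60.0))  # half as wide as at 2 m -> 4.0 m away
```

This only holds while you look at the same face from roughly the same angle; at an oblique angle the apparent width shrinks and the estimate gets worse.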
However, I may be overthinking here!
- Is Lidar for proper navigation/home finding?
Using a LIDAR will help you a lot. Here I am assuming a 2D LIDAR, like a Sick/Hokuyo/RPLidar. You can use the ROS navigation stack to make your robot build a map in real time (doing SLAM); for that I would recommend hector_slam in ROS1, as it works very well with the cheapest LIDARs. gmapping is also pretty good, but I’d recommend it for creating maps to use offline. You could also try others like cartographer! The possibilities are endless haha.
If you have a LIDAR you will be able to know the distance to the steel beam without problems. You will also be able to use it to help you face the steel beam perpendicularly (as you’ll get some points in the LIDAR that clearly form a flat face!).
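For the “face it perpendicularly” part, one common trick is to fit a line to the LIDAR points that land on the flat face, and steer until the face’s normal lines up with the robot’s forward axis. A sketch with numpy (how you select the points belonging to the beam — clustering, or a window around the camera bearing — is up to you; the synthetic scan below is made up):

```python
import math
import numpy as np

def face_heading_error(ranges, angles):
    """Angle (radians) between the robot's forward (x) axis and the
    normal of the flat face hit by these LIDAR returns.
    0 means the robot is facing the face squarely."""
    r = np.asarray(ranges, dtype=float)
    a = np.asarray(angles, dtype=float)
    xs = r * np.cos(a)  # forward
    ys = r * np.sin(a)  # left
    # Fit x = m*y + b: a squarely-faced wall is x = const, i.e. m == 0
    m, _ = np.polyfit(ys, xs, 1)
    return math.atan(m)

# Synthetic flat face 2 m ahead, scanned across +/- 10 degrees
angles = np.linspace(-0.17, 0.17, 21)
ranges = 2.0 / np.cos(angles)  # all points lie on the plane x = 2
print(face_heading_error(ranges, angles))  # approximately 0
```

Feed that error into a simple proportional turn command and the robot will square itself up to the face.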
In order to use a camera + LIDAR you will want to make a URDF model of your robot, so you can take advantage of the TF tree. You can also visualize your robot in Rviz with it. TF provides the transformations & the math to do things like:
I have a detection in the camera at a certain angle that is a steel beam; I can transform that angle into the LIDAR frame (or a common “map” frame). If I add the distance the LIDAR reports at that angle, I know quite precisely where the steel beam is. And if you have a map, you know the exact place you want to go, and you can also work out the orientation you’d want to arrive with to face the beam perpendicularly; the ROS navigation stack will let you send the robot a goal pose to do exactly that.
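If you want to see what TF is doing under the hood, it boils down to chaining homogeneous transforms between frames. A 2D toy version with numpy (in a real system tf2 hands you these transforms from the URDF; all the poses below are invented numbers):

```python
import numpy as np

def make_tf(theta, tx, ty):
    """2D homogeneous transform: rotate by theta, then translate (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

# Made-up example: beam seen at bearing 0.2 rad with LIDAR range 3.0 m,
# robot sitting at (1, 2) in the map frame with yaw 0.5 rad.
bearing, rng = 0.2, 3.0
point_robot = np.array([rng * np.cos(bearing), rng * np.sin(bearing), 1.0])
map_T_robot = make_tf(0.5, 1.0, 2.0)
point_map = map_T_robot @ point_robot
print(point_map[:2])  # where the beam sits in the map frame
```

With that map-frame point you can build a goal pose for the navigation stack: stand some offset in front of the face, oriented along its normal.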
I hope this is helpful. A couple of additional comments:
You may want to use a more modern Ubuntu and ROS if you can; I’d recommend Ubuntu 20.04 with ROS Noetic at this point, as I still think it’s easier to play around with existing ROS packages, tinker in Python and so on there (instead of ROS 2). But this is my opinion only, others may disagree. The OS of your Jetson board may limit you here, and I understand that. You could look into using Docker to run any other version, but that may be a bit of overkill at your learning stage (or the one I’m assuming from your post).