ROS2: How to achieve visual recognition with the AgileX Limo?

Limo is a smart educational robot developed by AgileX Robotics. For more details, please visit: https://global.agilex.ai/

Vision-based Line Following

Logic

  1. First, the camera needs to be initialized. The image information is obtained by subscribing to the messages published by the camera, and the image is converted to the OpenCV format.
  2. The obtained image is preprocessed, including operations such as grayscale conversion, Gaussian blur, and edge detection.
  3. The preprocessed image is binarized to convert it into a black and white binary image.
  4. Morphological operations, such as dilation, erosion, and opening, are applied to the binary image to enhance line detection.
  5. The Hough transform is used to detect lines, which are then drawn on the image.
  6. By analyzing the slope and position of the detected lines, the direction in which the robot needs to turn is determined, and the robot is controlled to move towards the target direction.

Implementation
Launch the camera.

ros2 launch astra_camera dabai.launch.py

Place the robot on the simulation table and activate the vision-based line following function.

ros2 run limo_visions detect_line 

(image: follow_lane)
The lane on the simulation table will be recognized.

Color Tracking

Visual color tracking is an object detection and tracking technique based on image processing, which allows real-time tracking and localization of objects of specific colors.

Logic

  1. Initialize ROS node and camera subscriber: First, you need to initialize a ROS node using the rclcpp library in ROS2, and create a subscriber to subscribe to image messages. Convert the image messages from ROS to OpenCV format using the cv_bridge library.
  2. Define color range and mask: In this code, we will take a blue target as the example for tracking. First, define the lower and upper bounds of the target color in OpenCV. Then, use the inRange function in OpenCV to convert the image to a binary mask, which filters out the target region for further processing.
  3. Detect and draw bounding boxes: The target region in the mask may contain noise and other non-target regions. To identify the exact position of the target region, you can use the findContours function in OpenCV to find the contours and use the boundingRect function to calculate the bounding box of the target region. Then, use the rectangle function to draw the bounding box on the original image.
  4. Publish the target position: Lastly, you can use a publisher in ROS2 to publish the target position to other nodes for further control and navigation.

Implementation
Launch the camera.

ros2 launch astra_camera dabai.launch.py

Place the colored block within the view range of the limo and activate the color tracking function:

ros2 run limo_visions object_detect 

QR Code Tracking

A QR code is a graphic composed of black and white modules that records data and symbol information, arranged on a plane in two dimensions according to specific rules and geometric shapes. Its encoding cleverly uses the "0" and "1" bit streams that form the fundamental basis of internal computer logic: geometric shapes corresponding to binary values represent textual and numerical information, which can be read automatically by image input devices or photoelectric scanning equipment to achieve automated information processing.

QR codes share common features with barcode technology: each coding system has its own specific character set, each character occupies a designated width, and specific verification functions are incorporated. In addition, QR codes can automatically identify different lines of information and handle changes resulting from graphic rotation.

In ROS2, the aruco_ros package is used for QR code (ArUco marker) identification. aruco_ros is a package built on top of OpenCV; it is written in C++ and provides a C++ interface.

Generate QR code:

QR codes (ArUco markers) can be created with an online ArUco marker generator; you can generate different markers according to your own needs.

The QR code used in this example is:
aruco-0

Implementation

Launch the camera.

ros2 launch astra_camera dabai.launch.py

Place the QR code within the field of view of Limo, and activate the QR code recognition function.

ros2 launch aruco_ros single.launch.py

Launch the QR code recognition function.

ros2 run limo_visions move_to_ar

Traffic light recognition

Logic:

  1. Initialize the ROS2 node and create an image subscriber and image publisher.
  2. Read the image and convert it to HSV color space.
  3. Define the color range of red and green and apply it to the image through the inRange function to obtain a binary image.
  4. Perform morphological operations on binary images to remove noise and fill holes.
  5. Find the contours in the image with the findContours function, and compute the minimum enclosing circle of each contour with the minEnclosingCircle function.
  6. For each enclosing circle, calculate its area and center coordinates. If the area is larger than a threshold and the center of the circle lies within the predefined traffic light area, mark it as a traffic light.
  7. Draw the enclosing circle of each detected traffic light on the original image and publish the result to a ROS2 topic.
  8. Repeat the above steps in a loop and wait for the next image to arrive.

Implementation
Launch the camera.

ros2 launch astra_camera dabai.launch.py

Place the traffic light sign within the field of view of Limo, and activate the traffic light recognition function.

ros2 run limo_visions detect_traffic

About Limo

If you are interested in Limo or have some technical questions about it, feel free to join AgileX Robotics. We can talk about it!
