We’re happy to announce the initial release of the ROS2 Grasp Library, with OpenVINO™ support and MoveIt compatibility.
The ROS2 Grasp Library (https://github.com/intel/ros2_grasp_library) enables state-of-the-art CNN-based deep learning grasp detection algorithms on ROS2 for vision-based industrial robot manipulation. The package provides ROS2 interfaces compliant with the MoveIt motion planning framework, which is supported by most robot models in ROS-Industrial. The package delivers:
- A ROS2 Grasp Planner providing a grasp planning service as an extensible capability of MoveIt (moveit_msgs::srv::GraspPlanning)
- A generic ROS2 Grasp Detector interface that works with the Grasp Planner for grasp detection, along with a specific back-end algorithm enabled under this interface: Grasp Pose Detection with the Intel® OpenVINO™ toolkit
- Grasp transformation from the camera frame to a specified target frame required for visual manipulation, and grasp translation to the MoveIt interfaces (moveit_msgs::msg::Grasp)
- A ‘service-driven’ grasp detection mechanism (via the auto_mode configuration option) to optimize CPU load for real-time processing
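As a rough illustration of the frame transformation mentioned above, a grasp position detected in the camera frame can be re-expressed in a target frame (for example, the robot base) by applying a homogeneous transform. This is a minimal sketch, not the library’s actual code (which works with ROS tf frames); the camera pose and frame names below are hypothetical.

```python
# Sketch: transform a grasp point from the camera frame to a target
# (robot base) frame with a 4x4 homogeneous transform. Illustrative
# only; the ROS2 Grasp Library itself resolves frames via ROS tf.
import math

def mat_vec(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

# Hypothetical pose of the camera in the base frame:
# rotated 90 degrees about Z, translated 0.5 m along X.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
T_base_camera = [
    [c,   -s,  0.0, 0.5],
    [s,    c,  0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

grasp_in_camera = (0.1, 0.2, 0.3)   # grasp point as seen by the camera
grasp_in_base = mat_vec(T_base_camera, grasp_in_camera)
print([round(v, 3) for v in grasp_in_base])
```

The full grasp translation additionally carries the gripper orientation and approach direction into moveit_msgs::msg::Grasp, but the point transform above captures the core idea.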
The package was verified on Ubuntu 18.04 Bionic with the ROS2 Crystal release.
Verification with ROS2 MoveIt 2.0+ is still a work in progress. In the meantime, we have verified grasp detection with the MoveIt 1.0 Melodic branch (tag 0.10.8) and our visual pick-and-place application.
Brief introduction to the Intel® DLDT toolkit and the Intel® OpenVINO™ toolkit
Intel® DLDT (Deep Learning Deployment Toolkit) is a deployment toolkit common to various architectures. The toolkit allows developers to convert pre-trained deep learning models into optimized Intermediate Representation (IR) models, then deploy the IR models through a high-level C++ Inference Engine API integrated with application logic. Additionally, the Open Model Zoo provides more than 100 pre-trained, optimized deep learning models and a set of demos to expedite development of high-performance deep learning inference applications. Online tutorials are available.
The Intel® OpenVINO™ (Open Visual Inference & Neural Network Optimization) toolkit enables CNN-based deep learning inference at the edge, extends workloads across Intel® hardware (including accelerators), and maximizes performance. The toolkit supports heterogeneous execution across various computer vision devices – CPU, GPU, Intel® Movidius™ NCS, and FPGA – using a common API. Online tutorials are available.