
Publication: Open Problems in Robotic Anomaly Detection

A research effort in collaboration between Carnegie Mellon’s Software Engineering Institute and the US Government resulted in a publication I wanted to share with the ROS 2 community. Abstract below, and link to arXiv at the bottom:

Failures in robotics can have disastrous consequences that worsen rapidly over time. Thus, the ability to rely on robotic systems depends on our ability to monitor them and intercede when necessary, manually or autonomously. Prior work in this area surveys intrusion detection and security challenges in robotics, but a discussion of the more general anomaly detection problems is lacking. As such, we provide a brief insight-focused discussion and frameworks of thought on some compelling open problems with anomaly detection in robotic systems. Namely, we discuss non-malicious faults, invalid data, intentional anomalous behavior, hierarchical anomaly detection, distribution of computation, and anomaly correction on the fly. We demonstrate the need for additional work in these areas by providing a case study which examines the limitations of implementing a basic anomaly detection (AD) system in the Robot Operating System (ROS) 2 middleware. We show that if even supporting a basic system is a significant hurdle, the path to more complex and advanced AD systems is even more problematic. We discuss these ROS 2 platform limitations to support solutions in robotic anomaly detection and provide recommendations to address the issues discovered.

Link below:
https://arxiv.org/abs/1809.03565


I think it’s worth highlighting the recommendations the paper makes:

V. RECOMMENDATIONS FOR ROS 2
(…)
Given these core ingredients, we make the following recommendations for ROS 2 based on our findings:
(…)

  1. Introduce strict value ranges into messages (II-B).
    (…)
  2. Provide automatic subscription to new topics and messages in rosbag (III-B).
    (…)
  3. Develop a better profiling environment
    (…)
  4. Integrate best-known-state tracking and recovery (II-F)
    (…)
  5. Introduce state introspection (II-D, II-F)
    (…)
  6. Provide a safe mode (II-F)
    (…)
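Recommendation 1 (strict value ranges in messages) is not supported by ROS 2 message definitions today, but it can be approximated at the application layer. Below is a minimal, hedged Python sketch of that idea; the names `RangeSpec` and `validate` are hypothetical and not part of any ROS 2 API, and a real node would apply this check inside a subscriber callback on the deserialized message.

```python
# Illustrative sketch only: since ROS 2 .msg files cannot declare value
# ranges, a node can enforce them itself when a message arrives.
# RangeSpec and validate are made-up names, not a ROS 2 interface.

from dataclasses import dataclass

@dataclass(frozen=True)
class RangeSpec:
    """Allowed closed interval for one numeric message field."""
    field: str
    lo: float
    hi: float

def validate(msg_fields: dict, specs: list) -> list:
    """Return names of fields that are missing or outside their declared range."""
    violations = []
    for spec in specs:
        value = msg_fields.get(spec.field)
        if value is None or not (spec.lo <= value <= spec.hi):
            violations.append(spec.field)
    return violations

# Example: a LaserScan-like message whose range bounds must lie in [0.0, 30.0] m.
specs = [RangeSpec("range_min", 0.0, 30.0),
         RangeSpec("range_max", 0.0, 30.0)]
print(validate({"range_min": -1.0, "range_max": 25.0}, specs))  # ['range_min']
```

A violation list like this could feed directly into a basic AD pipeline of the kind the paper's case study describes, e.g. by publishing it on a diagnostics topic for a monitoring node.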