This effort aligns with Alias’ mission to “remove 0-days from robotics” and is the first public step we are taking towards implementing it. Briefly, we share the belief that vulnerability disclosure is a two-way street where both vendors and researchers must act responsibly. We therefore adhere to a 90-day disclosure deadline for new vulnerabilities (read more about our disclosure policy here), while other flaws, such as simple bugs or weaknesses, can be filed at any time. We notify vendors of vulnerabilities immediately, cooperate with them, and favour coordinated disclosure, where details are shared publicly with the defensive community after 90 days, or sooner if the vendor releases a fix.
This policy is strongly in line with our desire to improve the robotics industry’s response times to security bugs, but it also results in softer landings for bugs fixed marginally over deadline. According to our research, most vendors are currently ignoring security flaws completely. We call on all security researchers to adopt disclosure deadlines in some form, and feel free to use our policy verbatim (we did exactly that ourselves, adapting it from Google’s) if you find our record and reasoning compelling. Creating pressure towards more reasonably-timed fixes will result in smaller windows of opportunity for blackhats to abuse vulnerabilities. Given the direct physical connection that robots have with the world, vulnerability disclosure policies such as ours result, in our opinion, in greater security in robotics and improved safety overall. A security-first approach is a must to ensure safe robotic operations.
The RVD is an attempt to register and record robot security bugs, including both weaknesses and vulnerabilities (refer to Appendix A for terminology). The current content has been built over the past months and, at the time of writing, includes more than 280 flaws overall:
[Badge counters: Open · Closed · All · Vulnerabilities · Weaknesses · Others]
As contributors to ROS and ROS 2, we have created a dedicated section for ROS (currently only highlighting ROS 2 flaws), available here. We have committed resources to maintain this list and process flaws while reporting on the status of vulnerabilities at the corresponding ROS 2 Security WG meetings. We invite everyone in the community to contribute and help process security flaws. Currently, as recorded by our team at RVD, ROS 2 presents 236 security weaknesses:
[Badge counters: Open · Closed · All · ROS 2 Weaknesses]
Over the coming months we expect to include several ROS and ROS 2 packages in our pseudo-automatic robot security pipelines and to collaborate with maintainers while recording and addressing security vulnerabilities and weaknesses.
We’d like to acknowledge and credit the support we received from the ROSIN project, which partially enabled the development of this work. In particular, RVD will be used to report the findings of the ROSIN RedROS2-I and RedROS2-II FTPs, funded by the European Union’s Horizon 2020 research and innovation programme under the project ROSIN with grant agreement No 732287.
Finally, a small disclaimer: Alias Robotics provides robot security solutions in close collaboration with original robot manufacturers. By no means do we encourage or promote unauthorized tampering with running robotic systems, which can cause serious human harm and material damage.
BTW, for those interested in learning more about our work, and about security in robotics overall, we invite you all to attend the ROS 2 Security Workshop that will take place within ROSCon 2019 in Macau!
IMHO, provided those rules were defaults and/or could be forced onto a robot or robot component (remotely or even locally; that would just affect the severity) and used to take advantage of those devices (as part of an exploit), then according to our classification it could be considered a vulnerability. In that case you should be able to score its severity using RVSS. See the discussion here for a bit more context on the vulnerability/weakness distinction.
Our goal with existing reported (and future) flaws is to provide the means for reproducing and validating their exploitability. We’ve been prototyping for a while (refer to our early RCTF approach) and are currently working on a prototype based on Docker, which we hope to release shortly. The idea is that each researcher reporting a flaw should provide a Docker-based image that allows reproducing the flaw, reasoning about it and, ultimately, facilitating its mitigation.
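To make the idea a bit more tangible, here is a minimal sketch of what reproducing a flaw could look like from a researcher’s machine. The image naming scheme and the ticket number below are hypothetical; we haven’t settled on a final convention yet:

```python
#!/usr/bin/env python3
"""Minimal sketch: fetch and enter the reproduction environment for an
RVD flaw. The image naming scheme is a hypothetical placeholder."""
import subprocess

def reproduce(flaw_id: int) -> None:
    # Hypothetical convention: one Docker image per RVD ticket.
    image = f"aliasrobotics/rvd-flaw-{flaw_id}:latest"
    subprocess.run(["docker", "pull", image], check=True)
    # Interactive shell inside the pre-built environment where the flaw
    # can be triggered and a candidate mitigation validated.
    subprocess.run(["docker", "run", "-it", "--rm", image], check=True)

if __name__ == "__main__":
    reproduce(459)  # ticket number used purely as an example
```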
Interesting undertaking; we clearly need more attention paid to security issues in robotics.
To help better understand the goal and how the process works, I have some questions:
Why do we need a robot-specific database for cybersecurity vulnerabilities? Is there a shortcoming in the widely used CVE system? A search for “robot” in their database shows more than a few entries that would seem to be of the type that you’re proposing to keep track of.
If disclosure of vulnerabilities is meant to be delayed for up to 90 days, but the community submits new vulnerabilities to RVD as public github issues, then how does the delayed disclosure work?
When you notify a vendor of a vulnerability, does an email address (e.g., security@) suffice, or do you require some other communication mechanism?
I have a (maybe stupid) question: is it common practice for these kinds of vulnerability statistics to include test-only code? While problems in test code should certainly be addressed, it looks to me like they aren’t really defects in the shipped software (since they are not part of the packaged software after the build).
Another note regarding the published numbers here: the public GitHub tickets seem to contain a lot of redundancy. The very same defect is ticketed up to one hundred times simply because the functionality/API is used in numerous packages, and a separate ticket is created for each. I would certainly consider that to be only a single vulnerability/flaw (even though it affects many packages).
Both of the above seem to “inflate” the reported numbers significantly, so I would suggest reconsidering how to account for those in statistics like this. I would also be interested in the actual (non-inflated) numbers to see roughly where we are at.
CVEs are managed by CNAs, organizations authorized to assign CVE IDs. The release of RVD is connected to the process of becoming a CNA for filing CVE IDs (see Requirements at CVE - CVE Numbering Authorities). We are taking (what we think are) the right steps to become a CNA and will soon start submitting CVEs (and providing CVE IDs for vulnerabilities reported within RVD). By no means does Alias Robotics intend to replace CVE; we aim to empower it.
That said, while we look up to the work that Mitre and many other partners started within CVE, over the past year or so Alias Robotics identified several limitations and strong barriers to changing things, and started building RVD (a robot-specific database of vulnerabilities) accordingly. Without getting into an extremely verbose account of the things we’ve tried (and failed at) and would like to see improved within CVE for robotics, below are some of the aspects we dislike and consider critical to move forward in securing robots and their components:
CVE robot-related results are scarce: while @gerkey is completely right that the current CVE List provides results when searching for robot (43 CVE entries), ROS (13 CVE entries) and even the more generic (and misleading) query Robot Operating System (892 CVE entries), a closer look at the results leads (at least us) to the realization that finding ROS-related flaws is a challenge. Contributing to categorizing this information better is something we’ve committed to, but we don’t believe it will happen in the short term given the complexity/limitations of how the CVE List works (and where robot-related vulnerabilities are, still, the minority). In RVD, we’re actively categorizing flaws in a robotics-specific manner and providing templates which we hope will facilitate this process. There’s still a lot of work to do on this and we have internal tickets for it. We plan to separate the existing report template into two (weakness and vulnerability, facilitating escalation from weakness to vulnerability). We also hope to automate the process of reviewing flaws by using parsers that automatically and periodically review all tickets and report/tag those that are malformed (a minimal sketch of such a job appears after this list).
CVE reports require more details (in robotics): let’s take CVE-2019-13585 (and related sub-reports within the entry) as an example. For a security researcher to reproduce this flaw and provide a mitigation, or simply patch it temporarily on their shop floor, more information would be required. The intrinsic system integration of the robotics field demands additional bits of information. Examples include a well-defined and appropriate severity (to prioritize flaws), a reproducible environment and instructions (where feasible) and likely (though this is a personal feeling) a channel for open discussion where other researchers might triage/contribute/discard the flaw itself (you will find that most robot-related flaws within the CVE List have barely been triaged). At RVD, each flaw is presented as a ticket/issue, which favours discussion.
facilitate reproducing flaws: working with robots is very time consuming. From my experience, anyone that has built a robot with ROS understands the pain of rebuilding workspaces across platforms. This is not a criticism; it’s likely an inherent characteristic of the complexity of the field and the tradeoff of the modularity of ROS. Mitigating a vulnerability or a weakness requires one to first reproduce the flaw, and this can be extremely time consuming: not so much providing the fix itself, but ensuring that your environment is appropriate. At RVD, each flaw should include a row named Module URL (e.g. see 459). This should correspond to a link to a Docker image that allows anyone to reproduce the flaw easily. We’re still working on it and hope to make it available to everyone very soon.
unfit severity scoring mechanism: CVE uses CVSS to report on the severity of vulnerabilities. As we discussed and published a while ago, CVSS has strong limitations when applied to robotics. Simply put, it fails to capture the interaction that robots may have with their environments and with humans. This is critical when considering the severity of a flaw and has been discussed repeatedly in the security community. We’ve been thinking about these aspects for a while; at RVD we make use of RVSS, a robot-specific scoring system that takes safety-related aspects into consideration.
more dynamic experience: while some might disagree, from our iterations we found the process with CVE somewhat slow. From our research we found that most robots and robot components explored nowadays (especially industrial robots!) are highly vulnerable. We believe that a more dynamic path would facilitate mitigating many of these vulnerabilities and accomplish our mission of erasing 0-days from robotics by actively collaborating with manufacturers and maintainers. At RVD, we hope to speed things up significantly by including a series of (GitHub) actions that will trigger every time a new flaw is reported (such as checking prior tickets and invalidating the new one if it’s a repeat, tagging it as malformed and requesting more information, etc.; see the sketch right after this list).
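To make the automation ideas in the first and last points more concrete, here is a minimal sketch of the kind of periodic triage job we have in mind, using the public GitHub REST API via Python’s requests. The required fields, labels and duplicate heuristic below are assumptions for illustration, not a final specification:

```python
#!/usr/bin/env python3
"""Sketch of a periodic RVD triage job: flag malformed tickets and
likely duplicates. Field names, labels and heuristics are illustrative
assumptions, not the actual RVD tooling."""
import requests

REPO = "aliasrobotics/RVD"
API = f"https://api.github.com/repos/{REPO}/issues"
# Fields we assume every well-formed RVD ticket should carry.
REQUIRED_FIELDS = ["Robot component", "Severity", "Module URL"]

def fetch_open_tickets():
    # Real code would paginate and authenticate; kept minimal here.
    return requests.get(API, params={"state": "open"}).json()

def is_malformed(issue) -> bool:
    # A ticket is malformed if any required field is missing from its body.
    body = issue.get("body") or ""
    return any(field not in body for field in REQUIRED_FIELDS)

def find_duplicates(issues):
    # Naive heuristic: identical titles point at the same underlying defect.
    seen, duplicates = {}, []
    for issue in issues:
        key = issue["title"].strip().lower()
        if key in seen:
            duplicates.append((issue["number"], seen[key]))
        else:
            seen[key] = issue["number"]
    return duplicates

if __name__ == "__main__":
    tickets = fetch_open_tickets()
    for ticket in tickets:
        if is_malformed(ticket):
            print(f"#{ticket['number']}: tag as malformed, request more info")
    for dup, original in find_duplicates(tickets):
        print(f"#{dup}: likely duplicate of #{original}, consider invalidating")
```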
Alias Robotics has committed resources to all the things listed above but, of course, support and contributions are more than welcome. We hope the security-wg and its members can find a way to support us in this endeavour. Contributing to the CVE List is something we all should do but, in our opinion, that falls short.
From our humble experience, we don’t see CVE changing several of these aspects within an acceptable timeframe. This, to us, justifies the launch of RVD. We hope to prove with RVD that our statements regarding robotics are broadly correct and that more resources should be allocated to it. Hopefully this will provide a much stronger argument to Mitre and other parties within CVE.
The ultimate reason why we decided to launch RVD is because we hope to demonstrate that some of these features are worth integrating into the CVE List.
The disclosure policy applies to Alias Robotics and our engineers. We hope to inspire the community with this policy. It’d be great if other groups and individuals were to adopt it as well but we can’t enforce it.
Anyone can literally jump into the wild and publish vulnerabilities or, even worse, sell them on dark markets (rather common, from what we’ve been observing lately). RVD provides a channel to do it responsibly. One approach could be to list the flaw as a weakness (according to our classification, vulnerabilities are a more elevated “degree”, but all vulnerabilities are weaknesses) and then reach out to the vendor/maintainer privately, providing more information about the flaw and offering support for its mitigation. Eventually, either after 90 days or after a fix has shipped, the weakness ticket could be enhanced and the complete exploit disclosed, turning the weakness into a vulnerability.
I don’t think this is stupid at all; you’re right. Without saying that we discard these issues (having good, flawless tests is relevant), most of the flaws affecting tests that we’ve processed affect underlying software layers (and not explicitly the test code itself). Also, testing test-only code gives a valuable first intuition. Of course, use-case-specific tests are more appropriate, but I doubt vendors or integrators would be willing to open source those (and if they do, we’ll do our best to pick them up!).
I haven’t processed all the RVD tickets but, from the intuition acquired building it, I’d say that currently most of the flaws at RVD do report such underlying defects. We have limited bandwidth and try to focus on what’s most critical.
True. This can be further seen with a closer look at the processed flaws for ROS 2: mitigating one flaw closed several tickets. We’re working on this. As mentioned in my previous comment above, we’ve already allocated some internal tasks to do this automatically, syntactically parsing tickets daily and consolidating duplicates. We don’t have a solution ready yet, unfortunately, but it’s coming.
All right, noted. We’re slowly building up though and have already filtered out quite a bit. Note that the current tickets represent only a very small subset of the ROS 2 packages (mostly ROS core and navigation2; we disabled the rest for now) with a very limited set of tests. Our security pipelines include several static and dynamic tests. Including the autogenerated reports from static testing tools will increase the current number of flaws by an order of magnitude at least (which, again, would be hard to interpret).
Any advice or disagreements (reasoned, please) would be very helpful, but the general intuition we’re trying to develop is:
there will be a significant number of weaknesses reported for ROS 2, several referring to quality bugs (as opposed to security ones)
flaws that are exploitable will be listed as vulnerabilities (note that we don’t yet have a single vulnerability for ROS 2, nor any closed ones, which to us means mitigated)
A relevant number for a quick intuition about the insecurity of ROS 2 would be the number of open (not mitigated) vulnerabilities. Does this make sense to you @dirk-thomas? Also, would it help to point to this conversation thread from the RVD README.md file for further intuition?
Happy to join the great (and very relevant) discussion points on this thread. Just sharing some thoughts:
I may add that, contrary to what @gerkey was stating, CVE still covers an extremely small number of robot-specific vulnerabilities. Very little commitment has been shown so far by both security researchers and robot manufacturers, at least when it comes to reporting CVEs, and there are vast amounts of work to be done. RVD is an attempt to systematize this workflow, which complements and feeds the vulnerability records maintained by the competent authorities and serves as supporting documentation.
I’d like to share some additional challenges we faced ourselves at Alias Robotics when digging into the actual vulnerability records. For example, when we type “robot” into the CVE browser, most of the references refer back to ROBOT (Return Of Bleichenbacher’s Oracle Threat), a 19-year-old vulnerability in RSA encryption which, in most cases, does not necessarily apply to a robotic system (it didn’t in any of the cases I inspected). Emphasizing that we report actual robot-specific vulnerabilities is, and will be, an additional challenge in segregating them from other, more IT-related flaws, as @vmayoral points out.
Similarly, I do believe that ROS 2 adoption can greatly benefit from the transparency of the security workflow proposed within RVD. Weaknesses can be separately inspected and mitigations adopted, all in a trackable and reproducible manner, so ROS 2 resources can be kept up to date and secure when used. Of course, there is still tons of work to be done and community contributions will be super welcome!
I think this is a very important and highly needed initiative. The potential consequences of an insecure robot are very concerning.
I support the idea of a robot-specific collection but I also agree that it needs to be well maintained.
However, I think it is even more important to raise the visibility of such a platform; otherwise its usefulness will be very limited. OEMs, system integrators and researchers alike should be aware of it and ideally take an active part in the process.
I absolutely welcome that Alias is taking the lead here, but elevating this initiative to broader support by other players would be important. All the issues discussed above (90-day deadlines, scoring, …) could be agreed-upon rules. What are your plans for this, and what would the options be?
In any case, we will also actively contribute to RVD in the future.
If RVD is intended to act as a more responsive and more detailed front-end to CVE, then the concern I describe below can be mostly ignored. In that case maybe we can eventually team up with MITRE to improve the CVE feature set based on what our community finds useful in RVD.
I’m concerned that we might be claiming snowflake status by saying that robots need their own security flaw scoring and reporting systems. Robots are complex, sure, but there are plenty of physical, actuated things in the world that are controlled by software that might contain vulnerabilities. Are we following the example of other domains that have their own scoring and reporting systems or are we striking out on our own here? What do organizations working in automotive, building infrastructure, factory equipment, medical devices, or other “cyber-physical” fields do?
Regarding the poor search results available for robot/ROS in CVE today: can that be attributed to the fact that approximately nobody is yet reporting flaws in these systems anywhere? Presumably once we get the community to consistently report their findings, the CVE database would come to contain much more useful information.
To be clear: I’m very enthusiastic about finding, reporting, and mitigating security flaws! But after 20 years of personally arguing that robots are special and so we need our own X (for many values of X), then living with the resulting maintenance burdens, I’m also eager to reuse existing systems and approaches wherever possible.
I still have not seen a good reason why we need to strike out on our own. I would rather leverage the work of the NVD and MITRE so people can reuse existing tooling, process and procedures.
I would say the NVD is lacking in robotics-specific CVEs because people have not submitted issues. We have opened 3 CVEs with MITRE this year for ROS packages:
CVE-2019-13445 - potential integer overflow
CVE-2019-13566 - potential string overflow
CVE-2019-13465 - potential iterator-caused buffer overflow
ROS is just packages on top of an operating system, it would be like Apache standing up a new vuln database just for Apache projects instead of using MITRE.
Slightly off-topic, but this is something my colleagues @ChrisTimperley and @wasowski and I also started wondering about. It will probably also come up in our ROSCon presentation (188 ROS bugs later: Where do we go from here?), but I just wanted to add that at this point we’re not sure whether CVE is sufficient for robot-related bugs/vulnerabilities or whether issues in robot software are actually so different that they should get their own classification.
I guess I wonder why ROS is special and a CVE would not be sufficient to handle a security issue. I mean, from a design perspective it sends messages over the network and runs on Linux (for the most part). How is this different from an issue with MQTT?
This seems like a bit of a vanity project: “ROS is so special we need our own vulnerability db”.
I’d rather we leverage the work and efforts of MITRE.
I want to echo this point. There’s actually a PR aspect of this to consider: mature products have CVEs. It’s part of life these days. Security is only recently becoming more of a concern in ROS. As that grows, so will the CVEs, and the perceived maturity of ROS 2. That’s one of the reasons we’ve been submitting them!
I think this is a good way to put it. We certainly tried our best to avoid reinventing the wheel. Our intention is to get aligned as soon as possible with MITRE and the CVE List. Becoming a CNA will help voice our opinions (to some extent) and we hope to remain constructive about what needs to change to facilitate securing robots and robot components. RVD is a fast track we’re taking.
Our experience, coming from a robotics background and having tried these tools for a period of time, is that they’re not sufficient. I’d be interested in other roboticists from the community sharing their views as well.
When it comes to scoring mechanisms for the severity of vulnerabilities, RVSS (source code) was built by researching what other robotics-related areas were demanding that wasn’t being met. [1] and [2] are among the works cited while building it. The white paper above discusses it in more detail and proposes a scoring mechanism that takes into consideration aspects that directly apply to self-driving cars and other similar autonomous devices.
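For intuition only, here is a deliberately simplified toy example of the underlying idea: two flaws with the same IT-style base score should diverge once potential physical harm is factored in. The formula and weights below are invented for this illustration and are not the actual RVSS equations; refer to the paper and source code for those:

```python
#!/usr/bin/env python3
"""Toy illustration (NOT the real RVSS formula): why a robot-specific
severity score matters. Two flaws with identical CVSS-like base scores
diverge once the potential for physical harm is taken into account."""

def toy_robot_severity(base_score: float, safety_impact: str) -> float:
    """base_score: CVSS-like 0-10 value.
    safety_impact: potential physical consequence of exploitation."""
    # Invented multipliers, purely for illustration.
    multiplier = {"none": 1.0, "environmental": 1.15, "human": 1.3}
    return min(10.0, round(base_score * multiplier[safety_impact], 1))

# Same IT severity, very different real-world severity:
print(toy_robot_severity(7.0, "none"))   # e.g. leaks internal logs -> 7.0
print(toy_robot_severity(7.0, "human"))  # e.g. lets an arm move unsafely -> 9.1
```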
Any help interfacing with MITRE will certainly be very helpful!
I applaud this action. This is great and we certainly encourage everyone involved in security to follow a similar approach and commit resources to filing reports in the CVE List. As pointed out above, we certainly will. The big question we asked ourselves when designing RVD was: “As a roboticist/security researcher, what do I need and find most useful to mitigate flaw A in a robot (ROS specifically, here)?” The core is mitigation.
I did a quick search on the first ID (CVE - CVE-2019-13445) you listed above but it’s undisclosed. My guess is that those reports show the same patterns criticized above about reports in the CVE List (or maybe not, and I’d be pleasantly surprised!).
This is not what’s being proposed here, at least not within RVD; it’s implicit in the name “Robot Vulnerability Database”. It’s not ROS-specific, it’s for robots. You may still claim that robots are a sub-class of hardware which doesn’t deserve its own treatment. Well, I would then object and point out that, according to several sources, CVE currently has serious issues capturing vulnerabilities that affect hardware.
One only needs to parse the CVE List and compare the density (of hardware vs. software reports this year) and the “value” of the content of these reports to draw some conclusions.
One point I’d like to make is that RVD’s mechanisms attempt to report vulnerabilities in a way that helps/favours mitigation. This is not the feeling we get with the CVE List; when reading it, we felt that many reports were terrible and didn’t help at all with reproducing the flaw. We advocate bringing these mechanisms to the CVE List.
Nobody is trying to replace CVE @joe, see reasoning above.
Consider ICS-CERT, particularly their advisories landing page. Each advisory may contain multiple vulnerabilities, but each vuln links to a CVE. It’s also important to note that each vuln has a standard CVSS score to support prioritizing. Both CVE and CVSS are mature, communicate well, and are deeply integrated into vuln management tools.
I think I have seen several entries that only identify flaws in test code, e.g. using the public API incorrectly/insufficiently. So in these cases the defect is not in the used code but in the test itself.
Anyway, my suggestion would be to at least move/account for those in a separate category, to draw a more precise picture of how many problems actually affect the code used by applications.