EU Cybersecurity regulation might make developing open-source software more difficult

I’ve just found this article explaining the possible impact of the EU Cyber Resilience Act on open-source software: Open-source software vs. the proposed Cyber Resilience Act.

The prepared legislation can be found here, with the feedback period open until 21 January 2023.

If the authors of the article have drawn the correct conclusions, the act would require all developers of critical products to hire an external auditing company to audit their development process and vulnerability management, in order to obtain the CE mark required for the software. This is an excerpt from the list of critical products:

It seems the lawmakers tried to do some good for open-source developers; however, it is not clear whether that’s enough:

In order not to hamper innovation or research, free and open-source software developed or supplied outside the course of a commercial activity should not be covered by this Regulation. […] In the context of software, a commercial activity might be characterized not only by charging a price for a product, but also by charging a price for technical support services, by providing a software platform through which the manufacturer monetises other services, or by the use of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software.

I agree with the authors of the above-mentioned article that this really sounds as if, as soon as you start selling support for a FOSS project or open a sponsoring account, you have to start following the regulation and obtaining the CE mark. And I’m not sure how this is supposed to work if a commercial company wants to use an unpaid FOSS dependency in their software - whether they would also need to get the CE mark for this dependency, or not.

Now, I think it might not be that hard for Open Robotics to get the CE mark for the ROS core repos they maintain. However, the same cannot be said about the hundreds of individual developers or small research teams around.

Has anyone already examined this regulation in detail? Is there something I’ve missed? Or should I start contacting my EU parliament rep?


Having briefly skimmed this (but not being an expert), I have a similar question: how many degrees of separation does this legislation permit? My reading is that it’s about whatever ends up in what you attempt to distribute in the EU, but surely an audit isn’t expected to cover all the way down to the silicon.

Additionally, let’s say I’m a company making use of some ROS package, and my product has passed its audit. That ROS package then adds a new dependency: am I not permitted to use this new version of the ROS package in my product? This new version or dependency hasn’t passed the audit, but it may contain safety/security improvements. How many layers down do we consider relevant to the audit?

Regarding one of your questions:

I’m not sure how this is supposed to work if some commercial company would like to use an unpaid FOSS dependency in their software - whether they would need to get the CE mark also for this dependency, or not

I think the company would hand a bundle of code over to the third-party auditor, and it would be up to the audit to investigate that dependency… Having the CE mark on the dependency would speed up the audit, but I believe it is currently up to the company to deal with that. So ideally developers/researchers wouldn’t have to deal with this themselves, but there would be a lot of duplicated effort across these auditors.

It feels to me that this legislation is well-intentioned but doesn’t fully address the logistics of software development… but I’m happy to hear from folks with more knowledge/experience on this one.

Thanks for bringing this up. We bumped into this earlier this year with our cybersecurity group at Alias Robotics, while working with a (sensitive) customer in healthcare whose case required a deeper study of the EU security legal landscape for compliance with the European Medical Device Regulation (which becomes effective in 2023).

I personally find the following image quite helpful for understanding the Cyber Resilience Act:

My personal take on this (but looking to learn more from others):

  • It’s still not clear to me how to interpret the criteria for the definition of “critical products”. Even after speaking with legal advisers about this, my understanding is that the examples in the Annex you quote above can be reinterpreted based on the criticality of the robotics use case.
  • I think this is a very positive move from Europe, and it is likely to be adopted by other countries. It’ll put positive pressure on manufacturers and technology solution providers, many of which currently aren’t managing security at all (not patching, or doing so with incredibly big/long windows of opportunity for black-hats).
  • I think it’s unrealistic for things to change or be enforced rapidly. Even if legislators could, my understanding is that nation states will move at their own speed. There’s still very (very!) little industry awareness about the insecurity in robotics (and frankly, also in Operational Technology (OT), in general). It’ll take years for the supply chain to catch up.
  • I don’t think that contractors and support services around open source will be required by law to comply with CE marking. This is unrealistic IMHO in the short/mid term. What is likely, though, is that companies contracting services, if selling in Europe, may require their contractors to comply with the EU Cyber Resilience Act to simplify the process of integrating deliverables into products. But that’s a different story: it’s a new requirement, and you could charge extra for it while partnering up with security folks.
  • From an assessment point of view (critical “Class II”, the most demanding), threat modeling and cybersecurity vulnerability assessments are generally made for a particular end scenario; generic assessments are of little use. In my view this means that getting the CE marking for ROS 2 (or similar forks) isn’t that useful if the manufacturer then needs to re-assess it for its particular use case. It might be interesting to have a handful of “community” assessments conducted for the most popular use cases of ROS 2, though.

I have mixed feelings about this. On the one hand, I believe it is ground-breaking legislation with good reason. On the other hand, most organisations, especially in the automation industry, are so far from fulfilling any of this that it will kill a lot of businesses if it is introduced too quickly.

With regard to ROS, I believe it will seriously hit the industrial use of ROS. Right now, most questions and fears we get about ROS already revolve around safety standards and their impact. For safety there are some nice workarounds in most cases, but for security, at least as it is planned, there are not. This regulation means that people using ROS in their products need to review every component they use with regard to security. In that case, they’ll likely use a commercial package with “security” guarantees and liability instead of an open-source package, even if the commercial package might not be as secure. What this will lead to is that only big tech will be able to afford to use open-source packages or ROS in their products.

We’ll probably need a legal entity that distributes ROS 2 with security guarantees…


I sincerely believe we might all agree that open-source components and software will never be commercial grade and therefore will never be free of security issues.

Hey all,

Jumping in here a bit late, but answering your last comment @DGB: there is no product that is free of security defects. It’s often a matter of how you respond when a new cybersecurity risk arises. In our experience, it comes down to the cybersecurity maturity of the vendor involved.

The legislation above (CRA) puts emphasis on the cybersecurity process to ensure that (a) products are tested appropriately and/or in compliance with cybersecurity norms and best practices, (b) there is an obligation and duty of care for the cybersecurity of such products over their lifetime, and (c) the rules for all of this are harmonised.

One last bit I’m also putting on the table for your consideration is the new proposal for a directive on liability for defective products, which also strengthens requirements for appropriate cybersecurity processes, including testing, norm compliance and cybersecurity support. Perhaps most importantly, this norm aims explicitly at company-level responsibilities and liabilities for any damage caused by, for instance, safety harm resulting from a cybersecurity breach.

Modernising liability rules for products in the digital age: allowing compensation for damage when products like robots, drones or smart-home systems are made unsafe by software updates, AI or digital services that are needed to operate the product, as well as when manufacturers fail to address cybersecurity vulnerabilities.

As @vmayoral mentions above, at Alias Robotics we’ve got a track record of working with clients of various types on compliance with norms of various kinds, from industrial to medical. Happy to assist there!


This whitepaper, published a few years ago and revised in 202, still holds true about what it takes to secure hardware devices, and has direct relevance to robotics and this thread. Beyond the vendor, it’s about the complete end-to-end design of the system, from silicon to service.

The Seven Properties of Highly Secured Devices (2nd Edition) - Microsoft Research





I’m not a cybersecurity expert, but it’s pretty clear that certifying all FOSS software used in order to obtain certification is impractical and likely impossible, and that an audit-based approach to cybersecurity is likely to be ineffective at identifying anything but the most glaring issues.

I like the Microsoft paper you linked, because in most cases, if uncertified software is compartmentalised away from any external interface, and things like a secure hardware root of trust are in place and interfaces are cryptographically locked, then I think the risk of using such software is low.

It would only require that the root of trust, the containerisation and any interface APIs be certified, instead of all the software running on the device. I’m happy to stand corrected.
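To make the “cryptographically locked interfaces” idea concrete, here is a minimal, hypothetical sketch of a certified gateway layer that only passes authenticated commands through to uncertified internal components. All names are illustrative, and real devices would keep the key in a secure element rather than in code; this is not a statement of any actual certification requirement.

```python
import hmac
import hashlib

# Illustrative only: in a real device this key would be provisioned
# into a secure element by the hardware root of trust, never hard-coded.
DEVICE_KEY = b"example-key-from-secure-element"

def sign(payload: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag for a message crossing the trust boundary."""
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()

def accept(payload: bytes, tag: bytes) -> bool:
    """Certified interface layer: only messages with a valid tag are
    forwarded to the uncertified components behind the boundary.
    compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(payload), tag)

msg = b"set_velocity 0.5"
tag = sign(msg)
accept(msg, tag)          # authentic command is accepted
accept(b"tampered", tag)  # modified command is rejected
```

The point of the sketch is that only the small verification layer (plus the key storage) would need scrutiny, while arbitrary software behind it never sees unauthenticated input.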

It would be good if the audit process specified best practices instead of just requiring all software to be CE certified, because I don’t think that is going to be practical or effective.



Thought this deserved an update.

Insightful article:

And follow-up:

Going to be interesting times.

Personally, I welcome the fact that software is finally getting some regulation like hardware (and other ‘physical things’) has had for quite some time, but it’s highly unclear whether this is going in the right direction.


Thanks. Went through all. Changed my mindset about my current ROS-based projects.

Just as I stumbled upon this… here are some considerations by the Eclipse Foundation regarding the Cyber Resilience Act (CRA) and its impact on the open-source community. I am not an expert on the topic, but maybe this is of interest to some people in the ROS community as well…

Blog posts by Mike Milinkovich:

… and an Open Letter to the European Union co-signed by the Executive Directors, Board Chairs, and Presidents on behalf of their respective organisations (Associação de Empresas de Software Open Source Portuguesas (ESOP), CNLL, the French Open Source Business Association, The Document Foundation (TDF), Eclipse Foundation, European Open Source Software Business Associations (APELL), COSS - Finnish Centre for Open Systems and Solutions, Linux Foundation Europe, Open Forum Europe (OFE), Open Source Business Alliance (OSBA), Open Source Initiative (OSI), Open Systems and Solutions (COSS), OW2, Software Heritage Foundation):
