
Simulation software requirements

Not only selling the game: ANY revenue produced with a product that includes Unreal Engine triggers royalties. I’m posting the relevant part of the FAQ here in full, to remove any misunderstanding.

What do I need to do when releasing a product?

You must notify Epic when you begin collecting revenue or ship your product; see here for more details.

If I release a commercial product, what royalties are due to Epic, and when?

Generally, you are obligated to pay Epic 5% of all gross revenue after the first $3,000 per game or application per calendar quarter, regardless of which company collects the revenue. For example, if your product earns $10 from sales on the App Store, the royalty due is $0.50 (5% of $10), even though you would receive roughly $7 from Apple after they deduct their distribution fee of roughly $3 (30% of $10).
Royalty payments are due 45 days after the close of each calendar quarter. Along with the payment, you must send a royalty report on a per-product basis. For more information, see here.
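To make the arithmetic concrete, here is a minimal sketch (my own illustration, not an official Epic calculator) of how the quarterly royalty works out under these terms; the $10,000 figure is a hypothetical quarter:

```python
def quarterly_royalty(gross_revenue_usd, exemption_usd=3000.0, rate=0.05):
    """5% of gross revenue above the first $3,000, per product, per calendar quarter."""
    return rate * max(0.0, gross_revenue_usd - exemption_usd)

# A hypothetical quarter with $10,000 gross: only the $7,000 above the
# exemption is royalty-bearing.
print(quarterly_royalty(10_000))  # 350.0

# Note the base is GROSS revenue, before the store's cut: in the $10
# App Store example the royalty base is the full $10 (5% = $0.50),
# not the ~$7 you receive after Apple's ~30% distribution fee.
print(0.05 * 10)  # 0.5
```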

What about downloadable content, in-app purchases, microtransactions, virtual currency redemption, and SUBSCRIPTION FEES, as well as in-app advertising and affiliate program revenue?

Revenue from these sources is included in the gross revenue calculation above.

Why does Epic think it’s fair to ask for a percentage of a developer’s product revenue?

Our aim is to provide powerful tools, a scalable and productive workflow, advanced features, and millions of lines of C++ source code that enable developers to achieve more than they otherwise could, so that this structure works to everyone’s benefit.

In this business model, Epic succeeds only when developers succeed using UE4. Many of the industry’s leading developers and publishers have signed up to license the Unreal Engine with royalty-based terms over the years, and now this level of access is open to everyone. And, don’t forget, we continue to offer custom terms.

Do I need to report royalties forever?

No, you only need to report royalties when you are making more than $3,000 per quarter from your product. If your game is no longer being sold, or no longer makes that amount of money, no royalty reports are due.

What if my project requires custom licensing terms?

If you require terms that reduce or eliminate the 5% royalty in exchange for an upfront fee, or if you need custom legal terms or dedicated Epic support to help your team reduce risk or achieve specific goals, we’re here to help. See the custom licensing page for details.

What if my product is released through a publisher or distributor?

You’re free to release Unreal Engine products through a publisher or distributor, and the EULA gives you the right to sublicense the necessary parts of the Unreal Engine to them so they can release your game.
When negotiating terms with publishers, please keep in mind that the royalty remains 5% of the product’s gross revenue after the first $3,000 per game per calendar quarter from users. In this scenario, feel free to refer your publisher to Epic during discussions, as it may be advantageous to all if the publisher obtains a custom-negotiated, multi-product Unreal Engine license covering your product.

What if my project wins cash awards?

You do not have to pay royalties on award winnings.

What if my product obtains crowdfunding via Kickstarter or another source?

Royalties are due on revenue from Kickstarter or other crowdfunding sources when the revenue is actually attributable to your product. For example, royalties are due if the user is required to purchase a particular funding package to obtain access (now or later) to your product, or if that package gives the buyer benefits within the product, such as in-game items or virtual currency.
Here’s an example of what we mean by “attributable”: Assume you provide two tiers of offers, a signed poster for $20, and a signed poster plus game access for $50. No royalties are due on ancillary products like posters, so no royalty is due on the $20 tier. On the $50 tier, the user is paying for the poster with a $20 value, and that implies that the remaining $30 of value is attributable to the product. So, for each $50 tier sale, you’d pay a royalty of $1.50 (5% of $30).
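The attribution example above can be sketched as follows (my own illustration; the tier prices and the 5% rate come from the example, and for simplicity this ignores the $3,000 quarterly exemption):

```python
def crowdfunding_royalty(tier_price_usd, ancillary_value_usd, rate=0.05):
    # Only the portion of the pledge attributable to the product itself is
    # royalty-bearing; ancillary goods like the signed poster are excluded.
    attributable = max(0.0, tier_price_usd - ancillary_value_usd)
    return rate * attributable

print(crowdfunding_royalty(20, 20))  # 0.0 -- poster-only tier, no royalty
print(crowdfunding_royalty(50, 20))  # 1.5 -- $30 attributable to the game
```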

Based on that alone, I suggest removing the CARLA simulator from the candidates list for Autoware.Auto, in order to protect end users and OURSELVES from being trapped into unexpected royalties by Epic after sharing a demo build with a customer.

I agree there is no competition, and we should include as many simulators as possible, as long as they do not pose a danger to Autoware.Auto end users.


@esteve @German_Ros @zelenkovsky thanks for your replies, but let’s stop discussing licenses here. Instead, I suggest that the AWF hire a lawyer who can read the Unreal, Unity, and LG licenses and come back with legal advice that also answers all of the questions above. Do you agree?

Instead, let’s move back to the technical discussion and get to the bottom of questions like this one: Simulation software requirements.

In addition, @German_Ros, with regard to your point:

Where is the integration code and some tutorials? I could find nothing in either place.


1. By “heavy” I mean the simulator’s computational load compared to other simulators such as the LGSVL Simulator. CARLA felt too heavy when we used it with the GUI on Linux.

2. I tried to build it on Linux and Windows and it was very hard… So I felt it would be very hard to modify the source code and contribute to your sim.

3. Sorry for my vague answer, but when I drive a vehicle in your sim, the vehicle turns much more rapidly than a real car would. I think the friction calculation, or something like it, is off.

4. That sounds good! It would improve CARLA’s usability.


I forgot to mention a big problem in CARLA:
CARLA’s lidar collision detection for the ego vehicle uses a box collider.
I don’t think that is good enough for autonomous driving software development.
Lidar data is very, very important for Autoware.

Please provide details of your testing method and the results you got for LGSVL and CARLA. Otherwise, neither simulator’s developers will be able to validate your claims, find the problems, and fix them.

Please be specific. What was hard? Where did you encounter difficulties? If you provide this information, the CARLA developers can improve their documentation and perhaps their build process. If you don’t, nothing can be improved.

Can you provide a video of this, along with a video of another simulator to show how it compares?

As a side note, StreetDrone just open-sourced their URDF model for use in Gazebo simulation.

This is an example of the collaboration we’d like to foster between simulator teams, hardware providers and developers.

Here’s the pull request that integrates their model with Autoware:

Kudos to @Efimia_Panagiotaki and @Antonis_Skardasis!



@zelenkovsky, where is your information coming from? Could you be misinterpreting the UE4 licensing? There are already people selling cloud services around CARLA, and they are not paying any fees to Epic.

Have you checked with a group of industrial lawyers? We have had these conversations many times for many projects and partners. I think it is not as simple as reading through the license. You need the right legal background to interpret the license. Has your legal team looked into that recently?

Regarding free software being free as in freedom, we all agree 🙂

In any case, the value proposition has always been “nobody tries to sell CARLA”. A given party can sell a service or product built with or on top of CARLA without selling CARLA itself. In fact, many companies are doing that at the moment. This model seems to work.

So overall, I understand your concern, since we had it too before starting the project. However, after many discussions with different legal teams, we are confident this model works.

After several conversations with Epic Games, their near-term goal seems to be to tap into this new sector of autonomous driving and robotics. So they are planning to find formulas to monetize UE4 there, probably with new licenses or business models. But that is yet to be discovered.

Best regards,

For what it is worth, I decided not to consider using Unreal or Unity in rviz (when we were porting to ROS 2) partially due to the same license uncertainty. If it’s not licensed under something recognized by the OSI, then it’s not sufficient for us. Now, if it’s optional, that would be OK by me, and the ignition-rendering API may allow us to do something like that in the future (where something like Ogre3D or directly using OpenGL is the default, but Unreal is an alternative).

That sounds like they’d be moving towards a solution that is less acceptable for a top-to-bottom OSS stack, for those who care about that. This was one of my concerns when initially evaluating it: they seemed more likely to change their license to move further away from an OSI-approved license rather than towards one that is approved. You’d then be locked, to some degree, into a proprietary solution.


I understand these concerns. To have a completely open source simulator that does not depend on any license deviating from MIT / BSD / etc. would be ideal. A dream come true.

From the CARLA project perspective we care about bringing state-of-the-art simulation to the community (both academia and industry). Here state-of-the-art means to enable a set of critical features identified by the driving community:

  • Realistic sensor simulation of: cameras, LIDARs, RADAR, IMUs and other yet-to-be-invented devices
  • Corner case discovery
  • Traffic simulation / Traffic scenarios based on AI
  • Automatic ingestion of maps and scenarios

The use cases requiring these features are typically i) full-stack verification and ii) assisting in the R&D cycle of the perception stack and the planner.

In order to enable some of these features, a state-of-the-art game engine such as Unity or UE4 brings important value. Cameras are progressively becoming more critical in AV stacks, and decent/good real-time multi-camera simulation with a PBR workflow is required.

The only totally open-source alternative to these engines coming to my mind right now (for the purpose of real-time PBR rendering) would be the Eevee engine used by Blender:

However, Eevee is not mature enough yet to be integrated in CARLA due to performance constraints in multiple-camera setups.

Long story short, the community needs state-of-the-art tools to make progress. Right now the largest open-source efforts to provide these simulation tools are CARLA and LGSVL and AirSim. All of them are using a game engine. We need to accept this or else go with an alternative that does not provide all the needed functionalities.

In the case of CARLA --the platform that I know best-- the only limitation we have at the moment is if people try to sell CARLA itself. In that case they would have to contact Epic Games. For practical purposes this doesn’t seem to be a limitation: there is no need to “sell” CARLA, you just use it and build on top of it. Toyota, GM, Valeo and others have not found this aspect to be limiting so far.

In the future, there will be new open source real-time PBR-based rendering engines and we will integrate CARLA to use those. But right now we are trying to be reasonable and provide solutions to existing Autonomous Driving problems.


Apologies if this is a slight deviation from the main discussion, but I thought it would be worthwhile to highlight Gazebo and Ignition.

Both Gazebo and Ignition are general purpose robotic simulation tools and are completely open source under the Apache 2.0 license. Gazebo has been under development since 2002, and has been successfully used in a wide variety of applications in both the commercial and government domains. Ignition represents the latest development efforts by Open Robotics in the simulation space. A good analogy is: ROS2 is to ROS1 as Ignition is to Gazebo.

While not targeted directly for car simulation, I believe Ignition covers many of the needs listed in this thread. Here are some useful links for your reading enjoyment:

  1. Features
  2. Docs
  3. Roadmap

The most immediate use-case for Ignition is the SubT Challenge, which requires real-time (ideally faster than real-time) simulation of worlds that span multiple kilometers with dozens of UAVs and UGVs. To meet these needs we have implemented, and are continuing to make improvements to, distributed simulation. This in particular might be attractive to the autonomous car domain.

We are also using Ogre 2.1 with PBR rendering, QtQuick with a plug-in interface for graphical tools, and DART for physics.

I’d be happy to answer any questions in this thread or elsewhere.


The Autoware Maps WG met to discuss requirements for simulators from the perspective of maps, see Autoware Maps WG: Meeting Minutes 1 Aug 2019.

I hope this information will feed into this discussion about the simulation software requirements.

Hi, which desktop machine can you suggest for running both of the following on the same machine:

  • Autoware.AI
  • LG SVL Simulator

From here: FAQ - LGSVL Simulator

I only found information about the minimum requirements for the simulator itself. I am having issues running Autoware.AI or Apollo 6.0 with the SVL Simulator, mainly because of the perception module failing, with subsequent failures of other modules (planning, etc.).

I am going to rent a machine here:


I would like to know from the experts the minimum system requirements to run a simulation with a simple evasive maneuver on the BorregasAve map. I am only going to use a few NPC and pedestrian agents. Can someone please suggest a suitable system that would work?

Thanks in advance

Running the simulator on the same machine as the autonomy stack is not recommended, especially if you want to see realistic performance of the autonomy stack. However, it is possible. I currently use an AMD Ryzen 5 5600X with 64GB of RAM and an Nvidia 2060 SUPER graphics card. I see very high CPU utilization from the simulator, especially with multiple lidars, but performance from the Autoware stack is adequate. Additionally, I would recommend running these on a Linux operating system instead of Windows as Linux generally consumes less RAM and CPU resources than Windows.

Since I have not run the simulator in a hosted machine, I would recommend that @haditab or one of his colleagues weigh in on this discussion as well.

I would also like to mention Webots (even though this is a very old thread), just because Webots solves most of the problems mentioned above. It is an open-source general robot simulator, but it has support for automobile simulation as well.

A few years ago, sponsored by Renault Technocentre and a few other projects, Webots has received improved automobile support.

Here are a few points related to the problems raised in the thread:

  • The installation process is very straightforward (native support for Linux, Windows, and macOS + you can install it through apt, snap, or brew).
  • It is under Apache license and there are no proprietary parts.
  • It is deterministic.
  • You can control it step-by-step and it easily runs faster than real-time.
  • You can run in CI (we have been running Webots in GitHub Actions and we used to run it in Travis and AppVeyor).
  • Sensor noise can be specified. Besides the basic white noise supported by all sensors, there are, for example, camera motion blur and look-up tables for distance sensors.
  • It has a ROS interface (ROS 1 and ROS 2).
  • We have recently tested 10 VGA cameras at 30Hz and 3 HDL Lidars running faster than real-time (on a laptop).
  • It doesn’t use a lot of RAM.
  • Maps can be imported from OpenStreetMap.
  • Traffic can be simulated through SUMO.
  • It is very easy to script pedestrian behavior (or any world changes) using the Webots API (C/C++/Python/MATLAB/Java) and the integrated IDE.
  • Rendering is very realistic due to the extensive use of shaders and PBR rendering.
  • It includes many calibrated vehicles.

Thanks for the suggestion! Would Cyberbotics be interested in creating a Vehicle Interface for the simulator in Autoware.Auto?

Yes, certainly! I will send you a private message for follow-up on this discussion.

@Marcus_Vinicius_Leal sorry for the late reply. I don’t have good information on the minimum requirements for a hosted setup but if you are still interested I can put you in touch with one of my colleagues.

Overall though, if you are not necessarily interested in testing perception, I would suggest using the modular testing feature, which essentially takes perception out of the loop by providing ground-truth information directly to Apollo so that you can disable the Perception module. It’s great for testing planning, control, and other modules. Here’s an example: SVL Simulator Feature: Modular Testing - YouTube