
Real2Sim: Realistic simulation environments

At the ROS A Europe Meeting on 2020-11-26 there were many discussions about how to generate realistic data from simulated environments.
Let's start our discussion with a simple challenge: which is the real LiDAR perception, and which is the simulated LiDAR perception?

The left one is real, the right one is simulated. For more information, please check:

Real2Sim is one of the hot topics in the autonomous driving industry. Utilizing simulation environments in the development process will certainly help the autonomous Lawn Tractor program running within this community: simulation costs less, shortens the test and development cycles, and gives your teams more flexibility.

Methods used in Real2Sim:

  1. Classical methods:
    You choose a set of features and characteristics whose presence makes the simulation realistic. Each sensor has its own characteristics; for LiDARs, for example: how closely does your representation of intensity and echo power match the actual sensor's behavior, are you representing the range limits, and so on. The feature list also differs from one sensor type to another: the features for ultrasonic sensors differ from those for cameras, and those for cameras differ from those for LiDARs, etc.


  Pros:
  • Need-oriented.
  • Can be as simple as a lookup table.


  Cons:
  • Complexity increases exponentially with the number of represented features.
  • Execution time increases exponentially with the number of represented features.
  • Requires an expert-level designer.
  • If a parameter of the sensor changes, you need to redo the whole process from the beginning.
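To make the lookup-table idea concrete, here is a minimal sketch. Every material name, range bucket, and intensity value below is hypothetical, invented for illustration rather than taken from any sensor datasheet:

```python
# Minimal sketch of a classical lookup-table model for simulated LiDAR
# return intensity. All materials, buckets, and values are hypothetical.

# (material, max_range_m) -> expected return intensity, 0..255
INTENSITY_TABLE = {
    ("asphalt", 10): 40,  ("asphalt", 30): 25,  ("asphalt", 60): 10,
    ("grass",   10): 70,  ("grass",   30): 45,  ("grass",   60): 20,
    ("metal",   10): 230, ("metal",   30): 180, ("metal",   60): 110,
}
RANGE_BUCKETS = (10, 30, 60)
MAX_RANGE_M = 60  # hypothetical range limit of the modeled sensor

def simulated_intensity(material: str, range_m: float) -> int:
    """Return the table's intensity for the first bucket covering range_m.

    Beyond the modeled range limit the sensor returns no echo (0)."""
    if range_m > MAX_RANGE_M:
        return 0
    for bucket in RANGE_BUCKETS:
        if range_m <= bucket:
            return INTENSITY_TABLE[(material, bucket)]
    return 0

print(simulated_intensity("metal", 25.0))  # falls in the 30 m bucket -> 180
print(simulated_intensity("grass", 70.0))  # beyond the range limit -> 0
```

Note how the cons show up immediately: adding one more feature (say, incidence angle) multiplies the table size, and a sensor change invalidates every entry.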
  2. GANs
    GANs are one of the methods used in Real2Sim, since a GAN is simply an unsupervised transformation from domain A to domain B. They are used quite a lot in the gaming industry.


  Pros:
  • Fast prototyping.
  • A huge community is investing a lot of time and money in them.
  • Fast inference.


  Cons:
  • You don’t actually know where the transformation from domain A will land you in domain B.
  • For new inputs, you just pray for good results.
  • The GAN output will contain unpredictable patterns that might look fine to a human eye but will produce unpredictable output from a classification or detection DNN/algorithm.
    (Note: the image above is from adversarial attacks, but it illustrates the point I am making: simple patterns can change the output of detection and classification DNNs.)
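For intuition, here is a toy sketch of the adversarial setup in pure Python: a 1-D generator learns to push noise samples toward a target "real" distribution, with a hand-derived gradient update for each network. Every constant here (means, learning rate, step count) is an arbitrary choice for illustration, not a recipe:

```python
import math
import random

random.seed(0)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, x))))

# "Real" domain B: samples around 4.0. Generator maps noise z -> w*z + b.
REAL_MEAN, REAL_STD = 4.0, 0.5
w, b = 1.0, 0.0          # generator parameters
u, c = 0.0, 0.0          # discriminator D(x) = sigmoid(u*x + c)
lr = 0.05

for _ in range(4000):
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    z = random.gauss(0.0, 1.0)
    x_fake = w * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_r, d_f = sigmoid(u * x_real + c), sigmoid(u * x_fake + c)
    u += lr * ((1 - d_r) * x_real - d_f * x_fake)
    c += lr * ((1 - d_r) - d_f)

    # Generator: gradient ascent on log D(fake), pulling fakes toward
    # whatever the discriminator currently labels as "real"-looking
    d_f = sigmoid(u * x_fake + c)
    w += lr * (1 - d_f) * u * z
    b += lr * (1 - d_f) * u

gen_mean = sum(w * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(round(gen_mean, 2))  # should have drifted from 0 toward ~4
```

Even in this toy, the con above is visible: nothing in the objective says where an individual output w*z + b lands for an unusual z; the generator is only rewarded for fooling the discriminator on average.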
  3. DNN + statistics
    Use a supervised DNN to map features from domain A to domain B, aided by a statistical decision-making algorithm. The stack of the DNN and the statistical decision algorithm simply implements the lookup table described under the classical methods.


  Pros:
  • Super fast.
  • Fast prototyping.
  • Accuracy is measurable.
  • If the sensor changes, just retrain your DNN.


  Cons:
  • I actually didn’t face any!
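As a minimal sketch of this pattern, here a least-squares linear fit stands in for the DNN, and a residual-variance check plays the role of the statistical decision step. The paired samples and the acceptance threshold are made-up illustration values:

```python
# Sketch of "DNN + statistics": a least-squares fit stands in for the
# mapping network (domain A -> domain B); a residual-variance gate is
# the statistical decision step. All data and thresholds are made up.

# Paired training data: (simulated intensity, measured real intensity)
pairs = [(10, 27), (20, 46), (30, 68), (40, 85), (50, 108), (60, 126)]

n = len(pairs)
sx = sum(a for a, _ in pairs)
sy = sum(b for _, b in pairs)
sxx = sum(a * a for a, _ in pairs)
sxy = sum(a * b for a, b in pairs)

# Least-squares fit y = slope*x + offset (the learned mapping)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
offset = (sy - slope * sx) / n

# Decision step: accept the mapping only if the residual spread is
# within budget; otherwise fall back to collecting more data.
residuals = [b - (slope * a + offset) for a, b in pairs]
res_var = sum(r * r for r in residuals) / n
ACCEPT_THRESHOLD = 4.0  # hypothetical variance budget
accepted = res_var < ACCEPT_THRESHOLD

def map_to_real(sim_intensity: float) -> float:
    """Map a simulated intensity into the real sensor's domain."""
    return slope * sim_intensity + offset

print(round(slope, 2), round(offset, 2), accepted)
```

This also mirrors the last pro above: if the sensor changes, only `pairs` needs re-collecting and the fit re-running, and the accuracy (`res_var`) stays measurable throughout.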

How real is it?
We judge how realistic a simulation environment is by two types of criteria: the first is qualitative, i.e., how well does it look, does it look real; the second is quantitative, i.e., a set of KPIs that describe how close or far the predicted features are from the desired ones.
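As a concrete example of such quantitative KPIs (the feature choice and all numbers are my own illustration), one can compare matched real and simulated samples of a feature, e.g. return intensity, via RMSE and mean bias:

```python
import math

# Hypothetical matched samples of one feature (LiDAR return intensity)
real_intensity = [40, 62, 55, 70, 48, 66]
sim_intensity  = [38, 65, 50, 72, 47, 60]

def rmse(real, sim):
    """Root-mean-square error between matched real/simulated samples."""
    return math.sqrt(sum((r - s) ** 2 for r, s in zip(real, sim)) / len(real))

def mean_bias(real, sim):
    """Average signed offset: does the simulation over- or under-shoot?"""
    return sum(s - r for r, s in zip(real, sim)) / len(real)

kpi = {"rmse": rmse(real_intensity, sim_intensity),
       "bias": mean_bias(real_intensity, sim_intensity)}
print(round(kpi["rmse"], 2), round(kpi["bias"], 2))  # -> 3.63 -1.5
```

A KPI set like this complements the qualitative check: a simulation can look right while the numbers reveal a systematic bias, and vice versa.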

Finally, the above article reflects my own experience, so you may agree or disagree; the main objective of this article is to share knowledge and experience. I have added testing the Farm simulator and integrating it with Roboware to my to-do list, so I would say stay tuned :slight_smile: