Hi, I’m thinking about shifting to Unreal Engine as a simulation engine because it would make it more flexible to interact with other tools, for instance VR.
I also wonder why we keep using Gazebo, when the community could benefit from the many tools already implemented in the most popular game engines. What are the pros and cons of using Gazebo versus a game engine such as Unreal?
To add a data point here-- the lab I work for does use Unity for our simulation software. Personally I prefer Gazebo; we hit some obstacles integrating ROS into Unity that I think we would have avoided with Gazebo. But we’re interested in running experiments with people in VR, so that was the source of that decision. The other benefit is that “Unity Developer” was a position we could put out job offerings for, which is valuable.
In my opinion, Gazebo is important (and worth prioritizing) even if only because it is open source. Beyond that, Gazebo has an advantage in that game engines focus on making behavior look correct: they often take shortcuts to achieve that efficiently, shortcuts a physics simulation can’t afford to take. Being built with robot integration as a priority is valuable, too.
But Unity has the benefit of making VR a priority (as well as mobile apps, and I’ve seen some really cool robot interfaces and augmented-reality tools come out of that). I think Gazebo could provide VR integration by adding another camera source tied to the position of a headset, but I don’t know enough about either Gazebo or VR to say that for sure.
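Something like the following per-frame mapping is what I have in mind, with made-up pose dictionaries standing in for whatever the VR runtime and the simulator transport would actually provide (nothing here is a real Gazebo or VR API, just the wiring):

```python
# Hypothetical per-frame step: mirror a tracked VR headset pose onto a
# simulator camera, offsetting by where the VR rig sits in the world.
# The dict shapes and the rig_origin parameter are illustrative only.

def headset_to_camera(headset_pose, rig_origin=(0.0, 0.0, 0.0)):
    """Turn a headset pose (relative to the rig) into a world camera pose."""
    x, y, z = headset_pose["position"]
    ox, oy, oz = rig_origin
    return {
        "position": (x + ox, y + oy, z + oz),
        "orientation": headset_pose["orientation"],  # passed through unchanged
    }

if __name__ == "__main__":
    fake_headset = {"position": (0.5, 0.0, 1.5), "orientation": (0, 0, 0, 1)}
    print(headset_to_camera(fake_headset, rig_origin=(2.0, 3.0, 0.0)))
```

In a real integration the loop would poll the headset at the display rate and publish the resulting camera pose each frame; the sketch only shows the pose arithmetic.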
You mention ‘multiple tools already implemented in most popular game engines’-- which other tools are you referring to?
Have you seen the CARLA simulator? It does just what you’re talking about.
And it supports a ROS bridge.
We’ve used it quite successfully with both ROS1 and ROS2.
To my knowledge, CARLA does not support VR, which seems to be the main reason to look at Unity or Unreal over Gazebo in this case (though there are community-driven workarounds).
Disclaimer, I work at NVIDIA.
This is exactly why we use Isaac Sim for robot simulations.
When working on autonomous vehicles, we started with classic game engines and discovered that the shortcuts taken to make a game performant result in limitations for simulation. We developed Omniverse, which is the back end to DRIVE Sim for autonomous vehicles and Isaac Sim for robotics, to address many of these compromises. Omniverse supports VR, with ray tracing.
This Omniverse XR intro video covers VR.
Gazebo is also great: purpose-built for robotics.
These aren’t in order of strength of argument:
- Legacy: when ROS started out, there weren’t many reliable open-source engines to base a robotics simulator on
- License: using anything that doesn’t come with a free license incurs extra work for any organization trying to adopt it
- Simulation: both Unity and Unreal Engine are game engines, meaning you’d be retrofitting something that seems to work the way you expect vs. something that actually does what you expect (point already made above)
- Drivers, GPUs: any game engine will require a decent GPU and proper drivers. Everybody hates fiddling with NVIDIA drivers on Linux or finding the right Vulkan version for version X of Unity. If you have simple needs, e.g. simulating an arm moving something on a table, you shouldn’t need a powerhouse for that.
- Portability: Gazebo “just works” (lowercase) most of the time
Purpose-built simulators will of course excel in their domain, for everything else, Gazebo typically does a good job.
I’m biased, so I’ll refrain from giving any opinions.
On the topic of VR, I’ll just mention that Gazebo classic supports at least (an old version of?) the Oculus Rift.
Gazebo Fortress can use the same underlying rendering engine as Gazebo classic, and also other engines. I haven’t seen anyone using it with VR yet, but it’s not impossible. Someone just has to put time into integrating it, and ideally make that freely available to everyone else.
There is an aspect I consider the most essential, as simple as it is important: sharing the same axis convention.
Robotics, aircraft, cars, and industry in general use the axis convention x-forward, y-left, z-up (there can be exceptions, but this is the most typical approach).
Unity, for example, is not like that (it is left-handed, with y-up), and that is a problem.
If the axis conventions of your robot control software and your simulator differ, you have a problem. I suffered this for a long time when working with robotics and game engines such as Unity.
- The developer needs to convert messages and arguments from one representation to the other. Any matrix, vector, or math operation is a source of potential errors. It is true that the change of reference frame can be done with a single function, but even so, switching between conventions is error-prone. The problem becomes even more complicated if you need to work with rotation matrices or homogeneous matrices to describe poses on the simulator side. The developer must think continuously in two different geometrical mindsets, must systematically track the convention used for any geometrical data, and may need to switch continuously between the two.
Gazebo, Unreal Engine, and ROS (aircraft, cars, robotics) share the same axis convention. That is why I think both could be very good options for robotics. That is not the case for Unity and many others.
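To make the cost concrete, here is one common point mapping between the two conventions. As far as I know this matches what Unity’s ROS integration packages do, but verify it against your own setup, and note that orientations additionally need a handedness flip, which this sketch does not cover:

```python
# One common mapping for positions between ROS (right-handed: x-forward,
# y-left, z-up) and Unity (left-handed: x-right, y-up, z-forward).

def ros_to_unity(p):
    x, y, z = p          # ROS axes: forward, left, up
    return (-y, z, x)    # Unity axes: right, up, forward

def unity_to_ros(p):
    x, y, z = p          # Unity axes: right, up, forward
    return (z, -x, y)    # ROS axes: forward, left, up

if __name__ == "__main__":
    ros_point = (1.0, 2.0, 3.0)          # 1 m forward, 2 m left, 3 m up
    u = ros_to_unity(ros_point)
    assert unity_to_ros(u) == ros_point  # round trip must be lossless
    print(u)  # (-2.0, 3.0, 1.0)
```

Two tiny functions look harmless, but every message crossing the boundary has to pass through them, in the right direction, every time; that is exactly the per-operation overhead described above.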
My two cents.
I’ve been developing in Unity and ROS this year, and it’s been frustrating. One benefit of using a purpose-built robotics simulator is support for common robot use cases. I spent a long time trying to get a skid-steer robot to drive around in Unity, but the physics engine is not tailored to that use case, and as a result I couldn’t use its physics for forward-kinematic simulation.
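For reference, this is the kind of idealized forward kinematics a robotics simulator gives you essentially for free: treating skid steer like a differential drive, an approximation that ignores the wheel slip skid steering actually relies on (the parameter names here are illustrative, not from any ROS package):

```python
# Idealized differential-drive forward kinematics, often used as a stand-in
# for skid steer when the physics engine can't model track/wheel slip.

def body_twist(omega_left, omega_right, wheel_radius, track_width):
    """Wheel angular speeds [rad/s] -> (linear v [m/s], angular w [rad/s])."""
    v = wheel_radius * (omega_right + omega_left) / 2.0
    w = wheel_radius * (omega_right - omega_left) / track_width
    return v, w

if __name__ == "__main__":
    # Right wheels faster than left: move forward while turning left (CCW).
    v, w = body_twist(omega_left=4.0, omega_right=6.0,
                      wheel_radius=0.1, track_width=0.5)
    print(v, w)  # 0.5 m/s forward, 0.4 rad/s counter-clockwise
```

Getting a general-purpose game physics engine to reproduce even this simple relationship through contact and friction modelling is exactly where the frustration comes from.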
That sounds similar to our experience. What we do is really just use Unity as a “front end”: we tell Unity where to put the robot and robot joints, and Unity provides us with the ability to drop people into VR in that setting. From an implementation perspective, this means running a navigation stack where odometry data is spoofed (we record the command velocities from move_base and report them as having been followed perfectly), and this new position for the robot is fed into Unity.
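A stripped-down sketch of that spoofing step, with illustrative names only (no actual rospy/move_base code): integrate each commanded twist as if it were followed perfectly, and the resulting pose is what gets handed to the front end:

```python
import math

# Dead-reckon a 2D pose from commanded velocities, assuming they are executed
# perfectly -- the "spoofed odometry" trick described above. In the real stack
# the (v, w) twists would come from move_base; here they are just arguments.

def integrate_cmd_vel(pose, v, w, dt):
    """Advance (x, y, yaw) by a commanded linear v and angular w over dt."""
    x, y, yaw = pose
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += w * dt
    return (x, y, yaw)

if __name__ == "__main__":
    pose = (0.0, 0.0, 0.0)
    # Drive straight at 1 m/s for 2 s (20 steps of 0.1 s), no rotation.
    for _ in range(20):
        pose = integrate_cmd_vel(pose, v=1.0, w=0.0, dt=0.1)
    print(pose)  # approximately (2.0, 0.0, 0.0); this is what Unity receives
```

Since the simulator never pushes back (no slip, no collisions affecting odometry), the navigation stack sees a perfectly compliant robot, which is acceptable when the point of the study is the human in VR rather than the vehicle dynamics.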
Perhaps a bit of a me-too post at this point (not that me-too), but we’ve had the same problem (with this).
That is to say: using game engines for simulation is possible, but it depends on the type of simulation.
Are you into object recognition and localisation? Great, use PBR-based rendering and all the other great improvements that have been made in computer graphics over the years.
Does your work need (super) realistic physics? That’s not really happening most of the time.
The question reminded me of Why do we use Rviz when we have Gazebo? on ROS Answers. The answers are also similar: sometimes a generic tool isn’t sophisticated enough, and a specialised one does a better job – even if it’s not all-around perfect.
Gazebo uses a game engine, OGRE, for the rendering/client side.
At least in theory, you could make another gazebo client using Unity or Unreal instead.
(Torchlight II is a notable game that uses OGRE.)
Personally speaking, Gazebo is simply better right now not because of its software, but because ROS is deeply integrated with Gazebo. Without ROS integration, you won’t be able to access the conglomerate of packages that work with Gazebo: you won’t be able to use off-the-shelf robot models and differential-drive packages. I know ROS is available for Unity and Unreal Engine too, but it’s just not up to Gazebo’s standard as of now, and there are still ROS-bridging issues to address. This could definitely change if Open Robotics decides to support and keep ROS up to date for all 3D simulators.
Secondly, Unity and Unreal Engine don’t have physics engines on par with Gazebo’s. Sure, versatility is important, but if you don’t offer the right tools in a broad scope, it doesn’t matter. An unreliable physics engine can lead to many errors that could otherwise have been caught in simulation.
Either way, all 3D simulators have their pros and cons, and it depends on which software suits your needs. If you have access to IEEE Xplore, there is a research paper that analyzes and compares the 3D simulators you mentioned in further detail.