These are all really good points. What are the specific use cases of simulators for people working on Autoware right now? As in @Dejan_Pangercic’s post, I think it’s helpful, at least from our perspective (the LGSVL Simulator team), to think about prioritization of features from the users’ point of view. We know about some of the use cases of the Autoware Foundation, but it would be interesting to hear from other active Autoware developers about their needs for their specific use cases. As @gbiggs mentioned, maybe it will also help to more clearly define the roles of different simulators and the differences between them.
As for LGSVL Simulator specifically, I’ve addressed some of the feature requests above as they relate to our roadmap.
Currently, our immediate priorities are:
- API implementation
- New map generation/import
Some requests (a hardware specification description, a list of exactly which sensor models are supported) are documentation needs, and we will continue to add to the documentation (https://www.lgsvlsimulator.com/docs).
Control/scripting capability of simulator:
We are currently working on implementing a Python API to drive the simulator. This includes start/stop, run-time configuration, stepping through or running faster than real time, controlling the environment, controlling ego and non-ego agents, and retrieving state and sensor data.
This will let us more easily define or use standardized scenario descriptions.
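To make the idea of a standardized scenario description concrete, here is a minimal sketch of what such a description might look like as plain data that an API could consume. All names and fields here are hypothetical illustrations, not part of any released LGSVL Simulator API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical scenario description; field names are illustrative only
# and do not reflect the actual LGSVL Simulator API.
@dataclass
class AgentSpec:
    kind: str                                   # "ego", "npc", or "pedestrian"
    spawn: Tuple[float, float, float]           # x, y, z spawn position
    waypoints: List[Tuple[float, float, float]] = field(default_factory=list)

@dataclass
class Scenario:
    map_name: str
    time_step: float                            # fixed step in seconds
    duration: float                             # simulated seconds to run
    agents: List[AgentSpec] = field(default_factory=list)

    def total_steps(self) -> int:
        # round() rather than int() to avoid floating-point truncation
        # (e.g. 30.0 / 0.01 may evaluate slightly below 3000.0)
        return round(self.duration / self.time_step)

scenario = Scenario(
    map_name="SanFrancisco",
    time_step=0.01,
    duration=30.0,
    agents=[AgentSpec(kind="ego", spawn=(0.0, 0.0, 0.0))],
)
```

A description like this is easy to serialize (JSON/YAML), version, and share between tools, which is the main appeal of standardizing it.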
LGSVL Simulator is based on Unity, and to my knowledge the PhysX engine in Unity does guarantee the same simulation result when all inputs are the same. The degree of determinism required is certainly a whole topic in itself: running on different machines (in the cloud, for example) can change behavior, and the extent to which it does, and whether that is acceptable for a given use case, is probably an important question.
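The determinism point can be illustrated with a toy fixed-step integrator (this is just a stand-in sketch, not PhysX): with the same seed and inputs, repeated runs on one machine are bit-identical, but nothing in the code itself guarantees identical results across different CPUs or builds, where floating-point code paths may differ.

```python
import random

def simulate(seed: int, steps: int = 100, dt: float = 0.01) -> float:
    # Toy fixed-step integrator with seeded noise, standing in for a
    # physics engine run. Same seed + same inputs -> same trajectory.
    rng = random.Random(seed)
    pos, vel = 0.0, 1.0
    for _ in range(steps):
        accel = 0.1 + rng.gauss(0.0, 0.01)  # deterministic "noisy" input
        vel += accel * dt
        pos += vel * dt
    return pos

# Bit-identical on the same machine with the same seed:
assert simulate(42) == simulate(42)
```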
Headless mode for CI:
We do plan to enable running the simulator in non-rendering mode.
Parallelizing simulation runs:
This is on our roadmap (we need to upgrade our Unity version and implement multi-threading).
Behavior models of non-ego agents:
@Dejan_Pangercic Do you mean dynamics models of agents as well or higher-level behavior?
This can be enabled by the API (agents can be given specified behavior, such as following waypoints, or can behave randomly).
Currently by default, traffic vehicles follow the vector map, and pedestrians walk random paths.
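As a rough sketch of the difference between the two default behavior models, here is a pure-Python illustration of waypoint following (like traffic vehicles on the vector map) versus a random walk (like pedestrians). These are simplified toy models, not the simulator's actual implementation.

```python
import math
import random

def follow_waypoints(start, waypoints, speed=1.0, dt=0.1):
    """Move an agent toward each waypoint in turn at constant speed."""
    x, y = start
    path = [(x, y)]
    for wx, wy in waypoints:
        # Step toward the waypoint until within one step of it
        while math.hypot(wx - x, wy - y) > speed * dt:
            ang = math.atan2(wy - y, wx - x)
            x += speed * dt * math.cos(ang)
            y += speed * dt * math.sin(ang)
            path.append((x, y))
        x, y = wx, wy   # snap onto the waypoint
        path.append((x, y))
    return path

def random_walk(start, steps, step_len=0.1, rng=None):
    """Pedestrian-style random walk: pick a fresh random heading each step."""
    rng = rng or random.Random(0)
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        ang = rng.uniform(0.0, 2.0 * math.pi)
        x += step_len * math.cos(ang)
        y += step_len * math.sin(ang)
        path.append((x, y))
    return path
```

Higher-level behavior (lane changes, yielding, scripted maneuvers) would sit on top of primitives like these, which is why the dynamics-vs-behavior distinction matters for the question above.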
We are currently working on support for scenarios; after that we definitely plan to look at generating test suites of scenarios, integrating with existing scenario databases, and so on.
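One simple way to generate a test suite from a base scenario is a parameter sweep over a few scenario dimensions. This is a hypothetical sketch of that idea; the parameter names are illustrative and not an LGSVL Simulator API.

```python
from itertools import product

def generate_suite(base_name, speeds, weathers, npc_counts):
    """Produce one scenario variant per combination of swept parameters."""
    suite = []
    for speed, weather, npcs in product(speeds, weathers, npc_counts):
        suite.append({
            "name": f"{base_name}-{weather}-v{speed}-n{npcs}",
            "ego_speed_mps": speed,     # hypothetical parameter names
            "weather": weather,
            "npc_count": npcs,
        })
    return suite

suite = generate_suite(
    "cut-in",
    speeds=[5, 10],
    weathers=["clear", "rain"],
    npc_counts=[1, 3],
)
# 2 speeds x 2 weathers x 2 NPC counts -> 8 scenario variants
```

A scenario database would replace the hand-written sweep here with queries over recorded or curated situations, but the suite shape stays the same.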