This is more of a general sonar question than a ROS question, but I am guessing there are some smart folks here who may be able to help. I simply need to simulate sonar returns as sound, ideally through ray tracing or some other close-to-reality technique. I am looking to simulate, not emulate.
I have an application that will need an advanced underwater sensing system, perhaps a novel one. I have a background in machine learning, and I think there is a big opportunity to develop a machine-learning-based solution as a low-cost alternative to current sonar systems. To that end, I have been trying to design a sonar simulator that can produce a data set large enough to pre-train and prove out a neural network before it is fine-tuned in the real world.
I think it will be possible to get the data I need out of this library with a lot of work. Again, I simply need the recorded sound of simulated returns.
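To make concrete what I mean by "the recorded sound of simulated returns", here is a toy sketch of the kind of output I'm after: a time series built by summing delayed, attenuated copies of a ping. The path lengths and reflection coefficients below are made up; in the real thing they would come from a ray tracer or propagation model.

```python
import numpy as np

fs = 96_000        # sample rate, Hz
f0 = 20_000        # ping carrier frequency, Hz
ping_len = 0.001   # 1 ms tone burst
c = 1500.0         # nominal speed of sound in water, m/s

# One-way path lengths (m) and reflection coefficients for a few
# hypothetical reflectors -- made-up stand-ins for ray-tracer output.
paths = np.array([12.0, 15.5, 40.2])
reflectivity = np.array([0.8, 0.5, 0.3])

t = np.arange(int(fs * ping_len)) / fs
ping = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)  # windowed ping

out = np.zeros(int(fs * 0.1))  # 100 ms of "recorded" sound
for d, r in zip(paths, reflectivity):
    delay = 2 * d / c              # two-way travel time
    amp = r / (d * d)              # crude 1/r spreading loss, out and back
    i0 = int(delay * fs)
    out[i0:i0 + ping.size] += amp * ping  # delayed, attenuated echo

# `out` is the kind of waveform I want, generated from realistic
# propagation modeling rather than this toy geometry.
```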
Has anyone else here already done this work and would be willing to share it, or does anyone know of a project that gets closer than WaveQ3D?
That work attempted to implement the physical aspects of multi-beam sonar, which are not captured by "image processing" style methods, including machine learning approaches. I wonder if there is a way to build the physics into machine learning so it can resolve diffraction and interference between beams.
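For concreteness, the interference I have in mind can be modeled as the coherent superposition of the pressure fields of individual beams. A minimal two-source sketch (the frequency and geometry here are made up for illustration):

```python
import numpy as np

f = 50_000               # beam frequency, Hz (made-up value)
c = 1500.0               # speed of sound in water, m/s
k = 2 * np.pi * f / c    # wavenumber

# Two coherent point sources 10 cm apart (made-up array geometry)
sources = np.array([[0.0, -0.05], [0.0, 0.05]])

# Field points on a line 5 m downrange
y = np.linspace(-2.0, 2.0, 1001)
pts = np.stack([np.full_like(y, 5.0), y], axis=1)

p = np.zeros(y.size, dtype=complex)
for s in sources:
    r = np.linalg.norm(pts - s, axis=1)
    p += np.exp(1j * k * r) / r   # spherical wave from each source

intensity = np.abs(p) ** 2  # interference fringes appear across y
```

This is exactly the kind of structure that a purely image-level method never sees, which is why I suspect the physics has to be built into the model or the training data.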
You could also take a look at HoloOcean (see the HoloOcean 0.5.0 documentation). They have implemented several different simulated sonar sensors. HoloOcean has a straightforward Python API similar to OpenAI Gym, and it is based on Holodeck, which was designed for training machine learning models.
Note: we are currently working on a ROS2 interface for HoloOcean; we will announce it here when it is ready.
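As a rough sketch of what that API looks like, using a sonar scenario: the scenario and agent names below are just examples (check which packages and scenarios are installed in your version), but `holoocean.make`, `env.act`, and `env.tick` are the core calls.

```python
import holoocean
import numpy as np

# Scenario name is an example -- list your installed packages to see
# which sonar scenarios are actually available.
env = holoocean.make("PierHarbor-HoveringImagingSonar")

command = np.zeros(8)  # HoveringAUV takes an 8-thruster force command

for _ in range(500):
    env.act("auv0", command)   # queue controls for the agent ("auv0" is
                               # the default agent name in the examples)
    state = env.tick()         # advance one tick, get a dict of sensor data
    if "ImagingSonar" in state:
        sonar_image = state["ImagingSonar"]  # 2D array of return intensities
```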