Howdy, it's your Friendly Neighborhood Navigator here!
I just completed a 3-week project in which I overhauled a significant portion of MPPI to improve runtime performance. I was able to shave off 45% of the compute time – down to ~6.8 ms from 12.3 ms in my benchmark!
I touched a lot of code to make that possible, so I'd like a few people to give it a really close look and test it on their platforms to make sure I didn't introduce any glaring regressions before I update the docs and release it.
If you have some time this week or next, I'd appreciate some reviews! It's in a draft PR, so documentation and some linting are unfinished, but they will be updated before merge.
With a 45% reduction in compute time, that means you could notionally run it nearly 2x as fast (~1.8x), with nearly 2x as many samples, or nearly 2x the prediction horizon for the same compute budget! It's frankly more than I bargained for when I started – I would have been happy with 10-20% – but this should allow a whole new class of compute platforms to leverage MPPI (e.g. Jetson Orin, perhaps even Raspberry Pis), bordering on the performance of a simpler framework like DWB.
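For the curious, here's a quick back-of-envelope sketch of where that headroom figure comes from, using only the benchmark numbers quoted above:

```python
# Back-of-envelope check of the numbers quoted above (values from the post).
baseline_ms = 12.3   # compute time per cycle before the optimization
optimized_ms = 6.8   # compute time per cycle after

# Fractional savings in compute time.
savings = (baseline_ms - optimized_ms) / baseline_ms

# Equivalent speedup factor: how many optimized cycles fit in one
# baseline cycle, i.e. the headroom for more samples or a longer horizon.
speedup = baseline_ms / optimized_ms

print(f"savings: {savings:.1%}")   # ~44.7%, the quoted "45%"
print(f"speedup: {speedup:.2f}x")  # ~1.81x, i.e. nearly 2x headroom
```

The point being: a 45% cut in per-cycle time buys roughly 1.8x headroom you can spend on rate, samples, or horizon length.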
Please give it a whirl and let me know your thoughts and experiences!
Happy Predicting,
Steve