Nav2 MPPI - 45% Performance Boost - Beta Testing Requested

Howdy, it's your Friendly Neighborhood Navigator here!

I just completed a 3-week project in which I gutted a large portion of MPPI to improve runtime performance. I was able to shave off 45% of the compute time, down to ~6.8ms from 12.3ms in my benchmark!

I touched a lot of code to make that possible, and I'd like a few people to give it a really close look and test it on their platforms to make sure I didn't introduce any glaring regressions before I update the docs and release it.

If you have some time this week or next, I'd appreciate some reviews! It's in a draft PR, so documentation and some linting are off, but they will be updated before merge.

With a 45% improvement, that means you could notionally run it roughly 2x as fast, with 2x as many samples, or with 2x the prediction horizon for the same compute time! It's frankly more than I bargained for when I started - I would have been happy with 10-20% - but this should allow a whole new class of compute platforms to leverage MPPI (e.g. Jetson Orin, perhaps even Raspberry Pis), bordering on the performance of a simpler framework like DWB.
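For anyone checking the arithmetic, here's a quick sketch using the benchmark numbers quoted above (12.3ms down to ~6.8ms per control cycle); the exact figures will of course vary by platform:

```python
# Benchmark numbers quoted in the post (assumed representative, not universal).
before_ms = 12.3
after_ms = 6.8

reduction = 1 - after_ms / before_ms  # fraction of compute time saved
speedup = before_ms / after_ms        # equivalent throughput multiplier

print(f"compute time reduced by {reduction:.0%}")  # ~45%
print(f"speedup factor: {speedup:.2f}x")           # ~1.8x

# That same budget could instead fund ~1.8x more samples, or a ~1.8x
# longer prediction horizon, at the original controller cycle time.
```

In other words, the "2x" framing is the notional round number; the measured benchmark works out to about a 1.8x speedup.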

Please give it a whirl and let me know your thoughts and experiences!

Happy Predicting,



Sick! How’d you manage that performance boost? Just curious, as an optimization addict

Take a look at the PR :wink:

Love it! I am starting to integrate this as part of a POC to convince my team to migrate from ROS1 to ROS2. Happy to share my feedback whenever I can :slight_smile: