Yeap, you’re right @smac: @Pepis is probably more interested in functional benchmarking data, but that can also be enabled with the approach described above, and that’s exactly the point I wanted to make:
       Probe        Probe
         +            +
         |            |
+--------|------------|-------+     +-----------------------------+
|        |            |       |     |                             |
|     +--|------------|-+     |     |                             |
|     |  v            v |     |     |  - latency    <-------------+ Probe
|     |                 |     |     |  - throughput <-------------+ Probe
|     |    Function     |     |     |  - memory     <-------------+ Probe
|     |                 |     |     |  - power      <-------------+ Probe
|     +-----------------+     |     |                             |
|      System under test      |     |      System under test      |
+-----------------------------+     +-----------------------------+
          Functional                        Non-functional
+-------------+                     +-----------------------------+
|  Test App.  |                     |  +-----------------------+  |
|  +  +  +  + |                     |  |      Application      |  |
+--|--|--|--|-+---------------+     |  |            <-------------+ Probe
|  |  |  |  |                 |     |  +-----------------------+  |
|  v  v  v  v                 |     |                             |
|     Probes                  |     |               <-------------+ Probe
|                             |     |                             |
|      System under test      |     |      System under test      |
|                             |     |               <-------------+ Probe
|                             |     |                             |
|                             |     |                             |
+-----------------------------+     +-----------------------------+
           Black-Box                           Grey-box
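To make that point a bit more concrete, here's a minimal sketch (hypothetical names, plain Python rather than any specific ROS tooling) of how a single grey-box probe wrapped around a function inside the system under test can feed both sides: a functional check on the returned value and non-functional metrics like latency and peak memory:

```python
# Hypothetical sketch only: "probe"/"ProbeRecord" are made-up names, not part
# of any existing benchmarking package.
import time
import tracemalloc
from dataclasses import dataclass

@dataclass
class ProbeRecord:
    name: str
    result: object        # functional: what the function actually produced
    duration_s: float     # non-functional: latency
    peak_mem_bytes: int   # non-functional: memory high-water mark

def probe(name, func, *args, **kwargs):
    """Grey-box probe: wrap a function inside the system under test."""
    tracemalloc.start()
    start = time.perf_counter()
    result = func(*args, **kwargs)
    duration = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return ProbeRecord(name, result, duration, peak)

# The same probe record serves both kinds of benchmarking:
record = probe("sort", sorted, [3, 1, 2])
assert record.result == [1, 2, 3]                    # functional
print(f"{record.name}: {record.duration_s * 1e6:.1f} us, "
      f"peak {record.peak_mem_bytes} B")             # non-functional
```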
With a bit of effort I believe we can actually connect these topics and keep them consistent (and hopefully re-usable across ROS stacks). @christophebedard did a good job creating a data model that can be used to determine functional aspects a posteriori (i.e. after the computational graph has run, using the trace data). By the way, I think this is a good complement to the tools you linked above, and it'd be interesting to run the same benchmarks with both and compare. Happy to partner up on this!
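As a rough illustration of what determining functional aspects a posteriori could look like (this assumes a made-up trace event schema for the sake of the example, not the actual data model), the same trace can answer a functional question (were any messages lost or reordered?) and a non-functional one (what was the end-to-end latency?):

```python
# Hedged sketch: derive functional and non-functional results from trace
# events after the graph has run. The TraceEvent schema below is invented
# for illustration only.
from dataclasses import dataclass

@dataclass
class TraceEvent:
    timestamp_ns: int
    kind: str        # "publish" or "receive"
    seq: int         # message sequence number

def analyze(events):
    publishes = {e.seq: e.timestamp_ns for e in events if e.kind == "publish"}
    receives  = {e.seq: e.timestamp_ns for e in events if e.kind == "receive"}

    # Functional: no message lost, and reception order matches sequence order.
    lost = sorted(set(publishes) - set(receives))
    rx_order = [e.seq for e in sorted(events, key=lambda e: e.timestamp_ns)
                if e.kind == "receive"]
    in_order = rx_order == sorted(rx_order)

    # Non-functional: per-message end-to-end latency.
    latencies_ms = [(receives[s] - publishes[s]) / 1e6
                    for s in receives if s in publishes]
    return lost, in_order, latencies_ms

events = [
    TraceEvent(1_000_000, "publish", 0), TraceEvent(2_500_000, "receive", 0),
    TraceEvent(3_000_000, "publish", 1), TraceEvent(4_200_000, "receive", 1),
]
lost, in_order, latencies_ms = analyze(events)
print(f"lost={lost} in_order={in_order} "
      f"avg_latency={sum(latencies_ms) / len(latencies_ms):.2f} ms")
```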