The current workflow for identifying regressions and benchmarking packages is not ideal.
Some comments/ideas for improvement:
- `spatialdata` and `napari-spatialdata` both use `asv`, but in different ways: `spatialdata` requires manual runs, while `napari-spatialdata` has workflows enabled.
- The `napari-spatialdata` benchmarks setup is complex and should be simplified.

Path forward: currently, the focus of `spatialdata` is on the specs and APIs, not on performance. Therefore, I would consciously place less emphasis on systematic benchmarks for the time being and also disable the benchmarks workflow in `napari-spatialdata` (it could still be run on demand, as in `spatialdata`). When we shift our focus to performance, we should revamp the benchmarks suite, enable it universally (not just in `napari-spatialdata`), document it for easier onboarding of new devs, and enhance it for systematic use.
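For reference, a revamped suite would still follow `asv`'s standard conventions: a benchmark module is a plain Python file in the `benchmarks/` directory, and `asv` discovers `time_*` (and `peakmem_*`) methods by name, calling `setup()` before each measurement. The sketch below is a hypothetical placeholder, not an actual `spatialdata` benchmark; the class name and the timed operation are made up for illustration:

```python
import random


class ElementSuite:
    """Hypothetical asv benchmark suite; asv discovers time_* methods by name."""

    def setup(self):
        # setup() runs before each measurement and is excluded from the timing.
        rng = random.Random(0)
        self.coords = [(rng.random(), rng.random()) for _ in range(100_000)]

    def time_bounding_box(self):
        # Benchmark: compute an axis-aligned bounding box over the points.
        xs = [x for x, _ in self.coords]
        ys = [y for _, y in self.coords]
        (min(xs), min(ys), max(xs), max(ys))
```

Keeping the suite in this standard shape means it stays usable on demand (`asv run`, or `asv continuous` against a baseline) regardless of whether a CI workflow triggers it.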