Description
The project has `pytest-profiling` in dev dependencies but no dedicated benchmarks. For a library that processes network configurations — which can be thousands of lines on large devices — having baseline performance data would be valuable.
Areas where performance characteristics are unknown or potentially interesting:
- Parsing large configs (10,000+ lines) via `_load_from_string_lines()` vs `get_hconfig_fast_load()`
- `config_to_get_to()` on configs with many differences
- `all_children_sorted()` on deeply nested hierarchies (repeated `sorted()` calls)
- `HConfigChildren.rebuild_mapping()` cost on frequent deletions
- Memory usage for large config trees (benefit of `__slots__`)
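Before wiring up a full benchmark suite, a rough baseline for any of the hotspots above can be taken with a small `timeit` harness. This is only a sketch: the `baseline()` helper and the `str.splitlines` placeholder workload are illustrative, and the real entry points (`get_hconfig_fast_load()`, `config_to_get_to()`, etc.) would be substituted where the lambda is.

```python
import timeit
from typing import Callable

def baseline(label: str, fn: Callable[[], object], repeat: int = 5, number: int = 10) -> float:
    """Run fn repeatedly and return the best per-call time in seconds.

    Taking the minimum over several repeats reduces noise from other
    processes, which matters for short-running parse/diff calls.
    """
    best = min(timeit.repeat(fn, repeat=repeat, number=number))
    per_call = best / number
    print(f"{label}: {per_call * 1e3:.3f} ms/call")
    return per_call

# Placeholder workload: a 10,000-line string, standing in for a large
# device config. Swap in the real parser, e.g.:
#   baseline("fast_load", lambda: get_hconfig_fast_load(platform, text))
sample = "\n".join(f"line {i}" for i in range(10_000))
t = baseline("str.splitlines", lambda: sample.splitlines())
```

A `pytest-benchmark` fixture would eventually replace this harness so results are tracked per test run, but a timeit pass is enough to identify which of the areas above dominate.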
Proposed Improvement
- Add a `benchmarks/` directory or pytest benchmark fixtures
- Create representative large config samples (or generators)
- Measure parsing, diffing, and iteration performance
- Document expected performance characteristics in docs
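For the "representative large config samples (or generators)" item, a parameterized generator avoids checking multi-megabyte fixtures into the repo. The sketch below is a hypothetical helper, not part of hier_config's test suite; the IOS-style syntax and the nested `service-policy` lines are purely illustrative, chosen to exercise both breadth (many interfaces) and depth (nested children).

```python
def generate_config(num_interfaces: int, depth: int = 2) -> str:
    """Build a synthetic IOS-style config string.

    Produces `num_interfaces` interface blocks, each with a few flat
    children plus `depth` levels of increasingly indented lines, so
    benchmarks can scale line count and nesting independently.
    """
    lines: list[str] = ["hostname benchmark-device"]
    for i in range(num_interfaces):
        lines.append(f"interface GigabitEthernet0/{i}")
        lines.append(f"  description uplink-{i}")
        lines.append(f"  ip address 10.0.{i % 256}.1 255.255.255.0")
        # Artificial deep nesting to stress all_children_sorted()
        for d in range(depth):
            lines.append(f"{'  ' * (d + 1)}service-policy nested-{d}")
        lines.append("  no shutdown")
    return "\n".join(lines)

cfg = generate_config(1000, depth=3)
```

Each benchmark can then parse `generate_config(...)` output at several sizes (1k, 10k, 100k lines) and record how parse, diff, and iteration times scale.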