From 640e0082e5edaa3b2c130f605f26cda663abe571 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Erik=20Pf=C3=B6rtner?=
Date: Sat, 4 Apr 2026 23:18:40 +0200
Subject: [PATCH] Standardize hyphen usage across documentation

---
 README.md                                 |  2 +-
 SECURITY.md                               | 16 ++++++++--------
 docs/advanced/performance-optimization.md | 18 +++++++++---------
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/README.md b/README.md
index a0efd3a..279c9c5 100644
--- a/README.md
+++ b/README.md
@@ -558,4 +558,4 @@ To report a vulnerability, see our [Security Policy](SECURITY.md).
 
 This project is licensed under the [MIT License](LICENSE).
 
-Copyright (c) 2025–2026 [Splatgames.de Software](https://software.splatgames.de) and Contributors.
+Copyright (c) 2025-2026 [Splatgames.de Software](https://software.splatgames.de) and Contributors.
diff --git a/SECURITY.md b/SECURITY.md
index e39fe80..976d04a 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -23,10 +23,10 @@ If you are using an older version, we **strongly recommend upgrading** to the la
 
 This project uses multiple automated security tools:
 
-- **GitHub CodeQL** – Static Application Security Testing (SAST)
-- **OWASP Dependency-Check** – Known vulnerability detection in dependencies
-- **GitHub Dependency Review** – Pull request dependency analysis
-- **Dependabot** – Automated dependency updates
+- **GitHub CodeQL** - Static Application Security Testing (SAST)
+- **OWASP Dependency-Check** - Known vulnerability detection in dependencies
+- **GitHub Dependency Review** - Pull request dependency analysis
+- **Dependabot** - Automated dependency updates
 
 All scans are executed automatically in CI pipelines on every pull request and release build.
 
@@ -93,10 +93,10 @@ If you discover a security vulnerability in **Aether Datafixers**, please report
 
 | Severity                 | Acknowledgment | Fix Timeline |
 |--------------------------|----------------|--------------|
-| Critical (CVSS 9.0–10.0) | 24 hours       | 72 hours     |
-| High (CVSS 7.0–8.9)      | 48 hours       | 14 days      |
-| Medium (CVSS 4.0–6.9)    | 48 hours       | 30 days      |
-| Low (CVSS 0.1–3.9)       | 72 hours       | Next release |
+| Critical (CVSS 9.0-10.0) | 24 hours       | 72 hours     |
+| High (CVSS 7.0-8.9)      | 48 hours       | 14 days      |
+| Medium (CVSS 4.0-6.9)    | 48 hours       | 30 days      |
+| Low (CVSS 0.1-3.9)       | 72 hours       | Next release |
 
 ---
 
diff --git a/docs/advanced/performance-optimization.md b/docs/advanced/performance-optimization.md
index 5a2b7bc..364a2c5 100644
--- a/docs/advanced/performance-optimization.md
+++ b/docs/advanced/performance-optimization.md
@@ -50,7 +50,7 @@ Measures the cost of applying one `DataFix` to a `Dynamic` value.
 **Key takeaways:**
 
 - **Framework overhead is ~0.25 µs** per fix invocation (identity fix baseline).
-- A simple rename adds only ~0.03–0.06 µs on top of the framework overhead.
+- A simple rename adds only ~0.03-0.06 µs on top of the framework overhead.
 - Payload size has minimal impact on simple field operations - the cost scales with fix complexity, not data size.
 - A realistic domain fix (player data with multiple field transformations) takes ~9 µs.
 - End-to-end migration (including schema lookup and type routing) adds ~22 µs overhead on top of the raw fix.
@@ -155,7 +155,7 @@ Measures raw field read, set, and object generation performance per `DynamicOps`
 
 **Key takeaways:**
 
-- **SnakeYamlOps is the fastest for field reads** - 2–3x faster than Jackson-based implementations. This is because SnakeYaml uses native Java `Map`/`List` types with direct `HashMap.get()` lookups, while Jackson and Gson use tree node wrappers (`ObjectNode`, `JsonObject`).
+- **SnakeYamlOps is the fastest for field reads** - 2-3x faster than Jackson-based implementations. This is because SnakeYaml uses native Java `Map`/`List` types with direct `HashMap.get()` lookups, while Jackson and Gson use tree node wrappers (`ObjectNode`, `JsonObject`).
 - **SnakeYamlOps is also the fastest for field sets** - 6x faster than Jackson at SMALL payloads, because Java `HashMap.put()` is an in-place mutation, while Jackson's `ObjectNode.set()` involves tree copying.
 - **All Jackson-based formats perform identically** for in-memory operations. JacksonJsonOps, JacksonYamlOps, JacksonTomlOps, and JacksonXmlOps share the same `ObjectNode`/`ArrayNode` tree model - format differences only matter during serialization/deserialization.
 - **GsonOps is consistently the slowest** for field operations due to `JsonObject.deepCopy()` on mutations.
@@ -173,7 +173,7 @@ Measures end-to-end `DataFixer.update()` throughput per format (single rename fi
 | GsonOps       | 3.628 | 3.314 | 3.231 |
 | JacksonXmlOps | 3.620 | 3.644 | -     |
 
-**Key takeaway:** Migration throughput is nearly identical across all formats (~3.6–3.7 ops/µs). The DataFixer framework overhead dominates over format-specific differences. Choose your format based on your application's needs, not migration speed.
+**Key takeaway:** Migration throughput is nearly identical across all formats (~3.6-3.7 ops/µs). The DataFixer framework overhead dominates over format-specific differences. Choose your format based on your application's needs, not migration speed.
 
 ### Cross-Format Conversion
 
@@ -224,7 +224,7 @@ Measures encode and decode throughput for individual primitive values.
 
 - String, Integer, and Boolean codecs operate at ~4 ns per operation - effectively free in the context of a migration.
 - Float, Long, and Double are ~40% slower due to boxing and number conversion overhead but still under 7 ns per operation.
-- Encoding is consistently ~5–10% faster than decoding.
+- Encoding is consistently ~5-10% faster than decoding.
 
 ### Collection Scaling
 
@@ -272,7 +272,7 @@ Measures migration throughput under concurrent load.
 
 | Operation       | 2 Threads (ops/µs)   |
 |-----------------|---------------------:|
-| Latest lookup   | 481.9 – 490.7        |
+| Latest lookup   | 481.9 - 490.7        |
 | Registry lookup | 111.1                |
 
 **Key takeaways:**
@@ -290,9 +290,9 @@ Measures migration throughput under concurrent load.
 
 | Workload   | Records             | Recommended Heap  | JVM Flags                                             |
 |------------|--------------------:|------------------:|-------------------------------------------------------|
-| Small      | < 1,000             | 256 MB – 512 MB   | `-Xms256m -Xmx512m`                                   |
-| Medium     | 1,000 – 100,000     | 1 GB – 2 GB       | `-Xms1g -Xmx2g`                                       |
-| Large      | 100,000 – 1,000,000 | 2 GB – 4 GB       | `-Xms2g -Xmx4g -XX:+UseG1GC`                          |
+| Small      | < 1,000             | 256 MB - 512 MB   | `-Xms256m -Xmx512m`                                   |
+| Medium     | 1,000 - 100,000     | 1 GB - 2 GB       | `-Xms1g -Xmx2g`                                       |
+| Large      | 100,000 - 1,000,000 | 2 GB - 4 GB       | `-Xms2g -Xmx4g -XX:+UseG1GC`                          |
 | Very large | > 1,000,000         | 4 GB+             | `-Xms4g -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=200` |
 
 ### GC Tuning for Migration Workloads
@@ -461,7 +461,7 @@ if (elapsed > THRESHOLD_NANOS) {
 Based on the [schema lookup benchmarks](#schema-lookup-performance):
 
 - Use `getCurrentVersion()` for the latest schema - **O(1), 5.7 ns**
-- Use `getSchema(version)` for a specific version - **O(log n), 9–52 ns**
+- Use `getSchema(version)` for a specific version - **O(log n), 9-52 ns**
 - Avoid iterating schemas sequentially - **O(n), up to 10.8 µs at 500 schemas**
 
 ---