Conversation
Signed-off-by: Rashid Kaleem <230885705+arekay-nv@users.noreply.github.com>
**MLCommons CLA bot** — All contributors have signed the MLCommons CLA ✍️ ✅
Code Review
This pull request focuses on extensive documentation updates, example consolidation, and internal API refactoring. Key changes include restructuring the GPT-OSS-120B benchmarking examples, updating CLI argument references in READMEs, and transitioning the CPU affinity configuration to a boolean flag. Additionally, the internal `issue_query` method was renamed to `issue`. Feedback is provided regarding the removal of conditional checks in the endpoint client configuration, which inadvertently prevents programmatic overrides of adapters and accumulators.
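As a hypothetical sketch of the CPU-affinity CLI change described above (a boolean flag that is on by default and disabled via `--no-cpu-affinity`) — the argument and destination names here are assumptions for illustration, not the project's actual parser:

```python
import argparse

# Affinity pinning defaults to enabled; passing the flag turns it off.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--no-cpu-affinity",
    dest="enable_cpu_affinity",
    action="store_false",  # presence of the flag stores False
    default=True,
)

args_default = parser.parse_args([])                     # affinity stays enabled
args_off = parser.parse_args(["--no-cpu-affinity"])      # affinity disabled
```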
```python
adapter_path = ADAPTER_MAP.get(self.api_type)
if not adapter_path:
    raise ValueError(f"Invalid or unsupported API type: {self.api_type}")
module_path, class_name = adapter_path.rsplit(".", 1)
module = import_module(module_path)
object.__setattr__(self, "adapter", getattr(module, class_name))

accumulator_path = ACCUMULATOR_MAP.get(
    self.api_type, ACCUMULATOR_MAP[APIType.OPENAI]
)
module_path, class_name = accumulator_path.rsplit(".", 1)
module = import_module(module_path)
object.__setattr__(self, "accumulator", getattr(module, class_name))
```
The removal of the `if self.adapter is None` and `if self.accumulator is None` checks prevents programmatic overrides of these components. While this ensures they stay in sync with the `api_type`, it breaks the ability to inject custom implementations (e.g., for testing or specialized backends) via the constructor or `update()` method. Consider restoring the conditional checks to allow manual overrides when they are explicitly provided.
Suggested change:

```python
if self.adapter is None:
    adapter_path = ADAPTER_MAP.get(self.api_type)
    if not adapter_path:
        raise ValueError(f"Invalid or unsupported API type: {self.api_type}")
    module_path, class_name = adapter_path.rsplit(".", 1)
    module = import_module(module_path)
    object.__setattr__(self, "adapter", getattr(module, class_name))
if self.accumulator is None:
    accumulator_path = ACCUMULATOR_MAP.get(
        self.api_type, ACCUMULATOR_MAP[APIType.OPENAI]
    )
    module_path, class_name = accumulator_path.rsplit(".", 1)
    module = import_module(module_path)
    object.__setattr__(self, "accumulator", getattr(module, class_name))
```
Pull request overview
This PR updates documentation and examples to align with recent endpoint-client refactors and consolidates the GPT-OSS-120B end-to-end example (configs + accuracy scripts) so it runs cleanly again.
Changes:
- Update docs/READMEs to reflect the refactored endpoint client API (`issue`/`poll`/`drain`, updated worker internals, CLI flag updates).
- Consolidate GPT-OSS-120B example guidance (vLLM + SGLang) and remove the separate SGLang-only example README.
- Add standalone evaluation scripts (GPQA/AIME25/LiveCodeBench) and plumb `--force-regenerate` through dataset generation in the example runner.
Reviewed changes
Copilot reviewed 13 out of 16 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/inference_endpoint/profiling/README.md | Updates profiling callouts to match refactored client/worker method names. |
| src/inference_endpoint/endpoint_client/config.py | Adjusts default resolution so adapter/accumulator are always derived from `api_type`. |
| src/inference_endpoint/endpoint_client/README.md | Updates client API usage from `issue_query` to `issue`. |
| src/inference_endpoint/dataset_manager/README.md | Fixes import path for `DatasetFormat`. |
| examples/README.md | Updates GPT-OSS-120B example description and removes separate SGLang example entry. |
| examples/07_GPT-OSS-120B_SGLang_Example/README.md | Removes redundant SGLang-only README (content consolidated into the 04 example). |
| examples/04_GPTOSS120B_Example/run.py | Adds `force_regenerate` passthrough for dataset generation. |
| examples/04_GPTOSS120B_Example/gptoss_120b_example.yaml | Updates default endpoint port to 30000. |
| examples/04_GPTOSS120B_Example/eval_livecodebench.py | Adds standalone LiveCodeBench re-scoring script from an existing report dir. |
| examples/04_GPTOSS120B_Example/eval_gpqa.py | Adds standalone GPQA re-scoring script from an existing report dir. |
| examples/04_GPTOSS120B_Example/eval_aime.py | Adds standalone AIME25 re-scoring script from an existing report dir. |
| examples/04_GPTOSS120B_Example/Readme.md | Consolidates end-to-end instructions for vLLM/SGLang + accuracy suite + troubleshooting. |
| examples/02_ServerBenchmarking/README.md | Updates CLI example flags to current `--endpoints`/`--dataset`/`--model` usage. |
| docs/CLI_QUICK_REFERENCE.md | Updates init command guidance to include concurrency and removes a redundant line. |
| docs/CLIENT_PERFORMANCE_TUNING.md | Updates CPU affinity docs to match `enable_cpu_affinity` + `--no-cpu-affinity`. |
| AGENTS.md | Updates repository structure notes for core/ layout and record location. |
What does this PR do?
Consolidate the GPT-OSS-120B example and clean up the pipeline to make sure it works end to end after the recent refactors.
Type of change
Related issues
Testing
Checklist