Hi, thanks for open-sourcing this project.
I am trying to reproduce ComputerRL on Google Cloud, but I ran into an issue with the model deployment instructions in the README. The documented model entry points seem inconsistent or no longer publicly accessible.
According to the README:
- open-source text-only model: `ModelScope/ComputerRL`
- open-source multimodal model: `ModelScope/ComputerRL-V`
But the `sglang.launch_server` example uses:

```shell
python -m sglang.launch_server \
  --model zai-org/autoglm-os-9b \
  --host 0.0.0.0 --port 30000 --served-model-name autoglm-os
```

However, I observed the following:
- `zai-org/autoglm-os-9b` does not work as a public Hugging Face model ID for me. `huggingface_hub` returns a 401 / repository-not-found style error, and `sglang.launch_server` fails with:

  ```
  OSError: zai-org/autoglm-os-9b is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
  ```
- `ModelScope/ComputerRL` also does not seem to exist. Trying to download it with `modelscope.snapshot_download('ModelScope/ComputerRL', ...)` returns a 404.
So currently:
- `zai-org/autoglm-os-9b` is not accessible to me as a public model
- `ModelScope/ComputerRL` returns 404
- the README contains both names, so it is unclear which model path is the correct one today
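For anyone triaging this, the availability checks above can be reproduced with a small stdlib-only script that queries the public Hugging Face model metadata endpoint. The status-code interpretation is my assumption: 401/403 usually indicates a private or gated repo, while 404 means no repo exists under that name.

```python
import urllib.error
import urllib.request

# Public Hugging Face metadata endpoint for a model repo (assumption:
# gated/private repos answer 401/403 here, missing repos answer 404).
HF_API = "https://huggingface.co/api/models/{}"


def classify_status(code: int) -> str:
    """Map an HTTP status code from the HF API to an apparent repo state."""
    if code == 200:
        return "public"
    if code in (401, 403):
        return "private or gated"
    if code == 404:
        return "not found"
    return f"unexpected status {code}"


def check_repo(model_id: str) -> str:
    """Return the apparent state of a Hugging Face model repo."""
    try:
        with urllib.request.urlopen(HF_API.format(model_id), timeout=10) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)


# Example (network required):
#   check_repo("zai-org/autoglm-os-9b")
#   check_repo("ModelScope/ComputerRL")
```

If both IDs come back as "not found" or "private or gated", that would confirm the problem is repo visibility/naming rather than anything in the local setup.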
For reference, my environment is:
- Google Cloud
- A100 80GB
- CUDA 12.4
- Python 3.10
The base GPU environment is working, so the blocker seems specifically to be the model identifier / access path.
Could you clarify:
- What is the correct current model path to use with `sglang.launch_server`?
- Is `zai-org/autoglm-os-9b` private, gated, or renamed?
- Are `ModelScope/ComputerRL` and `ModelScope/ComputerRL-V` still valid, or have they moved?
Thanks!