vLLM v0 has been deprecated, but LocalVLLMBackend currently does not support the v1 engine. As a result, vLLM appears to be pinned to release 0.9.1 (from June 2025).
I'd propose adding vLLM v1 support to LocalVLLMBackend. I believe that should also allow remote vLLM (via its OpenAI-compatible API) to work, but that will need to be verified.
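For reference, here is a minimal sketch of what the remote path might look like, since vLLM's server exposes an OpenAI-compatible endpoint. The base URL and model name are assumptions for illustration, not values from this repo:

```python
# Sketch: talking to a vLLM server through its OpenAI-compatible API.
# Assumes a server started with `vllm serve <model>` listening on localhost:8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed vLLM server address
    api_key="EMPTY",  # vLLM's server does not require a real key by default
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

If that works as expected, the remote backend may not need engine-version-specific changes at all, since the OpenAI API surface is the same for v0 and v1.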