fix(deps): update module github.com/ollama/ollama to v0.21.2 (#66)
Open

renovate[bot] wants to merge 1 commit into main
ℹ Artifact update notice

File name: go.mod

In order to perform the update(s) described in the table above, Renovate ran the…
This PR contains the following updates:
github.com/ollama/ollama: v0.3.6 → v0.21.2

Release Notes

ollama/ollama (github.com/ollama/ollama)
v0.21.2

What's Changed

- Integrations configured with `ollama launch` now appear in a fixed, canonical order

New Contributors

Full Changelog: ollama/ollama@v0.21.1...v0.21.2
v0.21.1
What's Changed
Kimi CLI
You can now install and run the Kimi CLI through Ollama.
Kimi CLI with Kimi K2.6 excels at long-horizon agentic execution tasks through a multi-agent system.
- `think=false`

Full Changelog: ollama/ollama@v0.21.0...v0.21.1
v0.21.0
Hermes Agent
Hermes learns with you, automatically creating skills to better serve your workflows. Great for research and engineering tasks.
What's Changed
- `ollama launch`: added both integrations, which can now be configured in one command alongside the rest of the supported coding agents.
- `ollama launch opencode` now writes its config inline rather than to a separate file, matching how other integrations are handled.
- `ollama launch` no longer rewrites config when nothing changed. Pressing → on a configured multi-model integration, or passing `--model` with the current primary, used to trigger a confirmation prompt and rewrite both the editor's config file and `config.json`. Now it's a no-op when the resolved model list matches what's already saved.
- Fixed `ollama launch openclaw --yes` so that it correctly skips the channels configuration step and non-interactive setups complete cleanly.
- Fixed an issue with `generate` that was breaking cmake builds on some Xcode versions.
- `go build`

Full Changelog: ollama/ollama@v0.20.7...v0.21.0
v0.20.7
What's Changed
Full Changelog: ollama/ollama@v0.20.6...v0.20.7
v0.20.6
What's Changed
New Contributors
@matteocelani made their first contribution in #15272
Full Changelog: ollama/ollama@v0.20.5...v0.20.6
v0.20.5

OpenClaw channel setup with `ollama launch`

What's Changed

- `ollama launch openclaw`
- `ollama launch opencode` now detects curl-based OpenCode installs at `~/.opencode/bin/`
- `save` command for models imported from safetensors

New Contributors
Full Changelog: ollama/ollama@v0.20.4...v0.20.5
v0.20.4
What's Changed
Full Changelog: ollama/ollama@v0.20.3...v0.20.4
v0.20.3
What's Changed
Full Changelog: ollama/ollama@v0.20.2...v0.20.3
v0.20.2
What's Changed
Full Changelog: ollama/ollama@v0.20.1...v0.20.2
v0.20.1
What's Changed
Full Changelog: ollama/ollama@v0.20.0...v0.20.1
v0.20.0

Gemma 4

- Effective 2B (E2B)
- Effective 4B (E4B)
- 26B (Mixture of Experts model with 4B active parameters)
- 31B (Dense)
What's Changed
Full Changelog: ollama/ollama@v0.19.0...v0.20.0-rc0
v0.19.0
Ollama is now powered by MLX on Apple Silicon in preview
Ollama on Apple silicon is now built on top of Apple’s machine learning framework, MLX, to take advantage of its unified memory architecture.
Read more: https://ollama.com/blog/mlx
What's Changed
- `ollama launch pi` now includes a web search plugin that uses Ollama's web search
- `grok` models
- `qwen3-next:80b` not loading in Ollama

New Contributors
Full Changelog: ollama/ollama@v0.18.3...v0.19.0
v0.18.3
Visual Studio Code
Microsoft Visual Studio Code now directly integrates with Ollama via GitHub Copilot.
If you have Ollama installed, any local or cloud model from Ollama can be selected for use within Visual Studio Code.
What's Changed
Full Changelog: ollama/ollama@v0.18.2...v0.18.3
v0.18.2
What's Changed
- Ensured `npm` and `git` are installed before installing OpenClaw
- `ollama launch openclaw --model <model>`

Full Changelog: ollama/ollama@v0.18.1...v0.18.2
v0.18.1
Web Search and Fetch in OpenClaw
Ollama now ships with web search and web fetch plugins for OpenClaw. These allow Ollama's models (local or cloud) to search the web for the latest content and news, and let OpenClaw with Ollama fetch web pages and extract readable content for processing. This feature does not execute JavaScript.
When using local models with web search in OpenClaw, ensure you are signed in to Ollama with `ollama signin`.

You can install web search directly into OpenClaw as a plugin if you already have OpenClaw configured and working:
Ollama web search plugin
Non-interactive (headless) mode for ollama launch
`ollama launch` can now run in non-interactive mode. Perfect for:
- Docker/containers: spin up an integration as a pipeline step to run evals, test prompts, or validate model behavior as part of your build. Tear it down when the job ends.
- CI/CD: generate code reviews, security checks, and other tasks within your CI
- Scripts/automation: kick off automated tasks with Ollama and Claude Code
- `--model` must be specified to run in headless mode
- The `--yes` flag will auto-pull the model and skip any selectors

Try with:
```shell
ollama launch claude --model kimi-k2.5:cloud --yes -- -p "how does this repository work?"
```

Use non-interactive mode in OpenClaw
You can ask OpenClaw to run tasks using Claude with subagents:
What's Changed
- `ollama launch openclaw` will now use the official Ollama auth and model provider for OpenClaw
- `/cmd/bench`
- `ollama launch openclaw` will now skip `--install-daemon` when systemd is unavailable

Full Changelog: ollama/ollama@v0.18.0...v0.18.1
v0.18.0
Ollama 0.18 includes improved performance for OpenClaw and Ollama’s cloud models, including the new Nemotron-3-Super model by NVIDIA designed for high-performance agentic reasoning tasks.
Improved OpenClaw performance with Kimi-K2.5
This release of Ollama improves performance of cloud models and their reliability.
Ollama is now a provider in OpenClaw
Ollama can now be selected as an authentication and model provider during OpenClaw onboarding (thanks @BruceMacD for contributing and @steipete for reviewing!)
More information: https://docs.openclaw.ai/providers/ollama
Nemotron-3-Super
Nemotron-3-Super is a new 122B parameter model with strong reasoning and tool-calling capability, with top performance when run on modern hardware:
- `ollama run nemotron-3-super:cloud`
- `ollama run nemotron-3-super` to run locally (requires 96GB+ of VRAM)

Nemotron-3-Super scores highest of any open model on PinchBench, a benchmark suite that measures how successful models are at completing tasks when used with OpenClaw.
Or using OpenClaw’s onboarding:
Non-interactive task support
`ollama launch` now supports non-interactive tasks by passing in `--yes`. This enables using Claude, Codex, Pi and more in scripts, GitHub Actions, and other non-interactive environments.

Lower latency on MiniMax-M2.5 and Qwen3.5 on Ollama's cloud
For customers in North America, MiniMax-M2.5 and Qwen3.5 on Ollama’s cloud now respond much faster, up to 10x and up to 2x faster respectively, and often in less than a second. This is ideal for tasks that require a fast Time To First Token (TTFT) when needing quick answers from OpenClaw or quick back-to-back coding tasks.
Driver updates required for ROCm 7
This version of Ollama ships with ROCm 7, and requires updating drivers to the latest version for continued support.
What's Changed
- `ollama pull`: setting `:cloud` as a tag will now automatically connect to cloud models.
- New `--yes` flag for `ollama launch` that skips all prompts, making it possible to run AI assistants and other tools in non-interactive environments
- `ollama launch claude`

New Contributors
Full Changelog: ollama/ollama@v0.17.7...v0.18.0
v0.17.7
What's Changed
"medium"to correctly interpreted in Ollama's API for all thinking modelsollama launchFull Changelog: ollama/ollama@v0.17.6...v0.17.7
v0.17.6
What's Changed
New Contributors
Full Changelog: ollama/ollama@v0.17.5...v0.17.6
v0.17.5
New models
What's Changed
- `qwen3.5` models (`ollama pull qwen3.5:35b`, for example)
- `ollama run --verbose` will now show peak memory usage when using Ollama's MLX engine

Full Changelog: ollama/ollama@v0.17.4...v0.17.5
v0.17.4
New models
What's Changed
Full Changelog: ollama/ollama@v0.17.3...v0.17.4
v0.17.3
What's Changed
Full Changelog: ollama/ollama@v0.17.2...v0.17.3
v0.17.2
What's Changed
Full Changelog: ollama/ollama@v0.17.1...v0.17.2
v0.17.1
What's Changed
- `ollama create` will no longer default to affine quantization for unquantized models when using the MLX engine

Full Changelog: ollama/ollama@v0.17.0...v0.17.1
v0.17.0
OpenClaw
OpenClaw can now be installed and configured automatically via Ollama, making it the easiest way to get up and running with OpenClaw with open models like Kimi-K2.5, GLM-5, and Minimax-M2.5.
Get started
```shell
ollama launch openclaw
```

Web search in OpenClaw
When using cloud models, web search is enabled, allowing OpenClaw to search the internet.
What's Changed
New Contributors
Full Changelog: ollama/ollama@v0.16.3...v0.17.0
v0.16.3
What's Changed
- `ollama launch cline` added for the Cline CLI
- `ollama launch <integration>` will now always show the model picker

New Contributors
Full Changelog: ollama/ollama@v0.16.2...v0.16.3
v0.16.2
What's Changed
- `ollama launch claude` now supports searching the web when using `:cloud` models
- `ollama` in PowerShell
- If running `ollama serve` manually, set `OLLAMA_NO_CLOUD=1`.

Full Changelog: ollama/ollama@v0.16.1...v0.16.2-rc0
v0.16.1
What's Changed
- The `curl` install script on macOS will now only prompt for your password if it's required
- The install script on Windows will now show progress
- `OLLAMA_LOAD_TIMEOUT` variable

Full Changelog: ollama/ollama@v0.16.0...v0.16.1
v0.16.0
New models
New `ollama` command

The new `ollama` command makes it easy to launch your favorite apps with models using Ollama.

What's Changed
- `ollama launch pi`

Full Changelog: ollama/ollama@v0.15.6...v0.16.0
v0.15.6
What's Changed
- `ollama launch droid`
- `ollama launch` will now download missing models instead of erroring
- Fixed issue where `ollama launch claude` would cause context compaction when providing images

Full Changelog: ollama/ollama@v0.15.5...v0.15.6
v0.15.5
New models
Improvements to `ollama launch`

- `ollama launch` can now be provided arguments, for example `ollama launch claude -- --resume`
- `ollama launch` will now run subagents when using `ollama launch claude`
- `ollama launch opencode`

What's Changed
- `ollama launch` for planning, deep research, and similar tasks
- `ollama signin` will now open a browser window to make signing in easier
- `ollama signin` will now open the browser to the connect page
- `num_predict` in the API
- `num_predict`

New Contributors
Full Changelog: ollama/ollama@v0.15.4...v0.15.5
v0.15.4
What's Changed
- `ollama launch openclaw` will now enter the standard OpenClaw onboarding flow if it has not yet been completed.

Full Changelog: ollama/ollama@v0.15.3...v0.15.4
v0.15.3
What's Changed
- Renamed `ollama launch clawdbot` to `ollama launch openclaw` to reflect the project's new name
- `ollama launch` will now use the value of `OLLAMA_HOST` when running it

New Contributors
Full Changelog: ollama/ollama@v0.15.2...v0.15.3
v0.15.2
What's Changed
- New `ollama launch clawdbot` command for launching Clawdbot using Ollama models

Full Changelog: ollama/ollama@v0.15.1...v0.15.2
v0.15.1
What's Changed
- Fixed issue where `ollama launch` would not detect `claude` and would incorrectly update `opencode` configurations

New Contributors
Full Changelog: ollama/ollama@v0.15.0...v0.15.1
v0.15.0

`ollama launch`

A new `ollama launch` command to use Ollama's models with Claude Code, Codex, OpenCode, and Droid without separate configuration.

What's Changed

- New `ollama launch` command for Claude Code, Codex, OpenCode, and Droid
- Fixed issue where `"""` would not work when using `ollama run`
- `ollama run`

v0.14.3
New models
What's Changed
- `ollama create` and `ollama show` commands for experimental models
- The `/api/generate` API can now be used for image generation
- Fixed issue where `ollama rm` would only stop the first model in the list if it were running

Full Changelog: ollama/ollama@v0.14.2...v0.14.3
v0.14.2
New models
What's Changed
- Improved the `/v1/responses` API to better conform to the OpenResponses specification

New Contributors
Full Changelog: ollama/ollama@v0.14.1...v0.14.2
v0.14.1
Image generation models (experimental)
Experimental image generation models are available for macOS and Linux (CUDA) in Ollama:
Available models
More models coming soon:
What's Changed
New Contributors
Full Changelog: ollama/ollama@v0.14.0...v0.14.1
v0.14.0
What's Changed
- The `ollama run --experimental` CLI will now open a new Ollama CLI that includes an agent loop and the `bash` tool
- `/v1/messages` API
- A `REQUIRES` command for the `Modelfile` allows declaring which version of Ollama is required for the model
- `NaN` or `-Inf`
- `zst` compression

New Contributors
Full Changelog: ollama/ollama@v0.13.5...v0.14.0-rc2
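The `REQUIRES` directive from the v0.14.0 notes above could be exercised roughly like this. The Modelfile syntax is an illustrative guess (only the directive name appears in the notes), and `model.gguf` / `my-model` are placeholder names:

```shell
# Hypothetical sketch: pin a minimum Ollama version in a Modelfile, then
# build a model from it. The REQUIRES line is assumed syntax, not confirmed.
cat > Modelfile <<'EOF'
FROM ./model.gguf
REQUIRES 0.14.0
EOF
ollama create my-model -f Modelfile
```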
v0.13.5
New Models
What's Changed
- `bert` architecture models now run on Ollama's engine

New Contributors
Full Changelog: ollama/ollama@v0.13.4...v0.13.5
v0.13.4
New Models
What's Changed
New Contributors
Full Changelog: ollama/ollama@v0.13.3...v0.13.4-rc0
v0.13.3
New models
What's Changed
- `/api/embed` and `/v1/embeddings`

Full Changelog: ollama/ollama@v0.13.2...v0.13.3
v0.13.2
New models
What's Changed
- `mistral-3`, `gemma3`, `qwen3-vl` and more. This improves memory utilization and performance when providing images as input.
- Fixed issue where `deepseek-v3.1` would always think even when thinking is disabled in Ollama's app

New Contributors
Full Changelog: ollama/ollama@v0.13.1...v0.13.2
v0.13.1
New models
What's Changed
- `nomic-embed-text` will now use Ollama's engine by default
- `cogito-v2.1`
- `Unmarshal:` errors

New Contributors
Full Changelog: ollama/ollama@v0.13.0...v0.13.1
v0.13.0
New models
DeepSeek-OCR
DeepSeek-OCR is now available on Ollama. Example inputs:
New `bench` tool

Ollama's GitHub repo now includes a `bench` tool that can be used to test model performance. For the time being this is a separate tool that can be built in the Ollama GitHub repository.

First, install Go. Then, from the root of the Ollama repository, run:
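The build command itself was not captured on this page; assuming the tool lives at `/cmd/bench` (a path referenced in the v0.18.1 notes above), the build step would look roughly like:

```shell
# Build the bench tool from the root of the Ollama repository.
# The ./cmd/bench path is assumed from elsewhere in these notes;
# verify against the tool's documentation linked below.
go build -o bench ./cmd/bench

# Then invoke it (flags are illustrative, not confirmed):
./bench --help
```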
For more information see the tool's documentation
What's Changed
New Contributors
Full Changelog: ollama/ollama@v0.12.11...v0.13.0
v0.12.11
Logprobs
Ollama's API and OpenAI-compatible API now support log probabilities. Log probabilities of output tokens indicate the likelihood of each token occurring in the sequence given the context. This is useful for different use cases:
To enable logprobs, provide `"logprobs": true` to Ollama's API.

When log probabilities are requested, response chunks will now include a `"logprobs"` field with the token, log probability and raw bytes (for partial unicode):

```json
{
  "model": "gemma3",
  "created_at": "2025-11-14T22:17:56.598562Z",
  "response": "Okay",
  "done": false,
  "logprobs": [
    {
      "token": "Okay",
      "logprob": -1.3434503078460693,
      "bytes": [79, 107, 97, 121]
    }
  ]
}
```

top_logprobs

When setting `"top_logprobs"`, a number of most-likely tokens are also provided, making it possible to introspect alternative tokens. Below is an example request. This will generate a stream of response chunks with the following fields:

```json
{
  "model": "gemma3",
  "created_at": "2025-11-14T22:26:10.466324Z",
  "response": "The",
  "done": false,
  "logprobs": [
    {
      "token": "The",
      "logprob": -0.8361086845397949,
      "bytes": [84, 104, 101],
      "top_logprobs": [
        {
          "token": "The",
          "logprob": -0.8361086845397949,
          "bytes": [84, 104, 101]
        },
        {
          "token": "Okay",
          "logprob": -1.2590975761413574,
          "bytes": [79, 107, 97, 121]
        },
        {
          "token": "That",
          "logprob": -1.2686877250671387,
          "bytes": [84, 104, 97, 116]
        }
      ]
    }
  ]
}
```

Special thanks
Thank you @baptistejamin for adding Logprobs to Ollama's API.
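The example requests mentioned above were not captured on this page. A minimal sketch of what a `top_logprobs` request might look like, using the fields described in these notes; the `/api/generate` endpoint, model, and prompt are assumptions:

```shell
# Request log probabilities plus the 3 most-likely alternatives per token.
# Endpoint path and payload shape are inferred from the response examples
# above, not confirmed by this page.
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3",
  "prompt": "Why is the sky blue?",
  "logprobs": true,
  "top_logprobs": 3
}'
```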
Vulkan support (opt-in)
Ollama 0.12.11 includes support for Vulkan acceleration. Vulkan brings support for a broad range of GPUs from AMD, Intel, and iGPUs. Vulkan support is not yet enabled by default, and requires opting in by running Ollama with a custom environment variable:
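The command itself was elided from this page; the What's Changed list below confirms the variable name, so opting in presumably looks like:

```shell
# Enable the opt-in Vulkan backend. OLLAMA_VULKAN=1 is confirmed in the
# What's Changed notes for this release.
OLLAMA_VULKAN=1 ollama serve
```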
On PowerShell, set the variable with `$env:OLLAMA_VULKAN = "1"` before running `ollama serve`.
For issues or feedback on using Vulkan with Ollama, create an issue labelled Vulkan and make sure to include server logs where possible to aid in debugging.
What's Changed
"required"field in tool definitions will now be omitted if not specified"tool_call_id"would be omitted when using the OpenAI-compatible API.ollama createwould import data from bothconsolidated.safetensorsand other safetensor files.OLLAMA_VULKAN=1. For example:OLLAMA_VULKAN=1 ollama serveNew Contributors
Full Changelog: ollama/ollama@v0.12.10...v0.12.11
v0.12.10
`ollama run` now works with embedding models

`ollama run` can now run embedding models to generate vector embeddings from text. Content can also be provided to `ollama run` via standard input:

What's Changed
- `qwen3-vl:235b` and `qwen3-vl:235b-instruct`
- `/api/chat` API
- `ollama run` now works with embedding models

New Contributors
Full Changelog: ollama/ollama@v0.12.9...v0.12.10
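The embedding invocations described in the v0.12.10 notes above were elided from this page; a minimal sketch, assuming the `nomic-embed-text` embedding model mentioned in the v0.13.1 notes:

```shell
# Pass text as an argument to generate vector embeddings...
ollama run nomic-embed-text "The sky is blue"

# ...or pipe content in via standard input.
echo "The sky is blue" | ollama run nomic-embed-text
```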
v0.12.9
What's Changed
Full Changelog: ollama/ollama@v0.12.8...v0.12.9
v0.12.8
What's Changed
- `qwen3-vl` performance improvements, including flash attention support by default
- `qwen3-vl` will now output less leading whitespace in the response when thinking
- Fixed issue where `deepseek-v3.1` thinking could not be disabled in Ollama's new app
- Fixed issue where `qwen3-vl` would fail to interpret images with transparent backgrounds
- `ollama rm`

New Contributors
Full Changelog: ollama/ollama@v0.12.7...v0.12.8
v0.12.7
(Ollama screenshot, 2025-10-29)
Configuration
📅 Schedule: (UTC)
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.