Commit bb5bf6f

feat: update Ollama model recommendations with modern alternatives
Add Qwen 3, Qwen 3 Coder, DeepSeek R1, Gemma 3, and Phi 4 as recommended Ollama models. Remove outdated models (CodeLlama, WizardCoder, Phind CodeLlama, old DeepSeek Coder, Llama 3) from Ollama provider options. Update onboarding default to Qwen 3 8B and refresh docs accordingly.
1 parent 294ba93 commit bb5bf6f

7 files changed

Lines changed: 205 additions & 66 deletions


core/config/onboarding.ts

Lines changed: 2 additions & 2 deletions
```diff
@@ -3,8 +3,8 @@ import { ConfigYaml } from "@continuedev/config-yaml";
 export const LOCAL_ONBOARDING_PROVIDER_TITLE = "Ollama";
 export const LOCAL_ONBOARDING_FIM_MODEL = "qwen2.5-coder:1.5b-base";
 export const LOCAL_ONBOARDING_FIM_TITLE = "Qwen2.5-Coder 1.5B";
-export const LOCAL_ONBOARDING_CHAT_MODEL = "llama3.1:8b";
-export const LOCAL_ONBOARDING_CHAT_TITLE = "Llama 3.1 8B";
+export const LOCAL_ONBOARDING_CHAT_MODEL = "qwen3:8b";
+export const LOCAL_ONBOARDING_CHAT_TITLE = "Qwen 3 8B";
 export const LOCAL_ONBOARDING_EMBEDDINGS_MODEL = "nomic-embed-text:latest";
 export const LOCAL_ONBOARDING_EMBEDDINGS_TITLE = "Nomic Embed";
```

core/llm/toolSupport.ts

Lines changed: 2 additions & 0 deletions
```diff
@@ -219,6 +219,8 @@ export const PROVIDER_TOOL_SUPPORT: Record<string, (model: string) => boolean> =
         "glm-5",
         "deepseek",
         "dolphin",
+        "gemma3",
+        "phi4",
       ].some((part) => modelName.toLowerCase().includes(part))
     ) {
       return true;
```
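The two added entries above extend a substring allowlist: a model is treated as tool-capable when its lowercased name contains any listed family string. As a minimal sketch of that check (a hypothetical simplification — the real `PROVIDER_TOOL_SUPPORT` map is keyed by provider and the list shown in the hunk is truncated), the matching works like this:

```typescript
// Hypothetical reduction of the tool-support check. The family list here is
// only the fragment visible in the diff, not the full list in the codebase.
const toolCapableFamilies = ["glm-5", "deepseek", "dolphin", "gemma3", "phi4"];

function supportsTools(modelName: string): boolean {
  // A model qualifies if any known family name appears in its tag.
  return toolCapableFamilies.some((part) =>
    modelName.toLowerCase().includes(part),
  );
}

console.log(supportsTools("gemma3:12b")); // true — newly matched by this commit
console.log(supportsTools("phi4:14b")); // true — newly matched by this commit
```

Because the match is a plain substring test, every size variant of a family (`gemma3:1b` through `gemma3:27b`) is covered by a single entry.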

docs/customize/model-providers/top-level/ollama.mdx

Lines changed: 2 additions & 2 deletions
````diff
@@ -99,9 +99,9 @@ Continue may set a higher default context length than other Ollama tools, causin

 ```yaml title="config.yaml"
 models:
-  - name: Deepseek R1
+  - name: Qwen 3 8B
     provider: ollama
-    model: deepseek-r1:latest
+    model: qwen3:8b
     defaultCompletionOptions:
       contextLength: 2048
 ```
````

docs/customize/model-roles/chat.mdx

Lines changed: 13 additions & 52 deletions
````diff
@@ -161,20 +161,16 @@ If you prefer to use a model from [Google](../model-providers/top-level/gemini),

 For the best local, offline Chat experience, you will want to use a model that is large but fast enough on your machine.

-### Llama 3.1 8B
+### Qwen 3 8B

-If your local machine can run an 8B parameter model, then we recommend running Llama 3.1 8B on your machine (e.g. using [Ollama](../model-providers/top-level/ollama) or [LM Studio](../model-providers/top-level/lmstudio)).
+If your local machine can run an 8B parameter model, then we recommend running Qwen 3 8B on your machine (e.g. using [Ollama](../model-providers/top-level/ollama) or [LM Studio](../model-providers/top-level/lmstudio)).

 <Tabs>
   <Tab title="Hub">
     <Tabs>
       <Tab title="Ollama">
-        Add the [Ollama Llama 3.1 8b block](https://continue.dev/ollama/llama3.1-8b) from the hub
+        Add the [Ollama Qwen 3 8B block](https://continue.dev/ollama/qwen3-8b) from the hub
       </Tab>
-      {/* HUB_TODO nonexistent block */}
-      {/* <Tab title="LM Studio">
-        Add the [LM Studio Llama 3.1 8b block](https://continue.dev/explore/models) from the hub
-      </Tab> */}
     </Tabs>
   </Tab>
   <Tab title="YAML">
@@ -186,9 +182,9 @@ If your local machine can run an 8B parameter model, then we recommend running L
       schema: v1

       models:
-        - name: Llama 3.1 8B
+        - name: Qwen 3 8B
           provider: ollama
-          model: llama3.1:8b
+          model: qwen3:8b
       ```
     </Tab>
     <Tab title="LM Studio">
@@ -198,43 +194,20 @@ If your local machine can run an 8B parameter model, then we recommend running L
       schema: v1

       models:
-        - name: Llama 3.1 8B
+        - name: Qwen 3 8B
           provider: lmstudio
-          model: llama3.1:8b
-      ```
-    </Tab>
-    <Tab title="Msty">
-      ```yaml title="config.yaml"
-      name: My Config
-      version: 0.0.1
-      schema: v1
-
-      models:
-        - name: Llama 3.1 8B
-          provider: msty
-          model: llama3.1:8b
+          model: qwen3:8b
       ```
     </Tab>
   </Tabs>
 </Tab>
 </Tabs>

-### DeepSeek Coder 2 16B
+### Qwen 3 Coder 30B

-If your local machine can run a 16B parameter model, then we recommend running DeepSeek Coder 2 16B (e.g. using [Ollama](../model-providers/top-level/ollama) or [LM Studio](../model-providers/top-level/lmstudio)).
+If your local machine can run a larger model, then [Qwen 3 Coder](https://ollama.com/library/qwen3-coder) is an excellent code-specialized option (e.g. using [Ollama](../model-providers/top-level/ollama) or [LM Studio](../model-providers/top-level/lmstudio)). The 30B-A3B variant uses mixture-of-experts and runs efficiently despite its size.

 <Tabs>
-  {/* HUB_TODO nonexistent blocks */}
-  {/* <Tab title="Hub">
-    <Tabs>
-      <Tab title="Ollama">
-        Add the [Ollama Deepseek Coder 2 16B block](https://continue.dev/explore/models) from the hub
-      </Tab>
-      <Tab title="LM Studio">
-        Add the [LM Studio Deepseek Coder 2 16B block](https://continue.dev/explore/models) from the hub
-      </Tab>
-    </Tabs>
-  </Tab> */}
   <Tab title="YAML">
     <Tabs>
       <Tab title="Ollama">
@@ -244,9 +217,9 @@ If your local machine can run a 16B parameter model, then we recommend running D
       schema: v1

       models:
-        - name: DeepSeek Coder 2 16B
+        - name: Qwen 3 Coder 30B
           provider: ollama
-          model: deepseek-coder-v2:16b
+          model: qwen3-coder:30b-a3b
       ```
     </Tab>
     <Tab title="LM Studio">
@@ -256,21 +229,9 @@ If your local machine can run a 16B parameter model, then we recommend running D
       schema: v1

       models:
-        - name: DeepSeek Coder 2 16B
+        - name: Qwen 3 Coder 30B
           provider: lmstudio
-          model: deepseek-coder-v2:16b
-      ```
-    </Tab>
-    <Tab title="Msty">
-      ```yaml title="config.yaml"
-      name: My Config
-      version: 0.0.1
-      schema: v1
-
-      models:
-        - name: DeepSeek Coder 2 16B
-          provider: msty
-          model: deepseek-coder-v2:16b
+          model: qwen3-coder:30b-a3b
       ```
     </Tab>
   </Tabs>
````

gui/src/pages/AddNewModel/configs/models.ts

Lines changed: 167 additions & 6 deletions
```diff
@@ -380,7 +380,7 @@ export const models: { [key: string]: ModelPackage } = {
        },
      },
    ],
-    providerOptions: ["ollama", "lmstudio", "llama.cpp"],
+    providerOptions: ["lmstudio", "llama.cpp"],
    isOpenSource: true,
  },
  deepseekChatApi: {
@@ -421,6 +421,169 @@ export const models: { [key: string]: ModelPackage } = {
    providerOptions: ["deepseek"],
    isOpenSource: true,
  },
+  deepseekR1Local: {
+    title: "DeepSeek R1",
+    description:
+      "A powerful open-source reasoning model with chain-of-thought capabilities, available in distilled sizes for local use.",
+    params: {
+      title: "DeepSeek-R1-14b",
+      model: "deepseek-r1:14b",
+      contextLength: 64_000,
+    },
+    icon: "deepseek.png",
+    dimensions: [
+      {
+        name: "Parameter Count",
+        description: "The number of parameters in the model",
+        options: {
+          "1.5b": {
+            model: "deepseek-r1:1.5b",
+            title: "DeepSeek-R1-1.5b",
+          },
+          "7b": {
+            model: "deepseek-r1:7b",
+            title: "DeepSeek-R1-7b",
+          },
+          "8b": {
+            model: "deepseek-r1:8b",
+            title: "DeepSeek-R1-8b",
+          },
+          "14b": {
+            model: "deepseek-r1:14b",
+            title: "DeepSeek-R1-14b",
+          },
+          "32b": {
+            model: "deepseek-r1:32b",
+            title: "DeepSeek-R1-32b",
+          },
+          "70b": {
+            model: "deepseek-r1:70b",
+            title: "DeepSeek-R1-70b",
+          },
+        },
+      },
+    ],
+    providerOptions: ["ollama", "lmstudio", "llama.cpp"],
+    isOpenSource: true,
+  },
+  qwen3Chat: {
+    title: "Qwen 3",
+    description:
+      "Alibaba's latest model with hybrid thinking, strong coding and reasoning capabilities.",
+    params: {
+      title: "Qwen3-8b",
+      model: "qwen3:8b",
+      contextLength: 32_768,
+    },
+    icon: "qwen.png",
+    dimensions: [
+      {
+        name: "Parameter Count",
+        description: "The number of parameters in the model",
+        options: {
+          "0.6b": {
+            model: "qwen3:0.6b",
+            title: "Qwen3-0.6b",
+          },
+          "4b": {
+            model: "qwen3:4b",
+            title: "Qwen3-4b",
+          },
+          "8b": {
+            model: "qwen3:8b",
+            title: "Qwen3-8b",
+          },
+          "32b": {
+            model: "qwen3:32b",
+            title: "Qwen3-32b",
+          },
+        },
+      },
+    ],
+    providerOptions: ["ollama", "lmstudio", "llama.cpp"],
+    isOpenSource: true,
+  },
+  qwen3Coder: {
+    title: "Qwen 3 Coder",
+    description:
+      "Alibaba's latest code-specialized model with strong multi-language programming support.",
+    params: {
+      title: "Qwen3-Coder-30b",
+      model: "qwen3-coder:30b-a3b",
+      contextLength: 32_768,
+    },
+    icon: "qwen.png",
+    dimensions: [
+      {
+        name: "Parameter Count",
+        description: "The number of parameters in the model",
+        options: {
+          "1.5b": {
+            model: "qwen3-coder:1.5b",
+            title: "Qwen3-Coder-1.5b",
+          },
+          "8b": {
+            model: "qwen3-coder:8b",
+            title: "Qwen3-Coder-8b",
+          },
+          "30b-a3b (MoE)": {
+            model: "qwen3-coder:30b-a3b",
+            title: "Qwen3-Coder-30b",
+          },
+        },
+      },
+    ],
+    providerOptions: ["ollama", "lmstudio", "llama.cpp"],
+    isOpenSource: true,
+  },
+  gemma3Chat: {
+    title: "Gemma 3",
+    description:
+      "Google's latest open model with strong coding and instruction-following capabilities.",
+    params: {
+      title: "Gemma3-12b",
+      model: "gemma3:12b",
+      contextLength: 32_768,
+    },
+    dimensions: [
+      {
+        name: "Parameter Count",
+        description: "The number of parameters in the model",
+        options: {
+          "1b": {
+            model: "gemma3:1b",
+            title: "Gemma3-1b",
+          },
+          "4b": {
+            model: "gemma3:4b",
+            title: "Gemma3-4b",
+          },
+          "12b": {
+            model: "gemma3:12b",
+            title: "Gemma3-12b",
+          },
+          "27b": {
+            model: "gemma3:27b",
+            title: "Gemma3-27b",
+          },
+        },
+      },
+    ],
+    providerOptions: ["ollama", "lmstudio", "llama.cpp"],
+    isOpenSource: true,
+  },
+  phi4: {
+    title: "Phi 4",
+    description:
+      "Microsoft's compact 14B model with strong reasoning and coding performance.",
+    params: {
+      title: "Phi-4",
+      model: "phi4:14b",
+      contextLength: 16_384,
+    },
+    providerOptions: ["ollama", "lmstudio", "llama.cpp"],
+    isOpenSource: true,
+  },
  deepseekCoder2Lite: {
    title: "DeepSeek Coder 2 Lite",
    description:
@@ -640,7 +803,6 @@ export const models: { [key: string]: ModelPackage } = {
      },
    ],
    providerOptions: [
-      "ollama",
      "lmstudio",
      "together",
      "llama.cpp",
@@ -832,7 +994,6 @@ export const models: { [key: string]: ModelPackage } = {
      },
    ],
    providerOptions: [
-      "ollama",
      "lmstudio",
      "together",
      "ovhcloud",
@@ -851,7 +1012,7 @@ export const models: { [key: string]: ModelPackage } = {
      contextLength: 20_000,
      title: "Granite Code",
    },
-    providerOptions: ["ollama", "lmstudio", "llama.cpp", "replicate"],
+    providerOptions: ["lmstudio", "llama.cpp", "replicate"],
    icon: "ibm.png",
    isOpenSource: true,
    dimensions: [
@@ -910,7 +1071,7 @@ export const models: { [key: string]: ModelPackage } = {
        },
      },
    ],
-    providerOptions: ["ollama", "lmstudio", "llama.cpp", "replicate"],
+    providerOptions: ["lmstudio", "llama.cpp", "replicate"],
    isOpenSource: true,
  },
  phindCodeLlama: {
@@ -922,7 +1083,7 @@ export const models: { [key: string]: ModelPackage } = {
      model: "phind-codellama-34b",
      contextLength: 4096,
    },
-    providerOptions: ["ollama", "lmstudio", "llama.cpp", "replicate"],
+    providerOptions: ["lmstudio", "llama.cpp", "replicate"],
    isOpenSource: true,
  },
  codestral: {
```
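Each new entry above follows the existing `ModelPackage` shape: a default configuration in `params`, plus an optional `dimensions` list whose options swap in a different model tag (e.g. picking the "32b" size of Qwen 3). A small sketch of that resolution — the interfaces and `resolveModel` helper here are illustrative assumptions, not the actual code in this file:

```typescript
// Illustrative types mirroring the shape of the entries added in this commit.
interface DimensionOption {
  model: string;
  title: string;
}

interface ModelPackageSketch {
  params: { title: string; model: string; contextLength: number };
  dimensions?: { name: string; options: Record<string, DimensionOption> }[];
}

// Trimmed-down copy of the qwen3Chat entry from the diff above.
const qwen3Chat: ModelPackageSketch = {
  params: { title: "Qwen3-8b", model: "qwen3:8b", contextLength: 32_768 },
  dimensions: [
    {
      name: "Parameter Count",
      options: {
        "4b": { model: "qwen3:4b", title: "Qwen3-4b" },
        "8b": { model: "qwen3:8b", title: "Qwen3-8b" },
        "32b": { model: "qwen3:32b", title: "Qwen3-32b" },
      },
    },
  ],
};

// Hypothetical helper: a selected dimension option overrides the default
// model tag; otherwise fall back to params.model.
function resolveModel(pkg: ModelPackageSketch, choice?: string): string {
  const opts = pkg.dimensions?.[0]?.options;
  return (choice && opts?.[choice]?.model) || pkg.params.model;
}

console.log(resolveModel(qwen3Chat, "32b")); // "qwen3:32b"
console.log(resolveModel(qwen3Chat)); // "qwen3:8b" (the default)
```

This is also why removing `"ollama"` from the older packages' `providerOptions` is enough to hide them from the Ollama provider page without deleting the entries outright.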

gui/src/pages/AddNewModel/configs/providers.ts

Lines changed: 1 addition & 1 deletion
```diff
@@ -491,7 +491,7 @@ Select the \`GPT-4o\` model below to complete your provider configuration, but n
    description:
      "One of the fastest ways to get started with local models on Mac, Linux, or Windows",
    longDescription:
-      'To get started with Ollama, follow these steps:\n1. Download from [ollama.ai](https://ollama.ai/download) and open the application\n2. Open a terminal and run `ollama run <MODEL_NAME>`. Example model names are `codellama:7b-instruct` or `llama2:7b-text`. You can find the full list [here](https://ollama.ai/library).\n3. Make sure that the model name used in step 2 is the same as the one in config.json (e.g. `model="codellama:7b-instruct"`)\n4. Once the model has finished downloading, you can start asking questions through Continue.',
+      'To get started with Ollama, follow these steps:\n1. Download from [ollama.ai](https://ollama.ai/download) and open the application\n2. Open a terminal and run `ollama run <MODEL_NAME>`. Example model names are `qwen3:8b` or `deepseek-r1:14b`. You can find the full list [here](https://ollama.ai/library).\n3. Make sure that the model name used in step 2 is the same as the one in config.json (e.g. `model="qwen3:8b"`)\n4. Once the model has finished downloading, you can start asking questions through Continue.',
    icon: "ollama.png",
    tags: [ModelProviderTags.Local, ModelProviderTags.OpenSource],
    packages: [
```
