docs/customize/model-roles/chat.mdx (13 additions, 52 deletions)
@@ -161,20 +161,16 @@ If you prefer to use a model from [Google](../model-providers/top-level/gemini),
 
 For the best local, offline Chat experience, you will want to use a model that is large but fast enough on your machine.
 
-### Llama 3.1 8B
+### Qwen 3 8B
 
-If your local machine can run an 8B parameter model, then we recommend running Llama 3.1 8B on your machine (e.g. using [Ollama](../model-providers/top-level/ollama) or [LM Studio](../model-providers/top-level/lmstudio)).
+If your local machine can run an 8B parameter model, then we recommend running Qwen 3 8B on your machine (e.g. using [Ollama](../model-providers/top-level/ollama) or [LM Studio](../model-providers/top-level/lmstudio)).
 
 <Tabs>
   <Tab title="Hub">
     <Tabs>
       <Tab title="Ollama">
-        Add the [Ollama Llama 3.1 8b block](https://continue.dev/ollama/llama3.1-8b) from the hub
+        Add the [Ollama Qwen 3 8B block](https://continue.dev/ollama/qwen3-8b) from the hub
       </Tab>
-      {/* HUB_TODO nonexistent block */}
-      {/* <Tab title="LM Studio">
-        Add the [LM Studio Llama 3.1 8b block](https://continue.dev/explore/models) from the hub
-      </Tab> */}
     </Tabs>
   </Tab>
   <Tab title="YAML">
@@ -186,9 +182,9 @@ If your local machine can run an 8B parameter model, then we recommend running L
         schema: v1
 
         models:
-          - name: Llama 3.1 8B
+          - name: Qwen 3 8B
             provider: ollama
-            model: llama3.1:8b
+            model: qwen3:8b
         ```
       </Tab>
       <Tab title="LM Studio">
@@ -198,43 +194,20 @@ If your local machine can run an 8B parameter model, then we recommend running L
         schema: v1
 
         models:
-          - name: Llama 3.1 8B
+          - name: Qwen 3 8B
             provider: lmstudio
-            model: llama3.1:8b
-        ```
-      </Tab>
-      <Tab title="Msty">
-        ```yaml title="config.yaml"
-        name: My Config
-        version: 0.0.1
-        schema: v1
-
-        models:
-          - name: Llama 3.1 8B
-            provider: msty
-            model: llama3.1:8b
+            model: qwen3:8b
         ```
       </Tab>
     </Tabs>
   </Tab>
 </Tabs>
 
-### DeepSeek Coder 2 16B
+### Qwen 3 Coder 30B
 
-If your local machine can run a 16B parameter model, then we recommend running DeepSeek Coder 2 16B (e.g. using [Ollama](../model-providers/top-level/ollama) or [LM Studio](../model-providers/top-level/lmstudio)).
+If your local machine can run a larger model, then [Qwen 3 Coder](https://ollama.com/library/qwen3-coder) is an excellent code-specialized option (e.g. using [Ollama](../model-providers/top-level/ollama) or [LM Studio](../model-providers/top-level/lmstudio)). The 30B-A3B variant uses mixture-of-experts and runs efficiently despite its size.
 
 <Tabs>
-  {/* HUB_TODO nonexistent blocks */}
-  {/* <Tab title="Hub">
-    <Tabs>
-      <Tab title="Ollama">
-        Add the [Ollama Deepseek Coder 2 16B block](https://continue.dev/explore/models) from the hub
-      </Tab>
-      <Tab title="LM Studio">
-        Add the [LM Studio Deepseek Coder 2 16B block](https://continue.dev/explore/models) from the hub
-      </Tab>
-    </Tabs>
-  </Tab> */}
   <Tab title="YAML">
     <Tabs>
       <Tab title="Ollama">
@@ -244,9 +217,9 @@ If your local machine can run a 16B parameter model, then we recommend running D
         schema: v1
 
         models:
-          - name: DeepSeek Coder 2 16B
+          - name: Qwen 3 Coder 30B
             provider: ollama
-            model: deepseek-coder-v2:16b
+            model: qwen3-coder:30b-a3b
         ```
       </Tab>
       <Tab title="LM Studio">
@@ -256,21 +229,9 @@ If your local machine can run a 16B parameter model, then we recommend running D
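The YAML entries in this diff all follow the same shape, so a quick local sanity check of the new `qwen3:8b` entry can be sketched as follows (assumes a POSIX shell; the `/tmp` path is illustrative and not part of the diff):

```shell
# Write the Ollama chat-model entry in the shape the updated docs show
cat > /tmp/continue-config.yaml <<'EOF'
name: My Config
version: 0.0.1
schema: v1

models:
  - name: Qwen 3 8B
    provider: ollama
    model: qwen3:8b
EOF

# The `model:` value is the tag you would hand to `ollama pull` / `ollama run`
MODEL=$(sed -n 's/^ *model: *//p' /tmp/continue-config.yaml)
echo "ollama pull $MODEL"
```

The same check works for the LM Studio and `qwen3-coder:30b-a3b` entries by swapping the `provider` and `model` values.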
gui/src/pages/AddNewModel/configs/providers.ts (1 addition, 1 deletion)
@@ -491,7 +491,7 @@ Select the \`GPT-4o\` model below to complete your provider configuration, but n
     description:
       "One of the fastest ways to get started with local models on Mac, Linux, or Windows",
     longDescription:
-      'To get started with Ollama, follow these steps:\n1. Download from [ollama.ai](https://ollama.ai/download) and open the application\n2. Open a terminal and run `ollama run <MODEL_NAME>`. Example model names are `codellama:7b-instruct` or `llama2:7b-text`. You can find the full list [here](https://ollama.ai/library).\n3. Make sure that the model name used in step 2 is the same as the one in config.json (e.g. `model="codellama:7b-instruct"`)\n4. Once the model has finished downloading, you can start asking questions through Continue.',
+      'To get started with Ollama, follow these steps:\n1. Download from [ollama.ai](https://ollama.ai/download) and open the application\n2. Open a terminal and run `ollama run <MODEL_NAME>`. Example model names are `qwen3:8b` or `deepseek-r1:14b`. You can find the full list [here](https://ollama.ai/library).\n3. Make sure that the model name used in step 2 is the same as the one in config.json (e.g. `model="qwen3:8b"`)\n4. Once the model has finished downloading, you can start asking questions through Continue.',
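The numbered steps in the updated `longDescription` hinge on step 3: the tag passed to `ollama run` must match the `model` field in config.json. A minimal sketch of that consistency check (POSIX shell; the `/tmp/config.json` path and its contents are illustrative, not the real Continue config):

```shell
# Illustrative config.json fragment with the model entry from the description
cat > /tmp/config.json <<'EOF'
{ "models": [ { "provider": "ollama", "model": "qwen3:8b" } ] }
EOF

# Extract the model tag and build the matching `ollama run` command (steps 2-3)
MODEL=$(sed -n 's/.*"model": *"\([^"]*\)".*/\1/p' /tmp/config.json)
echo "ollama run $MODEL"
```

If the echoed command and the config value ever diverge, Continue will be talking to a model Ollama never loaded, which is exactly the mismatch step 3 warns about.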