
Commit 5967a12

refactor: use mistral instead of llama3
1 parent 4fe508b commit 5967a12

File tree

3 files changed: +7 -7 lines changed


docs/sections/java-quarkus/02.1-additional-setup.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -40,12 +40,12 @@ After you completed the Azure setup, you can come back here to continue the work
 
 If you have a machine with enough resources, you can run this workshop entirely locally without using any cloud resources. To do that, you first have to install [Ollama](https://ollama.com) and then run the following commands to download the models on your machine:
 
 ```bash
-ollama pull llama3
+ollama pull mistral
 ```
 
 <div class="info" data-title="Note">
 
-> The `llama3` model will download a few gigabytes of data, so it can take some time depending on your internet connection.
+> The `mistral` model will download a few gigabytes of data, so it can take some time depending on your internet connection.
 
 </div>
 
@@ -62,5 +62,5 @@ QDRANT_URL=http://localhost:6334
 
 Finally, you can start the Ollama server with the following command:
 
 ```bash
-ollama run llama3
+ollama run mistral
 ```
````
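If you want to sanity-check the local setup after pulling the model, Ollama exposes a small REST API on the same port. Below is a minimal, hypothetical Java sketch (not part of the workshop or this commit) that queries the `/api/tags` endpoint, which lists locally available models, to confirm `mistral` has been pulled:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical helper: checks that the `mistral` model is available
// on a local Ollama instance by querying its /api/tags endpoint.
public class OllamaModelCheck {
  public static void main(String[] args) throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:11434/api/tags"))
        .GET()
        .build();
    // The response body is JSON listing the locally pulled models.
    String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    System.out.println(body.contains("mistral")
        ? "mistral model is available"
        : "mistral model not found; run `ollama pull mistral`");
  }
}
```

The same check can be done from a shell with `curl http://localhost:11434/api/tags`.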

docs/sections/java-quarkus/06-chat-api.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -47,7 +47,7 @@ Let's start by configuring `ChatLanguageModelAzureOpenAiProducer`, using the Azu
 
 <div class="info" data-title="Optional notice">
 
-As seen in the setup chapter, if you have a machine with enough resources, you can run a local Ollama model. You should already have installed [Ollama](https://ollama.com) and downloaded a Llama3 model on your machine with the `ollama pull llama3` command.
+As seen in the setup chapter, if you have a machine with enough resources, you can run a local Ollama model. You should already have installed [Ollama](https://ollama.com) and downloaded a Mistral 7B model on your machine with the `ollama pull mistral` command.
 
 To use the local Ollama model, you need to create a new chat model producer. At the same location where you've created the `ChatLanguageModelAzureOpenAiProducer`, create a new class called `ChatLanguageModelOllamaProducer` with the following code
 
@@ -60,7 +60,7 @@ public class ChatLanguageModelOllamaProducer {
 
 @ConfigProperty(name = "OLLAMA_BASE_URL", defaultValue = "http://localhost:11434")
 String ollamaBaseUrl;
 
-@ConfigProperty(name = "OLLAMA_MODEL_NAME", defaultValue = "llama3")
+@ConfigProperty(name = "OLLAMA_MODEL_NAME", defaultValue = "mistral")
 String ollamaModelName;
 
 @Produces
@@ -84,7 +84,7 @@ So, if you want to use the Azure OpenAI model, you don't have to configure anyth
 
 quarkus.arc.selected-alternatives=ai.azure.openai.rag.workshop.backend.configuration.ChatLanguageModelOllamaProducer
 ```
 
-That's it. If Ollama is running on the default port (http://localhost:11434) and you have the `llama3` model installed, you don't even have to configure anything. Just restart the Quarkus backend, and it will use the Ollama model instead of the Azure OpenAI model.
+That's it. If Ollama is running on the default port (http://localhost:11434) and you have the `mistral` model installed, you don't even have to configure anything. Just restart the Quarkus backend, and it will use the Ollama model instead of the Azure OpenAI model.
 
 </div>
````

src/backend-java-quarkus/src/main/java/ai/azure/openai/rag/workshop/backend/configuration/ChatLanguageModelOllamaProducer.java

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -17,7 +17,7 @@ public class ChatLanguageModelOllamaProducer {
1717
@ConfigProperty(name = "OLLAMA_BASE_URL", defaultValue = "http://localhost:11434")
1818
String ollamaBaseUrl;
1919

20-
@ConfigProperty(name = "OLLAMA_MODEL_NAME", defaultValue = "llama3")
20+
@ConfigProperty(name = "OLLAMA_MODEL_NAME", defaultValue = "mistral")
2121
String ollamaModelName;
2222

2323
@Produces
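For reference, here is a sketch of how the complete producer class could look once this change is applied. It is assembled from the fragments shown in the diffs above plus LangChain4j's `OllamaChatModel` builder (from the `langchain4j-ollama` module); the `@Alternative` placement and builder calls are assumptions about the surrounding workshop code, not part of this commit:

```java
package ai.azure.openai.rag.workshop.backend.configuration;

import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import jakarta.enterprise.inject.Alternative;
import jakarta.enterprise.inject.Produces;
import org.eclipse.microprofile.config.inject.ConfigProperty;

// Enabled only when selected via quarkus.arc.selected-alternatives.
@Alternative
public class ChatLanguageModelOllamaProducer {

  // Both properties fall back to local defaults, so no extra
  // configuration is needed when Ollama runs on its default port.
  @ConfigProperty(name = "OLLAMA_BASE_URL", defaultValue = "http://localhost:11434")
  String ollamaBaseUrl;

  @ConfigProperty(name = "OLLAMA_MODEL_NAME", defaultValue = "mistral")
  String ollamaModelName;

  @Produces
  public ChatLanguageModel chatLanguageModel() {
    // Builds a chat model that talks to the local Ollama server.
    return OllamaChatModel.builder()
        .baseUrl(ollamaBaseUrl)
        .modelName(ollamaModelName)
        .build();
  }
}
```

Because both `@ConfigProperty` defaults point at a local Ollama instance serving the `mistral` model, the producer works out of the box once `quarkus.arc.selected-alternatives` selects it.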
