docs/sections/java-quarkus/02.1-additional-setup.md (+3 -3)

@@ -40,12 +40,12 @@ After you completed the Azure setup, you can come back here to continue the work
If you have a machine with enough resources, you can run this workshop entirely locally without using any cloud resources. To do that, you first have to install [Ollama](https://ollama.com) and then run the following commands to download the models on your machine:
```bash
-ollama pull llama3
+ollama pull mistral
```
<div class="info" data-title="Note">
-> The `llama3` model with download a few gigabytes of data, so it can take some time depending on your internet connection.
+> The `mistral` model will download a few gigabytes of data, so it can take some time depending on your internet connection.

docs/sections/java-quarkus/06-chat-api.md (+3 -3)

@@ -47,7 +47,7 @@ Let's start by configuring `ChatLanguageModelAzureOpenAiProducer`, using the Azu
<div class="info" data-title="Optional notice">
-As seen in the setup chapter, if you have a machine with enough resources, you can run a local Ollama model. You shloud already have installed [Ollama](https://ollama.com) and downloaded a Llama3 models on your machine with the `ollama pull llama3` command.
+As seen in the setup chapter, if you have a machine with enough resources, you can run a local Ollama model. You should already have installed [Ollama](https://ollama.com) and downloaded a Mistral 7B model on your machine with the `ollama pull mistral` command.
To use the local Ollama model, you need to create a new chat model producer. At the same location where you've created the `ChatLanguageModelAzureOpenAiProducer`, create a new class called `ChatLanguageModelOllamaProducer` with the following code
@@ -60,7 +60,7 @@ public class ChatLanguageModelOllamaProducer {
-That's it. If Ollama is running on the default port (http://localhost:11434) and you have the `llama3` model installed, you don't even have to configure anything. Just restart the Quarkus backend, and it will use the Ollama model instead of the Azure OpenAI model.
+That's it. If Ollama is running on the default port (http://localhost:11434) and you have the `mistral` model installed, you don't even have to configure anything. Just restart the Quarkus backend, and it will use the Ollama model instead of the Azure OpenAI model.

src/backend-java-quarkus/src/main/java/ai/azure/openai/rag/workshop/backend/configuration/ChatLanguageModelOllamaProducer.java (+1 -1)

@@ -17,7 +17,7 @@ public class ChatLanguageModelOllamaProducer {
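The hunk body for this file is truncated above, so the actual one-line change is not shown. As a point of reference only, here is a minimal sketch of what a CDI producer like `ChatLanguageModelOllamaProducer` can look like, assuming LangChain4j's `OllamaChatModel` builder; the `ollama.base-url` and `ollama.model-name` property names and their defaults are illustrative assumptions, not the workshop's verbatim code:

```java
package ai.azure.openai.rag.workshop.backend.configuration;

import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import jakarta.enterprise.inject.Produces;
import org.eclipse.microprofile.config.inject.ConfigProperty;

public class ChatLanguageModelOllamaProducer {

  // Ollama serves its HTTP API on http://localhost:11434 by default
  @ConfigProperty(name = "ollama.base-url", defaultValue = "http://localhost:11434")
  String ollamaBaseUrl;

  // Hypothetical property name; defaults to the model pulled earlier with `ollama pull mistral`
  @ConfigProperty(name = "ollama.model-name", defaultValue = "mistral")
  String ollamaModelName;

  @Produces
  public ChatLanguageModel chatLanguageModel() {
    // Build a LangChain4j chat model backed by the local Ollama server
    return OllamaChatModel.builder()
      .baseUrl(ollamaBaseUrl)
      .modelName(ollamaModelName)
      .build();
  }
}
```

Because the defaults match a stock local Ollama install, restarting the Quarkus backend is enough to switch over, consistent with the "you don't even have to configure anything" note in the `06-chat-api.md` hunk above.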