docs/sections/java-quarkus/06-chat-api.md
Let's start by configuring `ChatLanguageModelAzureOpenAiProducer`, using the Azure OpenAI SDK.
Before we can create the clients, we need to retrieve the credentials to access our Azure services. We'll use the [Azure Identity SDK](https://learn.microsoft.com/java/api/com.azure.identity?view=azure-java-stable) to do that.
Add this code under the `TODO:` to retrieve the token to build the `AzureOpenAiChatModel`:
```java
AzureOpenAiChatModel model;
```
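
This declaration is only the start of the snippet. As a hedged sketch of the retrieval step described above, assuming the Azure Identity `DefaultAzureCredential` flow and LangChain4j's `AzureOpenAiChatModel` builder (its `tokenCredential()` option, plus `azureOpenAiEndpoint` and `azureOpenAiDeploymentName`, are assumptions for illustration, not necessarily the workshop's exact code):

```java
import static java.time.Duration.ofSeconds;

import com.azure.core.credential.AccessToken;
import com.azure.core.credential.TokenRequestContext;
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;

// Authenticate with the current user identity (e.g. `az login`), no API key needed
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();

// Request a token eagerly: if keyless auth is unavailable this throws,
// letting a surrounding catch block fall back to API-key authentication
AccessToken token = credential
    .getToken(new TokenRequestContext().addScopes("https://cognitiveservices.azure.com/.default"))
    .block();

model = AzureOpenAiChatModel.builder()
    .tokenCredential(credential)               // keyless (Microsoft Entra ID) auth
    .endpoint(azureOpenAiEndpoint)             // hypothetical config value
    .deploymentName(azureOpenAiDeploymentName) // hypothetical config value
    .timeout(ofSeconds(60))
    .build();
```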
To use the fallback, add the following code in the catch statement and return the `model`:
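
A hedged sketch of that fallback, assuming the API key lives in a hypothetical `azureOpenAiKey` config value:

```java
} catch (Exception e) {
  // Keyless auth failed (e.g. no `az login` session): fall back to an API key
  model = AzureOpenAiChatModel.builder()
      .apiKey(azureOpenAiKey)                    // hypothetical config value
      .endpoint(azureOpenAiEndpoint)             // hypothetical config value
      .deploymentName(azureOpenAiDeploymentName) // hypothetical config value
      .timeout(ofSeconds(60))
      .build();
}
return model;
```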
<div class="info" data-title="Optional notice">

> As seen in the setup chapter, if you have a machine with enough resources, you can run a local Ollama model. You should already have installed [Ollama](https://ollama.com) and downloaded a Mistral 7B model on your machine with the `ollama pull mistral` command.
>
> To use the local Ollama model, you need to create a new chat model producer. At the same location where you've created the `ChatLanguageModelAzureOpenAiProducer`, create a new class called `ChatLanguageModelOllamaProducer` with the following code:
>
> log.info("### Producing ChatLanguageModel with OllamaChatModel");
99
+
>
100
+
>returnOllamaChatModel.builder()
101
+
> .baseUrl(ollamaBaseUrl)
102
+
> .modelName(ollamaModelName)
103
+
> .timeout(ofSeconds(60))
104
+
> .build();
105
+
> }
106
+
>}
107
+
>```
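>
> The snippet above shows only the producer method body. As a minimal, self-contained sketch of what the full class could look like (the property names, defaults, and logger are illustrative assumptions, not the workshop's exact code):
>
> ```java
> import static java.time.Duration.ofSeconds;
>
> import dev.langchain4j.model.chat.ChatLanguageModel;
> import dev.langchain4j.model.ollama.OllamaChatModel;
> import jakarta.enterprise.inject.Alternative;
> import jakarta.enterprise.inject.Produces;
> import org.eclipse.microprofile.config.inject.ConfigProperty;
> import org.jboss.logging.Logger;
>
> @Alternative
> public class ChatLanguageModelOllamaProducer {
>
>   private static final Logger log = Logger.getLogger(ChatLanguageModelOllamaProducer.class);
>
>   // Hypothetical property names and defaults; adjust to your configuration
>   @ConfigProperty(name = "ollama.base-url", defaultValue = "http://localhost:11434")
>   String ollamaBaseUrl;
>
>   @ConfigProperty(name = "ollama.model-name", defaultValue = "mistral")
>   String ollamaModelName;
>
>   @Produces
>   public ChatLanguageModel chatLanguageModel() {
>     log.info("### Producing ChatLanguageModel with OllamaChatModel");
>
>     return OllamaChatModel.builder()
>         .baseUrl(ollamaBaseUrl)
>         .modelName(ollamaModelName)
>         .timeout(ofSeconds(60))
>         .build();
>   }
> }
> ```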
>
> Notice the `@Alternative` annotation. This tells Quarkus that this producer is an alternative to the default one (`ChatLanguageModelAzureOpenAiProducer`). This way, you can switch between the Azure OpenAI and the Ollama model by enabling the `@Alternative` annotation in the properties file (alternatives are not enabled by default).
>
> So, if you want to use the Azure OpenAI model, you don't have to configure anything. If instead you want to use the Ollama model, you will have to add the following property to the `src/backend/src/main/resources/application.properties` file:
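>
> A minimal sketch of that property, assuming Quarkus's standard `quarkus.arc.selected-alternatives` switch for enabling CDI alternatives (the fully qualified class name may be required):
>
> ```properties
> quarkus.arc.selected-alternatives=ChatLanguageModelOllamaProducer
> ```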
>
> That's it. If Ollama is running on the default port (http://localhost:11434) and you have the `mistral` model installed, you don't need any further configuration. Just restart the Quarkus backend, and it will use the Ollama model instead of the Azure OpenAI model.

</div>