
Commit ad68719

docs: fix ollama section

1 parent 53c513e commit ad68719

File tree

1 file changed: +39 -44 lines changed

docs/sections/java-quarkus/06-chat-api.md

Lines changed: 39 additions & 44 deletions
@@ -22,12 +22,7 @@ Let's start by configuring `ChatLanguageModelAzureOpenAiProducer`, using the Azu
 
 Before we can create the clients, we need to retrieve the credentials to access our Azure services. We'll use the [Azure Identity SDK](https://learn.microsoft.com/java/api/com.azure.identity?view=azure-java-stable) to do that.
 
-Make sure this import is at the top of the file:
-
-```java
-import com.azure.identity.DefaultAzureCredentialBuilder;
-```
-Then add this code to retrieve the token to build the `AzureOpenAIChatModel`:
+Add this code under the `TODO:` to retrieve the token to build the `AzureOpenAIChatModel`:
 
 ```java
 AzureOpenAiChatModel model;
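
For reference, the hunk above cuts off right where the model is declared. A minimal sketch of how the credential retrieval and builder chain could continue, assuming LangChain4j's Azure OpenAI builder accepts a `TokenCredential` (the endpoint and deployment name below are placeholders, not values from the workshop):

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import dev.langchain4j.model.azure.AzureOpenAiChatModel;

// Sketch only: placeholder endpoint and deployment name.
AzureOpenAiChatModel model = AzureOpenAiChatModel.builder()
    .tokenCredential(new DefaultAzureCredentialBuilder().build()) // Entra ID auth instead of an API key
    .endpoint("https://<your-resource>.openai.azure.com")
    .deploymentName("<your-deployment>")
    .build();
```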
@@ -81,44 +76,44 @@ To use the fallback, add the following code in the catch statement and return th
 
 <div class="info" data-title="Optional notice">
 
-As seen in the setup chapter, if you have a machine with enough resources, you can run a local Ollama model. You should already have installed [Ollama](https://ollama.com) and downloaded a Mistral 7B model on your machine with the `ollama pull mistral` command.
-
-To use the local Ollama model, you need to create a new chat model producer. At the same location where you've created the `ChatLanguageModelAzureOpenAiProducer`, create a new class called `ChatLanguageModelOllamaProducer` with the following code:
-
-```java
-@Alternative
-public class ChatLanguageModelOllamaProducer {
-
-    private static final Logger log = LoggerFactory.getLogger(ChatLanguageModelOllamaProducer.class);
-
-    @ConfigProperty(name = "OLLAMA_BASE_URL", defaultValue = "http://localhost:11434")
-    String ollamaBaseUrl;
-
-    @ConfigProperty(name = "OLLAMA_MODEL_NAME", defaultValue = "mistral")
-    String ollamaModelName;
-
-    @Produces
-    public ChatLanguageModel chatLanguageModel() {
-
-        log.info("### Producing ChatLanguageModel with OllamaChatModel");
-
-        return OllamaChatModel.builder()
-            .baseUrl(ollamaBaseUrl)
-            .modelName(ollamaModelName)
-            .timeout(ofSeconds(60))
-            .build();
-    }
-}
-```
-
-Notice the `@Alternative` annotation. This tells Quarkus that this producer is an alternative to the default one (`ChatLanguageModelAzureOpenAiProducer`). This way, you can switch between the Azure OpenAI and the Ollama model by selecting the alternative in the properties file (`@Alternative` beans are not enabled by default).
-So, if you want to use the Azure OpenAI model, you don't have to configure anything. If instead you want to use the Ollama model, you will have to add the following property to the `src/backend/src/main/resources/application.properties` file:
-
-```properties
-quarkus.arc.selected-alternatives=ai.azure.openai.rag.workshop.backend.configuration.ChatLanguageModelOllamaProducer
-```
-
-That's it. If Ollama is running on the default port (http://localhost:11434) and you have the `mistral` model installed, you don't even have to configure anything. Just restart the Quarkus backend, and it will use the Ollama model instead of the Azure OpenAI model.
+> As seen in the setup chapter, if you have a machine with enough resources, you can run a local Ollama model. You should already have installed [Ollama](https://ollama.com) and downloaded a Mistral 7B model on your machine with the `ollama pull mistral` command.
+>
+> To use the local Ollama model, you need to create a new chat model producer. At the same location where you've created the `ChatLanguageModelAzureOpenAiProducer`, create a new class called `ChatLanguageModelOllamaProducer` with the following code:
+>
+> ```java
+> @Alternative
+> public class ChatLanguageModelOllamaProducer {
+>
+>     private static final Logger log = LoggerFactory.getLogger(ChatLanguageModelOllamaProducer.class);
+>
+>     @ConfigProperty(name = "OLLAMA_BASE_URL", defaultValue = "http://localhost:11434")
+>     String ollamaBaseUrl;
+>
+>     @ConfigProperty(name = "OLLAMA_MODEL_NAME", defaultValue = "mistral")
+>     String ollamaModelName;
+>
+>     @Produces
+>     public ChatLanguageModel chatLanguageModel() {
+>
+>         log.info("### Producing ChatLanguageModel with OllamaChatModel");
+>
+>         return OllamaChatModel.builder()
+>             .baseUrl(ollamaBaseUrl)
+>             .modelName(ollamaModelName)
+>             .timeout(ofSeconds(60))
+>             .build();
+>     }
+> }
+> ```
+>
+> Notice the `@Alternative` annotation. This tells Quarkus that this producer is an alternative to the default one (`ChatLanguageModelAzureOpenAiProducer`). This way, you can switch between the Azure OpenAI and the Ollama model by selecting the alternative in the properties file (`@Alternative` beans are not enabled by default).
+> So, if you want to use the Azure OpenAI model, you don't have to configure anything. If instead you want to use the Ollama model, you will have to add the following property to the `src/backend/src/main/resources/application.properties` file:
+>
+> ```properties
+> quarkus.arc.selected-alternatives=ai.azure.openai.rag.workshop.backend.configuration.ChatLanguageModelOllamaProducer
+> ```
+>
+> That's it. If Ollama is running on the default port (http://localhost:11434) and you have the `mistral` model installed, you don't even have to configure anything. Just restart the Quarkus backend, and it will use the Ollama model instead of the Azure OpenAI model.
 
 </div>
 
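
Whichever producer is selected, consuming code stays the same: any CDI bean can inject the `ChatLanguageModel`, and Quarkus wires in the Azure OpenAI producer by default or the Ollama one when `quarkus.arc.selected-alternatives` names it. A minimal sketch, assuming LangChain4j's `generate(String)` convenience method; the `ChatService` class below is illustrative, not part of the workshop:

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class ChatService {

    // Resolved by CDI: ChatLanguageModelAzureOpenAiProducer by default,
    // or ChatLanguageModelOllamaProducer when selected as an alternative.
    @Inject
    ChatLanguageModel model;

    public String answer(String question) {
        return model.generate(question);
    }
}
```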
