Commit 33fb565

docs: always show ollama
1 parent 4a44c63 commit 33fb565

2 files changed (+8, -6 lines)


docs/sections/java-quarkus/02.1-additional-setup.md

Lines changed: 6 additions & 4 deletions
````diff
@@ -33,6 +33,8 @@ Before moving to the next section, go to the **Azure setup** section (either on
 
 After you completed the Azure setup, you can come back here to continue the workshop.
 
+</div>
+
 #### Using Ollama
 
 If you have a machine with enough resources, you can run this workshop entirely locally without using any cloud resources. To do that, you first have to install [Ollama](https://ollama.com) and then run the following commands to download the models on your machine:
@@ -47,18 +49,18 @@ ollama pull llama3
 
 </div>
 
+<div data-hidden="$$proxy$$">
+
 Then, you can create a `.env` file at the root of the project, and add the following content:
 
 ```bash
 QDRANT_URL=http://localhost:6334
 ```
 
+</div>
+
 Finally, you can start the Ollama server with the following command:
 
 ```bash
 ollama run llama3
 ```
-
-</div>
-
-
````
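For context, the `QDRANT_URL` entry in the `.env` file is picked up by the Quarkus backend through MicroProfile Config (Quarkus also reads `.env` files in dev mode). A minimal sketch of what that lookup could look like; the class, package, and field names here are illustrative assumptions, not taken from the repository:

```java
package ai.azure.openai.rag.workshop.backend.configuration;

import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.config.inject.ConfigProperty;

// Hypothetical sketch: reads QDRANT_URL from the environment (or the .env
// file in dev mode), falling back to the local gRPC endpoint used above.
@ApplicationScoped
public class QdrantConfig {

  @ConfigProperty(name = "QDRANT_URL", defaultValue = "http://localhost:6334")
  String qdrantUrl;

  public String qdrantUrl() {
    return qdrantUrl;
  }
}
```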
docs/sections/java-quarkus/06-chat-api.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -45,7 +45,7 @@ Let's start by configuring `ChatLanguageModelAzureOpenAiProducer`, using the Azu
 }
 ```
 
-<div class="info" data-title="Optional notice" data-hidden="$$proxy$$">
+<div class="info" data-title="Optional notice">
 
 As seen in the setup chapter, if you have a machine with enough resources, you can run a local Ollama model. You should already have installed [Ollama](https://ollama.com) and downloaded the Llama3 model on your machine with the `ollama pull llama3` command.
 
````
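The "Optional notice" above points to swapping in the local model via a CDI alternative. A minimal sketch of what an Ollama-backed producer could look like with LangChain4j, assuming the `langchain4j-ollama` dependency; the class name is an assumption, chosen only to match the `quarkus.arc.selected-alternatives` package shown in the next hunk:

```java
package ai.azure.openai.rag.workshop.backend.configuration;

import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.ollama.OllamaChatModel;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Alternative;
import jakarta.enterprise.inject.Produces;

// Hypothetical sketch of an alternative producer, enabled through
// quarkus.arc.selected-alternatives in place of the Azure OpenAI one.
@Alternative
@ApplicationScoped
public class ChatLanguageModelOllamaProducer {

  @Produces
  public ChatLanguageModel chatLanguageModel() {
    return OllamaChatModel.builder()
        .baseUrl("http://localhost:11434") // Ollama's default port
        .modelName("llama3")               // the model pulled during setup
        .build();
  }
}
```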
````diff
@@ -86,7 +86,7 @@ quarkus.arc.selected-alternatives=ai.azure.openai.rag.workshop.backend.configura
 
 That's it. If Ollama is running on the default port (http://localhost:11434) and you have the `llama3` model installed, you don't even have to configure anything. Just restart the Quarkus backend, and it will use the Ollama model instead of the Azure OpenAI model.
 
-<div>
+</div>
 
 Now let's configure the `EmbeddingModelProducer`, using a local embedding model (less performant than using Azure OpenAI, but runs locally and for free):
 
````
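For the `EmbeddingModelProducer` mentioned in that last context line, here is a hedged sketch of what a free, in-process embedding model could look like. LangChain4j ships all-MiniLM-L6-v2 as a separate artifact (`langchain4j-embeddings-all-minilm-l6-v2`); whether the workshop uses this exact model, and this package path (which moved between LangChain4j versions), is an assumption:

```java
package ai.azure.openai.rag.workshop.backend.configuration;

import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.model.embedding.EmbeddingModel;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;

// Hypothetical sketch: an in-process embedding model that runs on the CPU,
// so it needs no API key and no network access.
@ApplicationScoped
public class EmbeddingModelProducer {

  @Produces
  public EmbeddingModel embeddingModel() {
    return new AllMiniLmL6V2EmbeddingModel();
  }
}
```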