This repository was archived by the owner on Mar 20, 2025. It is now read-only.

Commit 699f26d

fix img links for pypi page

Parent: 6f305be

3 files changed (+25 −13 lines)


README.md (+19 −4)

@@ -3,6 +3,8 @@
 <p align="center">
 <!--- Insert a cover image here -->
 <!--- <br> -->
+<img src="https://user-images.githubusercontent.com/31014960/224118318-02e49d8e-72e0-4850-93f7-d850c0f06ca1.png">
+
 <a href="https://pypi.python.org/pypi/langchain-prefect/" alt="PyPI version">
 <img alt="PyPI" src="https://img.shields.io/pypi/v/langchain-prefect?color=0052FF&labelColor=090422"></a>
 <a href="https://github.com/PrefectHQ/langchain-prefect/" alt="Stars">
@@ -18,9 +20,11 @@
 <img src="https://img.shields.io/badge/discourse-browse_forum-red.svg?color=0052FF&labelColor=090422&logo=discourse" /></a>
 </p>

-Orchestrate and observe tools built with langchain using Prefect.
+## Orchestrate and observe langchain using Prefect

+Large Language Models (LLMs) are interesting and useful - building apps that use them responsibly feels like a no-brainer. Tools like [Langchain](https://github.com/hwchase17/langchain) make it easier to build apps using LLMs. We need to know details about how our apps work, even when we want to use tools with convenient abstractions that may obfuscate those details.

+Prefect is built to help data people build, run, and observe event-driven workflows wherever they want. It provides a framework for creating deployments on a whole slew of runtime environments (from Lambda to Kubernetes), and is cloud agnostic (best supports AWS, GCP, Azure). For this reason, it could be a great fit for observing apps that use LLMs.

 ## Example Usage

@@ -38,7 +42,9 @@ with RecordLLMCalls():
 ```
 and a flow run will be created to track the invocation of the LLM:

-![](docs/img/LLMinvokeUI.png)
+<p align="center">
+<img src="https://user-images.githubusercontent.com/31014960/224114166-f2c7d5ed-49b6-406e-a384-bd327d4e1fe4.png" alt="LLM invocation UI">
+</p>

 ### Run several LLM calls via langchain agent as Prefect subflows:
 ```python
@@ -52,7 +58,7 @@ tools = load_tools(["llm-math"], llm=llm)
 agent = initialize_agent(tools, llm)

 @flow
-def my_flow(): # noqa: D103
+def my_flow():
     agent.run(
         "How old is the current Dalai Lama? "
         "What is his age divided by 2 (rounded to the nearest integer)?"
@@ -61,10 +67,19 @@ def my_flow(): # noqa: D103
 with RecordLLMCalls(tags={"agent"}):
     my_flow()
 ```
-![](docs/img/LLMagentUI.png)
+
+<p align="center">
+<img src="https://user-images.githubusercontent.com/31014960/224113843-b91f204b-8085-4739-b483-a45c4f339210.png" alt="LLM agent UI">
+</p>

 Find more examples [here](examples/).

+## How do I get a Prefect UI?
+- The easiest way is to use the [Prefect Cloud](https://www.prefect.io/cloud/) UI. You can find details on getting setup [here](https://docs.prefect.io/ui/cloud-quickstart/).
+
+- If you don't want to sign up for cloud, you can use the dashboard locally by running `prefect server start` in your terminal - more details [here](https://docs.prefect.io/ui/overview/#using-the-prefect-ui).
+
+
 ## Resources
 ### Installation
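For context, the `with RecordLLMCalls():` hunk above comes from the README's basic usage example, which is not shown in full in this diff. A minimal sketch of that pattern, assuming `langchain-prefect` is installed and an `OPENAI_API_KEY` is set in the environment (the exact prompt and imports in the README may differ):

```python
from langchain.llms import OpenAI

from langchain_prefect.plugins import RecordLLMCalls

llm = OpenAI(temperature=0)

# Any LLM call made inside this context manager is recorded as a Prefect
# flow run, which is what the "LLM invocation UI" screenshot above shows.
with RecordLLMCalls():
    llm("What would be a good name for a company that makes colorful socks?")
```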

examples/openai/agent.py (+4 −3)

@@ -7,18 +7,19 @@
 from langchain_prefect.plugins import RecordLLMCalls

 llm = OpenAI(temperature=0)
-tools = load_tools(["llm-math"])
+tools = load_tools(["llm-math"], llm=llm)
 agent = initialize_agent(tools, llm)


 @flow
 def my_flow():
     """Flow wrapping any LLM calls made by the agent."""
-    agent.run(
+    return agent.run(
         "How old is the current Dalai Lama? "
         "What is his age divided by 2 (rounded to the nearest integer)?"
     )


 with RecordLLMCalls(tags={"agent"}):
-    my_flow()
+    result = my_flow()
+    print(result)
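For readability, here is roughly how the full example reads after this change. The hunk starts at line 7 of the file, so the lines above it are not shown; the `flow`, `OpenAI`, `initialize_agent`, and `load_tools` imports below are assumptions about what those lines contain:

```python
# NOTE: the first few imports are assumed; only the lines from
# `from langchain_prefect.plugins import RecordLLMCalls` onward appear in the diff.
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI
from prefect import flow

from langchain_prefect.plugins import RecordLLMCalls

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm)


@flow
def my_flow():
    """Flow wrapping any LLM calls made by the agent."""
    return agent.run(
        "How old is the current Dalai Lama? "
        "What is his age divided by 2 (rounded to the nearest integer)?"
    )


with RecordLLMCalls(tags={"agent"}):
    result = my_flow()
    print(result)
```

Returning the agent's answer from the flow, rather than discarding it, is what lets the caller capture it in `result` and print it.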

mkdocs.yml (+2 −6)

@@ -55,8 +55,6 @@ plugins:
   - gen-files:
       scripts:
         - docs/gen_home_page.py
-        - docs/gen_examples_catalog.py
-        - docs/gen_blocks_catalog.py
   - mkdocstrings:
       handlers:
         python:
@@ -73,11 +71,9 @@ watch:

 nav:
   - Home: index.md
-  - Blocks Catalog: blocks_catalog.md
-  - Examples Catalog: examples_catalog.md
   - API Reference:
-      - Tasks: tasks.md
-      - Flows: flows.md
+      - Plugins: plugins.md
+      - Utilities: utilities.md


 extra:
