Commit d6b1ce4 (1 parent: ab48656)

explain feedback trace_id= arg (#815)

File tree: 2 files changed (+67 −26 lines)

docs/evaluation/how_to_guides/attach_user_feedback.mdx

Lines changed: 66 additions & 25 deletions
```diff
@@ -4,47 +4,84 @@ sidebar_position: 1
 
 import {
   CodeTabs,
-  PythonBlock,
-  TypeScriptBlock,
+  python,
+  typescript,
 } from "@site/src/components/InstructionsWithCode";
 
-# Log user feedback
+# Log user feedback using the SDK
 
-:::tip Recommended Reading
-Before diving into this content, it might be helpful to read the following:
+:::tip Key concepts
 
 - [Conceptual guide on tracing and feedback](../../../observability/concepts)
 - [Reference guide on feedback data format](/reference/data_formats/feedback_data_format)
 
 :::
 
-In many applications, but even more so for LLM applications, it is important to collect user feedback to understand how your application is performing in real-world scenarios.
-The ability to observe user feedback along with trace data can be very powerful to drill down into the most interesting datapoints, then send those datapoints for further review, automatic evaluation, or even datasets.
-To learn more about how to filter traces based on various attributes, including user feedback, see [this guide](../../../observability/how_to_guides/filter_traces_in_application)
+LangSmith makes it easy to attach feedback to traces.
+This feedback can come from users, annotators, automated evaluators, etc., and is crucial for monitoring and evaluating applications.
 
-LangSmith makes it easy to attach user feedback to traces.
-It's often helpful to expose a simple mechanism (such as a thumbs-up, thumbs-down button) to collect user feedback for your application responses. You can then use the LangSmith SDK or API to send feedback for a trace. To get the `run_id` of a logged run, see [this guide](../../../observability/how_to_guides/access_current_span).
+## Use [create_feedback()](https://docs.smith.langchain.com/reference/python/client/langsmith.client.Client#langsmith.client.Client.create_feedback) / [createFeedback()](https://docs.smith.langchain.com/reference/js/classes/client.Client#createfeedback)
 
-:::note
+Here we'll walk through how to log feedback using the SDK.
 
-You can attach user feedback to ANY intermediate run (span) of the trace, not just the root span.
-This is useful for critiquing specific parts of the LLM application, such as the retrieval step or generation step of the RAG pipeline.
+:::info Child runs
+
+You can attach user feedback to ANY child run of a trace, not just the trace (root run) itself.
+This is useful for critiquing specific steps of the LLM application, such as the retrieval step or generation step of a RAG pipeline.
+
+:::
+
+:::tip Non-blocking creation (Python only)
+
+The Python client will automatically background feedback creation if you pass `trace_id=` to [create_feedback()](https://docs.smith.langchain.com/reference/python/client/langsmith.client.Client#langsmith.client.Client.create_feedback).
+This is essential for low-latency environments, where you want to make sure your application isn't blocked on feedback creation.
 
 :::
 
 <CodeTabs
   tabs={[
-    PythonBlock(`from langsmith import Client\n
-client = Client()\n
-# ... Run your application and get the run_id...
-# This information can be the result of a user-facing feedback form\n
-client.create_feedback(
-    run_id,
-    key="feedback-key",
-    score=1.0,
-    comment="comment",
-)`),
-    TypeScriptBlock(`import { Client } from "langsmith";
+    python({caption: "Requires `langsmith >= 0.3.43`"})
+`
+from langsmith import trace, traceable, Client
+
+@traceable
+def foo(x):
+    return {"y": x * 2}
+
+@traceable
+def bar(y):
+    return {"z": y - 1}
+
+client = Client()
+
+inputs = {"x": 1}
+with trace(name="foobar", inputs=inputs) as root_run:
+    result = foo(**inputs)
+    result = bar(**result)
+    root_run.outputs = result
+    trace_id = root_run.id
+    child_runs = root_run.child_runs
+
+# Provide feedback for a trace (a.k.a. a root run)
+client.create_feedback(
+    key="user_feedback",
+    score=1,
+    trace_id=trace_id,
+    comment="the user said that ..."
+)
+
+# Provide feedback for a child run
+foo_run_id = [run for run in child_runs if run.name == "foo"][0].id
+client.create_feedback(
+    key="correctness",
+    score=0,
+    run_id=foo_run_id,
+    # trace_id= is optional but recommended to enable batched and backgrounded
+    # feedback ingestion.
+    trace_id=trace_id,
+)
+`,
+    typescript({})`import { Client } from "langsmith";
 const client = new Client();\n
 // ... Run your application and get the run_id...
 // This information can be the result of a user-facing feedback form\n
@@ -55,7 +92,11 @@ await client.createFeedback(
     score: 1.0,
     comment: "comment",
   }
-);`),
+);`,
 ]}
 groupId="client-language"
 />
+
+You can even log feedback for in-progress runs using `create_feedback()` / `createFeedback()`. See [this guide](../../../observability/how_to_guides/access_current_span) for how to get the run ID of an in-progress run.
+
+To learn more about how to filter traces based on various attributes, including user feedback, see [this guide](../../../observability/how_to_guides/filter_traces_in_application).
```
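The "Non-blocking creation" tip in this diff says the Python client backgrounds feedback creation when `trace_id=` is passed. As a rough illustration of that general pattern only (this is not the langsmith SDK's actual implementation, and `BackgroundFeedback` is a hypothetical name), a minimal sketch of enqueue-and-return feedback ingestion looks like:

```python
import queue
import threading


class BackgroundFeedback:
    """Illustrative sketch of non-blocking feedback ingestion.

    NOT the langsmith SDK's code; it only shows the general pattern:
    enqueue the feedback payload, return to the caller immediately,
    and let a worker thread handle the (slow) network call.
    """

    def __init__(self):
        self._queue = queue.Queue()
        self.sent = []  # stands in for payloads POSTed to the server
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def create_feedback(self, **payload):
        # Non-blocking: the caller is never stalled on network I/O.
        self._queue.put(payload)

    def _drain(self):
        while True:
            payload = self._queue.get()
            self.sent.append(payload)  # a real client would POST here
            self._queue.task_done()

    def flush(self):
        # Block until every queued payload has been processed.
        self._queue.join()


client = BackgroundFeedback()
client.create_feedback(key="user_feedback", score=1, trace_id="hypothetical-id")
client.flush()
```

Per the diff above, the real SDK opts into its batched, backgrounded path when you pass `trace_id=` to `create_feedback()` with `langsmith >= 0.3.43`; the sketch just makes the queue-plus-worker idea concrete.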

docs/observability/how_to_guides/index.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -73,6 +73,6 @@ Leverage LangSmith's powerful monitoring, automation, and online evaluation feat
 
 ## Human feedback
 
-- [Log user feedback](../evaluation/how_to_guides/attach_user_feedback)
+- [Log user feedback using the SDK](../evaluation/how_to_guides/attach_user_feedback)
 - [Set up a new feedback criteria](../evaluation/how_to_guides/set_up_feedback_criteria)
 - [Annotate traces inline in the UI](../evaluation/how_to_guides/annotate_traces_inline)
```
