
feat(js): Adds docs on tracing in serverless environments in JS #492

Merged: 7 commits, Oct 28, 2024
1 change: 1 addition & 0 deletions docs/observability/how_to_guides/index.md
@@ -34,6 +34,7 @@ Set up LangSmith tracing to get visibility into your production applications.
- [Prevent logging of sensitive data in traces](./how_to_guides/tracing/mask_inputs_outputs)
- [Trace generator functions](./how_to_guides/tracing/trace_generator_functions)
- [Calculate token-based costs for traces](./how_to_guides/tracing/calculate_token_based_costs)
- [Trace JS functions in serverless environments](./how_to_guides/tracing/serverless_environments)

## Tracing projects UI & API

@@ -0,0 +1,37 @@
# Trace JS functions in serverless environments

:::note
This section is relevant for those using the LangSmith JS SDK version 0.2.0 and higher.
If you are tracing using LangChain.js or LangGraph.js in serverless environments, see [this guide](https://js.langchain.com/docs/how_to/callbacks_serverless).
:::

When tracing JavaScript functions, LangSmith will trace runs in the background by default to avoid adding latency.
In serverless environments where the execution context may be terminated abruptly, it's important to ensure that all tracing data is properly flushed before the function completes.

To make sure this occurs, you can either:

- Set an environment variable named `LANGSMITH_TRACING_BACKGROUND` to `"false"` (see the sketch after this list). This will cause your traced functions to wait for tracing to complete before returning.
  - Note that this is named differently from the [environment variable](https://js.langchain.com/docs/how_to/callbacks_serverless) in LangChain.js because LangSmith can be used without LangChain.
- Pass a custom client into your traced runs and `await` the `client.awaitPendingTraceBatches()` method before your function returns.
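
For the first option, here's a minimal sketch. It assumes `LANGSMITH_TRACING_BACKGROUND` is set to `"false"` in your serverless platform's environment configuration (or before any traced code runs); the traced function and its return value are placeholders:

```ts
import { traceable } from "langsmith/traceable";

// Assumes LANGSMITH_TRACING_BACKGROUND="false" is set in the function's
// environment. With background tracing disabled, awaiting a traced function
// also waits for its run to be flushed to LangSmith.
const tracedFn = traceable(async () => {
  return "Some return value";
});

// Resolves only after the trace has been submitted.
const res = await tracedFn();
```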

Here's an example of using `awaitPendingTraceBatches` alongside the [`traceable`](/observability/how_to_guides/tracing/annotate_code) method:

```ts
import { Client } from "langsmith";
import { traceable } from "langsmith/traceable";

const langsmithClient = new Client({});

const tracedFn = traceable(
  async () => {
    return "Some return value";
  },
  {
    client: langsmithClient,
  }
);

const res = await tracedFn();

await langsmithClient.awaitPendingTraceBatches();
```
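
Compared to setting `LANGSMITH_TRACING_BACKGROUND` to `"false"`, this approach keeps background tracing (and its lower per-call latency) while your function runs, then flushes all pending trace batches in one call before the handler returns.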
@@ -281,7 +281,7 @@ console.log(runId);`),

In LangChain Python, LangSmith's tracing is done in a background thread to avoid obstructing your production application. This means that your process may end before all traces are successfully posted to LangSmith. This is especially prevalent in a serverless environment, where your VM may be terminated immediately once your chain or agent completes.

In LangChain JS/TS, the default is to block for a short period of time for the trace to finish due to the greater popularity of serverless environments. You can make callbacks asynchronous by setting the `LANGCHAIN_CALLBACKS_BACKGROUND` environment variable to `"true"`.
You can make callbacks synchronous by setting the `LANGCHAIN_CALLBACKS_BACKGROUND` environment variable to `"false"`.

For both languages, LangChain exposes methods to wait for traces to be submitted before exiting your application.
Below is an example:
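
In LangChain.js, one such method is `awaitAllCallbacks` from `@langchain/core`. Here's a minimal sketch; the `RunnableLambda` chain is just a placeholder for your own chain or agent:

```ts
import { awaitAllCallbacks } from "@langchain/core/callbacks/promises";
import { RunnableLambda } from "@langchain/core/runnables";

// Placeholder chain; substitute your own chain or agent.
const chain = RunnableLambda.from(async (input: string) => `Echo: ${input}`);

try {
  const result = await chain.invoke("Hello, world!");
  console.log(result);
} finally {
  // Wait for any backgrounded callbacks (including LangSmith traces)
  // to finish before the serverless function exits.
  await awaitAllCallbacks();
}
```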
@@ -187,7 +187,7 @@ This tactic is also useful for when you have multiple chains running in a shared

In LangChain Python, LangSmith's tracing is done in a background thread to avoid obstructing your production application. This means that your process may end before all traces are successfully posted to LangSmith. This is especially prevalent in a serverless environment, where your VM may be terminated immediately once your chain or agent completes.

In LangChain JS, prior to `@langchain/core` version `0.3.0`, the default was to block for a short period of time for the trace to finish due to the greater popularity of serverless environments. Versions `>=0.3.0` will have the same default as Python.
In LangChain JS, prior to `@langchain/core` version `0.3.0`, the default was to block for a short period of time for the trace to finish due to the greater popularity of serverless environments. Versions `>=0.3.0` have the same default as Python.
You can explicitly make callbacks synchronous by setting the `LANGCHAIN_CALLBACKS_BACKGROUND` environment variable to `"false"` or asynchronous by setting it to `"true"`. You can also check out [this guide](https://js.langchain.com/docs/how_to/callbacks_serverless) for more options for awaiting backgrounded callbacks in serverless environments.

For both languages, LangChain exposes methods to wait for traces to be submitted before exiting your application.