diff --git a/docs/administration/concepts/index.mdx b/docs/administration/concepts/index.mdx
new file mode 100644
index 000000000..4909900ab
--- /dev/null
+++ b/docs/administration/concepts/index.mdx
@@ -0,0 +1,404 @@
+# Concepts
+
+This conceptual guide covers topics related to managing users, organizations, and workspaces within LangSmith.
+
+## Resource Hierarchy
+
+### Organizations
+
+An organization is a logical grouping of users within LangSmith with its own billing configuration. Typically, there is one organization per company. An organization can have multiple workspaces. For more details, see the [setup guide](../how_to_guides/organization_management/set_up_organization.mdx).
+
+When you log in for the first time, a personal organization will be created for you automatically. If you'd like to collaborate with others, you can create a separate organization and invite your team members to join.
+There are a few important differences between your personal organization and shared organizations:
+
+| Feature | Personal | Shared |
+| ------------------- | ------------------- | ----------------------------------------------------------- |
+| Maximum workspaces | 1 | Variable, depending on plan (see [pricing page](./pricing)) |
+| Collaboration | Cannot invite users | Can invite users |
+| Billing: paid plans | Developer plan only | All other plans available |
+
+### Workspaces
+
+:::info
+Workspaces were formerly called Tenants. Some code and APIs may still reference the old name for a period of time during the transition.
+:::
+
+A workspace is a logical grouping of users and resources within an organization. A workspace separates trust boundaries for resources and access control.
+Users may have permissions in a workspace that grant them access to the resources in that workspace, including tracing projects, datasets, annotation queues, and prompts. For more details, see the [setup guide](../how_to_guides/organization_management/set_up_workspace).
+
+It is recommended to create a separate workspace for each team within your organization. To organize resources even further, you can use [Resource Tags](#resource-tags) to group resources within a workspace.
+
+The following image shows a sample workspace settings page:
+
+
+The following diagram explains the relationship between organizations, workspaces, and the different resources scoped to and within a workspace:
+
+```mermaid
+graph TD
+ Organization --> WorkspaceA[Workspace A]
+ Organization --> WorkspaceB[Workspace B]
+ WorkspaceA --> tg1(Trace Projects)
+ WorkspaceA --> tg2(Datasets and Experiments)
+ WorkspaceA --> tg3(Annotation Queues)
+ WorkspaceA --> tg4(Prompts)
+ WorkspaceB --> tg5(Trace Projects)
+ WorkspaceB --> tg6(Datasets and Experiments)
+ WorkspaceB --> tg7(Annotation Queues)
+ WorkspaceB --> tg8(Prompts)
+```
+
+
+
+See the table below for details on which features are available in which scope (organization or workspace):
+
+| Resource/Setting | Scope |
+| --------------------------------------------------------------------------- | ---------------- |
+| Trace Projects | Workspace |
+| Annotation Queues | Workspace |
+| Deployments | Workspace |
+| Datasets & Experiments | Workspace |
+| Prompts | Workspace |
+| Resource Tags | Workspace |
+| API Keys | Workspace |
+| Settings including Secrets, Feedback config, Models, Rules, and Shared URLs | Workspace |
+| User management: Invite User to Workspace | Workspace |
+| RBAC: Assigning Workspace Roles | Workspace |
+| Data Retention, Usage Limits | Workspace\* |
+| Plans and Billing, Credits, Invoices | Organization |
+| User management: Invite User to Organization | Organization\*\* |
+| Adding Workspaces | Organization |
+| Assigning Organization Roles | Organization |
+| RBAC: Creating/Editing/Deleting Custom Roles | Organization |
+
+\* Data retention settings and usage limits will soon be available at the organization level as well
+\*\* Self-hosted installations may enable workspace-level invites of users to the organization via a feature flag.
+See the [self-hosted user management docs](../../self_hosting/configuration/user_management) for details.
+
+### Resource tags
+
+Resource tags allow you to organize resources within a workspace. Each tag is a key-value pair that can be assigned to a resource.
+Tags can be used to filter workspace-scoped resources in the UI and API: Projects, Datasets, Annotation Queues, Deployments, and Experiments.
+
+Each new workspace comes with two default tag keys: `Application` and `Environment`; as the names suggest, these tags can be used to categorize resources based on the application and environment they belong to.
+More tags can be added as needed.
+
+LangSmith resource tags are very similar to tags in cloud services like [AWS](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html).
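The key-value model can be illustrated with a minimal sketch in plain Python (this is not the LangSmith API; the resource and tag names below are hypothetical):

```python
# Illustrative sketch of key-value resource tags (not the LangSmith API).
# Each resource carries a dict mapping tag key -> tag value.
resources = [
    {"name": "chatbot-prod-traces", "type": "Project",
     "tags": {"Application": "chatbot", "Environment": "prod"}},
    {"name": "chatbot-eval-set", "type": "Dataset",
     "tags": {"Application": "chatbot", "Environment": "dev"}},
    {"name": "search-prod-traces", "type": "Project",
     "tags": {"Application": "search", "Environment": "prod"}},
]

def filter_by_tag(items, key, value):
    """Return resources whose tags include the given key-value pair."""
    return [r for r in items if r["tags"].get(key) == value]

prod = filter_by_tag(resources, "Environment", "prod")
print([r["name"] for r in prod])  # ['chatbot-prod-traces', 'search-prod-traces']
```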
+
+
+
+## User Management and RBAC
+
+### Users
+
+A user is a person who has access to LangSmith. Users can be members of one or more organizations and workspaces within those organizations.
+
+Organization members are managed in organization settings:
+
+
+
+And workspace members are managed in workspace settings:
+
+
+
+### API keys
+
+:::danger Dropping support August 15, 2024
+We will be dropping support for API keys on August 15, 2024 in favor of personal access tokens (PATs) and service keys. We recommend using PATs and service keys for all new integrations. API keys prefixed with `ls__` will NO LONGER work after August 15, 2024.
+:::
+
+API keys are used to authenticate requests to the LangSmith API. They are created by users and scoped to a workspace, so all requests made with an API key are associated with the workspace in which the key was created. The API key can create, read, update, and delete all resources within that workspace.
+
+API keys are prefixed with `ls__`. These keys will also show up in the UI under the service keys tab.
+
+#### Personal Access Tokens (PATs)
+
+Personal Access Tokens (PATs) are used to authenticate requests to the LangSmith API. They are created by users and scoped to a user. The PAT will have the same permissions as the user that created it.
+
+PATs are prefixed with `lsv2_pt_`.
+
+#### Service keys
+
+Service keys are similar to PATs, but are used to authenticate requests to the LangSmith API on behalf of a service account.
+
+Service keys are prefixed with `lsv2_sk_`.
+
+:::note
+To see how to create a service key or Personal Access Token, see the [setup guide](../how_to_guides/organization_management/create_account_api_key.mdx)
+:::
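The documented prefixes make it easy to tell key types apart. A small illustrative helper (this is not part of the LangSmith SDK):

```python
# Illustrative helper (not part of the LangSmith SDK): classify a key by the
# documented prefixes. `ls__` keys are legacy and stop working Aug 15, 2024.
def key_type(key: str) -> str:
    if key.startswith("lsv2_pt_"):
        return "personal_access_token"
    if key.startswith("lsv2_sk_"):
        return "service_key"
    if key.startswith("ls__"):
        return "legacy_api_key"
    return "unknown"

print(key_type("lsv2_pt_abc123"))  # personal_access_token
```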
+
+### Organization roles
+
+Organization roles are distinct from the Enterprise RBAC feature described below and apply in the context of multiple [workspaces](#workspaces). Your organization role determines your workspace membership characteristics and your organization-level permissions. See the [organization setup guide](../how_to_guides/organization_management/set_up_organization#organization-roles) for more information.
+
+The organization role selected also impacts workspace membership as described here:
+
+- `Organization Admin` grants full access to manage all organization configuration, users, billing, and workspaces. **An `Organization Admin` has `Admin` access to all workspaces in an organization**
+- `Organization User` may read organization information but cannot execute any write actions at the organization level. **An `Organization User` can be added to a subset of workspaces and assigned workspace roles as usual (if RBAC is enabled), which specify permissions at the workspace level.**
+
+:::info
+The `Organization User` role is only available in organizations on plans with multiple workspaces. In organizations limited to a single workspace, all users are `Organization Admins`.
+Custom organization-scoped roles are not available yet.
+:::
+
+See the table below for all organization permissions:
+
+| | Organization User | Organization Admin |
+| ------------------------------------------- | ----------------- | ------------------ |
+| View organization configuration | ✅ | ✅ |
+| View organization roles | ✅ | ✅ |
+| View organization members | ✅ | ✅ |
+| View data retention settings | ✅ | ✅ |
+| View usage limits | ✅ | ✅ |
+| Admin access to all workspaces | | ✅ |
+| Manage billing settings | | ✅ |
+| Create workspaces | | ✅ |
+| Create, edit, and delete organization roles | | ✅ |
+| Invite new users to organization | | ✅ |
+| Delete user invites | | ✅ |
+| Remove users from an organization | | ✅ |
+| Update data retention settings\* | | ✅ |
+| Update usage limits\* | | ✅ |
+
+### Workspace roles (RBAC) {#workspace-roles}
+
+:::note
+RBAC (Role-Based Access Control) is a feature that is only available to Enterprise customers. If you are interested in this feature, please contact our sales team at sales@langchain.dev
+Other plans default to using the Admin role for all users.
+:::
+
+Roles are used to define the set of permissions that a user has within a workspace. There are three built-in system roles that cannot be edited:
+
+- `Admin` - has full access to all resources within the workspace
+- `Viewer` - has read-only access to all resources within the workspace
+- `Editor` - has full permissions except for workspace management (adding/removing users, changing roles, configuring service keys)
+
+Organization admins can also create/edit custom roles with specific permissions for different resources.
+
+Roles can be managed in organization settings under the `Roles` tab:
+
+
+
+For more details on assigning and creating roles, see the [access control setup guide](../how_to_guides/organization_management/set_up_access_control.mdx).
+
+## Usage and Billing
+
+### Data Retention
+
+In May 2024, LangSmith introduced a maximum data retention period on traces of 400 days. In June 2024, LangSmith introduced
+a new data retention based pricing model where customers can configure a shorter data retention period on traces in exchange
+for savings up to 10x. On this page, we'll go through how data retention works and is priced in LangSmith.
+
+#### Why retention matters
+
+- **Privacy**: Many data privacy regulations, such as GDPR in Europe or CCPA in California, require organizations to delete personal data
+ once it's no longer necessary for the purposes for which it was collected. Setting retention periods aids in compliance with
+ such regulations.
+- **Cost**: LangSmith charges less for traces that have low data retention. See our tutorial on how to [optimize spend](./tutorials/manage_spend)
+ for details.
+
+#### How it works
+
+LangSmith has two tiers of traces based on data retention, with the following characteristics:
+
+| | Base | Extended |
+| -------------------- | ---------------- | -------------- |
+| **Price** | $.50 / 1k traces | $5 / 1k traces |
+| **Retention Period** | 14 days | 400 days |
+
+**Data deletion after retention ends**
+
+After the specified retention period, traces are no longer accessible via the runs table or API. All user data associated
+with the trace (e.g. inputs and outputs) is deleted from our internal systems within a day thereafter. Some metadata
+associated with each trace may be retained indefinitely for analytics and billing purposes.
+
+**Data retention auto-upgrades**
+
+:::caution
+Auto upgrades can have an impact on your bill. Please read this section carefully to fully understand your
+estimated LangSmith tracing costs.
+:::
+
+When you use certain features with `base` tier traces, their data retention will be automatically upgraded to
+`extended` tier. This upgrade increases both the retention period and the cost of the trace.
+
+A trace is upgraded to the `extended` tier when any of the following occurs:
+
+- **Feedback** is added to any run on the trace
+- An **Annotation Queue** receives any run from the trace
+- A **Run Rule** matches any run within a trace
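These upgrade rules amount to a simple predicate, sketched here for clarity (illustrative only, not LangSmith's implementation):

```python
# Sketch of the auto-upgrade rules above (illustrative, not LangSmith code):
# a base-tier trace moves to the extended tier if any condition holds.
def trace_tier(has_feedback: bool,
               in_annotation_queue: bool,
               matched_run_rule: bool) -> str:
    if has_feedback or in_annotation_queue or matched_run_rule:
        return "extended"
    return "base"

print(trace_tier(has_feedback=True, in_annotation_queue=False,
                 matched_run_rule=False))  # extended
```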
+
+**Why auto-upgrade traces?**
+
+We have two reasons behind the auto-upgrade model for tracing:
+
+1. We think that traces that match any of these conditions are fundamentally more interesting than other traces, and
+ therefore it is good for users to be able to keep them around longer.
+2. We want to charge customers an order of magnitude less for traces that may never be interacted with meaningfully.
+   We think auto-upgrades align our pricing model with the value that LangSmith brings, where only traces with meaningful interaction
+   are charged at a higher rate.
+
+If you have questions or concerns about our pricing model, please feel free to reach out to support@langchain.dev and let us know your thoughts!
+
+**How does data retention affect downstream features?**
+
+- **Annotation Queues, Run Rules, and Feedback**: Traces that use these features will be [auto-upgraded](#data-retention-auto-upgrades).
+- **Monitoring**: The monitoring tab will continue to work even after a base tier trace's data retention period ends. It is powered by
+ trace metadata that exists for >30 days, meaning that your monitoring graphs will continue to stay accurate even on
+ `base` tier traces.
+- **Datasets**: Datasets have an indefinite data retention period. In other words, if you add a trace's inputs and outputs to a dataset,
+ they will never be deleted. We suggest that if you are using LangSmith for data collection, you take advantage of the datasets
+ feature.
+
+#### Billing model
+
+**Billable metrics**
+
+On your LangSmith invoice, you will see two metrics that we charge for:
+
+- LangSmith Traces (Base Charge)
+- LangSmith Traces (Extended Data Retention Upgrades)
+
+The first metric includes all traces, regardless of tier. The second metric just counts the number of extended retention traces.
+
+**Why measure all traces + upgrades instead of base and extended traces?**
+
+A natural question to ask when considering our pricing is why not just show the number of `base` tier and `extended` tier
+traces directly on the invoice?
+
+While we understand this would be more straightforward, it doesn't fit trace upgrades properly. Consider a
+`base` tier trace that was recorded on June 30, and upgraded to `extended` tier on July 3. The `base` tier
+trace occurred in the June billing period, but the upgrade occurred in the July billing period. Therefore,
+we need to be able to measure these two events independently to properly bill our customers.
+
+If your trace was recorded as an extended retention trace, then the `base` and `extended` metrics will both be recorded
+with the same timestamp.
+
+**Cost breakdown**
+
+The Base Charge for a trace is .05¢ per trace. We priced the upgrade such that an `extended` retention trace
+costs 10x the price of a base tier trace (.50¢ per trace) including both metrics. Thus, each upgrade costs .45¢.
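Putting the numbers together, here is a worked sketch of the billing arithmetic (prices taken from the tables above; computed in hundredths of a cent to avoid floating-point drift):

```python
# Worked example of the billing model above: base charge 0.05 cents per
# trace, extended retention upgrade 0.45 cents per upgrade.
BASE_HUNDREDTHS = 5      # 0.05 cents per trace, in hundredths of a cent
UPGRADE_HUNDREDTHS = 45  # 0.45 cents per upgrade

def invoice_cents(all_traces: int, upgrades: int) -> float:
    """Total charge in cents for one billing period."""
    return (all_traces * BASE_HUNDREDTHS + upgrades * UPGRADE_HUNDREDTHS) / 100

# 10,000 traces, 1,000 of which were upgraded to extended retention:
# 10,000 * 0.05c + 1,000 * 0.45c = 500c + 450c = 950c ($9.50)
print(invoice_cents(10_000, 1_000))  # 950.0
```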
+
+### Rate Limits
+
+LangSmith has rate limits which are designed to ensure the stability of the service for all users.
+
+To ensure access and stability, LangSmith will respond with HTTP Status Code 429 indicating that rate or usage limits have been exceeded under the following circumstances:
+
+#### Scenarios
+
+###### Temporary throughput limit over a 1 minute period at our application load balancer
+
+This 429 is the result of exceeding a fixed number of API calls over a 1 minute window on a per API key/access token basis. The start of the window will vary slightly (it is not guaranteed to align with the start of a clock minute) and may change depending on application deployment events.
+
+After the maximum number of events is received, we respond with a 429 until 60 seconds have elapsed from the start of the evaluation window, and then the process repeats.
+
+This 429 is thrown by our application load balancer and is a mechanism in place for all LangSmith users independent of plan tier to ensure continuity of service for all users.
+
+| Method | Endpoint | Limit | Window |
+| ------------- | -------- | ----- | -------- |
+| DELETE | Sessions | 30 | 1 minute |
+| POST OR PATCH | Runs | 5000 | 1 minute |
+| POST | Feedback | 5000 | 1 minute |
+| \* | \* | 2000 | 1 minute |
+
+:::note
+The LangSmith SDK takes steps to minimize the likelihood of reaching these limits on run-related endpoints by batching up to 100 runs from a single session ID into a single API call.
+:::
+
+###### Plan-level hourly trace event limit
+
+This 429 is the result of reaching your maximum hourly events ingested and is evaluated in a fixed window starting at the beginning of each clock hour in UTC and resets at the top of each new hour.
+
+An event in this context is the creation or update of a run. So if a run is created, then subsequently updated in the same hourly window, that counts as 2 events against this limit.
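The event accounting can be sketched as follows (illustrative only, not LangSmith code):

```python
# Sketch: each run creation or update counts as one event against the
# hourly trace event limit (illustrative accounting, not LangSmith code).
def events_used(operations):
    """operations: list of ("create" | "update", run_id) tuples."""
    return sum(1 for op, _ in operations if op in ("create", "update"))

# One run created then updated in the same hour counts as 2 events:
ops = [("create", "run-1"), ("update", "run-1")]
print(events_used(ops))  # 2
```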
+
+This is thrown by our application and varies by plan tier, with organizations on our Startup/Plus and Enterprise plan tiers having higher hourly limits than our Free and Developer Plan Tiers which are designed for personal use.
+
+| Plan | Limit | Window |
+| -------------------------------- | -------------- | ------ |
+| Developer (no payment on file) | 50,000 events | 1 hour |
+| Developer (with payment on file) | 250,000 events | 1 hour |
+| Startup/Plus | 500,000 events | 1 hour |
+| Enterprise | Custom | Custom |
+
+###### Plan-level hourly trace data ingest limit
+
+This 429 is the result of reaching the maximum amount of data ingested across your trace inputs, outputs, and metadata and is evaluated in a fixed window starting at the beginning of each clock hour in UTC and resets at the top of each new hour.
+
+Typically, inputs, outputs, and metadata are sent on both run creation and update events. So if a run is 2.0MB in size at creation, and 3.0MB in size when updated in the same hourly window, that counts as 5.0MB of storage against this limit.
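The ingest accounting works the same way as the event accounting: payload sizes from creation and update events in the same window are summed. A short sketch (illustrative only):

```python
# Sketch of ingest accounting: payload bytes from create and update events
# in the same hourly window are summed (illustrative, not LangSmith code).
MB = 1_000_000

def ingest_mb(payload_sizes_bytes):
    """Total ingested data in MB for a window."""
    return sum(payload_sizes_bytes) / MB

# A run created at 2.0MB and updated at 3.0MB counts as 5.0MB:
print(ingest_mb([2 * MB, 3 * MB]))  # 5.0
```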
+
+This is thrown by our application and varies by plan tier, with organizations on our Startup/Plus and Enterprise plan tiers having higher hourly limits than our Free and Developer Plan Tiers which are designed for personal use.
+
+| Plan | Limit | Window |
+| -------------------------------- | ------ | ------ |
+| Developer (no payment on file) | 500MB | 1 hour |
+| Developer (with payment on file) | 2.5GB | 1 hour |
+| Startup/Plus | 5.0GB | 1 hour |
+| Enterprise | Custom | Custom |
+
+###### Plan-level monthly unique traces limit
+
+This 429 is the result of reaching your maximum monthly traces ingested and is evaluated in a fixed window starting at the beginning of each calendar month in UTC and resets at the beginning of each new month.
+
+This is thrown by our application and applies only to the Developer Plan Tier when there is no payment method on file.
+
+| Plan | Limit | Window |
+| ------------------------------ | ------------ | ------- |
+| Developer (no payment on file) | 5,000 traces | 1 month |
+
+###### Self-configured monthly usage limits
+
+This 429 is the result of reaching your usage limit as configured by your organization admin and is evaluated in a fixed window starting at the beginning of each calendar month in UTC and resets at the beginning of each new month.
+
+This is thrown by our application and varies by organization based on their configured settings.
+
+#### Handling 429 responses in your application
+
+Since some 429 responses are temporary and may succeed on a subsequent call, if you are calling the LangSmith API directly from your application, we recommend implementing retry logic with exponential backoff and jitter.
+
+For convenience, LangChain applications built with the LangSmith SDK have this capability built in.
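A minimal sketch of this retry pattern, with exponential backoff and full jitter (the exception and endpoint here are stand-ins, not real LangSmith API calls):

```python
import random
import time

class RateLimited(Exception):
    """Stand-in for an HTTP 429 response from the API."""

def call_with_retries(fn, max_retries=5, base_delay=1.0):
    """Retry fn on rate limits with exponential backoff and full jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimited:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with full jitter: sleep a random amount
            # between 0 and base_delay * 2^attempt seconds.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))

# Simulated endpoint that returns 429 twice, then succeeds.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimited()
    return "ok"

result = call_with_retries(flaky_call, base_delay=0.01)
print(result)  # ok
```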
+
+:::note
+It is important to note that if you are saturating the endpoints for extended periods of time, retries may not be effective, as your application will eventually build up a large enough backlog to exhaust all retries.
+
+If that is the case, we would like to discuss your needs more specifically. Please reach out to [LangSmith Support](mailto:support@langchain.dev) with details about your application's throughput needs and sample code, and we can work with you to determine whether the best approach is fixing a bug, changing your application code, or a different LangSmith plan.
+:::
+
+### Usage Limits
+
+LangSmith lets you configure usage limits on tracing. Note that these are _usage_ limits, not _spend_ limits, which
+means they cap the number of occurrences of some event rather than the total amount you will spend.
+
+LangSmith lets you set two different monthly limits, mirroring the billable metrics discussed in the data retention section above:
+
+- All traces limit
+- Extended data retention traces limit
+
+These let you limit the total number of traces and the number of extended data retention traces, respectively.
+
+#### Properties of usage limiting
+
+Usage limiting is approximate, meaning that we do not guarantee the exactness of the limit. In rare cases, there
+may be a small period of time where additional traces are processed above the limit threshold before usage limiting
+begins to apply.
+
+#### Side effects of extended data retention traces limit
+
+The extended data retention traces limit has side effects. Once the limit is reached, any feature that could
+trigger an auto-upgrade of tracing tiers becomes inaccessible, because the upgrade would create another
+extended retention trace, which the limit no longer allows. Therefore, you can no longer:
+
+1. match run rules
+2. add feedback to traces
+3. add runs to annotation queues
+
+Each of these features may cause an auto-upgrade, so they are disabled when the limit is reached.
+
+#### Updating usage limits
+
+Usage limits can be updated from the `Settings` page under `Usage and Billing`. Limit values are cached, so it
+may take a minute or two before the new limits apply.
+
+### Related content
+
+- Tutorial on how to [optimize spend](./tutorials/manage_spend)
diff --git a/docs/concepts/static/org_members_settings.png b/docs/administration/concepts/static/org_members_settings.png
similarity index 100%
rename from docs/concepts/static/org_members_settings.png
rename to docs/administration/concepts/static/org_members_settings.png
diff --git a/docs/concepts/static/org_settings_workspaces_tab.png b/docs/administration/concepts/static/org_settings_workspaces_tab.png
similarity index 100%
rename from docs/concepts/static/org_settings_workspaces_tab.png
rename to docs/administration/concepts/static/org_settings_workspaces_tab.png
diff --git a/docs/concepts/static/resource_tags.png b/docs/administration/concepts/static/resource_tags.png
similarity index 100%
rename from docs/concepts/static/resource_tags.png
rename to docs/administration/concepts/static/resource_tags.png
diff --git a/docs/concepts/static/roles_tab_rbac.png b/docs/administration/concepts/static/roles_tab_rbac.png
similarity index 100%
rename from docs/concepts/static/roles_tab_rbac.png
rename to docs/administration/concepts/static/roles_tab_rbac.png
diff --git a/docs/concepts/static/sample_workspace.png b/docs/administration/concepts/static/sample_workspace.png
similarity index 100%
rename from docs/concepts/static/sample_workspace.png
rename to docs/administration/concepts/static/sample_workspace.png
diff --git a/docs/administration/how_to_guides/index.md b/docs/administration/how_to_guides/index.md
new file mode 100644
index 000000000..666310a33
--- /dev/null
+++ b/docs/administration/how_to_guides/index.md
@@ -0,0 +1,28 @@
+# Administration how-to guides
+
+Step-by-step guides that cover key tasks and operations in LangSmith.
+
+## Organization Management
+
+See the following guides to set up your LangSmith account.
+
+- [Create an account and API key](./how_to_guides/organization_management/create_account_api_key)
+- [Set up an organization](./how_to_guides/organization_management/set_up_organization)
+ - [Create an organization](./how_to_guides/organization_management/set_up_organization#create-an-organization)
+ - [Manage and navigate workspaces](./how_to_guides/organization_management/set_up_organization#manage-and-navigate-workspaces)
+ - [Manage users](./how_to_guides/organization_management/set_up_organization#manage-users)
+ - [Manage your organization using the API](./how_to_guides/organization_management/manage_organization_by_api)
+- [Set up a workspace](./how_to_guides/organization_management/set_up_workspace)
+ - [Create a workspace](./how_to_guides/organization_management/set_up_workspace#create-a-workspace)
+ - [Manage users](./how_to_guides/organization_management/set_up_workspace#manage-users)
+ - [Configure workspace settings](./how_to_guides/organization_management/set_up_workspace#configure-workspace-settings)
+- [Set up billing](./how_to_guides/organization_management/set_up_billing)
+- [Update invoice email, tax ID, and business information](./how_to_guides/organization_management/update_business_info)
+- [Set up access control (enterprise only)](./how_to_guides/organization_management/set_up_access_control)
+ - [Create a role](./how_to_guides/organization_management/set_up_access_control#create-a-role)
+ - [Assign a role to a user](./how_to_guides/organization_management/set_up_access_control#assign-a-role-to-a-user)
+- [Set up resource tags](./how_to_guides/organization_management/set_up_resource_tags)
+ - [Create a tag](./how_to_guides/organization_management/set_up_resource_tags#create-a-tag)
+ - [Assign a tag to a resource](./how_to_guides/organization_management/set_up_resource_tags#assign-a-tag-to-a-resource)
+ - [Delete a tag](./how_to_guides/organization_management/set_up_resource_tags#delete-a-tag)
+ - [Filter resources by tags](./how_to_guides/organization_management/set_up_resource_tags#filter-resources-by-tags)
diff --git a/docs/how_to_guides/setup/_category_.json b/docs/administration/how_to_guides/organization_management/_category_.json
similarity index 100%
rename from docs/how_to_guides/setup/_category_.json
rename to docs/administration/how_to_guides/organization_management/_category_.json
diff --git a/docs/how_to_guides/setup/create_account_api_key.mdx b/docs/administration/how_to_guides/organization_management/create_account_api_key.mdx
similarity index 91%
rename from docs/how_to_guides/setup/create_account_api_key.mdx
rename to docs/administration/how_to_guides/organization_management/create_account_api_key.mdx
index ee29f7e4a..22c324887 100644
--- a/docs/how_to_guides/setup/create_account_api_key.mdx
+++ b/docs/administration/how_to_guides/organization_management/create_account_api_key.mdx
@@ -11,14 +11,14 @@ import { RegionalUrl } from "@site/src/components/RegionalUrls";
To get started with LangSmith, you need to create an account. You can sign up for a free account .
We support logging in with Google, GitHub, Discord, and email.
-
+
## API keys
LangSmith supports two types of API keys: Service Keys and Personal Access Tokens.
Both types of tokens can be used to authenticate requests to the LangSmith API, but they have different use cases.
-Read more about the differences between Service Keys and Personal Access Tokens under [admin concepts](../../concepts/admin/admin.mdx)
+Read more about the differences between Service Keys and Personal Access Tokens under [admin concepts](../../concepts)
## Create an API key
@@ -30,7 +30,7 @@ To create either type of API key head to the {
+ // Redirect to parent page on load
+ history.push("../how_to_guides");
+ }, [history]);
+
+ return null; // No need to render anything since we're redirecting
+}
diff --git a/docs/how_to_guides/setup/manage_organization_by_api.mdx b/docs/administration/how_to_guides/organization_management/manage_organization_by_api.mdx
similarity index 94%
rename from docs/how_to_guides/setup/manage_organization_by_api.mdx
rename to docs/administration/how_to_guides/organization_management/manage_organization_by_api.mdx
index 48bcfd0fc..9804ee8a9 100644
--- a/docs/how_to_guides/setup/manage_organization_by_api.mdx
+++ b/docs/administration/how_to_guides/organization_management/manage_organization_by_api.mdx
@@ -8,8 +8,8 @@ LangSmith's API supports programmatic access via API key to all of the actions a
Before diving into this content, it might be helpful to read the following:
-- [Conceptual guide on organizations and workspaces](../../concepts/admin)
-- [Organization setup how-to guild](../../how_to_guides/setup/set_up_organization.mdx)
+- [Conceptual guide on organizations and workspaces](../../concepts)
+- [Organization setup how-to guide](./set_up_organization.mdx)
:::
@@ -17,7 +17,7 @@ Before diving into this content, it might be helpful to read the following:
There are a few limitations that will be lifted soon:
- The LangSmith SDKs do not support these organization management actions yet.
-- [Service Keys](../../concepts/admin/admin.mdx#api-keys) don't have access to newly-added workspaces yet (we're adding support soon). We recommend using a PAT of an Organization Admin for now, which by default has the required permissions for these actions.
+- [Service Keys](../../concepts#api-keys) don't have access to newly-added workspaces yet (we're adding support soon). We recommend using a PAT of an Organization Admin for now, which by default has the required permissions for these actions.
:::
@@ -120,7 +120,7 @@ Workspace level:
/>
:::note
-These params should be omitted: `read_only` (deprecated), `password` and `full_name` ([basic auth](../../reference/authentication_authorization/authentication_methods.mdx) only)
+These params should be omitted: `read_only` (deprecated), `password` and `full_name` ([basic auth](/reference/authentication_authorization/authentication_methods.mdx) only)
:::
## API Keys
@@ -146,7 +146,7 @@ If the header is not present, operations will default to the workspace the API k
These endpoints are user-scoped and require a logged-in user's JWT, so they should only be executed through the UI.
- `/api-key/current` endpoints: these are related a user's PATs
-- `/sso/email-verification/send` (Cloud-only): this endpoint is related to [SAML SSO](../../how_to_guides/setup/set_up_saml_sso.mdx)
+- `/sso/email-verification/send` (Cloud-only): this endpoint is related to [SAML SSO](./set_up_saml_sso.mdx)
## Sample Code
diff --git a/docs/how_to_guides/setup/set_up_access_control.mdx b/docs/administration/how_to_guides/organization_management/set_up_access_control.mdx
similarity index 90%
rename from docs/how_to_guides/setup/set_up_access_control.mdx
rename to docs/administration/how_to_guides/organization_management/set_up_access_control.mdx
index d716594b9..31e7876cf 100644
--- a/docs/how_to_guides/setup/set_up_access_control.mdx
+++ b/docs/administration/how_to_guides/organization_management/set_up_access_control.mdx
@@ -4,14 +4,14 @@ import { RegionalUrl } from "@site/src/components/RegionalUrls";
:::note
RBAC (Role-Based Access Control) is a feature that is only available to Enterprise customers. If you are interested in this feature, please contact our sales team at sales@langchain.dev
-Other plans default to using the Admin role for all users. Read more about roles under [admin concepts](../../concepts/admin/admin.mdx)
+Other plans default to using the Admin role for all users. Read more about roles under [admin concepts](../../concepts)
:::
:::tip Recommended Reading
Before diving into this content, it might be helpful to read the following:
-- [Conceptual guide on organizations and workspaces](../../concepts/admin)
+- [Conceptual guide on organizations and workspaces](../../concepts)
:::
@@ -32,7 +32,7 @@ To create a role, navigate to the `Roles` tab in the `Members and roles` section
Click on the `Create Role` button to create a new role. You should see a form like the one below:
-
+
Assign permissions for the different LangSmith resources that you want to control access to.
@@ -42,8 +42,8 @@ Once you have your roles set up, you can assign them to users. To assign a role
Each user will have a `Role` dropdown that you can use to assign a role to them.
-
+
You can also invite new users with a given role.
-
+
diff --git a/docs/how_to_guides/setup/set_up_billing.mdx b/docs/administration/how_to_guides/organization_management/set_up_billing.mdx
similarity index 95%
rename from docs/how_to_guides/setup/set_up_billing.mdx
rename to docs/administration/how_to_guides/organization_management/set_up_billing.mdx
index 46c6207fc..afdd7436e 100644
--- a/docs/how_to_guides/setup/set_up_billing.mdx
+++ b/docs/administration/how_to_guides/organization_management/set_up_billing.mdx
@@ -25,7 +25,7 @@ add a credit card on the Plans and Billing page as follows:
### 1. Click `Set up Billing`
-
+
### 2. Add your credit card info
@@ -49,14 +49,14 @@ If you are a startup building with AI, please instead click `Apply Now` on our S
eligible for discounted prices and a generous free, monthly trace allotment.
:::
-
+
### 2. Review your existing members
Before subscribing, LangSmith lets you remove any users you do not
want to be charged for.
-
+
### 3. Enter your credit card info
@@ -78,7 +78,7 @@ rate limited to a maximum of 5,000 traces per month.
### 2. Click `Set up Billing`
-
+
### 3. Enter your credit card info
diff --git a/docs/how_to_guides/setup/set_up_organization.mdx b/docs/administration/how_to_guides/organization_management/set_up_organization.mdx
similarity index 88%
rename from docs/how_to_guides/setup/set_up_organization.mdx
rename to docs/administration/how_to_guides/organization_management/set_up_organization.mdx
index c5f14d950..382234814 100644
--- a/docs/how_to_guides/setup/set_up_organization.mdx
+++ b/docs/administration/how_to_guides/organization_management/set_up_organization.mdx
@@ -10,12 +10,12 @@ import { RegionalUrl } from "@site/src/components/RegionalUrls";
Before diving into this content, it might be helpful to read the following:
-- [Conceptual guide on organizations and workspaces](../../concepts/admin)
+- [Conceptual guide on organizations and workspaces](../../concepts)
:::
:::note
-If you're interested in managing your organization and workspaces programmatically, see [this hot-to guide](../../how_to_guides/setup/manage_organization_by_api.mdx).
+If you're interested in managing your organization and workspaces programmatically, see [this how-to guide](./manage_organization_by_api.mdx).
:::
## Create an organization
@@ -25,14 +25,14 @@ When you log in for the first time, a personal organization will be created for
To do this, head to the and click **Create Organization**.
Shared organizations require a credit card before they can be used. You will need to [set up billing](./set_up_billing) to proceed.
-
+
## Manage and navigate workspaces
Once you've subscribed to a plan that allows for multiple users per organization, you can [set up workspaces](./set_up_workspace) to collaborate more effectively and isolate LangSmith resources between different groups of users.
To navigate between workspaces and access the resources within each workspace (trace projects, annotation queues, etc.), select the desired workspace from the picker in the top left:
-
+
## Manage users
@@ -43,7 +43,7 @@ Here you can
- Edit a user's organization role
- Remove users from your organization
-
+
Organizations on the Enterprise plan may set up custom workspace roles in the `Roles` tab here. See the [access control setup guide](./set_up_access_control.mdx) for more details.
@@ -59,4 +59,4 @@ The `Organization User` role is only available in organizations on plans with mu
Custom organization-scoped roles are not available yet.
:::
-See [this conceptual guide](../../concepts/admin#organization-roles) for a full list of permissions associated with each role.
+See [this conceptual guide](../../concepts#organization-roles) for a full list of permissions associated with each role.
diff --git a/docs/how_to_guides/setup/set_up_resource_tags.mdx b/docs/administration/how_to_guides/organization_management/set_up_resource_tags.mdx
similarity index 93%
rename from docs/how_to_guides/setup/set_up_resource_tags.mdx
rename to docs/administration/how_to_guides/organization_management/set_up_resource_tags.mdx
index 564a945ef..02237f513 100644
--- a/docs/how_to_guides/setup/set_up_resource_tags.mdx
+++ b/docs/administration/how_to_guides/organization_management/set_up_resource_tags.mdx
@@ -4,7 +4,7 @@
Before diving into this content, it might be helpful to read the following:
-- [Conceptual guide on organizations and workspaces](../../concepts/admin)
+- [Conceptual guide on organizations and workspaces](../../concepts)
:::
@@ -22,7 +22,7 @@ Here, you'll be able to see the existing tag values, grouped by key. Two keys `A
To create a new tag, click on the "New Tag" button. You'll be prompted to enter a key and a value for the tag. Note that you can use an existing key or create a new one.
-
+
## Assign a tag to a resource
@@ -34,7 +34,7 @@ You can only tag workspace-scoped resources with resource tags. This includes Tr
You can also assign tags to resources from the resource's detail page. Click on the Resource tags button to open up the tag panel and assign tags.
-
+
To un-assign a tag from a resource, click on the Trash icon next to the tag, both in the tag panel and the resource tag panel.
@@ -44,7 +44,7 @@ You can delete either a key or a value of a tag from the [workspace settings pag
Note that if you delete a key, all values associated with that key will also be deleted. When you delete a value, you will lose all associations between that value and resources.
-
+
## Filter resources by tags
@@ -56,4 +56,4 @@ In the homepage, you can see updated counts for resources based on the tags you'
As you navigate through the different product surfaces, you will _only_ see resources that match the tags you've selected. At any time, you can clear the tags to see all resources in the workspace or select different tags to filter by.
-
+
diff --git a/docs/how_to_guides/setup/set_up_saml_sso.mdx b/docs/administration/how_to_guides/organization_management/set_up_saml_sso.mdx
similarity index 95%
rename from docs/how_to_guides/setup/set_up_saml_sso.mdx
rename to docs/administration/how_to_guides/organization_management/set_up_saml_sso.mdx
index 136d3c6fb..96831aa07 100644
--- a/docs/how_to_guides/setup/set_up_saml_sso.mdx
+++ b/docs/administration/how_to_guides/organization_management/set_up_saml_sso.mdx
@@ -7,7 +7,7 @@ Single Sign-On (SSO) functionality is available for Enterprise customers to acce
LangSmith's SSO configuration is built using the SAML (Security Assertion Markup Language) 2.0 standard. SAML 2.0 enables connecting an Identity Provider (IdP) to your organization for an easier, more secure login experience.
:::note
-SAML SSO is available for organizations on the [Enterprise plan](../../pricing.mdx). Please [contact sales](https://www.langchain.com/contact-sales) to learn more.
+SAML SSO is available for organizations on the [Enterprise plan](../../pricing). Please [contact sales](https://www.langchain.com/contact-sales) to learn more.
:::
## What is SAML SSO?
@@ -26,7 +26,7 @@ SSO services permit a user to use one set of credentials (for example, a name or
- Your organization must be on an Enterprise plan
- Your Identity Provider (IdP) must support the SAML 2.0 standard
-- Only [Organization Admins](../../concepts/admin#organization-roles) can configure SAML SSO
+- Only [Organization Admins](../../concepts#organization-roles) can configure SAML SSO
### Initial configuration
@@ -61,15 +61,15 @@ The URLs are different for the US and EU. Please make sure to select your region
LangSmith supports Just-in-Time provisioning when using SAML SSO. This allows someone signing in via SAML SSO to join the organization and selected workspaces automatically as a member.
:::note
-JIT provisioning only runs for new users i.e. users who do not already have access to the organization with the same email address via a [different login method](../../reference/authentication_authorization/authentication_methods.mdx#cloud)
+JIT provisioning only runs for new users, i.e., users who do not already have access to the organization with the same email address via a [different login method](/reference/authentication_authorization/authentication_methods.mdx#cloud).
:::
## Login methods and access
-Once you have completed your configuration of SAML SSO for your organization, users will be able to login via SAML SSO in addition to [other login methods](../../reference/authentication_authorization/authentication_methods.mdx#cloud) such as username/password and Google Authentication.
+Once you have configured SAML SSO for your organization, users will be able to log in via SAML SSO in addition to [other login methods](/reference/authentication_authorization/authentication_methods.mdx#cloud) such as username/password and Google Authentication.
- When logged in via SAML SSO, users can only access the corresponding organization with SAML SSO configured.
-- Users with SAML SSO as their only login method do not have [personal organizations](../../concepts/admin/admin.mdx#organizations)
+- Users with SAML SSO as their only login method do not have [personal organizations](../../concepts#organizations)
- When logged in via any other method, users can access the organization with SAML SSO configured along with any other organizations they are a part of
## Enforce SAML SSO only
@@ -93,7 +93,7 @@ If you have issues setting up SAML SSO, please reach out to [support@langchain.d
Some identity providers retain the original `User ID` through an email change while others do not, so we recommend that you follow these steps to avoid duplicate users in LangSmith:
-1. Remove the user from the organization (see [here](../setup/set_up_organization.mdx#manage-users))
+1. Remove the user from the organization (see [here](./set_up_organization.mdx#manage-users))
1. Change their email address in the IdP
1. Have them log in to LangSmith again via SAML SSO. This will trigger the usual [JIT provisioning](#just-in-time-jit-provisioning) flow with their new email address.
diff --git a/docs/how_to_guides/setup/set_up_workspace.mdx b/docs/administration/how_to_guides/organization_management/set_up_workspace.mdx
similarity index 90%
rename from docs/how_to_guides/setup/set_up_workspace.mdx
rename to docs/administration/how_to_guides/organization_management/set_up_workspace.mdx
index d25bfa615..58e23c89d 100644
--- a/docs/how_to_guides/setup/set_up_workspace.mdx
+++ b/docs/administration/how_to_guides/organization_management/set_up_workspace.mdx
@@ -10,11 +10,11 @@ import { RegionalUrl } from "@site/src/components/RegionalUrls";
Before diving into this content, it might be helpful to read the following:
-- [Conceptual guide on organizations and workspaces](../../concepts/admin)
+- [Conceptual guide on organizations and workspaces](../../concepts)
:::
-When you log in for the first time, a default [workspace](../../concepts/admin/admin.mdx#workspaces) will be created for you automatically in your [personal organization](./set_up_organization#personal-vs-shared-organizations).
+When you log in for the first time, a default [workspace](../../concepts#workspaces) will be created for you automatically in your [personal organization](./set_up_organization#personal-vs-shared-organizations).
Workspaces are often used to separate resources between different teams or business units, ensuring clear trust boundaries between them. Within each workspace, Role-Based Access Control (RBAC) is implemented to manage permissions and access levels, ensuring that users only have access to the resources and settings necessary for their role. Most LangSmith activity happens in the context of a workspace, each of which has its own settings and access controls.
To organize resources _within_ a workspace, you can use [resource tags](./set_up_resource_tags).
@@ -24,7 +24,7 @@ To organize resources _within_ a workspace, you can use [resource tags](./set_up
To create a new workspace, head to the `Workspaces` tab in your shared organization and click **Add Workspace**.
Once your workspace has been created, you can manage its members and other configuration by selecting it on this page.
-
+
:::note
Different plans have different limits placed on the number of workspaces that can be used in an organization.
@@ -44,4 +44,4 @@ Users may also be invited directly to one or more workspaces when they are [invi
Workspace configuration exists in the tab. Select the workspace to configure and then the desired configuration sub-tab. The example below shows the `API keys` tab; other configuration options, including secrets, models, and shared URLs, are available here as well.
-
+
diff --git a/docs/how_to_guides/static/assign_role.png b/docs/administration/how_to_guides/organization_management/static/assign_role.png
similarity index 100%
rename from docs/how_to_guides/static/assign_role.png
rename to docs/administration/how_to_guides/organization_management/static/assign_role.png
diff --git a/docs/how_to_guides/static/create_account.png b/docs/administration/how_to_guides/organization_management/static/create_account.png
similarity index 100%
rename from docs/how_to_guides/static/create_account.png
rename to docs/administration/how_to_guides/organization_management/static/create_account.png
diff --git a/docs/how_to_guides/static/create_api_key.png b/docs/administration/how_to_guides/organization_management/static/create_api_key.png
similarity index 100%
rename from docs/how_to_guides/static/create_api_key.png
rename to docs/administration/how_to_guides/organization_management/static/create_api_key.png
diff --git a/docs/how_to_guides/static/create_organization.png b/docs/administration/how_to_guides/organization_management/static/create_organization.png
similarity index 100%
rename from docs/how_to_guides/static/create_organization.png
rename to docs/administration/how_to_guides/organization_management/static/create_organization.png
diff --git a/docs/how_to_guides/static/create_role.png b/docs/administration/how_to_guides/organization_management/static/create_role.png
similarity index 100%
rename from docs/how_to_guides/static/create_role.png
rename to docs/administration/how_to_guides/organization_management/static/create_role.png
diff --git a/docs/how_to_guides/static/create_workspace.png b/docs/administration/how_to_guides/organization_management/static/create_workspace.png
similarity index 100%
rename from docs/how_to_guides/static/create_workspace.png
rename to docs/administration/how_to_guides/organization_management/static/create_workspace.png
diff --git a/docs/how_to_guides/static/free_tier_billing_page.png b/docs/administration/how_to_guides/organization_management/static/free_tier_billing_page.png
similarity index 100%
rename from docs/how_to_guides/static/free_tier_billing_page.png
rename to docs/administration/how_to_guides/organization_management/static/free_tier_billing_page.png
diff --git a/docs/how_to_guides/static/invite_user.png b/docs/administration/how_to_guides/organization_management/static/invite_user.png
similarity index 100%
rename from docs/how_to_guides/static/invite_user.png
rename to docs/administration/how_to_guides/organization_management/static/invite_user.png
diff --git a/docs/how_to_guides/static/new_org_billing_page.png b/docs/administration/how_to_guides/organization_management/static/new_org_billing_page.png
similarity index 100%
rename from docs/how_to_guides/static/new_org_billing_page.png
rename to docs/administration/how_to_guides/organization_management/static/new_org_billing_page.png
diff --git a/docs/how_to_guides/static/new_org_manage_spend.png b/docs/administration/how_to_guides/organization_management/static/new_org_manage_spend.png
similarity index 100%
rename from docs/how_to_guides/static/new_org_manage_spend.png
rename to docs/administration/how_to_guides/organization_management/static/new_org_manage_spend.png
diff --git a/docs/how_to_guides/static/organization_members_and_roles.png b/docs/administration/how_to_guides/organization_management/static/organization_members_and_roles.png
similarity index 100%
rename from docs/how_to_guides/static/organization_members_and_roles.png
rename to docs/administration/how_to_guides/organization_management/static/organization_members_and_roles.png
diff --git a/docs/how_to_guides/static/resource_tags/assign_tag.png b/docs/administration/how_to_guides/organization_management/static/resource_tags/assign_tag.png
similarity index 100%
rename from docs/how_to_guides/static/resource_tags/assign_tag.png
rename to docs/administration/how_to_guides/organization_management/static/resource_tags/assign_tag.png
diff --git a/docs/how_to_guides/static/resource_tags/create_tag.png b/docs/administration/how_to_guides/organization_management/static/resource_tags/create_tag.png
similarity index 100%
rename from docs/how_to_guides/static/resource_tags/create_tag.png
rename to docs/administration/how_to_guides/organization_management/static/resource_tags/create_tag.png
diff --git a/docs/how_to_guides/static/resource_tags/delete_tag.png b/docs/administration/how_to_guides/organization_management/static/resource_tags/delete_tag.png
similarity index 100%
rename from docs/how_to_guides/static/resource_tags/delete_tag.png
rename to docs/administration/how_to_guides/organization_management/static/resource_tags/delete_tag.png
diff --git a/docs/how_to_guides/static/resource_tags/filter_by_tags.png b/docs/administration/how_to_guides/organization_management/static/resource_tags/filter_by_tags.png
similarity index 100%
rename from docs/how_to_guides/static/resource_tags/filter_by_tags.png
rename to docs/administration/how_to_guides/organization_management/static/resource_tags/filter_by_tags.png
diff --git a/docs/how_to_guides/static/select_workspace.png b/docs/administration/how_to_guides/organization_management/static/select_workspace.png
similarity index 100%
rename from docs/how_to_guides/static/select_workspace.png
rename to docs/administration/how_to_guides/organization_management/static/select_workspace.png
diff --git a/docs/how_to_guides/static/setup_billing_legacy.png b/docs/administration/how_to_guides/organization_management/static/setup_billing_legacy.png
similarity index 100%
rename from docs/how_to_guides/static/setup_billing_legacy.png
rename to docs/administration/how_to_guides/organization_management/static/setup_billing_legacy.png
diff --git a/docs/how_to_guides/static/update_business_info.png b/docs/administration/how_to_guides/organization_management/static/update_business_info.png
similarity index 100%
rename from docs/how_to_guides/static/update_business_info.png
rename to docs/administration/how_to_guides/organization_management/static/update_business_info.png
diff --git a/docs/how_to_guides/static/update_invoice_email.png b/docs/administration/how_to_guides/organization_management/static/update_invoice_email.png
similarity index 100%
rename from docs/how_to_guides/static/update_invoice_email.png
rename to docs/administration/how_to_guides/organization_management/static/update_invoice_email.png
diff --git a/docs/how_to_guides/static/workspace_settings.png b/docs/administration/how_to_guides/organization_management/static/workspace_settings.png
similarity index 100%
rename from docs/how_to_guides/static/workspace_settings.png
rename to docs/administration/how_to_guides/organization_management/static/workspace_settings.png
diff --git a/docs/how_to_guides/setup/update_business_info.mdx b/docs/administration/how_to_guides/organization_management/update_business_info.mdx
similarity index 96%
rename from docs/how_to_guides/setup/update_business_info.mdx
rename to docs/administration/how_to_guides/organization_management/update_business_info.mdx
index 5f302cabe..7feea4d26 100644
--- a/docs/how_to_guides/setup/update_business_info.mdx
+++ b/docs/administration/how_to_guides/organization_management/update_business_info.mdx
@@ -20,7 +20,7 @@ Business information, tax id and invoice email can only be updated for the plus
## Update invoice email
-
+
To update the email address where your invoices are sent, follow these steps:
@@ -37,7 +37,7 @@ This ensures that all future invoices will be sent to the updated email address.
In certain jurisdictions, LangSmith is required to collect sales tax. If you are a business, providing your Tax ID may qualify you for a sales tax exemption.
:::
-
+
To update your organization's business information, follow these steps:
diff --git a/docs/pricing.mdx b/docs/administration/pricing.mdx
similarity index 99%
rename from docs/pricing.mdx
rename to docs/administration/pricing.mdx
index 04507116a..be6c995b2 100644
--- a/docs/pricing.mdx
+++ b/docs/administration/pricing.mdx
@@ -258,7 +258,7 @@ On the Enterprise plan, you’ll get white-glove support with a Slack channel, a
### Where is my data stored?
-You may choose to sign up in either the US or EU region. See the [cloud architecture reference](./reference/cloud_architecture_and_scalability) for more details. If you’re on the Enterprise plan, we can deliver LangSmith to run on your kubernetes cluster in AWS, GCP, or Azure so that data never leaves your environment.
+You may choose to sign up in either the US or EU region. See the [cloud architecture reference](../reference/cloud_architecture_and_scalability) for more details. If you’re on the Enterprise plan, we can deliver LangSmith to run on your Kubernetes cluster in AWS, GCP, or Azure so that data never leaves your environment.
### Which security frameworks is LangSmith compliant with?
diff --git a/docs/administration/tutorials/index.mdx b/docs/administration/tutorials/index.mdx
new file mode 100644
index 000000000..91a4467b7
--- /dev/null
+++ b/docs/administration/tutorials/index.mdx
@@ -0,0 +1,5 @@
+# Tutorials
+
+New to LangSmith or to LLM app development in general? Read this material to quickly get up and running.
+
+- [Optimize tracing spend on LangSmith](./manage_spend)
diff --git a/docs/tutorials/Administrators/manage_spend.mdx b/docs/administration/tutorials/manage_spend.mdx
similarity index 92%
rename from docs/tutorials/Administrators/manage_spend.mdx
rename to docs/administration/tutorials/manage_spend.mdx
index 88178a415..117e91f8e 100644
--- a/docs/tutorials/Administrators/manage_spend.mdx
+++ b/docs/administration/tutorials/manage_spend.mdx
@@ -14,8 +14,8 @@ import {
:::tip Recommended Reading
Before diving into this content, it might be helpful to read the following:
-- [Data Retention Conceptual Docs](../../concepts/usage_and_billing/data_retention_billing)
-- [Usage Limiting Conceptual Docs](../../concepts/usage_and_billing/usage_limits#side-effects-of-extended-data-retention-traces-limit)
+- [Data Retention Conceptual Docs](/administration/concepts#data-retention)
+- [Usage Limiting Conceptual Docs](/administration/concepts#usage-limit)
:::
@@ -56,7 +56,7 @@ We see in the graph above that there are two usage metrics that LangSmith charge
- LangSmith Traces (Extended Data Retention Upgrades).
The first metric tracks all traces that you send to LangSmith. The second tracks all traces that also have our Extended 400 Day Data Retention.
-For more details, see our [data retention conceptual docs](../../concepts/usage_and_billing/data_retention_billing). Notice that these graphs look
+For more details, see our [data retention conceptual docs](/administration/concepts#data-retention). Notice that these graphs look
identical, which will come into play later in the tutorial.
LangSmith Traces usage is measured per workspace, because workspaces often represent development environments (as in our example),
@@ -88,14 +88,14 @@ workspace. Further, the majority of spend in that workspace was on extended data
These upgrades occur for two reasons:
1. You use extended data retention tracing, meaning that, by default, your traces are retained for 400 days
-2. You use base data retention tracing, and use a feature that automatically extends the data retention of a trace ([see our Auto-Upgrade conceptual docs](../../../concepts/usage_and_billing/data_retention_billing#data-retention-auto-upgrades))
+2. You use base data retention tracing and use a feature that automatically extends the data retention of a trace ([see our Auto-Upgrade conceptual docs](/administration/concepts#data-retention))
Given that the number of total traces per day is equal to the number of extended retention traces per day, it's most likely the
case that this org is using extended data retention tracing everywhere. As such, we start by optimizing our retention settings.
## Optimization 1: manage data retention
-LangSmith charges differently based on a trace's data retention (see our [data retention conceptual docs](../../concepts/usage_and_billing/data_retention_billing)),
+LangSmith charges differently based on a trace's data retention (see our [data retention conceptual docs](/administration/concepts#data-retention)),
where short-lived traces are an order of magnitude less expensive than ones that last for a long time. In this optimization, we will
show how to get optimal settings for data retention without sacrificing historical observability, and
show the effect it has on our bill.
@@ -129,18 +129,18 @@ of LangSmith's built in ability to do server side sampling for extended data ret
Choosing the right percentage of runs to sample depends on your use case. We will arbitrarily pick 10% of runs here, but will
leave it to the user to find the right value that balances collecting rare events against cost constraints.
-LangSmith automatically upgrades the data retention for any trace that matches a run rule in our automations product (see our [run rules docs](../../../how_to_guides/monitoring/rules)). On the
+LangSmith automatically upgrades the data retention for any trace that matches a run rule in our automations product (see our [run rules docs](../../observability/how_to_guides/monitoring/rules)). On the
projects page, click `Rules -> Add Rule`, and configure the rule as follows:

Run rules match on runs rather than traces. Runs are single units of work within an LLM application's API handling. Traces
-are end to end API calls (learn more about [tracing concepts in LangSmith](../../concepts/tracing)). This means a trace can
+are end-to-end API calls (learn more about [tracing concepts in LangSmith](/observability/concepts)). This means a trace can
be thought of as a tree of runs making up an API call. When a run rule matches any run within a trace, the trace's full run tree
upgrades to be retained for 400 days.
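The run-versus-trace distinction above can be illustrated with a rough sketch (hypothetical run representation, not LangSmith code): because every trace has exactly one root run, making the sampling decision only at the root yields exactly one retention decision per trace, whereas sampling at any run would upgrade every trace containing at least one sampled run.

```python
import random

def should_extend_retention(run: dict, sample_rate: float = 0.10) -> bool:
    """Hypothetical sketch: decide retention at the root run only, so each
    trace gets exactly one sampling decision regardless of how many runs
    its run tree contains."""
    if run.get("parent_run_id") is not None:
        # Non-root runs never trigger the decision themselves; they are
        # upgraded only when their trace's root run is sampled.
        return False
    return random.random() < sample_rate
```

Matching on roots at 10% upgrades roughly 10% of traces; matching on all runs at 10% could upgrade far more than 10% of traces, since any single matched run upgrades its entire trace.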
Therefore, to make sure we have the proper sampling rate on traces, we take advantage of the
-[filtering](../../how_to_guides/monitoring/rules#step-2-define-the-filter) functionality of run rules.
+[filtering](../../observability/how_to_guides/monitoring/rules#step-2-define-the-filter) functionality of run rules.
We add a filter condition to only match the "root" run in the run tree. This is distinct per trace, so our 10% sampling
will upgrade 10% of traces rather than 10% of runs, which could correspond to more than 10% of traces. If desired, we can optionally add
@@ -223,7 +223,7 @@ data retention upgrades. This lets us be confident that new users on the platfor
:::note
The extended data retention limit can cause features other than traces to stop working once reached. If you plan to
-use this feature, please read more about its functionality [here](../../concepts/usage_and_billing/usage_limits#side-effects-of-extended-data-retention-traces-limit).
+use this feature, please read more about its functionality [here](../../administration/concepts#side-effects-of-extended-data-retention-traces-limit).
:::
### Set dev/staging limits and view total spent limit across workspaces
diff --git a/docs/tutorials/Administrators/static/P2SampleTraces.png b/docs/administration/tutorials/static/P2SampleTraces.png
similarity index 100%
rename from docs/tutorials/Administrators/static/P2SampleTraces.png
rename to docs/administration/tutorials/static/P2SampleTraces.png
diff --git a/docs/tutorials/Administrators/static/invoice_investigation_v2.gif b/docs/administration/tutorials/static/invoice_investigation_v2.gif
similarity index 100%
rename from docs/tutorials/Administrators/static/invoice_investigation_v2.gif
rename to docs/administration/tutorials/static/invoice_investigation_v2.gif
diff --git a/docs/tutorials/Administrators/static/p1endresultgraph_v2.png b/docs/administration/tutorials/static/p1endresultgraph_v2.png
similarity index 100%
rename from docs/tutorials/Administrators/static/p1endresultgraph_v2.png
rename to docs/administration/tutorials/static/p1endresultgraph_v2.png
diff --git a/docs/tutorials/Administrators/static/p1endresultinvoice_v2.png b/docs/administration/tutorials/static/p1endresultinvoice_v2.png
similarity index 100%
rename from docs/tutorials/Administrators/static/p1endresultinvoice_v2.png
rename to docs/administration/tutorials/static/p1endresultinvoice_v2.png
diff --git a/docs/tutorials/Administrators/static/p1findworkspaceid_v2.png b/docs/administration/tutorials/static/p1findworkspaceid_v2.png
similarity index 100%
rename from docs/tutorials/Administrators/static/p1findworkspaceid_v2.png
rename to docs/administration/tutorials/static/p1findworkspaceid_v2.png
diff --git a/docs/tutorials/Administrators/static/p1orgretention_v2.png b/docs/administration/tutorials/static/p1orgretention_v2.png
similarity index 100%
rename from docs/tutorials/Administrators/static/p1orgretention_v2.png
rename to docs/administration/tutorials/static/p1orgretention_v2.png
diff --git a/docs/tutorials/Administrators/static/p1projectretention.png b/docs/administration/tutorials/static/p1projectretention.png
similarity index 100%
rename from docs/tutorials/Administrators/static/p1projectretention.png
rename to docs/administration/tutorials/static/p1projectretention.png
diff --git a/docs/tutorials/Administrators/static/p1usagegraph_v2.png b/docs/administration/tutorials/static/p1usagegraph_v2.png
similarity index 100%
rename from docs/tutorials/Administrators/static/p1usagegraph_v2.png
rename to docs/administration/tutorials/static/p1usagegraph_v2.png
diff --git a/docs/tutorials/Administrators/static/p2alllimitonly_v2.png b/docs/administration/tutorials/static/p2alllimitonly_v2.png
similarity index 100%
rename from docs/tutorials/Administrators/static/p2alllimitonly_v2.png
rename to docs/administration/tutorials/static/p2alllimitonly_v2.png
diff --git a/docs/tutorials/Administrators/static/p2bothlimits_v2.png b/docs/administration/tutorials/static/p2bothlimits_v2.png
similarity index 100%
rename from docs/tutorials/Administrators/static/p2bothlimits_v2.png
rename to docs/administration/tutorials/static/p2bothlimits_v2.png
diff --git a/docs/tutorials/Administrators/static/p2totalspendlimits_v2.png b/docs/administration/tutorials/static/p2totalspendlimits_v2.png
similarity index 100%
rename from docs/tutorials/Administrators/static/p2totalspendlimits_v2.png
rename to docs/administration/tutorials/static/p2totalspendlimits_v2.png
diff --git a/docs/tutorials/Administrators/static/p2usagelimitsempty_v2.png b/docs/administration/tutorials/static/p2usagelimitsempty_v2.png
similarity index 100%
rename from docs/tutorials/Administrators/static/p2usagelimitsempty_v2.png
rename to docs/administration/tutorials/static/p2usagelimitsempty_v2.png
diff --git a/docs/tutorials/Administrators/static/workspaces.png b/docs/administration/tutorials/static/workspaces.png
similarity index 100%
rename from docs/tutorials/Administrators/static/workspaces.png
rename to docs/administration/tutorials/static/workspaces.png
diff --git a/docs/concepts/admin/admin.mdx b/docs/concepts/admin/admin.mdx
deleted file mode 100644
index 88f8b3f86..000000000
--- a/docs/concepts/admin/admin.mdx
+++ /dev/null
@@ -1,181 +0,0 @@
-# Admin
-
-This conceptual guide covers topics related to managing users, organizations, and workspaces within LangSmith.
-
-## Organizations
-
-An organization is a logical grouping of users within LangSmith with its own billing configuration. Typically, there is one organization per company. An organization can have multiple workspaces. For more details, see the [setup guide](../../how_to_guides/setup/set_up_organization.mdx).
-
-When you log in for the first time, a personal organization will be created for you automatically. If you'd like to collaborate with others, you can create a separate organization and invite your team members to join.
-There are a few important differences between your personal organization and shared organizations:
-
-| Feature | Personal | Shared |
-| ------------------- | ------------------- | ------------------------------------------------------------ |
-| Maximum workspaces | 1 | Variable, depending on plan (see [pricing page](../pricing)) |
-| Collaboration | Cannot invite users | Can invite users |
-| Billing: paid plans | Developer plan only | All other plans available |
-
-## Workspaces
-
-:::info
-Workspaces were formerly called Tenants. Some code and APIs may still reference the old name for a period of time during the transition.
-:::
-
-A workspace is a logical grouping of users and resources within an organization. A workspace separates trust boundaries for resources and access control.
-Users may have permissions in a workspace that grant them access to the resources in that workspace, including tracing projects, datasets, annotation queues, and prompts. For more details, see the [setup guide](../../how_to_guides/setup/set_up_workspace.mdx).
-
-It is recommended to create a separate workspace for each team within your organization. To organize resources even further, you can use [Resource Tags](#resource-tags) to group resources within a workspace.
-
-The following image shows a sample workspace settings page:
-
-
-The following diagram explains the relationship between organizations, workspaces, and the different resources scoped to and within a workspace:
-
-```mermaid
-graph TD
- Organization --> WorkspaceA[Workspace A]
- Organization --> WorkspaceB[Workspace B]
- WorkspaceA --> tg1(Trace Projects)
- WorkspaceA --> tg2(Datasets and Experiments)
- WorkspaceA --> tg3(Annotation Queues)
- WorkspaceA --> tg4(Prompts)
- WorkspaceB --> tg5(Trace Projects)
- WorkspaceB --> tg6(Datasets and Experiments)
- WorkspaceB --> tg7(Annotation Queues)
- WorkspaceB --> tg8(Prompts)
-```
-
-
-
-See the table below for details on which features are available in which scope (organization or workspace):
-
-| Resource/Setting | Scope |
-| --------------------------------------------------------------------------- | ---------------- |
-| Trace Projects | Workspace |
-| Annotation Queues | Workspace |
-| Deployments | Workspace |
-| Datasets & Experiments | Workspace |
-| Prompts | Workspace |
-| Resource Tags | Workspace |
-| API Keys | Workspace |
-| Settings including Secrets, Feedback config, Models, Rules, and Shared URLs | Workspace |
-| User management: Invite User to Workspace | Workspace |
-| RBAC: Assigning Workspace Roles | Workspace |
-| Data Retention, Usage Limits | Workspace\* |
-| Plans and Billing, Credits, Invoices | Organization |
-| User management: Invite User to Organization | Organization\*\* |
-| Adding Workspaces | Organization |
-| Assigning Organization Roles | Organization |
-| RBAC: Creating/Editing/Deleting Custom Roles | Organization |
-
-\* Data retention settings and usage limits will be available soon for the organization level as well
-\*\* Self-hosted installations may enable workspace-level invites of users to the organization via a feature flag.
-See the [self-hosted user management docs](../../self_hosting/configuration/user_management) for details.
-
-## Resource tags
-
-Resource tags allow you to organize resources within a workspace. Each tag is a key-value pair that can be assigned to a resource.
-Tags can be used to filter workspace-scoped resources in the UI and API: Projects, Datasets, Annotation Queues, Deployments, and Experiments.
-
-Each new workspace comes with two default tag keys: `Application` and `Environment`; as the names suggest, these tags can be used to categorize resources based on the application and environment they belong to.
-More tags can be added as needed.
-
-LangSmith resource tags are very similar to tags in cloud services like [AWS](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html).
-
-
-
-## Users
-
-A user is a person who has access to LangSmith. Users can be members of one or more organizations and workspaces within those organizations.
-
-Organization members are managed in organization settings:
-
-
-
-And workspace members are managed in workspace settings:
-
-
-
-## API keys
-
-:::danger Dropped support October 22, 2024
-We have dropped support for `ls__` prefixed API keys on October 22, 2024 in favor of personal access tokens (PATs) and service keys. We recommend using PATs and service keys for all new integrations. API keys prefixed with `ls__` will NO LONGER work.
-:::
-
-API keys are used to authenticate requests to the LangSmith API. They are created by users and scoped to a workspace. This means that all requests made with an API key will be associated with the workspace that the key was created in. The API key will have the ability to create, read, update, delete all resources within that workspace.
-
-These inactive API keys are prefixed with `ls__` and still appear in the UI under the service keys tab, but they no longer work.
-
-### Personal Access Tokens (PATs)
-
-Personal Access Tokens (PATs) are used to authenticate requests to the LangSmith API. They are created by users and scoped to a user. The PAT will have the same permissions as the user that created it.
-
-When a user's permission changes or they are removed from a workspace, that is reflected in the PAT permissions. Similarly, if the user is removed from the org, requests using the PAT will start failing.
-
-PATs are prefixed with `lsv2_pt_`.
-
-### Service keys
-
-Service keys are similar to PATs, but are used to authenticate requests to the LangSmith API on behalf of a service account rather than a human user. Service Keys have Admin permissions in all workspaces present at the time they were created.
-
-Service keys are preferred for API requests scoped to a workspace because they don't rely on a specific user, while PATs are preferred for [API requests scoped to an organization](../../how_to_guides/setup/manage_organization_by_api.mdx) for now.
-
-Service keys are prefixed with `lsv2_sk_`.
-
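The documented prefixes make it straightforward to tell key types apart programmatically. A minimal illustrative sketch (the helper name is ours, not part of the LangSmith SDK):

```python
def classify_langsmith_key(key: str) -> str:
    """Classify a LangSmith credential by its documented prefix."""
    if key.startswith("lsv2_pt_"):
        return "personal access token"
    if key.startswith("lsv2_sk_"):
        return "service key"
    if key.startswith("ls__"):
        return "legacy API key (no longer supported)"
    return "unknown"
```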
-:::note
-To see how to create a service key or Personal Access Token, see the [setup guide](../../how_to_guides/setup/create_account_api_key.mdx)
-:::
-
-## Organization roles
-
-Organization roles are distinct from the Enterprise feature (RBAC) below and are used in the context of multiple [workspaces](#workspaces). Your organization role determines your workspace membership characteristics and your organization-level permissions. See the [organization setup guide](../../how_to_guides/setup/set_up_organization#organization-roles) for more information.
-
-The organization role selected also impacts workspace membership as described here:
-
-- `Organization Admin` grants full access to manage all organization configuration, users, billing, and workspaces. **An `Organization Admin` has `Admin` access to all workspaces in an organization**
-- `Organization User` may read organization information but cannot execute any write actions at the organization level. **An `Organization User` can be added to a subset of workspaces and assigned workspace roles as usual (if RBAC is enabled), which specify permissions at the workspace level.**
-
-:::info
-The `Organization User` role is only available in organizations on plans with multiple workspaces. In organizations limited to a single workspace, all users are `Organization Admins`.
-Custom organization-scoped roles are not available yet.
-:::
-
-See the table below for all organization permissions:
-
-| | Organization User | Organization Admin |
-| ------------------------------------------- | ----------------- | ------------------ |
-| View organization configuration | ✅ | ✅ |
-| View organization roles | ✅ | ✅ |
-| View organization members | ✅ | ✅ |
-| View data retention settings | ✅ | ✅ |
-| View usage limits | ✅ | ✅ |
-| Admin access to all workspaces | | ✅ |
-| Manage billing settings | | ✅ |
-| Create workspaces | | ✅ |
-| Create, edit, and delete organization roles | | ✅ |
-| Invite new users to organization | | ✅ |
-| Delete user invites | | ✅ |
-| Remove users from an organization | | ✅ |
-| Update data retention settings\* | | ✅ |
-| Update usage limits\* | | ✅ |
-
-## Workspace roles (RBAC) {#workspace-roles}
-
-:::note
-RBAC (Role-Based Access Control) is a feature that is only available to Enterprise customers. If you are interested in this feature, please contact our sales team at sales@langchain.dev
-Other plans default to using the Admin role for all users.
-:::
-
-Roles are used to define the set of permissions that a user has within a workspace. There are three built-in system roles that cannot be edited:
-
-- `Admin` - has full access to all resources within the workspace
-- `Viewer` - has read-only access to all resources within the workspace
-- `Editor` - has full permissions except for workspace management (adding/removing users, changing roles, configuring service keys)
-
-Organization admins can also create/edit custom roles with specific permissions for different resources.
-
-Roles can be managed in organization settings under the `Roles` tab:
-
-
-
-For more details on assigning and creating roles, see the [access control setup guide](../../how_to_guides/setup/set_up_access_control.mdx).
diff --git a/docs/concepts/index.md b/docs/concepts/index.md
deleted file mode 100644
index 5ba3cfe21..000000000
--- a/docs/concepts/index.md
+++ /dev/null
@@ -1,40 +0,0 @@
-# Concepts
-
-Explanations, clarification and discussion of key topics in LangSmith.
-
-## Admin
-
-- [Organizations](./concepts/admin#organizations)
-- [Workspaces](./concepts/admin#workspaces)
-- [Users](./concepts/admin#users)
-- [API keys](./concepts/admin#api-keys)
- - [Personal Access Tokens (PATs)](./concepts/admin#personal-access-tokens-pats)
- - [Service keys](./concepts/admin#service-keys)
-- [Roles](./concepts/admin#roles)
- - [Organization roles](./concepts/admin#organization-roles)
- - [Workspace roles](./concepts/admin#workspace-roles)
-
-## Tracing
-
-- [Runs](./concepts/tracing#runs)
-- [Traces](./concepts/tracing#traces)
-- [Projects](./concepts/tracing#projects)
-- [Feedback](./concepts/tracing#feedback)
-- [Tags](./concepts/tracing#tags)
-- [Metadata](./concepts/tracing#metadata)
-
-## Evaluation
-
-- [Datasets and examples](./concepts/evaluation#datasets-and-examples)
-- [Experiments](./concepts/evaluation#experiments)
-- [Evaluators](./concepts/evaluation#evaluators)
-
-## Prompts
-
-- [Prompt types](./concepts/prompts#prompt-types)
-- [Template formats](./concepts/prompts#template-formats)
-
-## Usage and Billing
-
-- [Data Retention](./concepts/usage_and_billing/data_retention_billing)
-- [Usage Limits](./concepts/usage_and_billing/usage_limits)
diff --git a/docs/concepts/static/langsmith_app_flow.png b/docs/concepts/static/langsmith_app_flow.png
deleted file mode 100644
index 377d32731..000000000
Binary files a/docs/concepts/static/langsmith_app_flow.png and /dev/null differ
diff --git a/docs/concepts/static/langsmith_app_flow_dark.png b/docs/concepts/static/langsmith_app_flow_dark.png
deleted file mode 100644
index a88f7cecd..000000000
Binary files a/docs/concepts/static/langsmith_app_flow_dark.png and /dev/null differ
diff --git a/docs/concepts/static/langsmith_summary_dark.png b/docs/concepts/static/langsmith_summary_dark.png
deleted file mode 100644
index cf44965a5..000000000
Binary files a/docs/concepts/static/langsmith_summary_dark.png and /dev/null differ
diff --git a/docs/concepts/static/sample_langsmith_dataset.png b/docs/concepts/static/sample_langsmith_dataset.png
deleted file mode 100644
index d61f1c335..000000000
Binary files a/docs/concepts/static/sample_langsmith_dataset.png and /dev/null differ
diff --git a/docs/concepts/static/sample_langsmith_example.png b/docs/concepts/static/sample_langsmith_example.png
deleted file mode 100644
index 4f99c2000..000000000
Binary files a/docs/concepts/static/sample_langsmith_example.png and /dev/null differ
diff --git a/docs/concepts/usage_and_billing/data_retention_billing.mdx b/docs/concepts/usage_and_billing/data_retention_billing.mdx
deleted file mode 100644
index 169e41536..000000000
--- a/docs/concepts/usage_and_billing/data_retention_billing.mdx
+++ /dev/null
@@ -1,114 +0,0 @@
-# Data Retention
-
-In May 2024, LangSmith introduced a maximum data retention period on traces of 400 days. In June 2024, LangSmith introduced
-a new data retention based pricing model where customers can configure a shorter data retention period on traces in exchange
-for savings up to 10x. On this page, we'll go through how data retention works and is priced in LangSmith.
-
-## Why retention matters
-
-### Privacy
-
-Many data privacy regulations, such as GDPR in Europe or CCPA in California, require organizations to delete personal data
-once it's no longer necessary for the purposes for which it was collected. Setting retention periods aids in compliance with
-such regulations.
-
-### Cost
-
-LangSmith charges less for traces that have low data retention. See our tutorial on how to [optimize spend](../../tutorials/Administrators/manage_spend)
-for details.
-
-## How it works
-
-### The basics
-
-LangSmith now has two tiers of traces based on Data Retention with the following characteristics:
-
-| | Base | Extended |
-| -------------------- | ---------------- | -------------- |
-| **Price** | $.50 / 1k traces | $5 / 1k traces |
-| **Retention Period** | 14 days | 400 days |
-
-### Data deletion after retention ends
-
-After the specified retention period, traces are no longer accessible via the runs table or API. All user data associated
-with the trace (e.g. inputs and outputs) is deleted from our internal systems within a day thereafter. Some metadata
-associated with each trace may be retained indefinitely for analytics and billing purposes.
-
-### Data retention auto-upgrades
-
-:::caution
-Auto upgrades can have an impact on your bill. Please read this section carefully to fully understand your
-estimated LangSmith tracing costs.
-:::
-
-When you use certain features with `base` tier traces, their data retention will be automatically upgraded to
-`extended` tier. This will increase both the retention period, and the cost of the trace.
-
-A trace will be upgraded to `extended` tier when any of the following occurs:
-
-- **Feedback** is added to any run on the trace
-- An **Annotation Queue** receives any run from the trace
-- A **Run Rule** matches any run within a trace
-
-#### Why auto-upgrade traces?
-
-We have two reasons behind the auto-upgrade model for tracing:
-
-1. We think that traces that match any of these conditions are fundamentally more interesting than other traces, and
- therefore it is good for users to be able to keep them around longer.
-2. We philosophically want to charge customers an order of magnitude lower for traces that may not be interacted with meaningfully.
- We think auto-upgrades align our pricing model with the value that LangSmith brings, where only traces with meaningful interaction
- are charged at a higher rate.
-
-If you have questions or concerns about our pricing model, please feel free to reach out to support@langchain.dev and let us know your thoughts!
-
-### How does data retention affect downstream features?
-
-#### Annotation Queues, Run Rules, and Feedback
-
-Traces that use these features will be [auto-upgraded](#data-retention-auto-upgrades).
-
-#### Monitoring
-
-The monitoring tab will continue to work even after a base tier trace's data retention period ends. It is powered by
-trace metadata that exists for >30 days, meaning that your monitoring graphs will continue to stay accurate even on
-`base` tier traces.
-
-#### Datasets
-
-Datasets have an indefinite data retention period. In other words, if you add a trace's inputs and outputs to a dataset,
-they will never be deleted. We suggest that if you are using LangSmith for data collection, you take advantage of the datasets
-feature.
-
-## Billing model
-
-### Billable metrics
-
-On your LangSmith invoice, you will see two metrics that we charge for:
-
-- LangSmith Traces (Base Charge)
-- LangSmith Traces (Extended Data Retention Upgrades)
-
-The first metric includes all traces, regardless of tier. The second metric just counts the number of extended retention traces.
-
-### Why measure all traces + upgrades instead of base and extended traces?
-
-A natural question to ask when considering our pricing is why not just show the number of `base` tier and `extended` tier
-traces directly on the invoice?
-
-While we understand this would be more straightforward, it doesn't account for trace upgrades properly. Consider a
-`base` tier trace that was recorded on June 30, and upgraded to `extended` tier on July 3. The `base` tier
-trace occurred in the June billing period, but the upgrade occurred in the July billing period. Therefore,
-we need to be able to measure these two events independently to properly bill our customers.
-
-If your trace was recorded as an extended retention trace, then the `base` and `extended` metrics will both be recorded
-with the same timestamp.
-
-### Cost breakdown
-
-The Base Charge for a trace is .05¢. We priced the upgrade such that an `extended` retention trace
-costs 10x the price of a `base` tier trace (.50¢ per trace, across both metrics). Thus, each upgrade costs .45¢.
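Putting the two billable metrics together, an estimated monthly tracing charge can be sketched as follows (the rates are hardcoded from the per-trace prices stated in this section; the function is our illustration, not an official calculator):

```python
# Per-trace rates in USD: base = .05 cents per trace, upgrade = .45 cents per upgrade.
BASE_RATE = 0.0005     # charged for every trace, regardless of tier
UPGRADE_RATE = 0.0045  # charged once per extended-retention trace

def estimated_tracing_charge(total_traces: int, extended_traces: int) -> float:
    """Estimate the tracing charge from the two billable metrics."""
    return total_traces * BASE_RATE + extended_traces * UPGRADE_RATE

# 100,000 traces, of which 10,000 are extended retention:
# 100,000 * $0.0005 + 10,000 * $0.0045 = $50 + $45 = $95
```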
-
-## Related content
-
-- Tutorial on how to [optimize spend](../../tutorials/Administrators/manage_spend)
diff --git a/docs/concepts/usage_and_billing/index.md b/docs/concepts/usage_and_billing/index.md
deleted file mode 100644
index 71aa8acfc..000000000
--- a/docs/concepts/usage_and_billing/index.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_label: Usage and Billing
----
-
-# Usage and Billing
-
-- [Data Retention](./usage_and_billing/data_retention_billing)
-- [Usage Limits](./usage_and_billing/usage_limits)
-- [Rate Limits](./usage_and_billing/rate_limits)
diff --git a/docs/concepts/usage_and_billing/rate_limits.mdx b/docs/concepts/usage_and_billing/rate_limits.mdx
deleted file mode 100644
index ae35e1f25..000000000
--- a/docs/concepts/usage_and_billing/rate_limits.mdx
+++ /dev/null
@@ -1,84 +0,0 @@
-# Rate Limits
-
-LangSmith has rate limits which are designed to ensure the stability of the service for all users.
-
-To ensure access and stability, LangSmith will respond with HTTP Status Code 429 indicating that rate or usage limits have been exceeded under the following circumstances:
-
-## Scenarios
-
-### Temporary throughput limit over a 1 minute period at our application load balancer
-
-This 429 is the result of exceeding a fixed number of API calls over a 1 minute window on a per API key/access token basis. The start of the window will vary slightly — it is not guaranteed to start at the start of a clock minute — and may change depending on application deployment events.
-
-After the maximum number of events is received, we respond with a 429 until 60 seconds have elapsed from the start of the evaluation window, and then the process repeats.
-
-This 429 is thrown by our application load balancer and is a mechanism in place for all LangSmith users independent of plan tier to ensure continuity of service for all users.
-
-| Method | Endpoint | Limit | Window |
-| ------------- | -------- | ----- | -------- |
-| DELETE | Sessions | 30 | 1 minute |
-| POST OR PATCH | Runs | 5000 | 1 minute |
-| POST | Feedback | 5000 | 1 minute |
-| \* | \* | 2000 | 1 minute |
-
-:::note
-The LangSmith SDK takes steps to minimize the likelihood of reaching these limits on run-related endpoints by batching up to 100 runs from a single session ID into a single API call.
-:::
-
-### Plan-level hourly trace event limit
-
-This 429 is the result of reaching your maximum hourly events ingested and is evaluated in a fixed window starting at the beginning of each clock hour in UTC and resets at the top of each new hour.
-
-An event in this context is the creation or update of a run. So if a run is created, then subsequently updated in the same hourly window, that counts as 2 events against this limit.
-
-This is thrown by our application and varies by plan tier, with organizations on our Startup/Plus and Enterprise plan tiers having higher hourly limits than our Free and Developer Plan Tiers which are designed for personal use.
-
-| Plan | Limit | Window |
-| -------------------------------- | -------------- | ------ |
-| Developer (no payment on file) | 50,000 events | 1 hour |
-| Developer (with payment on file) | 250,000 events | 1 hour |
-| Startup/Plus | 500,000 events | 1 hour |
-| Enterprise | Custom | Custom |
-
-### Plan-level hourly trace data ingest limit
-
-This 429 is the result of reaching the maximum amount of data ingested across your trace inputs, outputs, and metadata and is evaluated in a fixed window starting at the beginning of each clock hour in UTC and resets at the top of each new hour.
-
-Typically, inputs, outputs, and metadata are sent on both run creation and update events. So if a run is created and is 2.0MB in size at creation, and 3.0MB in size when updated in the same hourly window, that will count as 5.0MB of storage against this limit.
-
-This is thrown by our application and varies by plan tier, with organizations on our Startup/Plus and Enterprise plan tiers having higher hourly limits than our Free and Developer Plan Tiers which are designed for personal use.
-
-| Plan | Limit | Window |
-| -------------------------------- | ------ | ------ |
-| Developer (no payment on file) | 500MB | 1 hour |
-| Developer (with payment on file) | 2.5GB | 1 hour |
-| Startup/Plus | 5.0GB | 1 hour |
-| Enterprise | Custom | Custom |
-
-### Plan-level monthly unique traces limit
-
-This 429 is the result of reaching your maximum monthly traces ingested and is evaluated in a fixed window starting at the beginning of each calendar month in UTC and resets at the beginning of each new month.
-
-This is thrown by our application and applies only to the Developer Plan Tier when there is no payment method on file.
-
-| Plan | Limit | Window |
-| ------------------------------ | ------------ | ------- |
-| Developer (no payment on file) | 5,000 traces | 1 month |
-
-### Self-configured monthly usage limits
-
-This 429 is the result of reaching your [usage limit](./usage_limits.mdx) as configured by your organization admin and is evaluated in a fixed window starting at the beginning of each calendar month in UTC and resets at the beginning of each new month.
-
-This is thrown by our application and varies by organization based on their configured settings.
-
-## Handling 429s responses in your application
-
-Since some 429 responses are temporary and may succeed on a successive call, if you are directly calling the LangSmith API in your application we recommend implementing retry logic with exponential backoff and jitter.
-
-For convenience, LangChain applications built with the LangSmith SDK have this capability built in.
-
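A minimal sketch of the recommended retry pattern, assuming a `send_request` callable that returns an HTTP status code and body (the names and defaults are ours for illustration, not part of any SDK):

```python
import random
import time

def retry_with_backoff(send_request, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Call send_request(), retrying on HTTP 429 with exponential backoff and jitter.

    send_request is expected to return a (status_code, body) tuple.
    """
    for attempt in range(max_retries + 1):
        status, body = send_request()
        if status != 429:
            return status, body
        if attempt == max_retries:
            break  # out of retries; surface the last 429 response
        # Full jitter: sleep a random duration up to an exponentially growing cap.
        delay = min(max_delay, base_delay * (2 ** attempt))
        time.sleep(random.uniform(0, delay))
    return status, body
```

The jitter spreads retries from concurrent clients across time, which avoids synchronized retry storms against the rate-limited endpoint.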
-:::note
-It is important to note that if you are saturating the endpoints for extended periods of time, retries may not be effective, as your application will eventually build up a backlog large enough to exhaust all retries.
-
-If that is the case, we would like to discuss your needs more specifically. Please reach out to [LangSmith Support](mailto:support@langchain.dev) with details about your application's throughput needs and sample code, and we can work with you to determine whether the best approach is fixing a bug, changing your application code, or a different LangSmith plan.
-:::
diff --git a/docs/concepts/usage_and_billing/usage_limits.mdx b/docs/concepts/usage_and_billing/usage_limits.mdx
deleted file mode 100644
index 26768d11d..000000000
--- a/docs/concepts/usage_and_billing/usage_limits.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
-# Usage Limits
-
-:::note
-This page assumes that you have already read our guide on [data retention](./data_retention_billing). Please read that page before proceeding.
-:::
-
-## How usage limits work
-
-LangSmith lets you configure usage limits on tracing. Note that these are _usage_ limits, not _spend_ limits, which
-means they limit the number of occurrences of some event rather than the total amount you will spend.
-
-LangSmith lets you set two different monthly limits, mirroring our Billable Metrics discussed in the aforementioned data retention guide:
-
-- All traces limit
-- Extended data retention traces limit
-
-These let you limit the total number of traces and the number of extended data retention traces, respectively.
-
-## Properties of usage limiting
-
-Usage limiting is approximate, meaning that we do not guarantee the exactness of the limit. In rare cases, there
-may be a small period of time where additional traces are processed above the limit threshold before usage limiting
-begins to apply.
-
-## Side effects of extended data retention traces limit
-
-The extended data retention traces limit has side effects. If the limit is already reached, any feature that could
-cause an auto-upgrade of tracing tiers becomes inaccessible. This is because an auto-upgrade of a trace would cause
-another extended retention trace to be created, which the limit would not allow. Therefore, you can
-no longer:
-
-1. match run rules
-2. add feedback to traces
-3. add runs to annotation queues
-
-Each of these features may cause an auto-upgrade, so we shut them off when the limit is reached.
-
-## Updating usage limits
-
-Usage limits can be updated from the `Settings` page under `Usage and Billing`. Limit values are cached, so it
-may take a minute or two before the new limits apply.
-
-## Related content
-
-- Tutorial on how to [optimize spend](../../tutorials/Administrators/manage_spend)
diff --git a/docs/concepts/evaluation/evaluation.mdx b/docs/evaluation/concepts/index.mdx
similarity index 98%
rename from docs/concepts/evaluation/evaluation.mdx
rename to docs/evaluation/concepts/index.mdx
index ed3e164f7..43f50c4a3 100644
--- a/docs/concepts/evaluation/evaluation.mdx
+++ b/docs/evaluation/concepts/index.mdx
@@ -1,4 +1,4 @@
-# Evaluation
+# Concepts
The pace of AI application development is often rate-limited by high-quality evaluations because there is a paradox of choice. Developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost. High quality evaluations can help you rapidly answer these types of questions with confidence.
@@ -7,7 +7,7 @@ LangSmith allows you to build high-quality evaluations for your AI application.
- `Dataset`: These are the inputs to your application used for conducting evaluations.
- `Evaluator`: An evaluator is a function responsible for scoring your AI application based on the provided dataset.
-
+
## Datasets
@@ -159,11 +159,11 @@ This can be the case for tasks like summarization - it may be hard to give a sum
We can visualize the above ideas collectively in the below diagram. To review, `datasets` are composed of `examples` that can be curated from a variety of sources such as historical logs or user curated examples. `Evaluators` are functions that score how well your application performs on each `example` in your `dataset`. Evaluators can use different scoring functions, such as `human`, `heuristic`, `LLM-as-judge`, or `pairwise`. And if the `dataset` contains `reference` outputs, then the evaluator can compare the application output to the `reference`.
-
+
Each time we run an evaluation, we are conducting an experiment. An experiment is a single execution of all the example inputs in your `dataset` through your `task`. Typically, we will run multiple experiments on a given `dataset`, testing different tweaks to our `task` (e.g., different prompts or LLMs). In LangSmith, you can easily view all the experiments associated with your `dataset` and track your application's performance over time. Additionally, you can compare multiple experiments in a comparison view.
-
+
In the `Dataset` section above, we discussed a few ways to build datasets (e.g., from historical logs or manual curation). One common way to use these datasets is offline evaluation, which is usually conducted prior to deployment of your LLM application. Below we'll discuss a few common paradigms for offline evaluation.
@@ -196,7 +196,7 @@ They are also commonly done when evaluating new or different models.
LangSmith's comparison view has native support for regression testing, allowing you to quickly see examples that have changed relative to the baseline (with regressions on specific examples shown in red and improvements in green):
-
+
### Back-testing
@@ -268,11 +268,11 @@ Below, we will discuss evaluation of a few specific, popular LLM applications.
[LLM-powered autonomous agents](https://lilianweng.github.io/posts/2023-06-23-agent/) combine three components (1) Tool calling, (2) Memory, and (3) Planning. Agents [use tool calling](https://python.langchain.com/v0.1/docs/modules/agents/agent_types/tool_calling/) with planning (e.g., often via prompting) and memory (e.g., often short-term message history) to generate responses. [Tool calling](https://python.langchain.com/v0.1/docs/modules/model_io/chat/function_calling/) allows a model to respond to a given prompt by generating two things: (1) a tool to invoke and (2) the input arguments required.
-
+
 Below is a tool-calling agent in [LangGraph](https://langchain-ai.github.io/langgraph/tutorials/introduction/). The `assistant node` is an LLM that determines whether to invoke a tool based upon the input. The `tool condition` sees if a tool was selected by the `assistant node` and, if so, routes to the `tool node`. The `tool node` executes the tool and returns the output as a tool message to the `assistant node`. This loop continues as long as the `assistant node` selects a tool. If no tool is selected, then the agent directly returns the LLM response.
-
+
This sets up three general types of agent evaluations that users are often interested in:
@@ -280,7 +280,7 @@ This sets up three general types of agent evaluations that users are often inter
- `Single step`: Evaluate any agent step in isolation (e.g., whether it selects the appropriate tool).
- `Trajectory`: Evaluate whether the agent took the expected path (e.g., of tool calls) to arrive at the final answer.
-
+
Below we will cover what these are, the components (inputs, outputs, evaluators) needed for each one, and when you should consider this.
Note that you likely will want to do multiple (if not all!) of these types of evaluations - they are not mutually exclusive!
@@ -373,7 +373,7 @@ When evaluating RAG applications, a key consideration is whether you have (or ca
`LLM-as-judge` is a commonly used evaluator for RAG because it's an effective way to evaluate factual accuracy or consistency between texts.
-
+
When evaluating RAG applications, you have two main options:
diff --git a/docs/concepts/static/agent_eval.png b/docs/evaluation/concepts/static/agent_eval.png
similarity index 100%
rename from docs/concepts/static/agent_eval.png
rename to docs/evaluation/concepts/static/agent_eval.png
diff --git a/docs/concepts/static/comparing_multiple_experiments.png b/docs/evaluation/concepts/static/comparing_multiple_experiments.png
similarity index 100%
rename from docs/concepts/static/comparing_multiple_experiments.png
rename to docs/evaluation/concepts/static/comparing_multiple_experiments.png
diff --git a/docs/concepts/static/langgraph_agent.png b/docs/evaluation/concepts/static/langgraph_agent.png
similarity index 100%
rename from docs/concepts/static/langgraph_agent.png
rename to docs/evaluation/concepts/static/langgraph_agent.png
diff --git a/docs/concepts/static/langsmith_overview.png b/docs/evaluation/concepts/static/langsmith_overview.png
similarity index 100%
rename from docs/concepts/static/langsmith_overview.png
rename to docs/evaluation/concepts/static/langsmith_overview.png
diff --git a/docs/concepts/static/langsmith_summary.png b/docs/evaluation/concepts/static/langsmith_summary.png
similarity index 100%
rename from docs/concepts/static/langsmith_summary.png
rename to docs/evaluation/concepts/static/langsmith_summary.png
diff --git a/docs/concepts/static/rag-types.png b/docs/evaluation/concepts/static/rag-types.png
similarity index 100%
rename from docs/concepts/static/rag-types.png
rename to docs/evaluation/concepts/static/rag-types.png
diff --git a/docs/concepts/static/regression.png b/docs/evaluation/concepts/static/regression.png
similarity index 100%
rename from docs/concepts/static/regression.png
rename to docs/evaluation/concepts/static/regression.png
diff --git a/docs/concepts/static/tool_use.png b/docs/evaluation/concepts/static/tool_use.png
similarity index 100%
rename from docs/concepts/static/tool_use.png
rename to docs/evaluation/concepts/static/tool_use.png
diff --git a/docs/how_to_guides/datasets/_category_.json b/docs/evaluation/how_to_guides/datasets/_category_.json
similarity index 100%
rename from docs/how_to_guides/datasets/_category_.json
rename to docs/evaluation/how_to_guides/datasets/_category_.json
diff --git a/docs/how_to_guides/datasets/index_datasets_for_dynamic_few_shot_example_selection.mdx b/docs/evaluation/how_to_guides/datasets/index_datasets_for_dynamic_few_shot_example_selection.mdx
similarity index 95%
rename from docs/how_to_guides/datasets/index_datasets_for_dynamic_few_shot_example_selection.mdx
rename to docs/evaluation/how_to_guides/datasets/index_datasets_for_dynamic_few_shot_example_selection.mdx
index 24b6d1328..3e44bb925 100644
--- a/docs/how_to_guides/datasets/index_datasets_for_dynamic_few_shot_example_selection.mdx
+++ b/docs/evaluation/how_to_guides/datasets/index_datasets_for_dynamic_few_shot_example_selection.mdx
@@ -30,7 +30,7 @@ Configure your datasets so that you can search for few shot examples based on an
Navigate to the datasets UI, and click the new `Few-Shot search` tab. Hit the `Start sync` button, which will
create a new index on your dataset to make it searchable.
-
+
By default, we sync to the latest version of your dataset. That means when new examples are added to your dataset, they
will automatically be added to your index. This process runs every few minutes, so there should be a very short delay for
@@ -41,11 +41,11 @@ in the next section.
Now that you have turned on indexing for your dataset, you will see the new few shot playground.
-
+
You can type in a sample input, and check which results would be returned by our search API.
-
+
Each result will have a score and a link to the example in the dataset. The scoring system works such that
0 is a completely random result, and higher scores are better. Results will be sorted in descending order
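When consuming results like these programmatically, you might rank them and drop near-random matches (an illustrative sketch; the result shape and threshold are assumptions, not the actual API response):

```python
results = [
    {"example_id": "a", "score": 3.2},
    {"example_id": "b", "score": 0.1},
    {"example_id": "c", "score": 1.7},
]

# Sort descending by score (as the UI does) and keep only confident matches.
ranked = sorted(results, key=lambda r: r["score"], reverse=True)
good = [r for r in ranked if r["score"] > 0.5]
# good keeps examples "a" then "c"
```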
@@ -62,7 +62,7 @@ may evolve over time. They are simply used for convenience in vibe-testing outpu
After clicking the `Get Code Snippet` button shown in the previous diagram, you'll be taken to a screen with
code snippets from our LangSmith SDK in different languages.
-
+
For code samples on using few shot search in LangChain python applications, please see our [how-to guide
in the LangChain docs](https://python.langchain.com/v0.2/docs/how_to/example_selectors_langsmith/).
diff --git a/docs/how_to_guides/datasets/manage_datasets_in_application.mdx b/docs/evaluation/how_to_guides/datasets/manage_datasets_in_application.mdx
similarity index 78%
rename from docs/how_to_guides/datasets/manage_datasets_in_application.mdx
rename to docs/evaluation/how_to_guides/datasets/manage_datasets_in_application.mdx
index 3fb682b76..07c2df7c9 100644
--- a/docs/how_to_guides/datasets/manage_datasets_in_application.mdx
+++ b/docs/evaluation/how_to_guides/datasets/manage_datasets_in_application.mdx
@@ -7,7 +7,7 @@ sidebar_position: 1
:::tip Recommended Reading
Before diving into this content, it might be helpful to read the following:
-- [Concepts guide on evaluation and datasets](../../concepts/evaluation#datasets-and-examples)
+- [Concepts guide on evaluation and datasets](../../concepts#datasets-and-examples)
:::
@@ -17,29 +17,29 @@ The easiest way to interact with datasets is directly in the LangSmith app. Here
To get started, you can create a new dataset by heading to the "Datasets and Testing" section of the application and clicking on "+ New Dataset".
-
+
-Then, enter the relevant dataset details, including a name, optional description, and dataset type. Please see the [concepts](../../concepts/evaluation#datasets-and-examples) for more information on dataset types. For most flexibility, the key-value dataset type is recommended.
+Then, enter the relevant dataset details, including a name, optional description, and dataset type. Please see the [concepts](../../concepts#datasets-and-examples) for more information on dataset types. For most flexibility, the key-value dataset type is recommended.
-
+
You can then add examples to the dataset by clicking on "Add Example". Here, you can enter the input and output as JSON objects.
-
+
## Dataset schema validation
If you are creating a key-value dataset, you may optionally define a schema for your dataset. All examples you create will be validated against this schema.
-
+
Dataset schemas are defined with standard [JSON schemas](https://json-schema.org/). If you would rather manually enter raw JSON, click "Editor" at the bottom of the schema editor and then select "JSON".
-
+
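As an illustration, a key-value dataset schema might constrain example inputs to a single `question` string (a hypothetical schema; the field names are examples only):

```python
# A standard JSON schema for example inputs, expressed as a Python dict.
input_schema = {
    "type": "object",
    "properties": {"question": {"type": "string"}},
    "required": ["question"],
    "additionalProperties": False,
}

# A tiny stdlib-only check of the constraints validation would enforce:
candidate = {"question": "What is LangSmith?"}
is_valid = (
    isinstance(candidate, dict)
    and set(candidate) <= set(input_schema["properties"])
    and all(k in candidate for k in input_schema["required"])
    and isinstance(candidate["question"], str)
)
```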
If you have defined a schema for your dataset, you will get easy validation when creating new examples:
-
+
## Add inputs and outputs from traces to datasets
@@ -49,30 +49,30 @@ You can do this from any 'run' details page by clicking the 'Add to Dataset' but
:::tip
An extremely powerful technique to build datasets is to drill-down into the most interesting traces, such as traces that were tagged with poor user feedback, and add them to a dataset.
-For tips on how to filter traces, see the [filtering traces](../monitoring/filter_traces_in_application) guide.
+For tips on how to filter traces, see the [filtering traces](../../../observability/how_to_guides/monitoring/filter_traces_in_application) guide.
:::
:::tip automations
-You can use [automations](../monitoring/rules) to automatically add traces to a dataset based on certain conditions. For example, you could add all traces that have a certain tag to a dataset.
+You can use [automations](../../../observability/how_to_guides/monitoring/rules) to automatically add traces to a dataset based on certain conditions. For example, you could add all traces that have a certain tag to a dataset.
:::
-
+
From there, we select the dataset to organize it in and update the ground truth output values if necessary.
-
+
## Upload a CSV file to create a dataset
The easiest way to create a dataset from your own data is by clicking the 'upload a CSV dataset' button on the home page or in the top right-hand corner of the 'Datasets & Testing' page.
-
+
Select a name and description for the dataset, and then confirm that the inferred input and output columns are correct.
-
+
## Generate synthetic examples
@@ -81,11 +81,11 @@ For a dataset with a specified schema, you can generate synthetic examples to en
1. **Select few-shot examples**: Choose a set of examples to guide the LLM's generation. You can manually select these examples from your dataset or use the automatic selection option.
2. **Specify the number of examples**: Enter the number of synthetic examples you want to generate.
3. **Configure API Key**: Ensure your OpenAI API key is entered at the "API Key" link.
- 
+ 
After clicking "Generate," the examples will appear on the page. You can choose which examples to add to your dataset, with the option to edit them before finalizing.
Each example will be validated against your specified dataset schema and tagged as "synthetic" in the source metadata.
-
+
## Export a dataset
@@ -94,11 +94,11 @@ You can export your LangSmith dataset to CSV or OpenAI evals format directly fro
To do so, select a dataset, click on "Examples", and then click the "Export Dataset" button at the top of the examples table.
-
+
This will open a modal where you can select the format you want to export to.
-
+
## Create and manage dataset splits
@@ -119,23 +119,23 @@ evaluate your application.
In order to create and manage splits in the app, you can select some examples in your dataset and click "Add to Split". From the resulting popup menu,
you can select and unselect splits for the selected examples, or create a new split.
-
+
## Edit example metadata
You can add metadata to your examples by clicking on an example and then clicking on the "Metadata" tab in the side pane.
From this page, you can update/delete existing metadata, or add new metadata. You may use this to store information about
-your examples, such as tags or version info, which you can [then filter by when you call `list_examples` in the SDK](/how_to_guides/datasets/manage_datasets_programmatically#list-examples-by-metadata).
+your examples, such as tags or version info, which you can [then filter by when you call `list_examples` in the SDK](./manage_datasets_programmatically#list-examples-by-metadata).
-
+
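The metadata filter semantics can be sketched as an "all requested key/value pairs must match" check (an illustrative sketch of the behavior, not the SDK implementation):

```python
examples = [
    {"id": 1, "metadata": {"tag": "prod", "version": "v1"}},
    {"id": 2, "metadata": {"tag": "dev"}},
]

def filter_by_metadata(examples, wanted):
    # Keep examples whose metadata contains every requested key/value pair.
    return [
        ex for ex in examples
        if all(ex["metadata"].get(k) == v for k, v in wanted.items())
    ]

prod_examples = filter_by_metadata(examples, {"tag": "prod"})
# prod_examples contains only the example with id 1
```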
## Filter examples
You can filter examples by metadata key/value or full-text search. To filter examples, click "Filter" in the top left of the table:
-
+
Next, click "Add filter" and select "Full Text" or "Metadata" from the resulting dropdown. You may add multiple filters, and only examples that satisfy all of the
filters will be displayed in the table.
-
+
diff --git a/docs/how_to_guides/datasets/manage_datasets_programmatically.mdx b/docs/evaluation/how_to_guides/datasets/manage_datasets_programmatically.mdx
similarity index 98%
rename from docs/how_to_guides/datasets/manage_datasets_programmatically.mdx
rename to docs/evaluation/how_to_guides/datasets/manage_datasets_programmatically.mdx
index c607a51c6..d0042f284 100644
--- a/docs/how_to_guides/datasets/manage_datasets_programmatically.mdx
+++ b/docs/evaluation/how_to_guides/datasets/manage_datasets_programmatically.mdx
@@ -84,7 +84,7 @@ await client.createExamples({
## Create a dataset from traces
To create datasets from the runs (spans) of your traces, you can use the same approach.
-For **many** more examples of how to fetch and filter runs, see the [export traces](../tracing/export_traces) guide.
+For **many** more examples of how to fetch and filter runs, see the [export traces](../../../observability/how_to_guides/tracing/export_traces) guide.
Below is an example:
**Shared URLs**, then click on **Unshare** next to the dataset you want to unshare.
- 
+ 
diff --git a/docs/how_to_guides/static/add_manual_example.png b/docs/evaluation/how_to_guides/datasets/static/add_manual_example.png
similarity index 100%
rename from docs/how_to_guides/static/add_manual_example.png
rename to docs/evaluation/how_to_guides/datasets/static/add_manual_example.png
diff --git a/docs/how_to_guides/static/add_metadata.png b/docs/evaluation/how_to_guides/datasets/static/add_metadata.png
similarity index 100%
rename from docs/how_to_guides/static/add_metadata.png
rename to docs/evaluation/how_to_guides/datasets/static/add_metadata.png
diff --git a/docs/how_to_guides/static/add_to_dataset.png b/docs/evaluation/how_to_guides/datasets/static/add_to_dataset.png
similarity index 100%
rename from docs/how_to_guides/static/add_to_dataset.png
rename to docs/evaluation/how_to_guides/datasets/static/add_to_dataset.png
diff --git a/docs/how_to_guides/static/add_to_split2.png b/docs/evaluation/how_to_guides/datasets/static/add_to_split2.png
similarity index 100%
rename from docs/how_to_guides/static/add_to_split2.png
rename to docs/evaluation/how_to_guides/datasets/static/add_to_split2.png
diff --git a/docs/how_to_guides/static/create_dataset_csv.png b/docs/evaluation/how_to_guides/datasets/static/create_dataset_csv.png
similarity index 100%
rename from docs/how_to_guides/static/create_dataset_csv.png
rename to docs/evaluation/how_to_guides/datasets/static/create_dataset_csv.png
diff --git a/docs/how_to_guides/static/custom_json_schema.png b/docs/evaluation/how_to_guides/datasets/static/custom_json_schema.png
similarity index 100%
rename from docs/how_to_guides/static/custom_json_schema.png
rename to docs/evaluation/how_to_guides/datasets/static/custom_json_schema.png
diff --git a/docs/how_to_guides/static/dataset_schema_definition.png b/docs/evaluation/how_to_guides/datasets/static/dataset_schema_definition.png
similarity index 100%
rename from docs/how_to_guides/static/dataset_schema_definition.png
rename to docs/evaluation/how_to_guides/datasets/static/dataset_schema_definition.png
diff --git a/docs/how_to_guides/static/enter_dataset_details.png b/docs/evaluation/how_to_guides/datasets/static/enter_dataset_details.png
similarity index 100%
rename from docs/how_to_guides/static/enter_dataset_details.png
rename to docs/evaluation/how_to_guides/datasets/static/enter_dataset_details.png
diff --git a/docs/how_to_guides/static/export-dataset-button.png b/docs/evaluation/how_to_guides/datasets/static/export-dataset-button.png
similarity index 100%
rename from docs/how_to_guides/static/export-dataset-button.png
rename to docs/evaluation/how_to_guides/datasets/static/export-dataset-button.png
diff --git a/docs/how_to_guides/static/export-dataset-modal.png b/docs/evaluation/how_to_guides/datasets/static/export-dataset-modal.png
similarity index 100%
rename from docs/how_to_guides/static/export-dataset-modal.png
rename to docs/evaluation/how_to_guides/datasets/static/export-dataset-modal.png
diff --git a/docs/how_to_guides/static/few_shot_code_snippet.png b/docs/evaluation/how_to_guides/datasets/static/few_shot_code_snippet.png
similarity index 100%
rename from docs/how_to_guides/static/few_shot_code_snippet.png
rename to docs/evaluation/how_to_guides/datasets/static/few_shot_code_snippet.png
diff --git a/docs/how_to_guides/static/few_shot_search_results.png b/docs/evaluation/how_to_guides/datasets/static/few_shot_search_results.png
similarity index 100%
rename from docs/how_to_guides/static/few_shot_search_results.png
rename to docs/evaluation/how_to_guides/datasets/static/few_shot_search_results.png
diff --git a/docs/how_to_guides/static/few_shot_synced_empty_state.png b/docs/evaluation/how_to_guides/datasets/static/few_shot_synced_empty_state.png
similarity index 100%
rename from docs/how_to_guides/static/few_shot_synced_empty_state.png
rename to docs/evaluation/how_to_guides/datasets/static/few_shot_synced_empty_state.png
diff --git a/docs/how_to_guides/static/few_shot_tab_unsynced.png b/docs/evaluation/how_to_guides/datasets/static/few_shot_tab_unsynced.png
similarity index 100%
rename from docs/how_to_guides/static/few_shot_tab_unsynced.png
rename to docs/evaluation/how_to_guides/datasets/static/few_shot_tab_unsynced.png
diff --git a/docs/how_to_guides/static/filter_examples.png b/docs/evaluation/how_to_guides/datasets/static/filter_examples.png
similarity index 100%
rename from docs/how_to_guides/static/filter_examples.png
rename to docs/evaluation/how_to_guides/datasets/static/filter_examples.png
diff --git a/docs/how_to_guides/static/filters_applied.png b/docs/evaluation/how_to_guides/datasets/static/filters_applied.png
similarity index 100%
rename from docs/how_to_guides/static/filters_applied.png
rename to docs/evaluation/how_to_guides/datasets/static/filters_applied.png
diff --git a/docs/how_to_guides/static/generate_synthetic_examples_create.png b/docs/evaluation/how_to_guides/datasets/static/generate_synthetic_examples_create.png
similarity index 100%
rename from docs/how_to_guides/static/generate_synthetic_examples_create.png
rename to docs/evaluation/how_to_guides/datasets/static/generate_synthetic_examples_create.png
diff --git a/docs/how_to_guides/static/generate_synthetic_examples_pane.png b/docs/evaluation/how_to_guides/datasets/static/generate_synthetic_examples_pane.png
similarity index 100%
rename from docs/how_to_guides/static/generate_synthetic_examples_pane.png
rename to docs/evaluation/how_to_guides/datasets/static/generate_synthetic_examples_pane.png
diff --git a/docs/how_to_guides/static/modify_example.png b/docs/evaluation/how_to_guides/datasets/static/modify_example.png
similarity index 100%
rename from docs/how_to_guides/static/modify_example.png
rename to docs/evaluation/how_to_guides/datasets/static/modify_example.png
diff --git a/docs/how_to_guides/static/new_dataset.png b/docs/evaluation/how_to_guides/datasets/static/new_dataset.png
similarity index 100%
rename from docs/how_to_guides/static/new_dataset.png
rename to docs/evaluation/how_to_guides/datasets/static/new_dataset.png
diff --git a/docs/how_to_guides/static/schema_validation.png b/docs/evaluation/how_to_guides/datasets/static/schema_validation.png
similarity index 100%
rename from docs/how_to_guides/static/schema_validation.png
rename to docs/evaluation/how_to_guides/datasets/static/schema_validation.png
diff --git a/docs/how_to_guides/static/select_columns.png b/docs/evaluation/how_to_guides/datasets/static/select_columns.png
similarity index 100%
rename from docs/how_to_guides/static/select_columns.png
rename to docs/evaluation/how_to_guides/datasets/static/select_columns.png
diff --git a/docs/how_to_guides/static/share_dataset.png b/docs/evaluation/how_to_guides/datasets/static/share_dataset.png
similarity index 100%
rename from docs/how_to_guides/static/share_dataset.png
rename to docs/evaluation/how_to_guides/datasets/static/share_dataset.png
diff --git a/docs/how_to_guides/static/tag_this_version.png b/docs/evaluation/how_to_guides/datasets/static/tag_this_version.png
similarity index 100%
rename from docs/how_to_guides/static/tag_this_version.png
rename to docs/evaluation/how_to_guides/datasets/static/tag_this_version.png
diff --git a/docs/how_to_guides/static/unshare_dataset.png b/docs/evaluation/how_to_guides/datasets/static/unshare_dataset.png
similarity index 100%
rename from docs/how_to_guides/static/unshare_dataset.png
rename to docs/evaluation/how_to_guides/datasets/static/unshare_dataset.png
diff --git a/docs/how_to_guides/static/unshare_trace_list.png b/docs/evaluation/how_to_guides/datasets/static/unshare_trace_list.png
similarity index 100%
rename from docs/how_to_guides/static/unshare_trace_list.png
rename to docs/evaluation/how_to_guides/datasets/static/unshare_trace_list.png
diff --git a/docs/how_to_guides/static/version_dataset.png b/docs/evaluation/how_to_guides/datasets/static/version_dataset.png
similarity index 100%
rename from docs/how_to_guides/static/version_dataset.png
rename to docs/evaluation/how_to_guides/datasets/static/version_dataset.png
diff --git a/docs/how_to_guides/static/version_dataset_tests.png b/docs/evaluation/how_to_guides/datasets/static/version_dataset_tests.png
similarity index 100%
rename from docs/how_to_guides/static/version_dataset_tests.png
rename to docs/evaluation/how_to_guides/datasets/static/version_dataset_tests.png
diff --git a/docs/how_to_guides/datasets/version_datasets.mdx b/docs/evaluation/how_to_guides/datasets/version_datasets.mdx
similarity index 93%
rename from docs/how_to_guides/datasets/version_datasets.mdx
rename to docs/evaluation/how_to_guides/datasets/version_datasets.mdx
index c706a6141..510b78157 100644
--- a/docs/how_to_guides/datasets/version_datasets.mdx
+++ b/docs/evaluation/how_to_guides/datasets/version_datasets.mdx
@@ -12,13 +12,13 @@ Any time you _add_, _update_, or _delete_ examples in your dataset, a new versio
By default, the version is defined by the timestamp of the change. When you click on a particular version of a dataset (by timestamp) in the "Examples" tab, you can see the state of the dataset at that point in time.
-
+
Note that examples are read-only when viewing a past version of the dataset. You will also see the operations that were applied between this version of the dataset and the "latest" version. Also, by default the **latest version of the dataset is shown in the "Examples" tab** and experiments from **all versions are shown in the "Tests" tab**.
In the "Tests" tab, you can see the results of tests run on the dataset at different versions.
-
+
## Tag a version
@@ -28,7 +28,7 @@ For example, you might tag a version of your dataset as "prod" and use it to run
Tagging can be done in the UI by clicking on "+ Tag this version" in the "Examples" tab.
-
+
You can also tag versions of your dataset using the SDK. Here's an example of how to tag a version of a dataset using the python SDK:
diff --git a/docs/how_to_guides/evaluation/_category_.json b/docs/evaluation/how_to_guides/evaluation/_category_.json
similarity index 100%
rename from docs/how_to_guides/evaluation/_category_.json
rename to docs/evaluation/how_to_guides/evaluation/_category_.json
diff --git a/docs/how_to_guides/evaluation/audit_evaluator_scores.mdx b/docs/evaluation/how_to_guides/evaluation/audit_evaluator_scores.mdx
similarity index 91%
rename from docs/how_to_guides/evaluation/audit_evaluator_scores.mdx
rename to docs/evaluation/how_to_guides/evaluation/audit_evaluator_scores.mdx
index 595ae3cde..5b74d3c9a 100644
--- a/docs/how_to_guides/evaluation/audit_evaluator_scores.mdx
+++ b/docs/evaluation/how_to_guides/evaluation/audit_evaluator_scores.mdx
@@ -18,13 +18,13 @@ In the comparison view, you may click on any feedback tag to bring up the feedba
If you would like, you may also attach an explanation to your correction. This is useful if you are using a [few-shot evaluator](./create_few_shot_evaluators) and will be automatically inserted into your few-shot examples
in place of the `few_shot_explanation` prompt variable.
-
+
## In the runs table
In the runs table, find the "Feedback" column and click on the feedback tag to bring up the feedback details. Again, click the "edit" icon on the right to bring up the corrections view.
-
+
## In the SDK
diff --git a/docs/how_to_guides/evaluation/bind_evaluator_to_dataset.mdx b/docs/evaluation/how_to_guides/evaluation/bind_evaluator_to_dataset.mdx
similarity index 90%
rename from docs/how_to_guides/evaluation/bind_evaluator_to_dataset.mdx
rename to docs/evaluation/how_to_guides/evaluation/bind_evaluator_to_dataset.mdx
index 892977d8b..505f8a06d 100644
--- a/docs/how_to_guides/evaluation/bind_evaluator_to_dataset.mdx
+++ b/docs/evaluation/how_to_guides/evaluation/bind_evaluator_to_dataset.mdx
@@ -7,7 +7,7 @@ sidebar_position: 2
While you can specify evaluators to grade the results of your experiments programmatically (see [this guide](./evaluate_llm_application) for more information), you can also bind evaluators to a dataset in the UI.
This allows you to configure automatic evaluators that grade your experiment results. We have support for both LLM-based evaluators and custom python code evaluators.
-The process for configuring this is very similar to the process for configuring an [online evaluator](../monitoring/online_evaluations) for traces.
+The process for configuring this is very similar to the process for configuring an [online evaluator](../../../observability/how_to_guides/monitoring/online_evaluations) for traces.
:::note Only affects subsequent experiment runs
When you configure an evaluator for a dataset, it will only affect the experiment runs that are created after the evaluator is configured. It will not affect the evaluation of experiment runs that were created before the evaluator was configured.
@@ -23,7 +23,7 @@ The next steps vary based on the evaluator type.
1. **Select the LLM as judge type evaluator**
2. **Give your evaluator a name** and **set an inline prompt or load a prompt from the prompt hub** that will be used to evaluate the results of the runs in the experiment.
-
+
Importantly, evaluator prompts can only contain the following input variables:
@@ -42,11 +42,11 @@ LangSmith currently doesn't support setting up evaluators in the application tha
You can specify the scoring criteria in the "schema" field. In this example, we are asking the LLM to grade on "correctness" of the output with respect to the reference, with a boolean output of 0 or 1. The name of the field in the schema will be interpreted as the feedback key and the type will be the type of the score.
-
+
3. **Save the evaluator** and navigate back to the dataset details page. Each **subsequent** experiment run from the dataset will now be evaluated by the evaluator you configured. Note that in the below image, each run in the experiment has a "correctness" score.
-
+
## Custom code evaluators
@@ -70,11 +70,11 @@ You can specify the scoring criteria in the "schema" field. In this example, we
In the UI, you will see a panel that lets you write your code inline, with some starter code:
-
+
Custom Code evaluators take in two arguments:
-- A `Run` ([reference](../../../reference/data_formats/run_data_format)). This represents the new run in your
+- A `Run` ([reference](/reference/data_formats/run_data_format)). This represents the new run in your
experiment. For example, if you ran an experiment via SDK, this would contain the input/output from your
chain or model you are testing.
- An `Example`. This represents the reference example in your dataset that the chain or model you are testing
@@ -127,8 +127,8 @@ To visualize the feedback left on new experiments, try running a new experiment
On the dataset, if you now click to the `experiments` tab -> `+ Experiment` -> `Run in Playground`, you can see the results in action.
Your runs in your experiments will be automatically marked with the key specified in your code sample above (here, `formatted`):
-
+
And if you navigate back to your dataset, you'll see summary stats for said experiment in the `experiments` tab:
-
+
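A custom code evaluator along these lines might look like the following (a minimal sketch; the `perform` name and the `run`/`example` dict shapes mirror the starter-code conventions described above but are assumptions here, and the `formatted` feedback key matches the example in the text):

```python
def perform(run, example):
    # `run` holds the new experiment run; `example` holds the dataset reference.
    output = run["outputs"].get("answer", "")
    reference = example["outputs"].get("answer", "")
    # Score whether the output is non-empty and mentions the reference answer.
    score = int(bool(output) and reference.lower() in output.lower())
    return {"key": "formatted", "score": score}

feedback = perform(
    {"outputs": {"answer": "Paris is the capital of France."}},
    {"outputs": {"answer": "Paris"}},
)
# feedback == {"key": "formatted", "score": 1}
```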
diff --git a/docs/how_to_guides/evaluation/compare_experiment_results.mdx b/docs/evaluation/how_to_guides/evaluation/compare_experiment_results.mdx
similarity index 83%
rename from docs/how_to_guides/evaluation/compare_experiment_results.mdx
rename to docs/evaluation/how_to_guides/evaluation/compare_experiment_results.mdx
index 9cb5f2786..3e435122b 100644
--- a/docs/how_to_guides/evaluation/compare_experiment_results.mdx
+++ b/docs/evaluation/how_to_guides/evaluation/compare_experiment_results.mdx
@@ -8,52 +8,52 @@ Oftentimes, when you are iterating on your LLM application (such as changing the
LangSmith supports a powerful comparison view that lets you hone in on key differences, regressions, and improvements between different experiments.
-
+
## Open the comparison view
To open the comparison view, select two or more experiments from the "Experiments" tab from a given dataset page. Then, click on the "Compare" button at the bottom of the page.
-
+
## View regressions and improvements
In the LangSmith comparison view, runs that _regressed_ on your specified feedback key against your baseline experiment will be highlighted in red, while runs that _improved_
will be highlighted in green. At the top of each column, you can see how many runs in that experiment did better and how many did worse than your baseline experiment.
-
+
## Filter on regressions or improvements
Click on the regressions or improvements buttons on the top of each column to filter to the runs that regressed or improved in that specific experiment.
-
+
## Update baseline experiment
In order to track regressions, you need a baseline experiment against which to compare. This will be automatically assigned as the first experiment in your comparison, but you can
change it from the dropdown at the top of the page.
-
+
## Select feedback key
You will also want to select the feedback key (evaluation metric) on which you would like to focus. This can be selected via another dropdown at the top. Again, one will be assigned by
default, but you can adjust as needed.
-
+
## Open a trace
If tracing is enabled for the evaluation run, you can click on the trace icon in the hover state of any experiment cell to open the trace view for that run. This will open up a trace in the side panel.
-
+
## Expand detailed view
From any cell, you can click on the expand icon in the hover state to open up a detailed view of all experiment results on that particular example input, along with feedback keys and scores.
-
+
## Update display settings
@@ -61,4 +61,4 @@ You can adjust the display settings for comparison view by clicking on "Display"
Here, you'll be able to toggle feedback, metrics, summary charts, and expand full text.
-
+
diff --git a/docs/how_to_guides/evaluation/create_few_shot_evaluators.mdx b/docs/evaluation/how_to_guides/evaluation/create_few_shot_evaluators.mdx
similarity index 84%
rename from docs/how_to_guides/evaluation/create_few_shot_evaluators.mdx
rename to docs/evaluation/how_to_guides/evaluation/create_few_shot_evaluators.mdx
index adba3d5c7..cbee27039 100644
--- a/docs/how_to_guides/evaluation/create_few_shot_evaluators.mdx
+++ b/docs/evaluation/how_to_guides/evaluation/create_few_shot_evaluators.mdx
@@ -11,7 +11,7 @@ you to automatically collect human corrections on evaluator prompts, which are t
:::tip Recommended Reading
Before learning how to create few-shot evaluators, it might be helpful to learn how to setup automations (both online and offline) and how to leave corrections on evaluator scores:
-- [Set up online evaluations](../monitoring/online_evaluations)
+- [Set up online evaluations](../../../observability/how_to_guides/monitoring/online_evaluations)
- [Bind an evaluator to a dataset in the UI (offline evaluation)](./bind_evaluator_to_dataset)
- [Audit evaluator scores](./audit_evaluator_scores)
@@ -24,7 +24,7 @@ The default maximum few-shot examples to use in the prompt is 5. Examples are pu
:::
-When creating an [online](../monitoring/online_evaluations) or [offline](./bind_evaluator_to_dataset) evaluator - from a tracing project or a dataset, respectively - you will see the option to use corrections as few-shot examples. Note that these types of evaluators
+When creating an [online](../../../observability/how_to_guides/monitoring/online_evaluations) or [offline](./bind_evaluator_to_dataset) evaluator - from a tracing project or a dataset, respectively - you will see the option to use corrections as few-shot examples. Note that these types of evaluators
are only supported when using mustache prompts - you will not be able to click this option if your prompt uses f-string formatting. When you select this,
we will auto-create a few-shot prompt for you. Each individual few-shot example will be formatted according to this prompt, and inserted into your main prompt in place of the `{{Few-shot examples}}`
template variable which will be auto-added above. Your few-shot prompt should contain the same variables as your main prompt, plus a `few_shot_explanation` and a score variable which should have the same name
@@ -34,7 +34,7 @@ as your output key. For example, if your main prompt has variables `question` an
You may also specify the number of few-shot examples to use. The default is 5. If your examples tend to be very long, you may want to set this number lower to save tokens - whereas if your examples tend
to be short, you can set a higher number in order to give your evaluator more examples to learn from. If you have more examples in your dataset than this number, we will randomly choose them for you.
-
+
Note that few-shot examples are not currently supported in evaluators that use Hub prompts.
@@ -51,7 +51,7 @@ begin seeing examples populated inside your corrections dataset. As you make cor
The inputs to the few-shot examples will be the relevant fields from the inputs, outputs, and reference (if this is an offline evaluator) of your chain/dataset.
The outputs will be the corrected evaluator score and the explanations that you created when you left the corrections. Feel free to edit these to your liking. Here is an example of a few-shot example in a corrections dataset:
-
+
Note that the corrections may take a minute or two to be populated into your few-shot dataset. Once they are there, future runs of your evaluator will include them in the prompt!
@@ -59,12 +59,12 @@ Note that the corrections may take a minute or two to be populated into your few
To view your corrections dataset, go to your rule and click "Edit Rule" (or "Edit Evaluator" from a dataset):
-
+
If this is an online evaluator (in a tracing project), you will need to click to edit your prompt:
-
+
From this screen, you will see a button that says "View few-shot dataset". Clicking this will bring you to your dataset of corrections, where you can view and update your few-shot examples:
-
+
diff --git a/docs/how_to_guides/evaluation/evaluate_existing_experiment.mdx b/docs/evaluation/how_to_guides/evaluation/evaluate_existing_experiment.mdx
similarity index 100%
rename from docs/how_to_guides/evaluation/evaluate_existing_experiment.mdx
rename to docs/evaluation/how_to_guides/evaluation/evaluate_existing_experiment.mdx
diff --git a/docs/how_to_guides/evaluation/evaluate_llm_application.mdx b/docs/evaluation/how_to_guides/evaluation/evaluate_llm_application.mdx
similarity index 97%
rename from docs/how_to_guides/evaluation/evaluate_llm_application.mdx
rename to docs/evaluation/how_to_guides/evaluation/evaluate_llm_application.mdx
index 0a79e4029..6ccfd0700 100644
--- a/docs/how_to_guides/evaluation/evaluate_llm_application.mdx
+++ b/docs/evaluation/how_to_guides/evaluation/evaluate_llm_application.mdx
@@ -15,7 +15,7 @@ import {
:::tip Recommended Reading
Before diving into this content, it might be helpful to read the following:
-- [Conceptual guide on evaluation](../../concepts/evaluation)
+- [Conceptual guide on evaluation](../../concepts)
- [How-to guide on managing datasets](../datasets/manage_datasets_in_application)
- [How-to guide on managing datasets programmatically](../datasets/manage_datasets_programmatically)
@@ -40,7 +40,7 @@ The following example involves evaluating a very simple LLM pipeline as classifi
In this case, we are defining a simple evaluation target consisting of an LLM pipeline that classifies text as toxic or non-toxic.
We've optionally enabled tracing to capture the inputs and outputs of each step in the pipeline.
-To understand how to annotate your code for tracing, please refer to [this guide](../tracing/annotate_code).
+To understand how to annotate your code for tracing, please refer to [this guide](../../../observability/how_to_guides/tracing/annotate_code).
dict:
Rows from the resulting experiment will display each of the scores.
-
+
diff --git a/docs/how_to_guides/evaluation/evaluate_on_intermediate_steps.mdx b/docs/evaluation/how_to_guides/evaluation/evaluate_on_intermediate_steps.mdx
similarity index 99%
rename from docs/how_to_guides/evaluation/evaluate_on_intermediate_steps.mdx
rename to docs/evaluation/how_to_guides/evaluation/evaluate_on_intermediate_steps.mdx
index 34333ac81..f92060973 100644
--- a/docs/how_to_guides/evaluation/evaluate_on_intermediate_steps.mdx
+++ b/docs/evaluation/how_to_guides/evaluation/evaluate_on_intermediate_steps.mdx
@@ -167,7 +167,7 @@ def rag_pipeline(question):
/>
This pipeline will produce a trace that looks something like:
-
+
## 2. Create a dataset and examples to evaluate the pipeline
@@ -387,4 +387,4 @@ Finally, we'll run `evaluate` with the custom evaluators defined above.
/>
The experiment will contain the results of the evaluation, including the scores and comments from the evaluators:
-
+
diff --git a/docs/how_to_guides/evaluation/evaluate_pairwise.mdx b/docs/evaluation/how_to_guides/evaluation/evaluate_pairwise.mdx
similarity index 97%
rename from docs/how_to_guides/evaluation/evaluate_pairwise.mdx
rename to docs/evaluation/how_to_guides/evaluation/evaluate_pairwise.mdx
index 2ca1dbea7..a75eeaf8c 100644
--- a/docs/how_to_guides/evaluation/evaluate_pairwise.mdx
+++ b/docs/evaluation/how_to_guides/evaluation/evaluate_pairwise.mdx
@@ -67,7 +67,7 @@ which asks the LLM to decide which is better between two AI assistant responses.
:::note Optional LangChain Usage
-In the Python example below, we are pulling [this structured prompt](https://smith.langchain.com/hub/langchain-ai/pairwise-evaluation-2) from the [LangChain Hub](../prompts/langchain_hub) and using it with a LangChain LLM wrapper.
+In the Python example below, we are pulling [this structured prompt](https://smith.langchain.com/hub/langchain-ai/pairwise-evaluation-2) from the [LangChain Hub](../../../prompt_engineering/how_to_guides/prompts/langchain_hub) and using it with a LangChain LLM wrapper.
The prompt asks the LLM to decide which is better between two AI assistant responses. It uses structured output to parse the AI's response: 0, 1, or 2.
**Usage of LangChain is totally optional.** To illustrate this point, the TypeScript example below uses the OpenAI API directly.
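The structured 0/1/2 verdict can then be mapped to per-experiment preference scores. A minimal sketch of that mapping, assuming a tie-scoring convention and a feedback key (`preference`) of our own choosing rather than LangSmith's actual output format:

```python
def verdict_to_scores(verdict: int) -> dict:
    """Map an LLM verdict to preference scores for two compared responses.

    Convention assumed here: 0 = tie, 1 = first response is better,
    2 = second response is better.
    """
    if verdict == 1:
        scores = [1, 0]
    elif verdict == 2:
        scores = [0, 1]
    else:
        scores = [0.5, 0.5]  # tie: split the preference evenly
    # Hypothetical feedback shape: one score per compared experiment.
    return {"key": "preference", "scores": scores}
```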
@@ -221,12 +221,12 @@ The prompt asks the LLM to decide which is better between two AI assistant respo
Navigate to the "Pairwise Experiments" tab from the dataset page:
-
+
Click on a pairwise experiment that you would like to inspect, and you will be brought to the Comparison View:
-
+
You may filter to runs where the first experiment was better or vice versa by clicking the thumbs up/thumbs down buttons in the table header:
-
+
diff --git a/docs/how_to_guides/evaluation/fetch_perf_metrics_experiment.mdx b/docs/evaluation/how_to_guides/evaluation/fetch_perf_metrics_experiment.mdx
similarity index 100%
rename from docs/how_to_guides/evaluation/fetch_perf_metrics_experiment.mdx
rename to docs/evaluation/how_to_guides/evaluation/fetch_perf_metrics_experiment.mdx
diff --git a/docs/how_to_guides/evaluation/filter_experiments_ui.mdx b/docs/evaluation/how_to_guides/evaluation/filter_experiments_ui.mdx
similarity index 94%
rename from docs/how_to_guides/evaluation/filter_experiments_ui.mdx
rename to docs/evaluation/how_to_guides/evaluation/filter_experiments_ui.mdx
index 2afce9df1..df610e77d 100644
--- a/docs/how_to_guides/evaluation/filter_experiments_ui.mdx
+++ b/docs/evaluation/how_to_guides/evaluation/filter_experiments_ui.mdx
@@ -74,20 +74,20 @@ and a known ID of the prompt:
In the UI, we see all experiments that have been run by default.
-
+
If we, say, prefer OpenAI models, we can easily filter down and see scores for just OpenAI models first:
-
+
We can stack filters, allowing us to filter out low scores on correctness to make sure we only compare
relevant experiments:
-
+
Finally, we can clear and reset filters. For example, if we see there's a clear winner with the `singleminded` prompt, we can change filtering settings to see if any other model providers' models work as well with it:
-
+
diff --git a/docs/how_to_guides/evaluation/run_evals_api_only.mdx b/docs/evaluation/how_to_guides/evaluation/run_evals_api_only.mdx
similarity index 100%
rename from docs/how_to_guides/evaluation/run_evals_api_only.mdx
rename to docs/evaluation/how_to_guides/evaluation/run_evals_api_only.mdx
diff --git a/docs/how_to_guides/evaluation/run_evaluation_from_prompt_playground.mdx b/docs/evaluation/how_to_guides/evaluation/run_evaluation_from_prompt_playground.mdx
similarity index 86%
rename from docs/how_to_guides/evaluation/run_evaluation_from_prompt_playground.mdx
rename to docs/evaluation/how_to_guides/evaluation/run_evaluation_from_prompt_playground.mdx
index 0b9765973..c7b99141b 100644
--- a/docs/how_to_guides/evaluation/run_evaluation_from_prompt_playground.mdx
+++ b/docs/evaluation/how_to_guides/evaluation/run_evaluation_from_prompt_playground.mdx
@@ -4,7 +4,7 @@ sidebar_position: 2
# Run an evaluation from the prompt playground
-While you can kick off experiments easily using the sdk, as outlined [here](../../#5-create-your-first-evaluation), it's often useful to run experiments directly in the prompt playground.
+While you can kick off experiments easily using the SDK, as outlined [here](./evaluate_llm_application), it's often useful to run experiments directly in the prompt playground.
This allows you to test your prompt / model configuration over a series of inputs to see how well it generalizes across different contexts or scenarios, without having to write any code.
@@ -12,12 +12,12 @@ This allows you to test your prompt / model configuration over a series of input
1. **Navigate to the prompt playground** by clicking on "Prompts" in the sidebar, then selecting a prompt from the list of available prompts or creating a new one.
2. **Select the "Switch to dataset" button** to switch to the dataset you want to use for the experiment. Note that the keys of the dataset inputs must match the input variables of the prompt. In the sections below, the selected dataset has inputs with the key "text", which correctly matches the input variable of the prompt. Also note that the prompt playground has a maximum capacity of 15 inputs.
- 
+ 
3. **Click on the "Start" button** or CMD+Enter to start the experiment. This will run the prompt over all the examples in the dataset and create an entry for the experiment in the dataset details page. Note that you must commit the prompt to the prompt hub before starting the experiment, so that it can be referenced in the experiment. The result for each input will be streamed and displayed inline in the dataset.
- 
+ 
4. **View the results** by clicking on the "View Experiment" button at the bottom of the page. This will take you to the experiment details page where you can see the results of the experiment.
5. **Navigate back to the commit page** by clicking on the "View Commit" button. This will take you back to the prompt page where you can make changes to the prompt and run more experiments. The "View Commit" button is available to all experiments that were run from the prompt playground. The experiment is prefixed with the prompt repository name, a unique identifier, and the date and time the experiment was run.
- 
+ 
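The key-matching requirement from step 2 can be sketched as a quick local check. This is an illustration only; the helper names are hypothetical and LangSmith performs its own validation in the UI:

```python
import re

def prompt_variables(prompt: str) -> set:
    """Extract {{variable}} names from a mustache-style prompt."""
    return set(re.findall(r"\{\{\s*([\w ]+?)\s*\}\}", prompt))

def check_dataset_matches_prompt(examples: list, prompt: str, max_inputs: int = 15) -> None:
    """Raise if the dataset inputs cannot drive the prompt in the playground."""
    if len(examples) > max_inputs:
        raise ValueError(f"playground supports at most {max_inputs} inputs")
    variables = prompt_variables(prompt)
    for ex in examples:
        missing = variables - set(ex["inputs"])
        if missing:
            raise ValueError(f"example is missing input keys: {sorted(missing)}")
```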
## Add evaluation scores to the experiment
diff --git a/docs/how_to_guides/evaluation/static/add-auto-evaluator-python.png b/docs/evaluation/how_to_guides/evaluation/static/add-auto-evaluator-python.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/add-auto-evaluator-python.png
rename to docs/evaluation/how_to_guides/evaluation/static/add-auto-evaluator-python.png
diff --git a/docs/how_to_guides/static/annotate_trace_inline.png b/docs/evaluation/how_to_guides/evaluation/static/annotate_trace_inline.png
similarity index 100%
rename from docs/how_to_guides/static/annotate_trace_inline.png
rename to docs/evaluation/how_to_guides/evaluation/static/annotate_trace_inline.png
diff --git a/docs/how_to_guides/evaluation/static/click_to_edit_prompt.png b/docs/evaluation/how_to_guides/evaluation/static/click_to_edit_prompt.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/click_to_edit_prompt.png
rename to docs/evaluation/how_to_guides/evaluation/static/click_to_edit_prompt.png
diff --git a/docs/how_to_guides/evaluation/static/code-autoeval-popup.png b/docs/evaluation/how_to_guides/evaluation/static/code-autoeval-popup.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/code-autoeval-popup.png
rename to docs/evaluation/how_to_guides/evaluation/static/code-autoeval-popup.png
diff --git a/docs/how_to_guides/evaluation/static/corrections_comparison_view.png b/docs/evaluation/how_to_guides/evaluation/static/corrections_comparison_view.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/corrections_comparison_view.png
rename to docs/evaluation/how_to_guides/evaluation/static/corrections_comparison_view.png
diff --git a/docs/how_to_guides/evaluation/static/corrections_runs_table.png b/docs/evaluation/how_to_guides/evaluation/static/corrections_runs_table.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/corrections_runs_table.png
rename to docs/evaluation/how_to_guides/evaluation/static/corrections_runs_table.png
diff --git a/docs/how_to_guides/static/create_evaluator.png b/docs/evaluation/how_to_guides/evaluation/static/create_evaluator.png
similarity index 100%
rename from docs/how_to_guides/static/create_evaluator.png
rename to docs/evaluation/how_to_guides/evaluation/static/create_evaluator.png
diff --git a/docs/how_to_guides/evaluation/static/create_few_shot_evaluator.png b/docs/evaluation/how_to_guides/evaluation/static/create_few_shot_evaluator.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/create_few_shot_evaluator.png
rename to docs/evaluation/how_to_guides/evaluation/static/create_few_shot_evaluator.png
diff --git a/docs/how_to_guides/evaluation/static/edit_evaluator.png b/docs/evaluation/how_to_guides/evaluation/static/edit_evaluator.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/edit_evaluator.png
rename to docs/evaluation/how_to_guides/evaluation/static/edit_evaluator.png
diff --git a/docs/how_to_guides/static/evaluation_intermediate_experiment.png b/docs/evaluation/how_to_guides/evaluation/static/evaluation_intermediate_experiment.png
similarity index 100%
rename from docs/how_to_guides/static/evaluation_intermediate_experiment.png
rename to docs/evaluation/how_to_guides/evaluation/static/evaluation_intermediate_experiment.png
diff --git a/docs/how_to_guides/static/evaluation_intermediate_trace.png b/docs/evaluation/how_to_guides/evaluation/static/evaluation_intermediate_trace.png
similarity index 100%
rename from docs/how_to_guides/static/evaluation_intermediate_trace.png
rename to docs/evaluation/how_to_guides/evaluation/static/evaluation_intermediate_trace.png
diff --git a/docs/how_to_guides/static/evaluator_prompt.png b/docs/evaluation/how_to_guides/evaluation/static/evaluator_prompt.png
similarity index 100%
rename from docs/how_to_guides/static/evaluator_prompt.png
rename to docs/evaluation/how_to_guides/evaluation/static/evaluator_prompt.png
diff --git a/docs/how_to_guides/static/expanded_view.png b/docs/evaluation/how_to_guides/evaluation/static/expanded_view.png
similarity index 100%
rename from docs/how_to_guides/static/expanded_view.png
rename to docs/evaluation/how_to_guides/evaluation/static/expanded_view.png
diff --git a/docs/how_to_guides/evaluation/static/experiments-tab-code-results.png b/docs/evaluation/how_to_guides/evaluation/static/experiments-tab-code-results.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/experiments-tab-code-results.png
rename to docs/evaluation/how_to_guides/evaluation/static/experiments-tab-code-results.png
diff --git a/docs/how_to_guides/evaluation/static/few_shot_example.png b/docs/evaluation/how_to_guides/evaluation/static/few_shot_example.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/few_shot_example.png
rename to docs/evaluation/how_to_guides/evaluation/static/few_shot_example.png
diff --git a/docs/how_to_guides/evaluation/static/filter-all-experiments.png b/docs/evaluation/how_to_guides/evaluation/static/filter-all-experiments.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/filter-all-experiments.png
rename to docs/evaluation/how_to_guides/evaluation/static/filter-all-experiments.png
diff --git a/docs/how_to_guides/evaluation/static/filter-feedback.png b/docs/evaluation/how_to_guides/evaluation/static/filter-feedback.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/filter-feedback.png
rename to docs/evaluation/how_to_guides/evaluation/static/filter-feedback.png
diff --git a/docs/how_to_guides/evaluation/static/filter-openai.png b/docs/evaluation/how_to_guides/evaluation/static/filter-openai.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/filter-openai.png
rename to docs/evaluation/how_to_guides/evaluation/static/filter-openai.png
diff --git a/docs/how_to_guides/evaluation/static/filter-singleminded.png b/docs/evaluation/how_to_guides/evaluation/static/filter-singleminded.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/filter-singleminded.png
rename to docs/evaluation/how_to_guides/evaluation/static/filter-singleminded.png
diff --git a/docs/how_to_guides/evaluation/static/filter_pairwise.png b/docs/evaluation/how_to_guides/evaluation/static/filter_pairwise.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/filter_pairwise.png
rename to docs/evaluation/how_to_guides/evaluation/static/filter_pairwise.png
diff --git a/docs/how_to_guides/static/filter_to_regressions.png b/docs/evaluation/how_to_guides/evaluation/static/filter_to_regressions.png
similarity index 100%
rename from docs/how_to_guides/static/filter_to_regressions.png
rename to docs/evaluation/how_to_guides/evaluation/static/filter_to_regressions.png
diff --git a/docs/how_to_guides/static/input_variables_playground.png b/docs/evaluation/how_to_guides/evaluation/static/input_variables_playground.png
similarity index 100%
rename from docs/how_to_guides/static/input_variables_playground.png
rename to docs/evaluation/how_to_guides/evaluation/static/input_variables_playground.png
diff --git a/docs/how_to_guides/static/multiple_scores.png b/docs/evaluation/how_to_guides/evaluation/static/multiple_scores.png
similarity index 100%
rename from docs/how_to_guides/static/multiple_scores.png
rename to docs/evaluation/how_to_guides/evaluation/static/multiple_scores.png
diff --git a/docs/how_to_guides/static/open_comparison_view.png b/docs/evaluation/how_to_guides/evaluation/static/open_comparison_view.png
similarity index 100%
rename from docs/how_to_guides/static/open_comparison_view.png
rename to docs/evaluation/how_to_guides/evaluation/static/open_comparison_view.png
diff --git a/docs/how_to_guides/static/open_trace_comparison.png b/docs/evaluation/how_to_guides/evaluation/static/open_trace_comparison.png
similarity index 100%
rename from docs/how_to_guides/static/open_trace_comparison.png
rename to docs/evaluation/how_to_guides/evaluation/static/open_trace_comparison.png
diff --git a/docs/how_to_guides/evaluation/static/pairwise_comparison_view.png b/docs/evaluation/how_to_guides/evaluation/static/pairwise_comparison_view.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/pairwise_comparison_view.png
rename to docs/evaluation/how_to_guides/evaluation/static/pairwise_comparison_view.png
diff --git a/docs/how_to_guides/evaluation/static/pairwise_from_dataset.png b/docs/evaluation/how_to_guides/evaluation/static/pairwise_from_dataset.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/pairwise_from_dataset.png
rename to docs/evaluation/how_to_guides/evaluation/static/pairwise_from_dataset.png
diff --git a/docs/how_to_guides/static/playground_evaluator_results.png b/docs/evaluation/how_to_guides/evaluation/static/playground_evaluator_results.png
similarity index 100%
rename from docs/how_to_guides/static/playground_evaluator_results.png
rename to docs/evaluation/how_to_guides/evaluation/static/playground_evaluator_results.png
diff --git a/docs/how_to_guides/static/playground_experiment_results.png b/docs/evaluation/how_to_guides/evaluation/static/playground_experiment_results.png
similarity index 100%
rename from docs/how_to_guides/static/playground_experiment_results.png
rename to docs/evaluation/how_to_guides/evaluation/static/playground_experiment_results.png
diff --git a/docs/how_to_guides/static/regression_test.gif b/docs/evaluation/how_to_guides/evaluation/static/regression_test.gif
similarity index 100%
rename from docs/how_to_guides/static/regression_test.gif
rename to docs/evaluation/how_to_guides/evaluation/static/regression_test.gif
diff --git a/docs/how_to_guides/static/regression_view.png b/docs/evaluation/how_to_guides/evaluation/static/regression_view.png
similarity index 100%
rename from docs/how_to_guides/static/regression_view.png
rename to docs/evaluation/how_to_guides/evaluation/static/regression_view.png
diff --git a/docs/how_to_guides/static/runnable_eval.png b/docs/evaluation/how_to_guides/evaluation/static/runnable_eval.png
similarity index 100%
rename from docs/how_to_guides/static/runnable_eval.png
rename to docs/evaluation/how_to_guides/evaluation/static/runnable_eval.png
diff --git a/docs/how_to_guides/static/select_baseline.png b/docs/evaluation/how_to_guides/evaluation/static/select_baseline.png
similarity index 100%
rename from docs/how_to_guides/static/select_baseline.png
rename to docs/evaluation/how_to_guides/evaluation/static/select_baseline.png
diff --git a/docs/how_to_guides/static/select_feedback.png b/docs/evaluation/how_to_guides/evaluation/static/select_feedback.png
similarity index 100%
rename from docs/how_to_guides/static/select_feedback.png
rename to docs/evaluation/how_to_guides/evaluation/static/select_feedback.png
diff --git a/docs/how_to_guides/evaluation/static/show-feedback-from-autoeval-code.png b/docs/evaluation/how_to_guides/evaluation/static/show-feedback-from-autoeval-code.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/show-feedback-from-autoeval-code.png
rename to docs/evaluation/how_to_guides/evaluation/static/show-feedback-from-autoeval-code.png
diff --git a/docs/how_to_guides/static/summary_eval.png b/docs/evaluation/how_to_guides/evaluation/static/summary_eval.png
similarity index 100%
rename from docs/how_to_guides/static/summary_eval.png
rename to docs/evaluation/how_to_guides/evaluation/static/summary_eval.png
diff --git a/docs/how_to_guides/static/switch_to_dataset.png b/docs/evaluation/how_to_guides/evaluation/static/switch_to_dataset.png
similarity index 100%
rename from docs/how_to_guides/static/switch_to_dataset.png
rename to docs/evaluation/how_to_guides/evaluation/static/switch_to_dataset.png
diff --git a/docs/how_to_guides/evaluation/static/unit-test-suite.png b/docs/evaluation/how_to_guides/evaluation/static/unit-test-suite.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/unit-test-suite.png
rename to docs/evaluation/how_to_guides/evaluation/static/unit-test-suite.png
diff --git a/docs/how_to_guides/static/update_display.png b/docs/evaluation/how_to_guides/evaluation/static/update_display.png
similarity index 100%
rename from docs/how_to_guides/static/update_display.png
rename to docs/evaluation/how_to_guides/evaluation/static/update_display.png
diff --git a/docs/how_to_guides/evaluation/static/uploaded_dataset.png b/docs/evaluation/how_to_guides/evaluation/static/uploaded_dataset.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/uploaded_dataset.png
rename to docs/evaluation/how_to_guides/evaluation/static/uploaded_dataset.png
diff --git a/docs/how_to_guides/evaluation/static/uploaded_dataset_examples.png b/docs/evaluation/how_to_guides/evaluation/static/uploaded_dataset_examples.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/uploaded_dataset_examples.png
rename to docs/evaluation/how_to_guides/evaluation/static/uploaded_dataset_examples.png
diff --git a/docs/how_to_guides/evaluation/static/uploaded_experiment.png b/docs/evaluation/how_to_guides/evaluation/static/uploaded_experiment.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/uploaded_experiment.png
rename to docs/evaluation/how_to_guides/evaluation/static/uploaded_experiment.png
diff --git a/docs/how_to_guides/evaluation/static/use_corrections_as_few_shot.png b/docs/evaluation/how_to_guides/evaluation/static/use_corrections_as_few_shot.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/use_corrections_as_few_shot.png
rename to docs/evaluation/how_to_guides/evaluation/static/use_corrections_as_few_shot.png
diff --git a/docs/how_to_guides/static/view_experiment.gif b/docs/evaluation/how_to_guides/evaluation/static/view_experiment.gif
similarity index 100%
rename from docs/how_to_guides/static/view_experiment.gif
rename to docs/evaluation/how_to_guides/evaluation/static/view_experiment.gif
diff --git a/docs/how_to_guides/evaluation/static/view_few_shot_ds.png b/docs/evaluation/how_to_guides/evaluation/static/view_few_shot_ds.png
similarity index 100%
rename from docs/how_to_guides/evaluation/static/view_few_shot_ds.png
rename to docs/evaluation/how_to_guides/evaluation/static/view_few_shot_ds.png
diff --git a/docs/how_to_guides/evaluation/unit_testing.mdx b/docs/evaluation/how_to_guides/evaluation/unit_testing.mdx
similarity index 99%
rename from docs/how_to_guides/evaluation/unit_testing.mdx
rename to docs/evaluation/how_to_guides/evaluation/unit_testing.mdx
index 5a42ad35c..bc2c2f53e 100644
--- a/docs/how_to_guides/evaluation/unit_testing.mdx
+++ b/docs/evaluation/how_to_guides/evaluation/unit_testing.mdx
@@ -57,7 +57,7 @@ Each time you run this test suite, LangSmith collects the pass/fail rate and oth
The test suite syncs to a corresponding dataset named after your package or github repository.
-
+
## Going further
diff --git a/docs/how_to_guides/evaluation/upload_existing_experiments.mdx b/docs/evaluation/how_to_guides/evaluation/upload_existing_experiments.mdx
similarity index 97%
rename from docs/how_to_guides/evaluation/upload_existing_experiments.mdx
rename to docs/evaluation/how_to_guides/evaluation/upload_existing_experiments.mdx
index 17f76ac1c..41f9c59fc 100644
--- a/docs/how_to_guides/evaluation/upload_existing_experiments.mdx
+++ b/docs/evaluation/how_to_guides/evaluation/upload_existing_experiments.mdx
@@ -260,12 +260,12 @@ information in the request body).
## View the experiment in the UI
Now, login to the UI and click on your newly-created dataset! You should see a single experiment:
-
+
Your examples will have been uploaded:
-
+
Clicking on your experiment will bring you to the comparison view:
-
+
As you upload more experiments to your dataset, you will be able to compare the results and easily identify regressions in the comparison view.
diff --git a/docs/how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators.mdx b/docs/evaluation/how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators.mdx
similarity index 98%
rename from docs/how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators.mdx
rename to docs/evaluation/how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators.mdx
index 2a04e437b..a270c5e66 100644
--- a/docs/how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators.mdx
+++ b/docs/evaluation/how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators.mdx
@@ -7,7 +7,7 @@ sidebar_position: 4
:::tip Recommended Reading
Before diving into this content, it might be helpful to read the following:
-- [LangChain evaluator reference](../../reference/sdk_reference/langchain_evaluators)
+- [LangChain evaluator reference](/reference/sdk_reference/langchain_evaluators)
:::
diff --git a/docs/how_to_guides/human_feedback/_category_.json b/docs/evaluation/how_to_guides/human_feedback/_category_.json
similarity index 100%
rename from docs/how_to_guides/human_feedback/_category_.json
rename to docs/evaluation/how_to_guides/human_feedback/_category_.json
diff --git a/docs/how_to_guides/human_feedback/annotate_traces_inline.mdx b/docs/evaluation/how_to_guides/human_feedback/annotate_traces_inline.mdx
similarity index 87%
rename from docs/how_to_guides/human_feedback/annotate_traces_inline.mdx
rename to docs/evaluation/how_to_guides/human_feedback/annotate_traces_inline.mdx
index 967d43d43..8973c1006 100644
--- a/docs/how_to_guides/human_feedback/annotate_traces_inline.mdx
+++ b/docs/evaluation/how_to_guides/human_feedback/annotate_traces_inline.mdx
@@ -6,7 +6,7 @@ sidebar_position: 3
LangSmith allows you to manually annotate traces with feedback within the application. This can be useful for adding context to a trace, such as a user's comment or a note about a specific issue.
You can annotate a trace either inline or by sending the trace to an annotation queue, which allows you to closely inspect and log feedback on runs one at a time.
-Feedback tags are associated with your [workspace](../../concepts/admin#workspaces).
+Feedback tags are associated with your [workspace](../../../administration/concepts#workspaces).
:::note
@@ -17,11 +17,11 @@ This is useful for critiquing specific parts of the LLM application, such as the
To annotate a trace inline, click on the `Annotate` in the upper right corner of trace view for any particular run that is part of the trace.
-
+
This will open up a pane that allows you to choose from feedback tags associated with your workspace and add a score for particular tags. You can also add a standalone comment. Follow [this guide](./set_up_feedback_criteria) to set up feedback tags for your workspace.
You can also set up new feedback criteria from within the pane itself.
-
+
You can use the labeled keyboard shortcuts to streamline the annotation process.
diff --git a/docs/how_to_guides/human_feedback/annotation_queues.mdx b/docs/evaluation/how_to_guides/human_feedback/annotation_queues.mdx
similarity index 89%
rename from docs/how_to_guides/human_feedback/annotation_queues.mdx
rename to docs/evaluation/how_to_guides/human_feedback/annotation_queues.mdx
index ef1a7659c..305bd5988 100644
--- a/docs/how_to_guides/human_feedback/annotation_queues.mdx
+++ b/docs/evaluation/how_to_guides/human_feedback/annotation_queues.mdx
@@ -9,12 +9,12 @@ While you can always [annotate runs inline](./annotate_traces_inline), annotatio
## Create an annotation queue
-
+
To create an annotation queue, navigate to the **Annotation queues** section through the homepage or left-hand navigation bar.
Then click **+ New annotation queue** in the top right corner.
-
+
Fill in the form with the **name** and **description** of the queue.
You can also assign a **default dataset** to the queue, which will streamline the process of sending the inputs and outputs of certain runs to datasets in your LangSmith workspace.
@@ -42,19 +42,19 @@ Because of these settings, it's possible (and likely) that the number of runs vi
You can update these settings at any time by clicking on the pencil icon in the **Annotation Queues** section.
-
+
## Assign runs to an annotation queue
To assign runs to an annotation queue, either:
1. Click on **Add to Annotation Queue** in the top right corner of any trace view. You can add ANY intermediate run (span) of the trace to an annotation queue, not just the root span.
- 
+ 
2. Select multiple runs in the runs table then click **Add to Annotation Queue** at the bottom of the page.
- 
+ 
-3. [Set up an automation rule](../monitoring/rules) that automatically assigns runs which pass a certain filter and sampling condition to an annotation queue.
+3. [Set up an automation rule](../../../observability/how_to_guides/monitoring/rules) that automatically assigns runs which pass a certain filter and sampling condition to an annotation queue.
:::tip
@@ -73,4 +73,4 @@ You can also remove the run from the queue for all users, despite any current re
The keyboard shortcuts shown can help streamline the review process.
-
+
diff --git a/docs/how_to_guides/human_feedback/attach_user_feedback.mdx b/docs/evaluation/how_to_guides/human_feedback/attach_user_feedback.mdx
similarity index 84%
rename from docs/how_to_guides/human_feedback/attach_user_feedback.mdx
rename to docs/evaluation/how_to_guides/human_feedback/attach_user_feedback.mdx
index 907487b28..9a220bba5 100644
--- a/docs/how_to_guides/human_feedback/attach_user_feedback.mdx
+++ b/docs/evaluation/how_to_guides/human_feedback/attach_user_feedback.mdx
@@ -13,17 +13,17 @@ import {
:::tip Recommended Reading
Before diving into this content, it might be helpful to read the following:
-- [Conceptual guide on tracing and feedback](../../concepts/tracing)
-- [Reference guide on feedback data format](../../reference/data_formats/feedback_data_format)
+- [Conceptual guide on tracing and feedback](../../../observability/concepts)
+- [Reference guide on feedback data format](/reference/data_formats/feedback_data_format)
:::
In many applications, but even more so for LLM applications, it is important to collect user feedback to understand how your application is performing in real-world scenarios.
The ability to observe user feedback along with trace data can be very powerful to drill down into the most interesting datapoints, then send those datapoints for further review, automatic evaluation, or even datasets.
-To learn more about how to filter traces based on various attributes, including user feedback, see [this guide](../monitoring/filter_traces_in_application)
+To learn more about how to filter traces based on various attributes, including user feedback, see [this guide](../../../observability/how_to_guides/monitoring/filter_traces_in_application).
LangSmith makes it easy to attach user feedback to traces.
-It's often helpful to expose a simple mechanism (such as a thumbs-up, thumbs-down button) to collect user feedback for your application responses. You can then use the LangSmith SDK or API to send feedback for a trace. To get the `run_id` of a logged run, see [this guide](../tracing/access_current_span).
+It's often helpful to expose a simple mechanism (such as thumbs-up and thumbs-down buttons) to collect user feedback on your application's responses. You can then use the LangSmith SDK or API to send feedback for a trace. To get the `run_id` of a logged run, see [this guide](../../../observability/how_to_guides/tracing/access_current_span).
:::note
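The hunk above points readers to the SDK for sending feedback on a run. As a minimal sketch of the flow it describes — translating a binary UI signal into a feedback payload keyed by `run_id` — the helper name, the `user_score` feedback key, and the payload shape below are illustrative; only `Client.create_feedback` is the actual SDK entry point, and the call is left commented out since it requires a configured `LANGSMITH_API_KEY`:

```python
def build_feedback(run_id: str, thumbs_up: bool, comment: str = "") -> dict:
    """Translate a thumbs-up/thumbs-down UI signal into a feedback payload."""
    return {
        "run_id": run_id,
        "key": "user_score",  # feedback key that will appear in the LangSmith UI
        "score": 1 if thumbs_up else 0,
        "comment": comment,
    }

payload = build_feedback("my-run-id", thumbs_up=True, comment="Great answer")
print(payload["score"])  # 1

# With credentials configured, the payload would be sent along the lines of:
# from langsmith import Client
# Client().create_feedback(**payload)
```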
diff --git a/docs/how_to_guides/human_feedback/set_up_feedback_criteria.mdx b/docs/evaluation/how_to_guides/human_feedback/set_up_feedback_criteria.mdx
similarity index 83%
rename from docs/how_to_guides/human_feedback/set_up_feedback_criteria.mdx
rename to docs/evaluation/how_to_guides/human_feedback/set_up_feedback_criteria.mdx
index 55e1e5c80..7ac1413a6 100644
--- a/docs/how_to_guides/human_feedback/set_up_feedback_criteria.mdx
+++ b/docs/evaluation/how_to_guides/human_feedback/set_up_feedback_criteria.mdx
@@ -9,8 +9,8 @@ import { RegionalUrl } from "@site/src/components/RegionalUrls";
:::tip Recommended Reading
Before diving into this content, it might be helpful to read the following:
-- [Conceptual guide on tracing and feedback](../../concepts/tracing)
-- [Reference guide on feedback data format](../../reference/data_formats/feedback_data_format)
+- [Conceptual guide on tracing and feedback](../../../observability/concepts)
+- [Reference guide on feedback data format](/reference/data_formats/feedback_data_format)
:::
@@ -22,11 +22,11 @@ To set up a new feedback criteria, follow
diff --git a/docs/how_to_guides/evaluation/index.mdx b/docs/how_to_guides/evaluation/index.mdx
deleted file mode 100644
index 0e64cf66a..000000000
--- a/docs/how_to_guides/evaluation/index.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-# How-to guides: Evaluation
-
-This section contains how-to guides related to evaluation.
-
-import DocCardList from "@theme/DocCardList";
-
-
diff --git a/docs/how_to_guides/human_feedback/index.mdx b/docs/how_to_guides/human_feedback/index.mdx
deleted file mode 100644
index cdec9ebc1..000000000
--- a/docs/how_to_guides/human_feedback/index.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-# How-to guides: Human feedback
-
-This section contains how-to guides related to human feedback.
-
-import DocCardList from "@theme/DocCardList";
-
-
diff --git a/docs/how_to_guides/index.md b/docs/how_to_guides/index.md
deleted file mode 100644
index 3b332f7b4..000000000
--- a/docs/how_to_guides/index.md
+++ /dev/null
@@ -1,265 +0,0 @@
----
-sidebar_label: How-to guides
-sidebar_position: 0
----
-
-# How-to guides
-
-Step-by-step guides that cover key tasks and operations in LangSmith.
-
-## Setup
-
-See the following guides to set up your LangSmith account.
-
-- [Create an account and API key](./how_to_guides/setup/create_account_api_key)
-- [Set up an organization](./how_to_guides/setup/set_up_organization)
- - [Create an organization](./how_to_guides/setup/set_up_organization#create-an-organization)
- - [Manage and navigate workspaces](./how_to_guides/setup/set_up_organization#manage-and-navigate-workspaces)
- - [Manage users](./how_to_guides/setup/set_up_organization#manage-users)
- - [Manage your organization using the API](./how_to_guides/setup/manage_organization_by_api)
-- [Set up a workspace](./how_to_guides/setup/set_up_workspace)
- - [Create a workspace](./how_to_guides/setup/set_up_workspace#create-a-workspace)
- - [Manage users](./how_to_guides/setup/set_up_workspace#manage-users)
- - [Configure workspace settings](./how_to_guides/setup/set_up_workspace#configure-workspace-settings)
-- [Set up billing](./how_to_guides/setup/set_up_billing)
-- [Update invoice email, tax id and, business information](./how_to_guides/setup/update_business_info)
-- [Set up access control (enterprise only)](./how_to_guides/setup/set_up_access_control)
- - [Create a role](./how_to_guides/setup/set_up_access_control#create-a-role)
- - [Assign a role to a user](./how_to_guides/setup/set_up_access_control#assign-a-role-to-a-user)
-- [Set up resource tags](./how_to_guides/setup/set_up_resource_tags)
- - [Create a tag](./how_to_guides/setup/set_up_resource_tags#create-a-tag)
- - [Assign a tag to a resource](./how_to_guides/setup/set_up_resource_tags#assign-a-tag-to-a-resource)
- - [Delete a tag](./how_to_guides/setup/set_up_resource_tags#delete-a-tag)
- - [Filter resources by tags](./how_to_guides/setup/set_up_resource_tags#filter-resources-by-tags)
-
-## Tracing
-
-Get started with LangSmith's tracing features to start adding observability to your LLM applications.
-
-- [Annotate code for tracing](./how_to_guides/tracing/annotate_code)
- - [Use `@traceable`/`traceable`](./how_to_guides/tracing/annotate_code#use-traceable--traceable)
- - [Wrap the OpenAI API client](./how_to_guides/tracing/annotate_code#wrap-the-openai-client)
- - [Use the RunTree API](./how_to_guides/tracing/annotate_code#use-the-runtree-api)
- - [Use the `trace` context manager (Python only)](./how_to_guides/tracing/annotate_code#use-the-trace-context-manager-python-only)
-- [Toggle tracing on and off](./how_to_guides/tracing/toggle_tracing)
-- [Log traces to specific project](./how_to_guides/tracing/log_traces_to_project)
- - [Set the destination project statically](./how_to_guides/tracing/log_traces_to_project#set-the-destination-project-statically)
- - [Set the destination project dynamically](./how_to_guides/tracing/log_traces_to_project#set-the-destination-project-dynamically)
-- [Set a sampling rate for traces](./how_to_guides/tracing/sample_traces)
-- [Add metadata and tags to traces](./how_to_guides/tracing/add_metadata_tags)
-- [Implement distributed tracing](./how_to_guides/tracing/distributed_tracing)
- - [Distributed tracing in Python](./how_to_guides/tracing/distributed_tracing#distributed-tracing-in-python)
- - [Distributed tracing in TypeScript](./how_to_guides/tracing/distributed_tracing#distributed-tracing-in-typescript)
-- [Access the current span within a traced function](./how_to_guides/tracing/access_current_span)
-- [Log multimodal traces](./how_to_guides/tracing/log_multimodal_traces)
-- [Log retriever traces](./how_to_guides/tracing/log_retriever_trace)
-- [Log custom LLM traces](./how_to_guides/tracing/log_llm_trace)
- - [Chat-style models](./how_to_guides/tracing/log_llm_trace#chat-style-models)
- - [Specify model name](./how_to_guides/tracing/log_llm_trace#specify-model-name)
- - [Stream outputs](./how_to_guides/tracing/log_llm_trace#stream-outputs)
- - [Manually provide token counts](./how_to_guides/tracing/log_llm_trace#manually-provide-token-counts)
- - [Instruct-style models](./how_to_guides/tracing/log_llm_trace#instruct-style-models)
-- [Prevent logging of sensitive data in traces](./how_to_guides/tracing/mask_inputs_outputs)
- - [Rule-based masking of inputs and outputs](./how_to_guides/tracing/mask_inputs_outputs#rule-based-masking-of-inputs-and-outputs)
-- [Export traces](./how_to_guides/tracing/export_traces)
- - [Use filter arguments](./how_to_guides/tracing/export_traces#use-filter-arguments)
- - [Use filter query language](./how_to_guides/tracing/export_traces#use-filter-query-language)
-- [Share or unshare a trace publicly](./how_to_guides/tracing/share_trace)
-- [Compare traces](./how_to_guides/tracing/compare_traces)
-- [Trace generator functions](./how_to_guides/tracing/trace_generator_functions)
-- [Trace with `LangChain`](./how_to_guides/tracing/trace_with_langchain)
- - [Installation](./how_to_guides/tracing/trace_with_langchain#installation)
- - [Quick start](./how_to_guides/tracing/trace_with_langchain#quick-start)
- - [Trace selectively](./how_to_guides/tracing/trace_with_langchain#trace-selectively)
- - [Log to specific project](./how_to_guides/tracing/trace_with_langchain#log-to-specific-project)
- - [Add metadata and tags to traces](./how_to_guides/tracing/trace_with_langchain#add-metadata-and-tags-to-traces)
- - [Customize run name](./how_to_guides/tracing/trace_with_langchain#customize-run-name)
- - [Access run (span) ID for LangChain invocations](./how_to_guides/tracing/trace_with_langchain#access-run-span-id-for-langchain-invocations)
- - [Ensure all traces are submitted before exiting](./how_to_guides/tracing/trace_with_langchain#ensure-all-traces-are-submitted-before-exiting)
- - [Trace without setting environment variables](./how_to_guides/tracing/trace_with_langchain#trace-without-setting-environment-variables)
- - [Distributed tracing with LangChain (Python)](./how_to_guides/tracing/trace_with_langchain#distributed-tracing-with-langchain-python)
- - [Interoperability between LangChain (Python) and LangSmith SDK](./how_to_guides/tracing/trace_with_langchain#interoperability-between-langchain-python-and-langsmith-sdk)
- - [Interoperability between LangChain.JS and LangSmith SDK](./how_to_guides/tracing/trace_with_langchain#interoperability-between-langchainjs-and-langsmith-sdk)
-- [Trace with `LangGraph`](./how_to_guides/tracing/trace_with_langgraph)
- - [Interoperability between LangChain and LangGraph](./how_to_guides/tracing/trace_with_langgraph#with-langchain)
- - [Interoperability between `@traceable`/`traceable` and LangGraph](./how_to_guides/tracing/trace_with_langgraph#without-langchain)
-- [Trace with `Instructor` (Python only)](./how_to_guides/tracing/trace_with_instructor)
-- [Trace with the Vercel `AI SDK` (JS only)](./how_to_guides/tracing/trace_with_vercel_ai_sdk)
-- [Trace without setting environment variables](./how_to_guides/tracing/trace_without_env_vars)
-- [Trace using the LangSmith REST API](./how_to_guides/tracing/trace_with_api)
-- [Calculate token-based costs for traces](./how_to_guides/tracing/calculate_token_based_costs)
-- [Bulk Exporting Traces](./how_to_guides/tracing/data_export)
-
-## Datasets
-
-Manage datasets in LangSmith to evaluate and improve your LLM applications.
-
-- [Manage datasets in the application](./how_to_guides/datasets/manage_datasets_in_application)
- - [Create a new dataset and add examples manually](./how_to_guides/datasets/manage_datasets_in_application#create-a-new-dataset-and-add-examples-manually)
- - [Dataset schema validation](./how_to_guides/datasets/manage_datasets_in_application#dataset-schema-validation)
- - [Add inputs and outputs from traces to datasets](./how_to_guides/datasets/manage_datasets_in_application#add-inputs-and-outputs-from-traces-to-datasets)
- - [Upload a CSV file to create a dataset](./how_to_guides/datasets/manage_datasets_in_application#upload-a-csv-file-to-create-a-dataset)
- - [Generate synthetic examples](./how_to_guides/datasets/manage_datasets_in_application#generate-synthetic-examples)
- - [Export a dataset](./how_to_guides/datasets/manage_datasets_in_application#export-a-dataset)
- - [Create and manage dataset splits](./how_to_guides/datasets/manage_datasets_in_application#create-and-manage-dataset-splits)
-- [Manage datasets programmatically](./how_to_guides/datasets/manage_datasets_programmatically)
- - [Create a dataset from list of values](./how_to_guides/datasets/manage_datasets_programmatically#create-a-dataset-from-list-of-values)
- - [Create a dataset from traces](./how_to_guides/datasets/manage_datasets_programmatically#create-a-dataset-from-traces)
- - [Create a dataset from a CSV file](./how_to_guides/datasets/manage_datasets_programmatically#create-a-dataset-from-a-csv-file)
- - [Create a dataset from a pandas DataFrame](./how_to_guides/datasets/manage_datasets_programmatically#create-a-dataset-from-a-pandas-dataframe)
- - [Fetch datasets](./how_to_guides/datasets/manage_datasets_programmatically#fetch-datasets)
- - [Fetch examples](./how_to_guides/datasets/manage_datasets_programmatically#fetch-examples)
- - [Update examples](./how_to_guides/datasets/manage_datasets_programmatically#update-examples)
- - [Bulk update examples](./how_to_guides/datasets/manage_datasets_programmatically#bulk-update-examples)
-- [Version datasets](./how_to_guides/datasets/version_datasets)
- - [Create a new version of a dataset](./how_to_guides/datasets/version_datasets#create-a-new-version-of-a-dataset)
- - [Tag a version](./how_to_guides/datasets/version_datasets#tag-a-version)
-- [Share or unshare a dataset publicly](./how_to_guides/datasets/share_dataset)
-- [Index a dataset for few shot example selection](./how_to_guides/datasets/index_datasets_for_dynamic_few_shot_example_selection)
-
-## Evaluation
-
-Evaluate your LLM applications to measure their performance over time.
-
-- [Evaluate an LLM application](./how_to_guides/evaluation/evaluate_llm_application)
- - [Run an evaluation](./how_to_guides/evaluation/evaluate_llm_application#run-an-evaluation)
- - [Use custom evaluators](./how_to_guides/evaluation/evaluate_llm_application#use-custom-evaluators)
- - [Evaluate on a particular version of a dataset](./how_to_guides/evaluation/evaluate_llm_application#evaluate-on-a-particular-version-of-a-dataset)
- - [Evaluate on a subset of a dataset](./how_to_guides/evaluation/evaluate_llm_application#evaluate-on-a-subset-of-a-dataset)
- - [Evaluate on a dataset split](./how_to_guides/evaluation/evaluate_llm_application#evaluate-on-a-dataset-split)
- - [Evaluate on a dataset with repetitions](./how_to_guides/evaluation/evaluate_llm_application#evaluate-on-a-dataset-with-repetitions)
- - [Use a summary evaluator](./how_to_guides/evaluation/evaluate_llm_application#use-a-summary-evaluator)
- - [Evaluate a LangChain runnable](./how_to_guides/evaluation/evaluate_llm_application#evaluate-a-langchain-runnable)
- - [Return multiple scores](./how_to_guides/evaluation/evaluate_llm_application#return-multiple-scores)
-- [Bind an evaluator to a dataset in the UI](./how_to_guides/evaluation/bind_evaluator_to_dataset)
-- [Run an evaluation from the prompt playground](./how_to_guides/evaluation/run_evaluation_from_prompt_playground)
-- [Evaluate on intermediate steps](./how_to_guides/evaluation/evaluate_on_intermediate_steps)
-- [Use LangChain off-the-shelf evaluators (Python only)](./how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators)
- - [Use question and answer (correctness) evaluators](./how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators#use-question-and-answer-correctness-evaluators)
- - [Use criteria evaluators](./how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators#use-criteria-evaluators)
- - [Use labeled criteria evaluators](./how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators#use-labeled-criteria-evaluators)
- - [Use string or embedding distance metrics](./how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators#use-string-or-embedding-distance-metrics)
- - [Use a custom LLM in off-the-shelf evaluators](./how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators#use-a-custom-llm-in-off-the-shelf-evaluators)
- - [Handle multiple input or output fields](./how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators#handle-multiple-input-or-output-fields)
-- [Compare experiment results](./how_to_guides/evaluation/compare_experiment_results)
- - [Open the comparison view](./how_to_guides/evaluation/compare_experiment_results#open-the-comparison-view)
- - [View regressions and improvements](./how_to_guides/evaluation/compare_experiment_results#view-regressions-and-improvements)
- - [Filter on regressions or improvements](./how_to_guides/evaluation/compare_experiment_results#filter-on-regressions-or-improvements)
- - [Update baseline experiment](./how_to_guides/evaluation/compare_experiment_results#update-baseline-experiment)
- - [Select feedback key](./how_to_guides/evaluation/compare_experiment_results#select-feedback-key)
- - [Open a trace](./how_to_guides/evaluation/compare_experiment_results#open-a-trace)
- - [Expand detailed view](./how_to_guides/evaluation/compare_experiment_results#expand-detailed-view)
- - [Update display settings](./how_to_guides/evaluation/compare_experiment_results#update-display-settings)
-- [Filter experiments in the UI](./how_to_guides/evaluation/filter_experiments_ui)
-- [Evaluate an existing experiment](./how_to_guides/evaluation/evaluate_existing_experiment)
-- [Unit test LLM applications (Python only)](./how_to_guides/evaluation/unit_testing)
-- [Run pairwise evaluations](./how_to_guides/evaluation/evaluate_pairwise)
- - [Use the `evaluate_comparative` function](./how_to_guides/evaluation/evaluate_pairwise#use-the-evaluate_comparative-function)
- - [Configure inputs and outputs for pairwise evaluators](./how_to_guides/evaluation/evaluate_pairwise#configure-inputs-and-outputs-for-pairwise-evaluators)
- - [Compare two experiments with LLM-based pairwise evaluators](./how_to_guides/evaluation/evaluate_pairwise#compare-two-experiments-with-llm-based-pairwise-evaluators)
- - [View pairwise experiments](./how_to_guides/evaluation/evaluate_pairwise#view-pairwise-experiments)
-- [Audit evaluator scores](./how_to_guides/evaluation/audit_evaluator_scores)
- - [In the comparison view](./how_to_guides/evaluation/audit_evaluator_scores#in-the-comparison-view)
- - [In the runs table](./how_to_guides/evaluation/audit_evaluator_scores#in-the-runs-table)
- - [In the SDK](./how_to_guides/evaluation/audit_evaluator_scores#in-the-sdk)
-- [Create few-shot evaluators](./how_to_guides/evaluation/create_few_shot_evaluators)
- - [Create your evaluator](./how_to_guides/evaluation/create_few_shot_evaluators#create-your-evaluator)
- - [Make corrections](./how_to_guides/evaluation/create_few_shot_evaluators#make-corrections)
- - [View your corrections dataset](./how_to_guides/evaluation/create_few_shot_evaluators#view-your-corrections-dataset)
-- [Fetch performance metrics for an experiment](./how_to_guides/evaluation/fetch_perf_metrics_experiment)
-- [Run evals using the API only](./how_to_guides/evaluation/run_evals_api_only)
- - [Create a dataset](./how_to_guides/evaluation/run_evals_api_only#create-a-dataset)
- - [Run a single experiment](./how_to_guides/evaluation/run_evals_api_only#run-a-single-experiment)
- - [Run a pairwise experiment](./how_to_guides/evaluation/run_evals_api_only#run-a-pairwise-experiment)
-- [Upload experiments run outside of LangSmith with the REST API](./how_to_guides/evaluation/upload_existing_experiments)
- - [Request body schema](./how_to_guides/evaluation/upload_existing_experiments#request-body-schema)
- - [Considerations](./how_to_guides/evaluation/upload_existing_experiments#considerations)
- - [Example request](./how_to_guides/evaluation/upload_existing_experiments#example-request)
- - [View the experiment in the UI](./how_to_guides/evaluation/upload_existing_experiments#view-the-experiment-in-the-ui)
-
-## Human feedback
-
-Collect human feedback to improve your LLM applications.
-
-- [Capture user feedback from your application to traces](./how_to_guides/human_feedback/attach_user_feedback)
-- [Set up a new feedback criteria](./how_to_guides/human_feedback/set_up_feedback_criteria)
-- [Annotate traces inline](./how_to_guides/human_feedback/annotate_traces_inline)
-- [Use annotation queues](./how_to_guides/human_feedback/annotation_queues)
- - [Create an annotation queue](./how_to_guides/human_feedback/annotation_queues#create-an-annotation-queue)
- - [Assign runs to an annotation queue](./how_to_guides/human_feedback/annotation_queues#assign-runs-to-an-annotation-queue)
- - [Review runs in an annotation queue](./how_to_guides/human_feedback/annotation_queues#review-runs-in-an-annotation-queue)
-
-## Monitoring and automations
-
-Leverage LangSmith's powerful monitoring and automations features to make sense of your production data.
-
-- [Filter traces in the application](./how_to_guides/monitoring/filter_traces_in_application)
- - [Create a filter](./how_to_guides/monitoring/filter_traces_in_application#create-a-filter)
- - [Filter for intermediate runs (spans)](./how_to_guides/monitoring/filter_traces_in_application#filter-for-intermediate-runs-spans)
- - [Advanced: filter for intermediate runs (spans) on properties of the root](./how_to_guides/monitoring/filter_traces_in_application#advanced-filter-for-intermediate-runs-spans-on-properties-of-the-root)
- - [Advanced: filter for runs (spans) whose child runs have some attribute](./how_to_guides/monitoring/filter_traces_in_application#advanced-filter-for-runs-spans-whose-child-runs-have-some-attribute)
- - [Filter based on inputs and outputs](./how_to_guides/monitoring/filter_traces_in_application#filter-based-on-inputs-and-outputs)
- - [Filter based on input / output key-value pairs](./how_to_guides/monitoring/filter_traces_in_application#filter-based-on-input--output-key-value-pairs)
- - [Copy the filter](./how_to_guides/monitoring/filter_traces_in_application#copy-the-filter)
- - [Manually specify a raw query in LangSmith query language](./how_to_guides/monitoring/filter_traces_in_application#manually-specify-a-raw-query-in-langsmith-query-language)
- - [Use an AI Query to auto-generate a query](./how_to_guides/monitoring/filter_traces_in_application#use-an-ai-query-to-auto-generate-a-query)
-- [Use monitoring charts](./how_to_guides/monitoring/use_monitoring_charts)
- - [Change the time period](./how_to_guides/monitoring/use_monitoring_charts#change-the-time-period)
- - [Slice data by metadata or tag](./how_to_guides/monitoring/use_monitoring_charts#slice-data-by-metadata-or-tag)
- - [Drill down into specific subsets](./how_to_guides/monitoring/use_monitoring_charts#drill-down-into-specific-subsets)
-- [Create dashboards](./how_to_guides/monitoring/dashboards)
- - [Creating a new dashboard](./how_to_guides/monitoring/dashboards#creating-a-new-dashboard)
- - [Adding charts to your dashboard](./how_to_guides/monitoring/dashboards#adding-charts-to-your-dashboard)
- - [Filtering traces in your chart](./how_to_guides/monitoring/dashboards#filtering-traces-in-your-chart)
- - [Comparing data within a chart](./how_to_guides/monitoring/dashboards#comparing-data-within-a-chart)
- - [Chart display options](./how_to_guides/monitoring/dashboards#chart-display-options)
- - [Saving and managing charts](./how_to_guides/monitoring/dashboards#saving-and-managing-charts)
- - [View a chart in full screen](./how_to_guides/monitoring/dashboards#view-a-chart-in-full-screen)
- - [User journeys](./how_to_guides/monitoring/dashboards#user-journeys)
-- [Set up automation rules](./how_to_guides/monitoring/rules)
- - [Create a rule](./how_to_guides/monitoring/rules#create-a-rule)
- - [View logs for your automations](./how_to_guides/monitoring/rules#view-logs-for-your-automations)
-- [Set up online evaluations](./how_to_guides/monitoring/online_evaluations)
- - [Configure online evaluations](./how_to_guides/monitoring/online_evaluations#configure-online-evaluations)
- - [Set API keys](./how_to_guides/monitoring/online_evaluations#set-api-keys)
-- [Set up webhook notifications for rules](./how_to_guides/monitoring/webhooks)
- - [Webhook payload](./how_to_guides/monitoring/webhooks#webhook-payload)
- - [Example with Modal](./how_to_guides/monitoring/webhooks#example-with-modal)
-- [Set up threads](./how_to_guides/monitoring/threads)
- - [Group traces into threads](./how_to_guides/monitoring/threads#group-traces-into-threads)
- - [View threads](./how_to_guides/monitoring/threads#view-threads)
-
-## Prompts
-
-Organize and manage prompts in LangSmith to streamline your LLM development workflow.
-
-- [Create a prompt](./how_to_guides/prompts/create_a_prompt)
- - [Compose your prompt](./how_to_guides/prompts/create_a_prompt#compose-your-prompt)
- - [Save your prompt](./how_to_guides/prompts/create_a_prompt#save-your-prompt)
- - [View your prompts](./how_to_guides/prompts/create_a_prompt#view-your-prompts)
- - [Add metadata](./how_to_guides/prompts/create_a_prompt#add-metadata)
-- [Update a prompt](./how_to_guides/prompts/update_a_prompt)
- - [Update metadata](./how_to_guides/prompts/update_a_prompt#update-metadata)
- - [Update the prompt content](./how_to_guides/prompts/update_a_prompt#update-the-prompt-content)
- - [Version a prompt](./how_to_guides/prompts/update_a_prompt#versioning)
-- [Manage prompts programmatically](./how_to_guides/prompts/manage_prompts_programatically)
- - [Configure environment variables](./how_to_guides/prompts/manage_prompts_programatically#configure_environment_variables)
- - [Push a prompt](./how_to_guides/prompts/manage_prompts_programatically#push_a_prompt)
- - [Pull a prompt](./how_to_guides/prompts/manage_prompts_programatically#pull_a_prompt)
- - [Use a prompt without LangChain](./how_to_guides/prompts/manage_prompts_programatically#use_a_prompt_without_langchain)
- - [List, delete, and like prompts](./how_to_guides/prompts/manage_prompts_programatically#list_delete_and_like_prompts)
-- [Prompt tags](./how_to_guides/prompts/prompt_tags)
- - [Create a tag](./how_to_guides/prompts/prompt_tags#create_a_tag)
- - [Move a tag](./how_to_guides/prompts/prompt_tags#move_a_tag)
- - [Delete a tag](./how_to_guides/prompts/prompt_tags#delete_a_tag)
- - [Using tags in code](./how_to_guides/prompts/prompt_tags#using_tags_in_code)
- - [Common use cases](./how_to_guides/prompts/prompt_tags#common_use_cases)
-- [LangChain Hub](./how_to_guides/prompts/langchain_hub)
-
-## Playground
-
-Quickly iterate on prompts and models in the LangSmith Playground.
-
-- [Use custom TLS certificates](./how_to_guides/playground/custom_tls_certificates)
-- [Use a custom model](./how_to_guides/playground/custom_endpoint)
-- [Save settings configuration](./how_to_guides/playground/save_model_configuration)
diff --git a/docs/how_to_guides/monitoring/index.mdx b/docs/how_to_guides/monitoring/index.mdx
deleted file mode 100644
index 9dc522d70..000000000
--- a/docs/how_to_guides/monitoring/index.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-# How-to guides: Monitoring and automations
-
-This section contains how-to guides related to monitoring and automations.
-
-import DocCardList from "@theme/DocCardList";
-
-
diff --git a/docs/how_to_guides/playground/index.mdx b/docs/how_to_guides/playground/index.mdx
deleted file mode 100644
index c637251cf..000000000
--- a/docs/how_to_guides/playground/index.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-# How-to guides: Playground
-
-This section contains how-to guides related to the LangSmith playground.
-
-import DocCardList from "@theme/DocCardList";
-
-
diff --git a/docs/how_to_guides/prompts/index.mdx b/docs/how_to_guides/prompts/index.mdx
deleted file mode 100644
index f227baf2f..000000000
--- a/docs/how_to_guides/prompts/index.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-# How-to guides: Prompts
-
-This section contains how-to guides related to prompt management in LangSmith.
-
-import DocCardList from "@theme/DocCardList";
-
-
diff --git a/docs/how_to_guides/setup/index.mdx b/docs/how_to_guides/setup/index.mdx
deleted file mode 100644
index 307271330..000000000
--- a/docs/how_to_guides/setup/index.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-# How-to guides: Setup
-
-This section contains how-to guides related to setting up LangSmith.
-
-import DocCardList from "@theme/DocCardList";
-
-
diff --git a/docs/how_to_guides/static/add_evaluator.png b/docs/how_to_guides/static/add_evaluator.png
deleted file mode 100644
index 911bee1ad..000000000
Binary files a/docs/how_to_guides/static/add_evaluator.png and /dev/null differ
diff --git a/docs/how_to_guides/static/add_tag.png b/docs/how_to_guides/static/add_tag.png
deleted file mode 100644
index 21eadbb21..000000000
Binary files a/docs/how_to_guides/static/add_tag.png and /dev/null differ
diff --git a/docs/how_to_guides/static/annotation_queue.png b/docs/how_to_guides/static/annotation_queue.png
deleted file mode 100644
index 13cd513d5..000000000
Binary files a/docs/how_to_guides/static/annotation_queue.png and /dev/null differ
diff --git a/docs/how_to_guides/static/blank_new_chart.png b/docs/how_to_guides/static/blank_new_chart.png
deleted file mode 100644
index a3b914162..000000000
Binary files a/docs/how_to_guides/static/blank_new_chart.png and /dev/null differ
diff --git a/docs/how_to_guides/static/chart_filters_dropdown.png b/docs/how_to_guides/static/chart_filters_dropdown.png
deleted file mode 100644
index 411e2a57e..000000000
Binary files a/docs/how_to_guides/static/chart_filters_dropdown.png and /dev/null differ
diff --git a/docs/how_to_guides/static/choose_handle.png b/docs/how_to_guides/static/choose_handle.png
deleted file mode 100644
index a2e62a172..000000000
Binary files a/docs/how_to_guides/static/choose_handle.png and /dev/null differ
diff --git a/docs/how_to_guides/static/evaluator.png b/docs/how_to_guides/static/evaluator.png
deleted file mode 100644
index a85e593d7..000000000
Binary files a/docs/how_to_guides/static/evaluator.png and /dev/null differ
diff --git a/docs/how_to_guides/static/evaluator_options.png b/docs/how_to_guides/static/evaluator_options.png
deleted file mode 100644
index 7d468e836..000000000
Binary files a/docs/how_to_guides/static/evaluator_options.png and /dev/null differ
diff --git a/docs/how_to_guides/static/filter_rule.png b/docs/how_to_guides/static/filter_rule.png
deleted file mode 100644
index 80210fab7..000000000
Binary files a/docs/how_to_guides/static/filter_rule.png and /dev/null differ
diff --git a/docs/how_to_guides/static/free_tier_set_usage_limit.png b/docs/how_to_guides/static/free_tier_set_usage_limit.png
deleted file mode 100644
index 4b718ffe3..000000000
Binary files a/docs/how_to_guides/static/free_tier_set_usage_limit.png and /dev/null differ
diff --git a/docs/how_to_guides/static/manage_automations.png b/docs/how_to_guides/static/manage_automations.png
deleted file mode 100644
index 66f3c3351..000000000
Binary files a/docs/how_to_guides/static/manage_automations.png and /dev/null differ
diff --git a/docs/how_to_guides/static/monitoring.png b/docs/how_to_guides/static/monitoring.png
deleted file mode 100644
index 4eba25598..000000000
Binary files a/docs/how_to_guides/static/monitoring.png and /dev/null differ
diff --git a/docs/how_to_guides/static/new_chart.png b/docs/how_to_guides/static/new_chart.png
deleted file mode 100644
index b44095003..000000000
Binary files a/docs/how_to_guides/static/new_chart.png and /dev/null differ
diff --git a/docs/how_to_guides/static/notes.png b/docs/how_to_guides/static/notes.png
deleted file mode 100644
index a4459c76d..000000000
Binary files a/docs/how_to_guides/static/notes.png and /dev/null differ
diff --git a/docs/how_to_guides/static/optimization-negative.png b/docs/how_to_guides/static/optimization-negative.png
deleted file mode 100644
index 42634dbbd..000000000
Binary files a/docs/how_to_guides/static/optimization-negative.png and /dev/null differ
diff --git a/docs/how_to_guides/static/optimization-positive.png b/docs/how_to_guides/static/optimization-positive.png
deleted file mode 100644
index b54b0aaa9..000000000
Binary files a/docs/how_to_guides/static/optimization-positive.png and /dev/null differ
diff --git a/docs/how_to_guides/static/playground_chat_prompt.png b/docs/how_to_guides/static/playground_chat_prompt.png
deleted file mode 100644
index 7a2d3fc91..000000000
Binary files a/docs/how_to_guides/static/playground_chat_prompt.png and /dev/null differ
diff --git a/docs/how_to_guides/static/queue_buttons.png b/docs/how_to_guides/static/queue_buttons.png
deleted file mode 100644
index 601318279..000000000
Binary files a/docs/how_to_guides/static/queue_buttons.png and /dev/null differ
diff --git a/docs/how_to_guides/static/queue_side.png b/docs/how_to_guides/static/queue_side.png
deleted file mode 100644
index fa1463e1c..000000000
Binary files a/docs/how_to_guides/static/queue_side.png and /dev/null differ
diff --git a/docs/how_to_guides/static/time_period.png b/docs/how_to_guides/static/time_period.png
deleted file mode 100644
index 1d19fb921..000000000
Binary files a/docs/how_to_guides/static/time_period.png and /dev/null differ
diff --git a/docs/how_to_guides/static/view_results.png b/docs/how_to_guides/static/view_results.png
deleted file mode 100644
index 9c032b279..000000000
Binary files a/docs/how_to_guides/static/view_results.png and /dev/null differ
diff --git a/docs/how_to_guides/static/view_run.png b/docs/how_to_guides/static/view_run.png
deleted file mode 100644
index 5aed14362..000000000
Binary files a/docs/how_to_guides/static/view_run.png and /dev/null differ
diff --git a/docs/how_to_guides/tracing/index.mdx b/docs/how_to_guides/tracing/index.mdx
deleted file mode 100644
index 030e11823..000000000
--- a/docs/how_to_guides/tracing/index.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-# How-to guides: Tracing
-
-This section contains how-to guides related to tracing.
-
-import DocCardList from "@theme/DocCardList";
-
-
diff --git a/docs/index.mdx b/docs/index.mdx
index 92d9932bb..ba8df16df 100644
--- a/docs/index.mdx
+++ b/docs/index.mdx
@@ -62,11 +62,11 @@ To create an API key head to the
- View a [sample output trace](https://smith.langchain.com/public/b37ca9b1-60cd-4a2a-817e-3c4e4443fdc0/r).
-- Learn more about tracing in the [how-to guides](./how_to_guides/index.md).
+- Learn more about tracing in the [how-to guides](./observability/how_to_guides/index.md).
## 5. Run your first evaluation
@@ -182,4 +182,4 @@ experiment_results = evaluate(
groupId="client-language"
/>
-- Learn more about evaluation in the [how-to guides](./how_to_guides/index.md).
+- Learn more about evaluation in the [how-to guides](./evaluation/how_to_guides/index.md).
diff --git a/docs/langgraph_cloud.mdx b/docs/langgraph_cloud.mdx
index bed8bf116..446c70ea6 100644
--- a/docs/langgraph_cloud.mdx
+++ b/docs/langgraph_cloud.mdx
@@ -1,5 +1,5 @@
---
-sidebar_label: LangGraph Cloud
+sidebar_label: Deployment (LangGraph Cloud)
---
# LangGraph Cloud
diff --git a/docs/concepts/tracing/tracing.mdx b/docs/observability/concepts/index.mdx
similarity index 83%
rename from docs/concepts/tracing/tracing.mdx
rename to docs/observability/concepts/index.mdx
index 7a712954c..b4acdff7f 100644
--- a/docs/concepts/tracing/tracing.mdx
+++ b/docs/observability/concepts/index.mdx
@@ -1,15 +1,15 @@
import { RegionalUrl } from "@site/src/components/RegionalUrls";
import ThemedImage from "@theme/ThemedImage";
-# Tracing
+# Concepts
This conceptual guide covers topics that are important to understand when logging traces to LangSmith. A `Trace` is essentially a series of steps that your application takes to go from input to output. Each of these individual steps is represented by a `Run`. A `Project` is simply a collection of traces. The following diagram displays these concepts in the context of a simple RAG app, which retrieves documents from an index and generates an answer.
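The Run/Trace/Project hierarchy described above can be sketched with plain Python dataclasses. This is illustrative only; these are not the SDK's actual classes, and the names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Run:
    """One step of the application (retrieval, LLM call, etc.)."""
    name: str
    run_type: str                       # e.g. "retriever", "llm", "chain"
    child_runs: list["Run"] = field(default_factory=list)

# A trace is the tree rooted at a top-level run; a project groups traces.
rag_trace = Run("rag_pipeline", "chain", child_runs=[
    Run("retrieve_docs", "retriever"),
    Run("generate_answer", "llm"),
])
project = {"name": "default", "traces": [rag_trace]}
```

In the RAG app from the diagram, the retriever step and the LLM call are each a `Run`, and together they form one trace inside the `default` project.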
, click on **+ Add Rule**, then **Project Rule**.
@@ -31,19 +31,19 @@ _Alternatively_, you can access rules in settings by navigating to
-
+
## Use the `trace` context manager (Python only)
diff --git a/docs/how_to_guides/tracing/calculate_token_based_costs.mdx b/docs/observability/how_to_guides/tracing/calculate_token_based_costs.mdx
similarity index 98%
rename from docs/how_to_guides/tracing/calculate_token_based_costs.mdx
rename to docs/observability/how_to_guides/tracing/calculate_token_based_costs.mdx
index 8db35b371..d25f404a3 100644
--- a/docs/how_to_guides/tracing/calculate_token_based_costs.mdx
+++ b/docs/observability/how_to_guides/tracing/calculate_token_based_costs.mdx
@@ -45,11 +45,11 @@ Here, you can set the cost per token for each model and provider combination. Th
Several default entries for OpenAI models are already present in the model pricing map, which you can clone and modify as needed.
-
+
To create a _new entry_ in the model pricing map, click on the `Add new model` button in the top right corner.
-
+
Here, you can specify the following fields:
@@ -134,4 +134,4 @@ This information matches the model pricing map entry we set up earlier.
The trace produced will contain the token-based costs based on the token counts provided in the LLM invocation and the model pricing map entry.
-
+
diff --git a/docs/how_to_guides/tracing/compare_traces.mdx b/docs/observability/how_to_guides/tracing/compare_traces.mdx
similarity index 77%
rename from docs/how_to_guides/tracing/compare_traces.mdx
rename to docs/observability/how_to_guides/tracing/compare_traces.mdx
index 0d94cb8b8..322906dfe 100644
--- a/docs/how_to_guides/tracing/compare_traces.mdx
+++ b/docs/observability/how_to_guides/tracing/compare_traces.mdx
@@ -8,14 +8,14 @@ import { RegionalUrl } from "@site/src/components/RegionalUrls";
To compare traces, click on the **Compare** button in the upper right hand side of any trace view.
-
+
This will show the trace run table. Select the trace you want to compare against the original trace.
-
+
The pane will open with both traces selected in a side-by-side comparison view.
-
+
To stop comparing, close the pane or click on **Stop comparing** in the upper right hand side of the pane.
diff --git a/docs/how_to_guides/tracing/data_export.mdx b/docs/observability/how_to_guides/tracing/data_export.mdx
similarity index 99%
rename from docs/how_to_guides/tracing/data_export.mdx
rename to docs/observability/how_to_guides/tracing/data_export.mdx
index 9e3c6527a..69ffbc38e 100644
--- a/docs/how_to_guides/tracing/data_export.mdx
+++ b/docs/observability/how_to_guides/tracing/data_export.mdx
@@ -19,7 +19,7 @@ Bulk exports also have a runtime timeout of 24 hours.
Currently we support exporting to an S3 bucket, or an S3 API-compatible bucket, that you provide. The data will be exported in
[Parquet](https://parquet.apache.org/docs/overview/) columnar format. This format will allow you to easily import the data into
-other systems. The data export will contain equivalent data fields as the [Run data format](../../reference/data_formats/run_data_format).
+other systems. The data export will contain equivalent data fields as the [Run data format](/reference/data_formats/run_data_format).
## Exporting Data
diff --git a/docs/how_to_guides/tracing/distributed_tracing.mdx b/docs/observability/how_to_guides/tracing/distributed_tracing.mdx
similarity index 100%
rename from docs/how_to_guides/tracing/distributed_tracing.mdx
rename to docs/observability/how_to_guides/tracing/distributed_tracing.mdx
diff --git a/docs/how_to_guides/tracing/export_traces.mdx b/docs/observability/how_to_guides/tracing/export_traces.mdx
similarity index 97%
rename from docs/how_to_guides/tracing/export_traces.mdx
rename to docs/observability/how_to_guides/tracing/export_traces.mdx
index fc36064c4..ec2cc1958 100644
--- a/docs/how_to_guides/tracing/export_traces.mdx
+++ b/docs/observability/how_to_guides/tracing/export_traces.mdx
@@ -14,9 +14,9 @@ import { RegionalUrl } from "@site/src/components/RegionalUrls";
:::tip Recommended Reading
Before diving into this content, it might be helpful to read the following:
-- [Run (span) data format](../../reference/data_formats/run_data_format)
+- [Run (span) data format](/reference/data_formats/run_data_format)
-
-- [LangSmith trace query syntax](../../reference/data_formats/trace_query_syntax)
+- [LangSmith trace query syntax](/reference/data_formats/trace_query_syntax)
:::
@@ -27,11 +27,11 @@ handle large data volumes and will support automatic retries, and parallelization
The recommended way to query runs (the span data in LangSmith traces) is to use the `list_runs` method in the SDK or `/runs/query` endpoint in the API.
-LangSmith stores traces in a simple format that is specified in the [Run (span) data format](../../reference/data_formats/run_data_format).
+LangSmith stores traces in a simple format that is specified in the [Run (span) data format](/reference/data_formats/run_data_format).
## Use filter arguments
-For simple queries, you don't have to rely on our query syntax. You can use the filter arguments specified in the [filter arguments reference](../../reference/data_formats/trace_query_syntax#filter-arguments).
+For simple queries, you don't have to rely on our query syntax. You can use the filter arguments specified in the [filter arguments reference](/reference/data_formats/trace_query_syntax#filter-arguments).
:::important Prerequisites
Initialize the client before running the below code snippets.
@@ -155,11 +155,11 @@ for await (const run of client.listRuns({
## Use filter query language
-For more complex queries, you can use the query language described in the [filter query language reference](../../reference/data_formats/trace_query_syntax#filter-query-language).
+For more complex queries, you can use the query language described in the [filter query language reference](/reference/data_formats/trace_query_syntax#filter-query-language).
### List all root runs in a conversational thread
-This is the way to fetch runs in a conversational thread. For more information on setting up threads, refer to our [how-to guide on setting up threads](/how_to_guides/monitoring/threads).
+This is how to fetch runs in a conversational thread. For more information on setting up threads, refer to our [how-to guide on setting up threads](../monitoring/threads).
Threads are grouped by setting a shared thread ID. The LangSmith UI lets you use any one of the following three metadata keys: `session_id`, `conversation_id`, or `thread_id`. The following query matches on any of them.
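As a sketch, a filter string matching any of those three metadata keys might be assembled like this (the key names come from the paragraph above; the `build_thread_filter` helper is hypothetical, and client usage is not shown):

```python
def build_thread_filter(thread_id: str) -> str:
    # Match runs whose thread-grouping metadata (any of the three keys
    # the LangSmith UI recognizes) equals the given thread ID.
    keys = '["session_id", "conversation_id", "thread_id"]'
    return f'and(in(metadata_key, {keys}), eq(metadata_value, "{thread_id}"))'

filter_string = build_thread_filter("thread-1234")
# Pass this as the `filter` argument to client.list_runs(...), together
# with is_root=True, to fetch only the top-level runs of the conversation.
```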
**Shared URLs** or , then click on **Unshare** next to the trace you want to unshare.
- 
+ 
diff --git a/docs/how_to_guides/static/annotate_code_trace.gif b/docs/observability/how_to_guides/tracing/static/annotate_code_trace.gif
similarity index 100%
rename from docs/how_to_guides/static/annotate_code_trace.gif
rename to docs/observability/how_to_guides/tracing/static/annotate_code_trace.gif
diff --git a/docs/how_to_guides/static/chat_model.png b/docs/observability/how_to_guides/tracing/static/chat_model.png
similarity index 100%
rename from docs/how_to_guides/static/chat_model.png
rename to docs/observability/how_to_guides/tracing/static/chat_model.png
diff --git a/docs/how_to_guides/static/compare_traces/compare_button.png b/docs/observability/how_to_guides/tracing/static/compare_traces/compare_button.png
similarity index 100%
rename from docs/how_to_guides/static/compare_traces/compare_button.png
rename to docs/observability/how_to_guides/tracing/static/compare_traces/compare_button.png
diff --git a/docs/how_to_guides/static/compare_traces/compare_trace.png b/docs/observability/how_to_guides/tracing/static/compare_traces/compare_trace.png
similarity index 100%
rename from docs/how_to_guides/static/compare_traces/compare_trace.png
rename to docs/observability/how_to_guides/tracing/static/compare_traces/compare_trace.png
diff --git a/docs/how_to_guides/static/compare_traces/select_trace.png b/docs/observability/how_to_guides/tracing/static/compare_traces/select_trace.png
similarity index 100%
rename from docs/how_to_guides/static/compare_traces/select_trace.png
rename to docs/observability/how_to_guides/tracing/static/compare_traces/select_trace.png
diff --git a/docs/how_to_guides/static/hello_llm.png b/docs/observability/how_to_guides/tracing/static/hello_llm.png
similarity index 100%
rename from docs/how_to_guides/static/hello_llm.png
rename to docs/observability/how_to_guides/tracing/static/hello_llm.png
diff --git a/docs/how_to_guides/static/hide_inputs_outputs.png b/docs/observability/how_to_guides/tracing/static/hide_inputs_outputs.png
similarity index 100%
rename from docs/how_to_guides/static/hide_inputs_outputs.png
rename to docs/observability/how_to_guides/tracing/static/hide_inputs_outputs.png
diff --git a/docs/how_to_guides/static/langchain_trace.png b/docs/observability/how_to_guides/tracing/static/langchain_trace.png
similarity index 100%
rename from docs/how_to_guides/static/langchain_trace.png
rename to docs/observability/how_to_guides/tracing/static/langchain_trace.png
diff --git a/docs/how_to_guides/tracing/static/langgraph_with_langchain_trace.png b/docs/observability/how_to_guides/tracing/static/langgraph_with_langchain_trace.png
similarity index 100%
rename from docs/how_to_guides/tracing/static/langgraph_with_langchain_trace.png
rename to docs/observability/how_to_guides/tracing/static/langgraph_with_langchain_trace.png
diff --git a/docs/how_to_guides/tracing/static/langgraph_without_langchain_trace.png b/docs/observability/how_to_guides/tracing/static/langgraph_without_langchain_trace.png
similarity index 100%
rename from docs/how_to_guides/tracing/static/langgraph_without_langchain_trace.png
rename to docs/observability/how_to_guides/tracing/static/langgraph_without_langchain_trace.png
diff --git a/docs/how_to_guides/static/model_costs.png b/docs/observability/how_to_guides/tracing/static/model_costs.png
similarity index 100%
rename from docs/how_to_guides/static/model_costs.png
rename to docs/observability/how_to_guides/tracing/static/model_costs.png
diff --git a/docs/how_to_guides/static/model_price_map.png b/docs/observability/how_to_guides/tracing/static/model_price_map.png
similarity index 100%
rename from docs/how_to_guides/static/model_price_map.png
rename to docs/observability/how_to_guides/tracing/static/model_price_map.png
diff --git a/docs/how_to_guides/static/multimodal.png b/docs/observability/how_to_guides/tracing/static/multimodal.png
similarity index 100%
rename from docs/how_to_guides/static/multimodal.png
rename to docs/observability/how_to_guides/tracing/static/multimodal.png
diff --git a/docs/how_to_guides/static/new_price_map_entry.png b/docs/observability/how_to_guides/tracing/static/new_price_map_entry.png
similarity index 100%
rename from docs/how_to_guides/static/new_price_map_entry.png
rename to docs/observability/how_to_guides/tracing/static/new_price_map_entry.png
diff --git a/docs/how_to_guides/static/retriever_trace.png b/docs/observability/how_to_guides/tracing/static/retriever_trace.png
similarity index 100%
rename from docs/how_to_guides/static/retriever_trace.png
rename to docs/observability/how_to_guides/tracing/static/retriever_trace.png
diff --git a/docs/how_to_guides/static/share_trace.png b/docs/observability/how_to_guides/tracing/static/share_trace.png
similarity index 100%
rename from docs/how_to_guides/static/share_trace.png
rename to docs/observability/how_to_guides/tracing/static/share_trace.png
diff --git a/docs/how_to_guides/tracing/static/trace_tree_manual_tracing.png b/docs/observability/how_to_guides/tracing/static/trace_tree_manual_tracing.png
similarity index 100%
rename from docs/how_to_guides/tracing/static/trace_tree_manual_tracing.png
rename to docs/observability/how_to_guides/tracing/static/trace_tree_manual_tracing.png
diff --git a/docs/how_to_guides/static/trace_tree_python_interop.png b/docs/observability/how_to_guides/tracing/static/trace_tree_python_interop.png
similarity index 100%
rename from docs/how_to_guides/static/trace_tree_python_interop.png
rename to docs/observability/how_to_guides/tracing/static/trace_tree_python_interop.png
diff --git a/docs/how_to_guides/static/unshare_trace.png b/docs/observability/how_to_guides/tracing/static/unshare_trace.png
similarity index 100%
rename from docs/how_to_guides/static/unshare_trace.png
rename to docs/observability/how_to_guides/tracing/static/unshare_trace.png
diff --git a/docs/observability/how_to_guides/tracing/static/unshare_trace_list.png b/docs/observability/how_to_guides/tracing/static/unshare_trace_list.png
new file mode 100644
index 000000000..6b5f1f55e
Binary files /dev/null and b/docs/observability/how_to_guides/tracing/static/unshare_trace_list.png differ
diff --git a/docs/how_to_guides/tracing/static/vercel_ai_sdk_trace.png b/docs/observability/how_to_guides/tracing/static/vercel_ai_sdk_trace.png
similarity index 100%
rename from docs/how_to_guides/tracing/static/vercel_ai_sdk_trace.png
rename to docs/observability/how_to_guides/tracing/static/vercel_ai_sdk_trace.png
diff --git a/docs/how_to_guides/tracing/toggle_tracing.mdx b/docs/observability/how_to_guides/tracing/toggle_tracing.mdx
similarity index 100%
rename from docs/how_to_guides/tracing/toggle_tracing.mdx
rename to docs/observability/how_to_guides/tracing/toggle_tracing.mdx
diff --git a/docs/how_to_guides/tracing/trace_generator_functions.mdx b/docs/observability/how_to_guides/tracing/trace_generator_functions.mdx
similarity index 100%
rename from docs/how_to_guides/tracing/trace_generator_functions.mdx
rename to docs/observability/how_to_guides/tracing/trace_generator_functions.mdx
diff --git a/docs/how_to_guides/tracing/trace_with_api.mdx b/docs/observability/how_to_guides/tracing/trace_with_api.mdx
similarity index 96%
rename from docs/how_to_guides/tracing/trace_with_api.mdx
rename to docs/observability/how_to_guides/tracing/trace_with_api.mdx
index 135273ea8..a7ce091fe 100644
--- a/docs/how_to_guides/tracing/trace_with_api.mdx
+++ b/docs/observability/how_to_guides/tracing/trace_with_api.mdx
@@ -84,4 +84,4 @@ patch_run(child_run_id, chat_completion.dict())
patch_run(parent_run_id, {"answer": chat_completion.choices[0].message.content})
```
-See the doc on the [Run (span) data format](../../reference/data_formats/run_data_format) for more information.
+See the doc on the [Run (span) data format](/reference/data_formats/run_data_format) for more information.
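As a hedged sketch of the payload side, a minimal run body might be assembled like this before being sent with the `post_run`/`patch_run` helpers from the snippet above (the `build_run_body` helper is hypothetical; field names follow the Run data format referenced above, which documents the full set of accepted fields):

```python
import uuid
from datetime import datetime, timezone

def build_run_body(name: str, inputs: dict, run_type: str = "llm") -> dict:
    # Minimal body for creating a run via the API; see the Run (span)
    # data format reference for all accepted fields.
    return {
        "id": str(uuid.uuid4()),
        "name": name,
        "run_type": run_type,
        "inputs": inputs,
        "start_time": datetime.now(timezone.utc).isoformat(),
    }

body = build_run_body("ChatOpenAI", {"messages": [{"role": "user", "content": "Hi"}]})
```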
diff --git a/docs/how_to_guides/tracing/trace_with_instructor.mdx b/docs/observability/how_to_guides/tracing/trace_with_instructor.mdx
similarity index 100%
rename from docs/how_to_guides/tracing/trace_with_instructor.mdx
rename to docs/observability/how_to_guides/tracing/trace_with_instructor.mdx
diff --git a/docs/how_to_guides/tracing/trace_with_langchain.mdx b/docs/observability/how_to_guides/tracing/trace_with_langchain.mdx
similarity index 98%
rename from docs/how_to_guides/tracing/trace_with_langchain.mdx
rename to docs/observability/how_to_guides/tracing/trace_with_langchain.mdx
index b45d58711..273fb32d4 100644
--- a/docs/how_to_guides/tracing/trace_with_langchain.mdx
+++ b/docs/observability/how_to_guides/tracing/trace_with_langchain.mdx
@@ -44,7 +44,7 @@ No extra code is needed to log a trace to LangSmith. Just run your LangChain cod
By default, the trace will be logged to the project with the name `default`. An example of a trace logged using the above code is made public and can be viewed [here](https://smith.langchain.com/public/e6a46eb2-d785-4804-a1e3-23f167a04300/r).
-
+
## Trace selectively
@@ -84,7 +84,7 @@ await chain.invoke(
### Statically
-As mentioned in the [tracing conceptual guide](../../concepts/tracing) LangSmith uses the concept of a Project to group traces. If left unspecified, the tracer project is set to default. You can set the `LANGCHAIN_PROJECT` environment variable to configure a custom project name for an entire application run. This should be done before executing your application.
+As mentioned in the [tracing conceptual guide](../../concepts), LangSmith uses the concept of a Project to group traces. If left unspecified, the tracer project is set to `default`. You can set the `LANGCHAIN_PROJECT` environment variable to configure a custom project name for an entire application run. This should be done before executing your application.
```shell
export LANGCHAIN_PROJECT=my-project
@@ -433,7 +433,7 @@ invoke_runnnable("Can you summarize this morning's meetings?", "During this morn
```
This will produce the following trace tree:
-
+
## Interoperability between LangChain.JS and LangSmith SDK
diff --git a/docs/how_to_guides/tracing/trace_with_langgraph.mdx b/docs/observability/how_to_guides/tracing/trace_with_langgraph.mdx
similarity index 100%
rename from docs/how_to_guides/tracing/trace_with_langgraph.mdx
rename to docs/observability/how_to_guides/tracing/trace_with_langgraph.mdx
diff --git a/docs/how_to_guides/tracing/trace_with_vercel_ai_sdk.mdx b/docs/observability/how_to_guides/tracing/trace_with_vercel_ai_sdk.mdx
similarity index 100%
rename from docs/how_to_guides/tracing/trace_with_vercel_ai_sdk.mdx
rename to docs/observability/how_to_guides/tracing/trace_with_vercel_ai_sdk.mdx
diff --git a/docs/how_to_guides/tracing/trace_without_env_vars.mdx b/docs/observability/how_to_guides/tracing/trace_without_env_vars.mdx
similarity index 100%
rename from docs/how_to_guides/tracing/trace_without_env_vars.mdx
rename to docs/observability/how_to_guides/tracing/trace_without_env_vars.mdx
diff --git a/docs/observability/tutorials/index.mdx b/docs/observability/tutorials/index.mdx
new file mode 100644
index 000000000..5c0d5daa0
--- /dev/null
+++ b/docs/observability/tutorials/index.mdx
@@ -0,0 +1,5 @@
+# Observability tutorials
+
+New to LangSmith or to LLM app development in general? Read this material to quickly get up and running.
+
+- [Add observability to your LLM application](./observability)
diff --git a/docs/tutorials/Developers/observability.mdx b/docs/observability/tutorials/observability.mdx
similarity index 98%
rename from docs/tutorials/Developers/observability.mdx
rename to docs/observability/tutorials/observability.mdx
index 1952f8018..d09164d89 100644
--- a/docs/tutorials/Developers/observability.mdx
+++ b/docs/observability/tutorials/observability.mdx
@@ -538,8 +538,4 @@ This will lead you back to the runs table with a filtered view:
In this tutorial you saw how to set up your LLM application with best-in-class observability.
No matter what stage your application is in, you will still benefit from observability.
-If you have more in-depth questions about observability, check out the [how-to section](../../how_to_guides) for guides on topics like testing, prompt management, and more.
-
-Observability is not the only thing LangSmith can help with!
-It can also help with evaluation, optimization, and more!
-Check out the [other tutorials](../../tutorials) to see how to get started with those.
+If you have more in-depth questions about observability, check out the [how-to section](../how_to_guides) for guides on topics like testing, prompt management, and more.
diff --git a/docs/tutorials/Developers/static/tracing_tutorial_chain.png b/docs/observability/tutorials/static/tracing_tutorial_chain.png
similarity index 100%
rename from docs/tutorials/Developers/static/tracing_tutorial_chain.png
rename to docs/observability/tutorials/static/tracing_tutorial_chain.png
diff --git a/docs/tutorials/Developers/static/tracing_tutorial_feedback.png b/docs/observability/tutorials/static/tracing_tutorial_feedback.png
similarity index 100%
rename from docs/tutorials/Developers/static/tracing_tutorial_feedback.png
rename to docs/observability/tutorials/static/tracing_tutorial_feedback.png
diff --git a/docs/tutorials/Developers/static/tracing_tutorial_filtering.png b/docs/observability/tutorials/static/tracing_tutorial_filtering.png
similarity index 100%
rename from docs/tutorials/Developers/static/tracing_tutorial_filtering.png
rename to docs/observability/tutorials/static/tracing_tutorial_filtering.png
diff --git a/docs/tutorials/Developers/static/tracing_tutorial_metadata.png b/docs/observability/tutorials/static/tracing_tutorial_metadata.png
similarity index 100%
rename from docs/tutorials/Developers/static/tracing_tutorial_metadata.png
rename to docs/observability/tutorials/static/tracing_tutorial_metadata.png
diff --git a/docs/tutorials/Developers/static/tracing_tutorial_metadata_filtering.png b/docs/observability/tutorials/static/tracing_tutorial_metadata_filtering.png
similarity index 100%
rename from docs/tutorials/Developers/static/tracing_tutorial_metadata_filtering.png
rename to docs/observability/tutorials/static/tracing_tutorial_metadata_filtering.png
diff --git a/docs/tutorials/Developers/static/tracing_tutorial_monitor.png b/docs/observability/tutorials/static/tracing_tutorial_monitor.png
similarity index 100%
rename from docs/tutorials/Developers/static/tracing_tutorial_monitor.png
rename to docs/observability/tutorials/static/tracing_tutorial_monitor.png
diff --git a/docs/tutorials/Developers/static/tracing_tutorial_monitor_drilldown.png b/docs/observability/tutorials/static/tracing_tutorial_monitor_drilldown.png
similarity index 100%
rename from docs/tutorials/Developers/static/tracing_tutorial_monitor_drilldown.png
rename to docs/observability/tutorials/static/tracing_tutorial_monitor_drilldown.png
diff --git a/docs/tutorials/Developers/static/tracing_tutorial_monitor_grouped.png b/docs/observability/tutorials/static/tracing_tutorial_monitor_grouped.png
similarity index 100%
rename from docs/tutorials/Developers/static/tracing_tutorial_monitor_grouped.png
rename to docs/observability/tutorials/static/tracing_tutorial_monitor_grouped.png
diff --git a/docs/tutorials/Developers/static/tracing_tutorial_monitor_metadata.png b/docs/observability/tutorials/static/tracing_tutorial_monitor_metadata.png
similarity index 100%
rename from docs/tutorials/Developers/static/tracing_tutorial_monitor_metadata.png
rename to docs/observability/tutorials/static/tracing_tutorial_monitor_metadata.png
diff --git a/docs/tutorials/Developers/static/tracing_tutorial_openai.png b/docs/observability/tutorials/static/tracing_tutorial_openai.png
similarity index 100%
rename from docs/tutorials/Developers/static/tracing_tutorial_openai.png
rename to docs/observability/tutorials/static/tracing_tutorial_openai.png
diff --git a/docs/tutorials/Developers/static/tracing_tutorial_retriever.png b/docs/observability/tutorials/static/tracing_tutorial_retriever.png
similarity index 100%
rename from docs/tutorials/Developers/static/tracing_tutorial_retriever.png
rename to docs/observability/tutorials/static/tracing_tutorial_retriever.png
diff --git a/docs/concepts/prompts/prompts.mdx b/docs/prompt_engineering/concepts/index.mdx
similarity index 99%
rename from docs/concepts/prompts/prompts.mdx
rename to docs/prompt_engineering/concepts/index.mdx
index 3915b66a9..b7ed37626 100644
--- a/docs/concepts/prompts/prompts.mdx
+++ b/docs/prompt_engineering/concepts/index.mdx
@@ -1,4 +1,4 @@
-# Prompts
+# Concepts
Writing good prompts is key to getting the best performance out of your applications. LangSmith provides ways to create, test, and manage prompts.
diff --git a/docs/prompt_engineering/how_to_guides/index.md b/docs/prompt_engineering/how_to_guides/index.md
new file mode 100644
index 000000000..596d0e228
--- /dev/null
+++ b/docs/prompt_engineering/how_to_guides/index.md
@@ -0,0 +1,26 @@
+# Prompt engineering how-to guides
+
+Step-by-step guides that cover key tasks and operations for doing prompt engineering in LangSmith.
+
+## Prompt hub
+
+Organize and manage prompts in LangSmith to streamline your LLM development workflow.
+
+- [Create a prompt](./prompts/create_a_prompt)
+- [Update a prompt](./prompts/update_a_prompt)
+- [Manage prompts programmatically](./prompts/manage_prompts_programatically)
+- [LangChain Hub](./prompts/langchain_hub)
+
+## Playground
+
+Quickly iterate on prompts and models in the LangSmith Playground.
+
+- [Use custom TLS certificates](./playground/custom_tls_certificates)
+- [Use a custom model](./playground/custom_endpoint)
+- [Save settings configuration](./playground/save_model_configuration)
+
+## Few-shot prompting
+
+Use LangSmith datasets to serve few-shot examples to your application.
+
+- [Index a dataset for few shot example selection](../../evaluation/how_to_guides/datasets/index_datasets_for_dynamic_few_shot_example_selection)
diff --git a/docs/how_to_guides/playground/_category_.json b/docs/prompt_engineering/how_to_guides/playground/_category_.json
similarity index 100%
rename from docs/how_to_guides/playground/_category_.json
rename to docs/prompt_engineering/how_to_guides/playground/_category_.json
diff --git a/docs/how_to_guides/playground/custom_endpoint.mdx b/docs/prompt_engineering/how_to_guides/playground/custom_endpoint.mdx
similarity index 97%
rename from docs/how_to_guides/playground/custom_endpoint.mdx
rename to docs/prompt_engineering/how_to_guides/playground/custom_endpoint.mdx
index 3557a2766..d4fecb769 100644
--- a/docs/how_to_guides/playground/custom_endpoint.mdx
+++ b/docs/prompt_engineering/how_to_guides/playground/custom_endpoint.mdx
@@ -39,7 +39,7 @@ Once you have deployed a model server, you can use it in the LangSmith Playgroun
Enter the `URL`. The playground will automatically detect the available endpoints and configurable fields. You can then invoke the model with the desired parameters.
-
+
If everything is set up correctly, you should see the model's response in the playground, as well as the configurable fields specified in `with_configurable_fields`.
diff --git a/docs/how_to_guides/playground/custom_tls_certificates.mdx b/docs/prompt_engineering/how_to_guides/playground/custom_tls_certificates.mdx
similarity index 93%
rename from docs/how_to_guides/playground/custom_tls_certificates.mdx
rename to docs/prompt_engineering/how_to_guides/playground/custom_tls_certificates.mdx
index 803b8a336..dc92a57a5 100644
--- a/docs/how_to_guides/playground/custom_tls_certificates.mdx
+++ b/docs/prompt_engineering/how_to_guides/playground/custom_tls_certificates.mdx
@@ -16,7 +16,7 @@ If you are interested in this plan, please contact sales@langchain.dev for more
This feature is currently only available for the Azure OpenAI model provider. More model providers will be supported in the future.
-This will currently only affect model invocations through the **LangSmith Playground**, not [**Online Evaluation**](../monitoring/online_evaluations).
+This will currently only affect model invocations through the **LangSmith Playground**, not [**Online Evaluation**](../../../observability/how_to_guides/monitoring/online_evaluations).
The TLS certificates will apply to all Azure Deployment configurations in the playground.
:::
@@ -34,4 +34,4 @@ Once you have set these environment variables, enter the playground and select t
Enter the `Deployment Name`, `Azure Endpoint`, and `API Version`, as well as model invocation parameters. Then, the playground will be able to connect to the model provider using the custom TLS certificate.
-
+
diff --git a/docs/how_to_guides/playground/save_model_configuration.mdx b/docs/prompt_engineering/how_to_guides/playground/save_model_configuration.mdx
similarity index 86%
rename from docs/how_to_guides/playground/save_model_configuration.mdx
rename to docs/prompt_engineering/how_to_guides/playground/save_model_configuration.mdx
index d1387f8b1..de03c20ef 100644
--- a/docs/how_to_guides/playground/save_model_configuration.mdx
+++ b/docs/prompt_engineering/how_to_guides/playground/save_model_configuration.mdx
@@ -10,4 +10,4 @@ This helps you quickly apply your frequently-used settings without having to re-
To save the current playground configuration, click on the `Save` button in the top right corner of settings.
You can name the configuration and easily access it later.
-
+
diff --git a/docs/how_to_guides/static/azure_playground.png b/docs/prompt_engineering/how_to_guides/playground/static/azure_playground.png
similarity index 100%
rename from docs/how_to_guides/static/azure_playground.png
rename to docs/prompt_engineering/how_to_guides/playground/static/azure_playground.png
diff --git a/docs/how_to_guides/static/playground_custom_model.png b/docs/prompt_engineering/how_to_guides/playground/static/playground_custom_model.png
similarity index 100%
rename from docs/how_to_guides/static/playground_custom_model.png
rename to docs/prompt_engineering/how_to_guides/playground/static/playground_custom_model.png
diff --git a/docs/how_to_guides/static/saving_custom_model.gif b/docs/prompt_engineering/how_to_guides/playground/static/saving_custom_model.gif
similarity index 100%
rename from docs/how_to_guides/static/saving_custom_model.gif
rename to docs/prompt_engineering/how_to_guides/playground/static/saving_custom_model.gif
diff --git a/docs/how_to_guides/prompts/_category_.json b/docs/prompt_engineering/how_to_guides/prompts/_category_.json
similarity index 100%
rename from docs/how_to_guides/prompts/_category_.json
rename to docs/prompt_engineering/how_to_guides/prompts/_category_.json
diff --git a/docs/how_to_guides/prompts/create_a_prompt.mdx b/docs/prompt_engineering/how_to_guides/prompts/create_a_prompt.mdx
similarity index 92%
rename from docs/how_to_guides/prompts/create_a_prompt.mdx
rename to docs/prompt_engineering/how_to_guides/prompts/create_a_prompt.mdx
index c4eb953a5..e7ef56151 100644
--- a/docs/how_to_guides/prompts/create_a_prompt.mdx
+++ b/docs/prompt_engineering/how_to_guides/prompts/create_a_prompt.mdx
@@ -6,7 +6,7 @@ sidebar_position: 1
Navigate to the **Prompts** section in the left-hand sidebar or from the application homepage.
Click the "+ Prompt" button to enter the Playground. The dropdown next to the button gives you a choice between a chat style prompt and an instructional prompt - chat is the default.
-
+
## Compose your prompt
@@ -19,7 +19,7 @@ To the right, we can enter sample inputs for our prompt variables and then run o
To see the response from the model, click "Start".
-
+
## Save your prompt
@@ -33,13 +33,13 @@ The model and configuration you select in the Playground settings will be saved
The first time you create a public prompt, you'll be asked to set a LangChain Hub handle. All your public prompts will be linked to this handle. In a shared workspace, this handle will be set for the whole workspace.
:::
-
+
## View your prompts
You've just created your first prompt! View a table of your prompts in the prompts tab.
-
+
## Add metadata
@@ -47,4 +47,4 @@ To add metadata to your prompt, click the prompt and then click the "Edit" penci
This brings you to where you can add additional information about the prompt, including a description, a README, and use cases.
For public prompts this information will be visible to anyone who views your prompt in the LangChain Hub.
-
+
diff --git a/docs/how_to_guides/prompts/langchain_hub.mdx b/docs/prompt_engineering/how_to_guides/prompts/langchain_hub.mdx
similarity index 89%
rename from docs/how_to_guides/prompts/langchain_hub.mdx
rename to docs/prompt_engineering/how_to_guides/prompts/langchain_hub.mdx
index 42113bb88..5585e5d1c 100644
--- a/docs/how_to_guides/prompts/langchain_hub.mdx
+++ b/docs/prompt_engineering/how_to_guides/prompts/langchain_hub.mdx
@@ -6,7 +6,7 @@ sidebar_position: 6
Navigate to the **LangChain Hub** section of the left-hand sidebar.
-
+
Here you'll find all of the publicly listed prompts in the LangChain Hub.
You can search for prompts by name, handle, use cases, descriptions, or models. You can fork prompts to your personal organization, view the prompt's details, and run the prompt in the playground.
@@ -14,4 +14,4 @@ You can [pull any public prompt into your code](./manage_prompts_programatically
To view prompts tied to your workspace, visit the Prompts tab in the sidebar.
-
+
diff --git a/docs/how_to_guides/prompts/manage_prompts_programatically.mdx b/docs/prompt_engineering/how_to_guides/prompts/manage_prompts_programatically.mdx
similarity index 100%
rename from docs/how_to_guides/prompts/manage_prompts_programatically.mdx
rename to docs/prompt_engineering/how_to_guides/prompts/manage_prompts_programatically.mdx
diff --git a/docs/how_to_guides/prompts/open_a_prompt_from_a_trace.mdx b/docs/prompt_engineering/how_to_guides/prompts/open_a_prompt_from_a_trace.mdx
similarity index 88%
rename from docs/how_to_guides/prompts/open_a_prompt_from_a_trace.mdx
rename to docs/prompt_engineering/how_to_guides/prompts/open_a_prompt_from_a_trace.mdx
index 28fd07db2..d7303423d 100644
--- a/docs/how_to_guides/prompts/open_a_prompt_from_a_trace.mdx
+++ b/docs/prompt_engineering/how_to_guides/prompts/open_a_prompt_from_a_trace.mdx
@@ -7,7 +7,7 @@ sidebar_position: 5
If you pull a prompt into your code and begin logging traces that use it, you can find a link to the prompt in the Trace UI.
In the run that used the prompt, hover over the Prompt tag. Clicking on this will take you to the prompt. (If you used a LangChain Hub prompt, this tag will say Hub)
-]
+]
In the metadata of the run, you can see more details. Click on an individual prompt metadata value to filter your traces by that attribute. You can filter by prompt handle, prompt name, or prompt commit hash.
-
+
diff --git a/docs/how_to_guides/prompts/prompt_tags.mdx b/docs/prompt_engineering/how_to_guides/prompts/prompt_tags.mdx
similarity index 94%
rename from docs/how_to_guides/prompts/prompt_tags.mdx
rename to docs/prompt_engineering/how_to_guides/prompts/prompt_tags.mdx
index 824a8344a..cd9f91928 100644
--- a/docs/how_to_guides/prompts/prompt_tags.mdx
+++ b/docs/prompt_engineering/how_to_guides/prompts/prompt_tags.mdx
@@ -19,15 +19,15 @@ Prompt tags are labels that attached to specific commits in your prompt's versio
To create a tag, navigate to the commits tab of a prompt. Click on the tag icon next to the commit you want to tag. Click "New Tag" and enter the name of the tag.
-
-
+
+
### Move a tag
To point a tag to a different commit, click on the tag icon next to the destination commit, and select the tag you want to move.
This will automatically update the tag to point to the new commit.
-
+
## Delete a tag
diff --git a/docs/how_to_guides/static/blank_prompts_page.png b/docs/prompt_engineering/how_to_guides/prompts/static/blank_prompts_page.png
similarity index 100%
rename from docs/how_to_guides/static/blank_prompts_page.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/blank_prompts_page.png
diff --git a/docs/how_to_guides/static/create_prompt_playground.png b/docs/prompt_engineering/how_to_guides/prompts/static/create_prompt_playground.png
similarity index 100%
rename from docs/how_to_guides/static/create_prompt_playground.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/create_prompt_playground.png
diff --git a/docs/how_to_guides/static/edit_in_playground.png b/docs/prompt_engineering/how_to_guides/prompts/static/edit_in_playground.png
similarity index 100%
rename from docs/how_to_guides/static/edit_in_playground.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/edit_in_playground.png
diff --git a/docs/how_to_guides/static/edit_prompt.png b/docs/prompt_engineering/how_to_guides/prompts/static/edit_prompt.png
similarity index 100%
rename from docs/how_to_guides/static/edit_prompt.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/edit_prompt.png
diff --git a/docs/how_to_guides/static/langchain_hub.png b/docs/prompt_engineering/how_to_guides/prompts/static/langchain_hub.png
similarity index 100%
rename from docs/how_to_guides/static/langchain_hub.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/langchain_hub.png
diff --git a/docs/how_to_guides/static/metadata_edit_button.png b/docs/prompt_engineering/how_to_guides/prompts/static/metadata_edit_button.png
similarity index 100%
rename from docs/how_to_guides/static/metadata_edit_button.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/metadata_edit_button.png
diff --git a/docs/how_to_guides/static/prompt_commits_tab.png b/docs/prompt_engineering/how_to_guides/prompts/static/prompt_commits_tab.png
similarity index 100%
rename from docs/how_to_guides/static/prompt_commits_tab.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/prompt_commits_tab.png
diff --git a/docs/how_to_guides/static/prompt_playground_edit_commit.png b/docs/prompt_engineering/how_to_guides/prompts/static/prompt_playground_edit_commit.png
similarity index 100%
rename from docs/how_to_guides/static/prompt_playground_edit_commit.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/prompt_playground_edit_commit.png
diff --git a/docs/how_to_guides/static/prompt_table.png b/docs/prompt_engineering/how_to_guides/prompts/static/prompt_table.png
similarity index 100%
rename from docs/how_to_guides/static/prompt_table.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/prompt_table.png
diff --git a/docs/how_to_guides/static/prompt_tags/commits_tab.png b/docs/prompt_engineering/how_to_guides/prompts/static/prompt_tags/commits_tab.png
similarity index 100%
rename from docs/how_to_guides/static/prompt_tags/commits_tab.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/prompt_tags/commits_tab.png
diff --git a/docs/how_to_guides/static/prompt_tags/create_new_prompt_tag.png b/docs/prompt_engineering/how_to_guides/prompts/static/prompt_tags/create_new_prompt_tag.png
similarity index 100%
rename from docs/how_to_guides/static/prompt_tags/create_new_prompt_tag.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/prompt_tags/create_new_prompt_tag.png
diff --git a/docs/how_to_guides/static/prompt_tags/move_prompt_tag.png b/docs/prompt_engineering/how_to_guides/prompts/static/prompt_tags/move_prompt_tag.png
similarity index 100%
rename from docs/how_to_guides/static/prompt_tags/move_prompt_tag.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/prompt_tags/move_prompt_tag.png
diff --git a/docs/how_to_guides/static/prompts_tab.png b/docs/prompt_engineering/how_to_guides/prompts/static/prompts_tab.png
similarity index 100%
rename from docs/how_to_guides/static/prompts_tab.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/prompts_tab.png
diff --git a/docs/how_to_guides/static/run_metadata.png b/docs/prompt_engineering/how_to_guides/prompts/static/run_metadata.png
similarity index 100%
rename from docs/how_to_guides/static/run_metadata.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/run_metadata.png
diff --git a/docs/how_to_guides/static/save_prompt.png b/docs/prompt_engineering/how_to_guides/prompts/static/save_prompt.png
similarity index 100%
rename from docs/how_to_guides/static/save_prompt.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/save_prompt.png
diff --git a/docs/how_to_guides/static/trace_with_prompt_link.png b/docs/prompt_engineering/how_to_guides/prompts/static/trace_with_prompt_link.png
similarity index 100%
rename from docs/how_to_guides/static/trace_with_prompt_link.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/trace_with_prompt_link.png
diff --git a/docs/how_to_guides/static/update_prompt_form.png b/docs/prompt_engineering/how_to_guides/prompts/static/update_prompt_form.png
similarity index 100%
rename from docs/how_to_guides/static/update_prompt_form.png
rename to docs/prompt_engineering/how_to_guides/prompts/static/update_prompt_form.png
diff --git a/docs/how_to_guides/prompts/update_a_prompt.mdx b/docs/prompt_engineering/how_to_guides/prompts/update_a_prompt.mdx
similarity index 79%
rename from docs/how_to_guides/prompts/update_a_prompt.mdx
rename to docs/prompt_engineering/how_to_guides/prompts/update_a_prompt.mdx
index e10347fdc..8180e027d 100644
--- a/docs/how_to_guides/prompts/update_a_prompt.mdx
+++ b/docs/prompt_engineering/how_to_guides/prompts/update_a_prompt.mdx
@@ -10,22 +10,22 @@ Navigate to the **Prompts** section in the left-hand sidebar or from the applica
To update the prompt metadata (description, use cases, etc.) click the "Edit" pencil icon.
-
+
Your prompt metadata will be updated upon save.
-
+
## Update the prompt content
To update the prompt content itself, you need to enter the prompt playground. Click "Edit in playground".
Now you can make changes to the prompt and test it with different inputs. When you're happy with the prompt, click "Commit" to save it.
-
-
+
+
## Version a prompt
When you add a commit to a prompt, a new version of the prompt is created. You can view all historical versions by clicking the "Commits" tab in the prompt view.
-
+
diff --git a/docs/prompt_engineering/tutorials/index.mdx b/docs/prompt_engineering/tutorials/index.mdx
new file mode 100644
index 000000000..c242c90e9
--- /dev/null
+++ b/docs/prompt_engineering/tutorials/index.mdx
@@ -0,0 +1,5 @@
+# Prompt engineering tutorials
+
+New to LangSmith or to LLM app development in general? Read this material to quickly get up and running.
+
+- [Optimize a classifier](./optimize_classifier)
diff --git a/docs/tutorials/Developers/optimize_classifier.mdx b/docs/prompt_engineering/tutorials/optimize_classifier.mdx
similarity index 100%
rename from docs/tutorials/Developers/optimize_classifier.mdx
rename to docs/prompt_engineering/tutorials/optimize_classifier.mdx
diff --git a/docs/how_to_guides/static/class-optimization-neg.png b/docs/prompt_engineering/tutorials/static/class-optimization-neg.png
similarity index 100%
rename from docs/how_to_guides/static/class-optimization-neg.png
rename to docs/prompt_engineering/tutorials/static/class-optimization-neg.png
diff --git a/docs/how_to_guides/static/class-optimization-pos.png b/docs/prompt_engineering/tutorials/static/class-optimization-pos.png
similarity index 100%
rename from docs/how_to_guides/static/class-optimization-pos.png
rename to docs/prompt_engineering/tutorials/static/class-optimization-pos.png
diff --git a/docs/reference/authentication_authorization/authentication_methods.mdx b/docs/reference/authentication_authorization/authentication_methods.mdx
index 9199d2442..88c83ed96 100644
--- a/docs/reference/authentication_authorization/authentication_methods.mdx
+++ b/docs/reference/authentication_authorization/authentication_methods.mdx
@@ -14,7 +14,7 @@ Users can alternatively use their credentials from GitHub, Google, or Discord.
### SAML SSO
-Enterprise customers can configure [SAML SSO](../../how_to_guides/setup/set_up_saml_sso.mdx)
+Enterprise customers can configure [SAML SSO](/administration/how_to_guides/organization_management/set_up_saml_sso.mdx)
## Self-Hosted
diff --git a/docs/reference/cloud_architecture_and_scalability.mdx b/docs/reference/cloud_architecture_and_scalability.mdx
index ea5f4ab04..ac9b6380c 100644
--- a/docs/reference/cloud_architecture_and_scalability.mdx
+++ b/docs/reference/cloud_architecture_and_scalability.mdx
@@ -62,7 +62,7 @@ Some additional GCP services we use include:
- Google Cloud Load Balancer for routing traffic to the LangSmith services.
- Google Cloud CDN for caching static assets.
-- Google Cloud Armor for security and rate limits. For more information on rate limits we enforce, please refer to [this guide](../concepts/usage_and_billing/rate_limits).
+- Google Cloud Armor for security and rate limits. For more information on rate limits we enforce, please refer to [this guide](../administration/concepts#rate-limits).

diff --git a/docs/reference/data_formats/feedback_data_format.mdx b/docs/reference/data_formats/feedback_data_format.mdx
index 4d42d6366..7c1d9c968 100644
--- a/docs/reference/data_formats/feedback_data_format.mdx
+++ b/docs/reference/data_formats/feedback_data_format.mdx
@@ -7,17 +7,17 @@ sidebar_position: 2
:::tip Recommended Reading
Before diving into this content, it might be helpful to read the following:
-- [Conceptual guide on tracing and feedback](../../concepts/tracing)
+- [Conceptual guide on tracing and feedback](/observability/concepts)
:::
**Feedback** is LangSmith's way of storing the criteria and scores from evaluation on a particular trace or intermediate run (span).
Feedback can be produced in a variety of ways, such as:
-1. [Sent up along with a trace](../../how_to_guides/human_feedback/attach_user_feedback) from the LLM application
-2. Generated by a user in the app [inline](../../how_to_guides/human_feedback/annotate_traces_inline) or in an [annotation queue](../../how_to_guides/human_feedback/annotation_queues)
-3. Generated by an automatic evaluator during [offline evaluation](../../how_to_guides/evaluation/evaluate_llm_application)
-4. Generated by an [online evaluator](../../how_to_guides/monitoring/online_evaluations)
+1. [Sent up along with a trace](/evaluation/how_to_guides/human_feedback/attach_user_feedback) from the LLM application
+2. Generated by a user in the app [inline](/evaluation/how_to_guides/human_feedback/annotate_traces_inline) or in an [annotation queue](/evaluation/how_to_guides/human_feedback/annotation_queues)
+3. Generated by an automatic evaluator during [offline evaluation](/evaluation/how_to_guides/evaluation/evaluate_llm_application)
+4. Generated by an [online evaluator](/observability/how_to_guides/monitoring/online_evaluations)
Feedback is stored in a simple format with the following fields:
diff --git a/docs/reference/data_formats/run_data_format.mdx b/docs/reference/data_formats/run_data_format.mdx
index fd89cc2d3..a11032d91 100644
--- a/docs/reference/data_formats/run_data_format.mdx
+++ b/docs/reference/data_formats/run_data_format.mdx
@@ -7,7 +7,7 @@ sidebar_position: 1
:::tip Recommended Reading
Before diving into this content, it might be helpful to read the following:
-- [Conceptual guide on tracing and runs](../../concepts/tracing)
+- [Conceptual guide on tracing and runs](/observability/concepts)
:::
diff --git a/docs/reference/regions_faq.mdx b/docs/reference/regions_faq.mdx
index 206b6cebf..121e0fc89 100644
--- a/docs/reference/regions_faq.mdx
+++ b/docs/reference/regions_faq.mdx
@@ -27,7 +27,7 @@ The terms are the same for the EU and US regions.
#### _How do I use the EU instance?_
-Follow the instructions [here](../how_to_guides/setup/create_account_api_key.mdx) to create an account and an API key (make sure to change the region to EU in the dropdown)
+Follow the instructions [here](/administration/how_to_guides/organization_management/create_account_api_key.mdx) to create an account and an API key (make sure to change the region to EU in the dropdown)
#### _Are there any functional differences between US and EU cloud-managed LangSmith?_
diff --git a/docs/reference/sdk_reference/langchain_evaluators.mdx b/docs/reference/sdk_reference/langchain_evaluators.mdx
index 0a0abadc4..94182d1b9 100644
--- a/docs/reference/sdk_reference/langchain_evaluators.mdx
+++ b/docs/reference/sdk_reference/langchain_evaluators.mdx
@@ -1,7 +1,7 @@
# LangChain off-the-shelf evaluators
LangChain's evaluation module provides evaluators you can use as-is for common evaluation scenarios.
-To learn how to use these evaluators, please refer to the [following guide](../../how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators).
+To learn how to use these evaluators, please refer to the [following guide](../../../evaluation/how_to_guides/evaluation/use_langchain_off_the_shelf_evaluators).
:::note
diff --git a/docs/self_hosting/configuration/basic_auth.mdx b/docs/self_hosting/configuration/basic_auth.mdx
index b339ae6df..500a1090e 100644
--- a/docs/self_hosting/configuration/basic_auth.mdx
+++ b/docs/self_hosting/configuration/basic_auth.mdx
@@ -21,7 +21,7 @@ LangSmith supports login via username/password with a few limitations:
**Only supported in versions 0.7 and above.**
-Migrating an installation from [None](../../reference/authentication_authorization/authentication_methods.mdx#none) auth mode replaces the single "default" user with a user with the configured credentials and keeps all existing resources.
+Migrating an installation from [None](/reference/authentication_authorization/authentication_methods.mdx#none) auth mode replaces the single "default" user with a user with the configured credentials and keeps all existing resources.
The single pre-existing workspace ID post-migration remains `00000000-0000-0000-0000-000000000000`, but everything else about the migrated installation is standard for a basic auth installation.
To migrate, simply update your configuration as shown below and run `helm upgrade` (or `docker-compose up`) as usual.
@@ -58,6 +58,6 @@ Additionally, in docker-compose you will need to run the bootstrap command to cr
docker-compose exec langchain-backend python hooks/auth_bootstrap.pyc
```
-Once configured, you will see a login screen like the one below. You should be able to login with the `initialOrgAdminEmail` and `initialOrgAdminPassword` values, and your user will be auto-provisioned with role `Organization Admin`. See the [admin guide](../../concepts/admin#organization-roles) for more details on organization roles.
+Once configured, you will see a login screen like the one below. You should be able to log in with the `initialOrgAdminEmail` and `initialOrgAdminPassword` values, and your user will be auto-provisioned with role `Organization Admin`. See the [admin guide](../../administration/concepts#organization-roles) for more details on organization roles.

diff --git a/docs/self_hosting/configuration/sso.mdx b/docs/self_hosting/configuration/sso.mdx
index 3e0e08770..33d08d1ac 100644
--- a/docs/self_hosting/configuration/sso.mdx
+++ b/docs/self_hosting/configuration/sso.mdx
@@ -52,7 +52,7 @@ In this version of the flow, your client secret is stored securely in the LangSm
### Requirements
:::note
-You may upgrade a [basic auth](./basic_auth.mdx) installation to this mode, but not a [none auth](../../reference/authentication_authorization/authentication_methods.mdx#none) installation.
+You may upgrade a [basic auth](./basic_auth.mdx) installation to this mode, but not a [none auth](/reference/authentication_authorization/authentication_methods.mdx#none) installation.
In order to upgrade, simply remove the basic auth configuration and add the required configuration parameters as shown below. Users may then login via OAuth _only_.
:::
diff --git a/docs/self_hosting/configuration/ttl.mdx b/docs/self_hosting/configuration/ttl.mdx
index dbaf67992..b62a0c1dc 100644
--- a/docs/self_hosting/configuration/ttl.mdx
+++ b/docs/self_hosting/configuration/ttl.mdx
@@ -7,14 +7,14 @@ import {
# TTL and Data Retention
LangSmith Self-Hosted allows enablement of automatic TTL and Data Retention of traces. This can be useful if you're complying with data privacy regulations, or if you want to have more efficient space usage and auto cleanup of your traces.
-Traces will also have their data retention period automatically extended based on certain actions or run rule applications. For more details on Data Retention, take a look at the section on auto-upgrades in the [data retention guide](/concepts/usage_and_billing/data_retention_billing).
+Traces will also have their data retention period automatically extended based on certain actions or run rule applications. For more details on Data Retention, take a look at the section on auto-upgrades in the [data retention guide](/administration/concepts#data-retention).
## Requirements
You can configure retention through helm or environment variable settings. There are a few options that are
configurable:
-- _Enabled:_ Whether data retention is enabled or disabled. If enabled, via the UI you can your default organization and project TTL tiers to apply to traces (see [data retention guide](/concepts/usage_and_billing/data_retention_billing) for details).
+- _Enabled:_ Whether data retention is enabled or disabled. If enabled, via the UI you can set your default organization and project TTL tiers to apply to traces (see [data retention guide](/administration/concepts#data-retention) for details).
- _Retention Periods:_ You can configure system-wide retention periods for shortlived and longlived traces. Once configured, you can manage the retention level at each project as well as set an organization-wide default for new projects.
!(i.type === "doc" && i.id.split("/").at(-1) === "index")
);
- sidebarItems.forEach((subItem) => {
+ sidebarItems = sidebarItems.map((subItem) => {
+ const newItem = { ...subItem };
+
// This allows breaking long sidebar labels into multiple lines
// by inserting a zero-width space after each slash.
if (
- "label" in subItem &&
- subItem.label &&
- subItem.label.includes("/")
+ "label" in newItem &&
+ newItem.label &&
+ newItem.label.includes("/")
) {
// eslint-disable-next-line no-param-reassign
- subItem.label = subItem.label.replace(/\//g, "/\u200B");
+ newItem.label = newItem.label.replace(/\//g, "/\u200B");
+ }
+ if (args.item.className) {
+ newItem.className = args.item.className;
}
+ return newItem;
});
return sidebarItems;
},
diff --git a/sidebars.js b/sidebars.js
index 858ae9740..85a4db2be 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -24,49 +24,190 @@ module.exports = {
"index",
{
type: "category",
- label: "Tutorials",
+ label: "Observability",
items: [
{
- type: "autogenerated",
- dirName: "tutorials",
+ type: "category",
+ label: "Conceptual Guide",
+ collapsible: false,
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "observability/concepts",
+ className: "hidden",
+ },
+ ],
+ link: { type: "doc", id: "observability/concepts/index" },
+ },
+ {
+ type: "category",
+ label: "How-to Guides",
+ collapsible: false,
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "observability/how_to_guides",
+ className: "hidden",
+ },
+ ],
+ link: { type: "doc", id: "observability/how_to_guides/index" },
+ },
+ {
+ type: "category",
+ label: "Tutorials",
+ collapsible: false,
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "observability/tutorials",
+ className: "hidden",
+ },
+ ],
+ link: { type: "doc", id: "observability/tutorials/index" },
},
],
- link: { type: "doc", id: "tutorials/index" },
+ link: { type: "doc", id: "observability/concepts/index" },
},
{
type: "category",
- label: "How-to guides",
+ label: "Evaluation",
items: [
{
- type: "autogenerated",
- dirName: "how_to_guides",
+ type: "category",
+ label: "Conceptual Guide",
+ collapsible: false,
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "evaluation/concepts",
+ className: "hidden",
+ },
+ ],
+ link: { type: "doc", id: "evaluation/concepts/index" },
+ },
+ {
+ type: "category",
+ label: "How-to Guides",
+ collapsible: false,
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "evaluation/how_to_guides",
+ className: "hidden",
+ },
+ ],
+ link: { type: "doc", id: "evaluation/how_to_guides/index" },
+ },
+ {
+ type: "category",
+ label: "Tutorials",
+ collapsible: false,
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "evaluation/tutorials",
+ className: "hidden",
+ },
+ ],
+ link: { type: "doc", id: "evaluation/tutorials/index" },
},
],
- link: { type: "doc", id: "how_to_guides/index" },
+ link: { type: "doc", id: "evaluation/concepts/index" },
},
{
type: "category",
- label: "Concepts",
+ label: "Prompt Engineering",
items: [
{
- type: "autogenerated",
- dirName: "concepts",
+ type: "category",
+ label: "Conceptual Guide",
+ collapsible: false,
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "prompt_engineering/concepts",
+ className: "hidden",
+ },
+ ],
+ link: { type: "doc", id: "prompt_engineering/concepts/index" },
+ },
+ {
+ type: "category",
+ label: "How-to Guides",
+ collapsible: false,
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "prompt_engineering/how_to_guides",
+ className: "hidden",
+ },
+ ],
+ link: { type: "doc", id: "prompt_engineering/how_to_guides/index" },
+ },
+ {
+ type: "category",
+ label: "Tutorials",
+ collapsible: false,
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "prompt_engineering/tutorials",
+ className: "hidden",
+ },
+ ],
+ link: { type: "doc", id: "prompt_engineering/tutorials/index" },
},
],
- link: { type: "doc", id: "concepts/index" },
+ link: { type: "doc", id: "prompt_engineering/concepts/index" },
},
+ "langgraph_cloud",
{
type: "category",
- label: "Reference",
+ label: "Administration",
items: [
{
- type: "autogenerated",
- dirName: "reference",
+ type: "category",
+ label: "Conceptual Guide",
+ collapsible: false,
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "administration/concepts",
+ className: "hidden",
+ },
+ ],
+ link: { type: "doc", id: "administration/concepts/index" },
+ },
+ {
+ type: "category",
+ label: "How-to Guides",
+ collapsible: false,
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "administration/how_to_guides",
+ className: "hidden",
+ },
+ ],
+ link: { type: "doc", id: "administration/how_to_guides/index" },
+ },
+ {
+ type: "category",
+ label: "Tutorials",
+ collapsible: false,
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "administration/tutorials",
+ className: "hidden",
+ },
+ ],
+ link: { type: "doc", id: "administration/tutorials/index" },
},
+ "administration/pricing",
],
- link: { type: "doc", id: "reference/index" },
+ link: { type: "doc", id: "administration/concepts/index" },
},
- "pricing",
{
type: "category",
label: "Self-hosting",
@@ -80,6 +221,16 @@ module.exports = {
],
link: { type: "doc", id: "self_hosting/index" },
},
- "langgraph_cloud",
+ {
+ type: "category",
+ label: "Reference",
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "reference",
+ },
+ ],
+ link: { type: "doc", id: "reference/index" },
+ },
],
};
diff --git a/src/css/custom.css b/src/css/custom.css
index 75fa97a72..055b63e79 100644
--- a/src/css/custom.css
+++ b/src/css/custom.css
@@ -234,6 +234,10 @@ html[data-theme="dark"] {
font-weight: 600;
}
+.hidden {
+ display: none !important;
+}
+
/* Config search */
.DocSearch {
width: 250px;
diff --git a/vercel.json b/vercel.json
index 515576aa3..efb101313 100644
--- a/vercel.json
+++ b/vercel.json
@@ -54,10 +54,6 @@
"source": "/docs/:path*",
"destination": "/:path*"
},
- {
- "source": "/evaluation/:path*",
- "destination": "/old/evaluation/:path*"
- },
{
"source": "/monitoring/:path*",
"destination": "/old/monitoring/:path*"
@@ -93,6 +89,102 @@
{
"source": "/category/release-notes",
"destination": "/self_hosting/release_notes"
+ },
+ {
+ "source": "/how_to_guides/evaluation/:path*",
+ "destination": "/evaluation/how_to_guides/evaluation/:path*"
+ },
+ {
+ "source": "/how_to_guides/human_feedback/:path*",
+ "destination": "/evaluation/how_to_guides/human_feedback/:path*"
+ },
+ {
+ "source": "/how_to_guides/datasets/:path*",
+ "destination": "/evaluation/how_to_guides/datasets/:path*"
+ },
+ {
+ "source": "/how_to_guides/monitoring/:path*",
+ "destination": "/observability/how_to_guides/monitoring/:path*"
+ },
+ {
+ "source": "/how_to_guides/tracing/:path*",
+ "destination": "/observability/how_to_guides/tracing/:path*"
+ },
+ {
+ "source": "/how_to_guides/prompts/:path*",
+ "destination": "/prompt_engineering/how_to_guides/prompts/:path*"
+ },
+ {
+ "source": "/how_to_guides/playground/:path*",
+ "destination": "/prompt_engineering/how_to_guides/playground/:path*"
+ },
+ {
+ "source": "/how_to_guides/setup/:path*",
+ "destination": "/admin/how_to_guides/organization_management/:path*"
+ },
+ {
+ "source": "/how_to_guides",
+ "destination": "/"
+ },
+ {
+ "source": "/concepts",
+ "destination": "/"
+ },
+ {
+ "source": "/tutorials",
+ "destination": "/"
+ },
+ {
+ "source": "/concepts/admin:path*",
+ "destination": "/administration/concepts"
+ },
+ {
+ "source": "/concepts/usage_and_billing:path*",
+ "destination": "/administration/concepts"
+ },
+ {
+ "source": "/concepts/evaluation:path*",
+ "destination": "/evaluation/concepts"
+ },
+ {
+ "source": "/concepts/tracing:path*",
+ "destination": "/observability/concepts"
+ },
+ {
+ "source": "/concepts/prompts:path*",
+ "destination": "/prompt_engineering/concepts"
+ },
+ {
+ "source": "/pricing:path*",
+ "destination": "/administration/pricing"
+ },
+ {
+ "source": "/tutorials/Developers/observability",
+ "destination": "/observability/tutorials/observability"
+ },
+ {
+ "source": "/tutorials/Developers/rag",
+ "destination": "/evaluation/tutorials/rag"
+ },
+ {
+ "source": "/tutorials/Developers/evaluation",
+ "destination": "/evaluation/tutorials/evaluation"
+ },
+ {
+ "source": "/tutorials/Developers/backtesting",
+ "destination": "/evaluation/tutorials/backtesting"
+ },
+ {
+ "source": "/tutorials/Developers/agents",
+ "destination": "/evaluation/tutorials/agents"
+ },
+ {
+ "source": "/tutorials/Developers/swe-benchmark",
+ "destination": "/evaluation/tutorials/swe-benchmark"
+ },
+ {
+ "source": "/tutorials/Developers/optimize_classifier",
+ "destination": "/prompt_engineering/tutorials/optimize_classifier"
}
],
"builds": [
diff --git a/versioned_docs/version-old/index.mdx b/versioned_docs/version-old/index.mdx
index 228e8ccea..44260c941 100644
--- a/versioned_docs/version-old/index.mdx
+++ b/versioned_docs/version-old/index.mdx
@@ -181,7 +181,7 @@ await runOnDataset(
Check out the following sections to learn more about LangSmith:
- **[User Guide](./user_guide.mdx)**: Learn about the workflows LangSmith supports at each stage of the LLM application lifecycle.
-- **[Pricing](/pricing)**: Learn about the pricing model for LangSmith.
+- **[Pricing](./pricing.mdx)**: Learn about the pricing model for LangSmith.
- **[Self-Hosting](./self_hosting)**: Learn about self-hosting options for LangSmith.
- **[Tracing](./tracing/index.mdx)**: Learn about the tracing capabilities of LangSmith.
- **[Evaluation](./evaluation/index.mdx)**: Learn about the evaluation capabilities of LangSmith.
diff --git a/versioned_docs/version-old/pricing.mdx b/versioned_docs/version-old/pricing.mdx
index f17570925..faa97013a 100644
--- a/versioned_docs/version-old/pricing.mdx
+++ b/versioned_docs/version-old/pricing.mdx
@@ -3,6 +3,8 @@ sidebar_label: Pricing
sidebar_position: 4
---
+import { RegionalUrl } from "@site/src/components/RegionalUrls";
+
# Pricing
## Plans
@@ -11,8 +13,8 @@ sidebar_position: 4
Plan |
- Startups |
Developer |
+ Startups |
Plus |
Enterprise |
@@ -81,6 +83,7 @@ sidebar_position: 4
- All features in Developer tier
- Up to 10 seats
+ - Hosted LangServe (beta)
- Longer data retention
- Higher rate limits
- Email support
@@ -116,38 +119,39 @@ sidebar_position: 4
## Plan Comparison
-| | Developer | Plus | Enterprise |
-| ------------------------------------------- | :------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------: | :-------------------------------------------------------: |
-| Features | | | |
-| Debugging Traces | ✅ | ✅ | ✅ |
-| Dataset Collection | ✅ | ✅ | ✅ |
-| Human Labeling | ✅ | ✅ | ✅ |
-| Testing and Evaluation | ✅ | ✅ | ✅ |
-| Prompt Management | ✅ | ✅ | ✅ |
-| Monitoring | ✅ | ✅ | ✅ |
-| Role-Based Access Controls (RBAC) | | | ✅ |
-| Team | | | |
-| Developer Seats | 1 Free Seat | Maximum 10 seats<br />$39 per seat/month<sup>1</sup> | Unlimited seats<br />Custom pricing |
-| Collaborator Seats | -- | -- | Coming Soon! |
-| Trace Details | | | |
-| Traces | First 5k traces per month for free.<br />$0.005 per trace thereafter<sup>2</sup> | First 10k traces per month for free.<br />$0.005 per trace thereafter<sup>2</sup> | Custom |
-| Rate Limits | | | |
-| Max ingested events / hour<sup>3</sup> | 50,000<sup>3</sup> / 250,000 | 500,000 | Custom |
-| Total trace size storage / hour<sup>4</sup> | 500MB<sup>3</sup> / 2.5GB | 5GB | Custom |
-| Security Controls | | | |
-| Single Sign On | -- | Google<br />GitHub | Custom SSO |
-| Deployment | Hosted in US | Hosted in US | Add-on for self-hosted<br />deployment in customer's VPC |
-| Support | | | |
-| Support Channels | Community | Email | Email<br />Shared Slack Channel |
-| Shared Slack Channel | -- | -- | ✅ |
-| Team Training | -- | -- | ✅ |
-| Application Architectural Guidance | -- | -- | ✅ |
-| Dedicated Customer Success Manager | -- | -- | ✅ |
-| SLA | -- | -- | ✅ |
-| Procurement | | | |
-| Billing | Monthly, self-serve<br />Credit Card | Monthly, self-serve<br />Credit Card | Annual Invoice<br />ACH |
-| Custom Terms and Data Privacy Agreement | -- | -- | ✅ |
-| Infosec Review | -- | -- | ✅ |
+| | Developer | Plus | Enterprise |
+| ------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------: |
+| Features | | | |
+| Debugging Traces | ✅ | ✅ | ✅ |
+| Dataset Collection | ✅ | ✅ | ✅ |
+| Human Labeling | ✅ | ✅ | ✅ |
+| Testing and Evaluation | ✅ | ✅ | ✅ |
+| Prompt Management | ✅ | ✅ | ✅ |
+| Hosted LangServe | -- | ✅ | ✅ |
+| Monitoring | ✅ | ✅ | ✅ |
+| Role-Based Access Controls (RBAC) | -- | -- | ✅ |
+| Team | | | |
+| Developer Seats | 1 Free Seat | Maximum 10 seats<br />$39 per seat/month<sup>1</sup> | Custom pricing |
+| Usage | | | |
+| Traces<sup>2</sup> | First 5k base traces and extended upgrades per month for free<br />Pay as you go thereafter:<br />$0.50 per 1k base traces (14-day retention)<br />Additional $4.50 per 1k extended traces (400-day retention) | First 10k base traces and extended upgrades per month for free<br />Pay as you go thereafter:<br />$0.50 per 1k base traces (14-day retention)<br />Additional $4.50 per 1k extended traces (400-day retention) | Custom |
+| Max ingested events / hour<sup>3</sup> | 50,000<sup>3</sup> / 250,000 | 500,000 | Custom |
+| Total trace size storage / hour<sup>4</sup> | 500MB<sup>3</sup> / 2.5GB | 5GB | Custom |
+| Security Controls | | | |
+| Single Sign On | -- | Google<br />GitHub | Custom SSO |
+| Deployment | Hosted in US | Hosted in US | Add-on for self-hosted<br />deployment in customer's VPC |
+| Support | | | |
+| Support Channels | Community | Email | Email<br />Shared Slack Channel |
+| Shared Slack Channel | -- | -- | ✅ |
+| Team Training | -- | -- | ✅ |
+| Application Architectural Guidance | -- | -- | ✅ |
+| Dedicated Customer Success Manager | -- | -- | ✅ |
+| SLA | -- | -- | ✅ |
+| Procurement | | | |
+| Billing | Monthly, self-serve<br />Credit Card | Monthly, self-serve<br />Credit Card | Annual Invoice<br />ACH |
+| Custom Terms and Data Privacy Agreement | -- | -- | ✅ |
+| Infosec Review | -- | -- | ✅ |
+| Workspaces | Single, default Workspace under Personal Organization | Up to 3 Workspaces per Organization | Up to 10 Workspaces per Organization (contact support@langchain.dev for more) |
+| Organization Roles (User and Admin) | -- | ✅ | ✅ |
<sup>1</sup> Seats are billed monthly on the first of the month and in the future
will be prorated if additional seats are purchased in the middle of the month. Seats
@@ -155,8 +159,8 @@ removed mid-month are not credited.
<sup>2</sup> You can purchase LangSmith credits for your tracing usage. As long
as you have a valid credit card in your account, we’ll service your traces and
-deduct from your credit balance. You’ll be able to set alerts and auto top-ups
-on credits if you choose.
+deduct from your credit balance. You’ll be able to set monthly ingest limits if
+you choose to control spend.
<sup>3</sup> Personal accounts without a credit card on file will be rate limited
to 50,000 ingested events per hour and 500MB of storage per hour.
@@ -170,7 +174,7 @@ trace step and again after it is complete)
### I’ve been using LangSmith since before pricing took effect for new users. When will pricing go into effect for my account?
-If you’ve been using LangSmith already, your usage will be billable starting sometime in May. At that point if you want to add seats or use more than the monthly allotment of free traces, you will need to add a credit card to LangSmith or contact sales. If you are interested in the Enterprise plan with higher rate limits and special deployment options, you can learn more or make a purchase by reaching out to [sales@langchain.dev](mailto:sales@langchain.dev).
+If you’ve been using LangSmith already, your usage will be billable starting in July 2024. At that point if you want to add seats or use more than the monthly allotment of free traces, you will need to add a credit card to LangSmith or contact sales. If you are interested in the Enterprise plan with higher rate limits and special deployment options, you can learn more or make a purchase by reaching out to [sales@langchain.dev](mailto:sales@langchain.dev).
### Which plan is right for me?
@@ -220,9 +224,23 @@ bill you on the first of the month for traces that you submitted in the previous
month. You will be able to set usage limits if you so choose to limit the maximum
charges you could incur in any given month.
+### Can I limit how much I spend on tracing?
+
+On the settings page, you can set a limit on the number of traces that can be sent to
+LangSmith per month.
+
+:::note
+While we do show you the dollar value of your usage limit for convenience, this limit is evaluated
+in terms of the number of traces rather than the dollar amount. For example, if you are approved for our
+startup plan tier, where you are given a generous allotment of free traces, your usage limit will
+not automatically change.
+
+You are not currently able to set a spend limit in the product.
+:::
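As a hypothetical illustration of the note above (the function name is invented, and the rates are assumptions taken from the Developer plan figures in the comparison table, not an actual LangSmith API), the dollar figure shown for a usage limit can be derived from the trace count while enforcement stays in traces:

```python
def displayed_limit_value(trace_limit, free_traces=5_000, rate_per_1k=0.50):
    """Hypothetical sketch: derive the displayed dollar value of a usage limit.

    Assumes the Developer plan's 5k free base traces and $0.50 per 1k
    base-trace rate. The limit itself is enforced on trace count, not on
    this derived dollar amount.
    """
    billable = max(0, trace_limit - free_traces)
    return billable / 1000 * rate_per_1k
```

Under these assumptions, a 15,000-trace monthly limit would display as $5.00, but raising your free allotment (e.g. on the startup tier) changes the displayed dollars without changing the enforced trace count.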
+
### How can I track my usage so far this month?
-Under the Settings section for your Organization you will see subsection for Usage. There, you will able to see a graph of the daily nunber of billable LangSmith traces from the last 30, 60, or 90 days. Note that this data is delayed by 1-2 hours and so may trail your actual number of runs slightly for the current day.
+Under the Settings section for your Organization, you will see a subsection for Usage. There, you will be able to see a graph of the daily number of billable LangSmith traces from the last 30, 60, or 90 days. Note that this data is delayed by 1-2 hours and so may trail your actual number of runs slightly for the current day.
### I have a question about my bill...
@@ -238,13 +256,11 @@ On the Plus plan, you will also receive preferential, email support at [support@
On the Enterprise plan, you’ll get white-glove support with a Slack channel, a dedicated customer success manager, and monthly check-ins to go over LangSmith and LangChain questions. We can help with anything from debugging, agent and RAG techniques, evaluation approaches, and cognitive architecture reviews. If you purchase the add-on to run LangSmith in your environment, we’ll also support deployments and new releases with our infra engineering team on-call.
-### Where is my data stored?
-
-When using LangSmith hosted at smith.langchain.com, data is stored in GCP region `us-central-1`. If you’re on the Enterprise plan, we can deliver LangSmith to run on your kubernetes cluster in AWS, GCP, or Azure so that data never leaves your environment.
+### Which security frameworks is LangSmith compliant with?
-### Is LangSmith SOC 2 compliant?
+We are SOC 2 Type II, GDPR, and HIPAA compliant.
-We are SOC 2 Type II, GDPR, and HIPAA compliant. You can request more information about our security policies and posture at trust.langchain.com. Please note we only enter into BAAs with customers on our Enterprise plan.
+You can request more information about our security policies and posture at [trust.langchain.com](https://trust.langchain.com). Please note we only enter into BAAs with customers on our Enterprise plan.
### Will you train on the data that I send LangSmith?