# Add Support for Anthropic's Built-in Tools #8248


Open · wants to merge 12 commits into base: main

Changes from 5 commits

2 changes: 1 addition & 1 deletion libs/langchain-anthropic/package.json
@@ -35,7 +35,7 @@
   "author": "LangChain",
   "license": "MIT",
   "dependencies": {
-    "@anthropic-ai/sdk": "^0.39.0",
+    "@anthropic-ai/sdk": "^0.52.0",
     "fast-xml-parser": "^4.4.1",
     "zod": "^3.22.4",
     "zod-to-json-schema": "^3.22.4"
26 changes: 25 additions & 1 deletion libs/langchain-anthropic/src/chat_models.ts
@@ -100,6 +100,27 @@ function isAnthropicTool(tool: any): tool is Anthropic.Messages.Tool {
return "input_schema" in tool;
}

function isBuiltinTool(
tool: unknown
): tool is
| Anthropic.Messages.ToolBash20250124
| Anthropic.Messages.ToolTextEditor20250124
| Anthropic.Messages.WebSearchTool20250305 {
Contributor:

Is there a union for built-in tools? If not, it would be good to create one in types.ts.

Author:

There's an `export type ToolUnion = Tool | ToolBash20250124 | ToolTextEditor20250124 | WebSearchTool20250305;` from the Anthropic SDK.

Tool is for custom tools, so it really widens the typing and doesn't seem like a great fit for asserting whether a tool is one of the built-in tools (see the typing below).

Creating one and putting it in types.ts seems best (I'll put more details in my reply to your comment about the `type` property).

// From @anthropic-ai/sdk's messages.ts 
export interface Tool {
  /**
   * [JSON schema](https://json-schema.org/draft/2020-12) for this tool's input.
   *
   * This defines the shape of the `input` that your tool accepts and that the model
   * will produce.
   */
  input_schema: Tool.InputSchema;

  /**
   * Name of the tool.
   *
   * This is how the tool will be called by the model and in `tool_use` blocks.
   */
  name: string;

  /**
   * Create a cache control breakpoint at this content block.
   */
  cache_control?: CacheControlEphemeral | null;

  /**
   * Description of what this tool does.
   *
   * Tool descriptions should be as detailed as possible. The more information that
   * the model has about what the tool is and how to use it, the better it will
   * perform. You can use natural language descriptions to reinforce important
   * aspects of the tool input JSON schema.
   */
  description?: string;

  type?: 'custom' | null;
}

Contributor:

I wonder if `export type AnthropicBuiltInTool = Anthropic.Messages.Tool & { type: null }` wouldn't be enough to resolve AnthropicBuiltInTool to what we'd be after, without having to explicitly redefine. We like to derive types (and schema) from the provider SDK when possible, as it makes forward compatibility a bit easier when they make additive changes.

Author:

I pushed up a change that uses:

export type AnthropicBuiltInToolUnion = Exclude<
  Anthropic.Messages.ToolUnion,
  Anthropic.Messages.Tool
>;

This separates out the custom tool and should make it forward compatible since it's just deriving from the SDK types (theoretically Anthropic could put something unexpected in the union, but this at least pins the type inference to "everything besides the custom tool").

Let me know what you think about that one.

`export type AnthropicBuiltInTool = Anthropic.Messages.Tool & { type: null }` doesn't work because the built-in tools all have string literals with a timestamp (e.g. `text_editor_20250124`) for their `type`. The built-in tools and the custom tools don't actually overlap on the `type` property.

`type` actually would let you narrow, but typing "any string that isn't the literal 'custom'" is pretty gross/kludgy 😓
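
For illustration, here's a minimal sketch of how the derived union could drive that narrowing (not this PR's code; isBuiltInToolSketch is a hypothetical name, and it assumes the ToolUnion export quoted above from @anthropic-ai/sdk ^0.52.0):

import type Anthropic from "@anthropic-ai/sdk";

// Everything in the SDK union except the custom Tool shape.
type AnthropicBuiltInToolUnion = Exclude<
  Anthropic.Messages.ToolUnion,
  Anthropic.Messages.Tool
>;

// Built-in tools carry dated literal types like "bash_20250124",
// while a custom Tool's `type` is "custom" | null | undefined.
function isBuiltInToolSketch(
  tool: Anthropic.Messages.ToolUnion
): tool is AnthropicBuiltInToolUnion {
  return typeof tool.type === "string" && tool.type !== "custom";
}

Doing the narrowing in a value-level predicate like this avoids having to express "any string except 'custom'" at the type level.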

return (
typeof tool === "object" &&
tool !== null &&
"type" in tool &&
"name" in tool &&
typeof tool.type === "string" &&
typeof tool.name === "string" &&
((tool.type === "bash_20250124" && tool.name === "bash_20250124") ||
(tool.type === "text_editor_20250124" &&
tool.name === "text_editor_20250124") ||
(tool.type === "web_search_20250305" &&
tool.name === "web_search_20250305"))
);
}

/**
* Input to AnthropicChat class.
*/
@@ -720,11 +741,14 @@ export class ChatAnthropicMessages<
*/
  formatStructuredToolToAnthropic(
    tools: ChatAnthropicCallOptions["tools"]
-  ): Anthropic.Messages.Tool[] | undefined {
+  ): Anthropic.Messages.ToolUnion[] | undefined {
    if (!tools || !tools.length) {
      return undefined;
    }
    return tools.map((tool) => {
+     if (isBuiltinTool(tool)) {
+       return tool;
+     }
      if (isAnthropicTool(tool)) {
        return tool;
      }
187 changes: 187 additions & 0 deletions libs/langchain-anthropic/src/tests/chat_models-built-in-tools.int.test.ts
@@ -0,0 +1,187 @@
import { test, expect } from "@jest/globals";
import { ChatAnthropic } from "../chat_models.js";
import { HumanMessage, AIMessage } from "@langchain/core/messages";

Check failure on line 3 (GitHub Actions / Check linting): `@langchain/core/messages` import should occur before import of `../chat_models.js`
import { _convertMessagesToAnthropicPayload } from "../utils/message_inputs.js";

const chatModel = new ChatAnthropic({
model: "claude-3-5-sonnet-20241022",
temperature: 0,
// Enable built-in tools (web search, text editor, bash)
// Note: This requires API access to server tools
});

test("Server Tools Integration - Web Search", async () => {
// Test that we can handle a conversation with web search tool usage
const messages = [
new HumanMessage({
content:
"Search for the latest news about TypeScript 5.7 release and summarize what you find.",
}),
];

const response = await chatModel.invoke(messages);

console.log("Response content:", JSON.stringify(response.content, null, 2));

// The response should be an AIMessage
expect(response).toBeInstanceOf(AIMessage);
expect(response.content).toBeDefined();

// The response should contain meaningful content about TypeScript
expect(
typeof response.content === "string"
? response.content
: JSON.stringify(response.content)
).toMatch(/TypeScript|typescript/i);

console.log("✅ Successfully handled web search request");
}, 30000); // 30 second timeout for API call

test("Server Tools Integration - Message Round Trip", async () => {
// Test that we can properly parse messages with server tool content blocks
const conversation = [
new HumanMessage({
content:
"What are the latest developments in AI research? Please search for current information.",
}),
];

try {
const response1 = await chatModel.invoke(conversation);

console.log("First response:", JSON.stringify(response1.content, null, 2));

// Add the AI response to conversation
conversation.push(response1);

// Continue the conversation
conversation.push(
new HumanMessage({
content:
"Based on your search, what are the most promising areas for future research?",
})
);

const response2 = await chatModel.invoke(conversation);

console.log("Second response:", JSON.stringify(response2.content, null, 2));

// Both responses should be valid
expect(response1).toBeInstanceOf(AIMessage);
expect(response2).toBeInstanceOf(AIMessage);

console.log(
"✅ Successfully completed multi-turn conversation with server tools"
);
} catch (error) {
// If server tools aren't available, the test should still not crash due to unsupported content format
if (
error instanceof Error &&

Check failure on line 79 (GitHub Actions / Check linting): Use of "instanceof" operator is forbidden
error.message.includes("Unsupported message content format")
) {
throw new Error(
"❌ REGRESSION: 'Unsupported message content format' error returned - this should be fixed!"
);
}

// Other errors (like API access issues) are expected and should not fail the test
console.log(
"⚠️ Server tools may not be available for this API key, but no format errors occurred"
);
}
}, 45000); // 45 second timeout for longer conversation

test("Server Tools Integration - Content Block Parsing", async () => {
// Test parsing of messages that contain server tool content blocks
const messageWithServerTool = new AIMessage({
content: [
{
type: "text",
text: "I'll search for that information.",
},
{
type: "server_tool_use",
id: "toolu_01ABC123",
name: "web_search",
input: {
query: "latest AI developments",
},
},
Contributor:

Have you seen how the built-in tool responses are formatted by the OpenAI provider when using the responses API? Would it be possible to make this structure look similar to that? This will cut down on the number of breaking changes we'll need to make when we eventually go to standardize this type of output.

Author:

Ah! I didn't! That's helpful context. I'll see if I can get the structure to be similar.

Author (@bleafman, May 29, 2025):

@benjamincburns I could use your confirmation/feedback before moving forward on how to split up / organize the data returned as part of the tool call.

You can see an example from Anthropic of a response with their built-in web search tool here.

One of the big advantages of the built-in tool is that Claude will respond with text + citations side by side, and you can use that to render results (you're also required by the ToS to display the citations with any returned results).

Based on what's happening in the OpenAI provider, specifically in the _convertOpenAIResponseMessageToBaseMessage, it looks like we'd just append the citations as-is to an entry in the content array. Does that sound right?

It ends up coming through like below:

content: [
  {
    "citations": [
      {
        "type": "web_search_result_location",
        "cited_text": "Santolina, also known as cotton lavender, is a small, mounded shrub with silvery, aromatic foliage and button-like yellow flowers. It thrives in dry, ...",
        "url": "https://mygardeninspo.com/mediterranean-flowers/",
        "title": "26 Mediterranean Flowers",
        "encrypted_index": "..."
      }
    ],
    "type": "text",
    "text": "This is a small, mounded shrub with silvery, aromatic foliage and yellow button-like flowers. It's perfect for your needs as it thrives in dry, rocky soils and is often used for edging paths and borders. It prefers full sun and minimal watering"
  },
  // ...more entries
]

For the tool use itself, it seems like I should pass that into the tool_calls as a normal tool block, right?

I basically have the content working; the tool_calls are a little tricky since Anthropic expects a specific format in follow-up turns, but I have some ideas.

Just want to confirm before I go way down the wrong direction or I'm missing something.

Note: The OpenAI provider does put some data into additional_kwargs.tool_calls, but that field is marked as deprecated, so I was planning on not replicating that behavior.
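
To make the append-citations-as-is idea concrete, here's a minimal sketch (not this PR's implementation; mapAnthropicTextBlock is a hypothetical helper, and it assumes the TextBlock typings from @anthropic-ai/sdk ^0.52.0):

import type Anthropic from "@anthropic-ai/sdk";
import { AIMessage } from "@langchain/core/messages";

// Keep the text and carry any citations through untouched so callers can
// render them alongside the text, as Anthropic's ToS requires.
function mapAnthropicTextBlock(block: Anthropic.Messages.TextBlock) {
  return {
    type: "text" as const,
    text: block.text,
    ...(block.citations ? { citations: block.citations } : {}),
  };
}

// Usage: build an AIMessage's content array from a response's text blocks.
declare const response: Anthropic.Messages.Message;
const aiMessage = new AIMessage({
  content: response.content
    .filter((b): b is Anthropic.Messages.TextBlock => b.type === "text")
    .map(mapAnthropicTextBlock),
});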

Contributor:

@bleafman I would think that's OK. We have some opinions about how we want to standardize those citation/annotation outputs, but that can be for a later iteration.
(We're doing something similar with the `annotations` key for OpenAI here.)

],
});

const messageWithSearchResult = new HumanMessage({
content: [
{
type: "web_search_tool_result",
tool_use_id: "toolu_01ABC123",
content: [
{
type: "web_search_result",
title: "AI Breakthrough 2024",
url: "https://example.com/ai-news",
content: "Recent developments in AI...",
},
],
},
],
});

// This should not throw an "Unsupported message content format" error
expect(() => {
const messages = [messageWithServerTool, messageWithSearchResult];
// Try to format these messages - this should work now
const formatted = _convertMessagesToAnthropicPayload(messages);
expect(formatted.messages).toHaveLength(2);

// Verify server_tool_use is preserved
const aiContent = formatted.messages[0].content as any[];

Check failure on line 138 (GitHub Actions / Check linting): Unexpected any. Specify a different type
expect(
aiContent.find((block) => block.type === "server_tool_use")
).toBeDefined();

// Verify web_search_tool_result is preserved
const userContent = formatted.messages[1].content as any[];

Check failure on line 144 (GitHub Actions / Check linting): Unexpected any. Specify a different type
expect(
userContent.find((block) => block.type === "web_search_tool_result")
).toBeDefined();
}).not.toThrow();

console.log(
"✅ Successfully parsed server tool content blocks without errors"
);
});

test("Server Tools Integration - Error Handling", async () => {
// Test that malformed server tool content doesn't crash the system
const messageWithMalformedContent = new AIMessage({
content: [
{
type: "text",
text: "Testing error handling",
},
{
type: "server_tool_use",
id: "test_id",
name: "web_search",
input: "malformed input", // This should be converted to object
},
],
});

// This should handle the malformed input gracefully
expect(() => {
const formatted = _convertMessagesToAnthropicPayload([
messageWithMalformedContent,
]);
expect(formatted.messages).toHaveLength(1);

const content = formatted.messages[0].content as any[];

Check failure on line 179 (GitHub Actions / Check linting): Unexpected any. Specify a different type
const toolUse = content.find((block) => block.type === "server_tool_use");
expect(toolUse).toBeDefined();
// The malformed string input should be converted to an empty object
expect(typeof toolUse.input).toBe("object");
}).not.toThrow();

console.log("✅ Successfully handled malformed server tool content");
});