Integrating Function Calling APIs into LLM interaction #4234
Replies: 8 comments 3 replies
-
🤖 Hey there @Jkrish1011! 🎉 Great to see you back diving into the depths of LangChainJS!

To integrate the function API calling functionality into your existing langchainjs code, you can follow these steps. First, define a function that makes the API call:

```typescript
async function callApi() {
async function callApi() {
// Replace with your actual API call
const response = await fetch('https://api.example.com/data');
const data = await response.json();
return data;
}
```

Next, add this function to your `RunnableSequence` so its result is available to the rest of the chain:

```typescript
const _chain = RunnableSequence.from([
{
question: _standaloneQuestionChain,
original_input: new RunnablePassthrough(),
api_data: callApi, // Add your function here
},
_answerChain,
]);
```

Then reference the API data in your answer template:

```typescript
const sample_answerTemplate = `You are a helpful assistant that can access external functions.
The responses from these function calls will be appended to this dialogue. Please provide responses based on the information from these function calls.
Use the following pieces of context to answer the question at the end and the conversation history.
Try to find the answer from the conversation context.
If you don't know the answer, just say you don't know. DO NOT try to make up an answer.
If the question is not related to the context, politely respond that you are tuned to only answer questions that are related to the context.
question: {question}
Conversation History: {chat_history}
API Data: {api_data}
answer: `;
```

This way, your language model will be able to interact with the API and respond to prompts based on the API information. Please note that this is a simplified example, and you might need to adjust it according to your specific needs and the structure of your API data. For more information, you can refer to the LangChainJS documentation and the RunnableSequence API documentation.

I hope this helps! If you have any more questions, feel free to ask.

-Dosu
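As a quick sanity check of the wiring above, here is a minimal usage sketch; the input keys (`question`, `chat_history`) are assumed to match what `_standaloneQuestionChain` and the template expect in your code, and the question text is made up:

```typescript
// Hypothetical invocation; adjust the input keys to your actual chain.
const result = await _chain.invoke({
  question: "What does the latest API data say?",
  chat_history: "",
});
console.log(result);
```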
-
@dosu-bot
-
🤖 To integrate the function API calling functionality into your existing langchain code, you can create a new class that extends the `StructuredTool` class:

```typescript
import { StructuredTool } from "@langchain/core/tools";
import { z } from "zod";
import axios from 'axios'; // You can use any library to make HTTP requests
class APITool extends StructuredTool {
schema = z.object({
endpoint: z.string().describe("The API endpoint"),
// Add other parameters as needed
});
name = "api_tool";
description = "Make a request to an API";
async _call(input: { endpoint: string }) {
const { endpoint } = input;
try {
const response = await axios.get(endpoint); // Replace with the appropriate HTTP method
return JSON.stringify(response.data);
} catch (error) {
console.error(error);
return JSON.stringify({ error: "An error occurred while making the API request" });
}
}
}
```

You can then add an instance of this class to the `tools` array and create the assistant:

```typescript
const tools = [new WeatherTool(), new HumanReadableChecker(), new APITool()];
const agent = await OpenAIAssistantRunnable.createAssistant({
model: "gpt-3.5-turbo-1106",
instructions: "You are a weather bot. Use the provided functions to answer questions.",
name: "Weather Assistant",
tools,
asAgent: true,
});
```

Now, your language model can interact with the API of your choice by using the `api_tool`.

I hope this helps! Let me know if you have any other questions.
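One way to actually run the assistant created above is to wrap it in an `AgentExecutor`, which dispatches the tool calls for you (the question text here is just an illustration):

```typescript
import { AgentExecutor } from "langchain/agents";

// The executor runs the agent loop: model -> tool call -> tool result -> model.
const agentExecutor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
});

const assistantResponse = await agentExecutor.invoke({
  content: "What's the weather like in Boston today?",
});
console.log(assistantResponse);
```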
-
🤖 To make a POST request with parameters extracted from the user's query, you can create a function that parses the user's query and extracts the required parameters. This function can then be used to make the API call. Here's an example of how you can do this:

```typescript
function parseUserQuery(query) {
// Parse the user's query to extract the required parameters
// This is a simple example and may need to be adjusted based on the actual format of the user's query
const [name, city] = query.split(' ').slice(-2);
return { name, city };
}
async function callApi(query) {
const { name, city } = parseUserQuery(query);
// Make the API call
const response = await fetch('https://api.example.com', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ name, city }),
});
const data = await response.json();
return data;
}
```

In your `RunnableSequence`, you can then add a step that calls this function and merges the result into the context:

```typescript
const _chain = RunnableSequence.from([
{
question: _standaloneQuestionChain,
original_input: new RunnablePassthrough(),
},
async (context) => {
const apiData = await callApi(context.question);
return { ...context, apiData };
},
_answerChain,
]);
```

In this example, the middle step takes the output of the previous step, calls `callApi` with the standalone question, and passes the API data along to `_answerChain`. Please note that this is a simplified example and you may need to adjust the `parseUserQuery` function to match the actual format of your users' queries.

I hope this helps! If you have any further questions, feel free to ask.
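One practical addition, not part of the original reply: check that the request actually succeeded before parsing the body, so HTTP errors don't surface as JSON parse failures. A minimal sketch (the `callApiSafe` name is hypothetical):

```typescript
async function callApiSafe(query: string) {
  const { name, city } = parseUserQuery(query);
  const response = await fetch('https://api.example.com', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name, city }),
  });
  // Surface HTTP errors explicitly instead of parsing an error page as JSON.
  if (!response.ok) {
    throw new Error(`API request failed with status ${response.status}`);
  }
  return response.json();
}
```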
-
@dosu-bot,

Issue - I want the LLM to understand the user query and extract the name and city from the prompt, then make the API calls and respond with the result of the API as its context for the answer.

Current state - The API calls are being made, but the required input parameters are not extracted properly, so the LLM is outputting gibberish.
-
🤖 I understand your issue. You want to extract specific parameters from the user's query, use these parameters to make an API call, and then use the API response as context for the model's answer. Here's how you can achieve this:

First, you need to create a custom Runnable that will parse the user's query and extract the desired parameters. This Runnable can then be added to your RunnableSequence. Here's an example:

```typescript
import { Runnable } from "@langchain/core/runnables";
class ParameterExtractor extends Runnable<string, { name: string | null; city: string | null }> {
  // Custom runnables must declare a namespace and implement `invoke`.
  lc_namespace = ["custom", "parameter_extractor"];

  async invoke(input: string) {
    // Assuming the input is a string, we can use regular expressions or other string parsing methods to extract the desired parameters.
    const nameMatch = input.match(/name is (\w+)/i);
    const cityMatch = input.match(/from (\w+)/i);
    const name = nameMatch ? nameMatch[1] : null;
    const city = cityMatch ? cityMatch[1] : null;
    return { name, city };
  }
}
// Then, you can add this to your RunnableSequence like so:
const parameterExtractionChain = RunnableSequence.from([
new ParameterExtractor(),
// ...other Runnables
]);
```

In this example, the ParameterExtractor Runnable is looking for phrases like "name is John" and "from New York" in the user's input. It then extracts the name and city from these phrases and returns them. You would need to adjust the regular expressions to match the specific format of your users' queries.

Next, you need to create a new tool that will handle the API call and provide the response to the agent (using `StructuredTool` here, since the input is an object rather than a plain string). Here's an example:

```typescript
import { StructuredTool } from "@langchain/core/tools";
import { z } from "zod";
import axios from "axios";
class WeatherAPI extends StructuredTool {
  name = "weather_api";
  description = "Make a POST request to the weather API with a name and city";
  schema = z.object({
    name: z.string().describe("The user's name"),
    city: z.string().describe("The user's city"),
  });

  async _call(input: { name: string; city: string }) {
    // Make the API call
    const response = await axios.post(`https://your-api-url.com`, input);
    // Tools must return a string, so serialize the API response
    return JSON.stringify(response.data);
  }
}
// Add the new tool to the tools array
const tools = [
new WeatherAPI(),
// ...other tools
];
```

In this example, a new tool, `WeatherAPI`, handles the API call and returns the response to the agent. Please replace `https://your-api-url.com` with the URL of the API you want to call.

I hope this helps! If you have any other questions, feel free to ask.
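For reference, a hedged sketch of how the extractor and the tool could be wired into one sequence; the `extractAndCall` name and the null guard are additions, not from the original answer:

```typescript
import { RunnableSequence } from "@langchain/core/runnables";

const extractAndCall = RunnableSequence.from([
  new ParameterExtractor(),
  async ({ name, city }: { name: string | null; city: string | null }) => {
    // Guard against failed extraction so the tool isn't called with nulls.
    if (!name || !city) {
      return "Could not find both a name and a city in the query.";
    }
    return new WeatherAPI().invoke({ name, city });
  },
]);

const apiResult = await extractAndCall.invoke("Hi, my name is John and I'm from Boston");
```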
-
🤖 To extract specific parameters like name and city from a user's query, you can use a Named Entity Recognition (NER) model. In JavaScript, libraries like `compromise` can do this. Here's an example:

```typescript
import nlp from 'compromise';
const extractParameters = (input: string) => {
const doc = nlp(input);
const name = doc.people().text();
  // compromise exposes `.places()` (which covers cities) rather than a `.cities()` method.
  const city = doc.places().text();
return { name, city };
};
const { name, city } = extractParameters(conversationalRetrievalQAChainInput.question);
```

In this example, `extractParameters` uses the NER model to pull the name and city out of the question. Once you have the name and city, you can pass them to your API call function. Here's an example of how you can do this:

```typescript
const apiResponse = await makeApiCall(name, city);
```

In this example, `makeApiCall` is a placeholder for whatever function makes your API request. Finally, you can use the result of the API call as the context for the language model. Here's an example of how you can do this:

```typescript
const result = await conversationalRetrievalQAChain.invoke({
question: conversationalRetrievalQAChainInput.question,
chat_history: conversationalRetrievalQAChainInput.chat_history,
context: apiResponse, // Use the API response as the context
});
```

In this example, the API response is passed in as the `context` input of the chain. Please note that this is a simplified example and you may need to adjust it according to your specific requirements and the structure of your API response. Also, the NER model may not always correctly extract the parameters, especially if the input text is complex or ambiguous. You may need to train the NER model with more data or use a more advanced NER model for better accuracy.

Regarding your question about using a separate RunnableSequence with a question prompt and answer template to understand the user's input and get the information we require out of it: yes, you can do that. However, it might be more efficient to use a NER model to extract the parameters directly from the user's query, as shown in the example above.

I hope this helps! If you have any more questions, feel free to ask.
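Since the model used earlier in this thread already supports OpenAI function calling, another option (not from the original answer) is to let the LLM itself do the extraction by forcing a function call against a schema. A minimal sketch, assuming `zod-to-json-schema` is installed and using a made-up `extract_parameters` function name:

```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputFunctionsParser } from "langchain/output_parsers";

const extractionSchema = z.object({
  name: z.string().describe("The person's name"),
  city: z.string().describe("The city the person is from"),
});

// Forcing function_call guarantees the model always returns structured arguments.
const extractionModel = new ChatOpenAI({ modelName: "gpt-3.5-turbo-1106" }).bind({
  functions: [
    {
      name: "extract_parameters",
      description: "Extract the name and city from the user's query",
      parameters: zodToJsonSchema(extractionSchema),
    },
  ],
  function_call: { name: "extract_parameters" },
});

const extractionChain = extractionModel.pipe(new JsonOutputFunctionsParser());
const extracted = (await extractionChain.invoke(
  "Hi, my name is John and I'm from New York"
)) as { name: string; city: string };
```

This tends to be more robust than regexes or NER for free-form queries, at the cost of an extra model call.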
-
🤖 In the LangChain JavaScript framework, the relevant wiring lives in the OpenAI functions chain. Here's the relevant code:

```typescript
const llmWithKwargs = (llm as Runnable).bind(llmKwargs);
return prompt.pipe(llmWithKwargs).pipe(outputParser);
```

In this code, `bind` attaches the function-calling kwargs to the LLM, and the chain then pipes the prompt into the bound LLM and the LLM's output into the output parser. You can find more details about this in the langchain/src/chains/openai_functions/base.ts file.

As for asynchronous operations: yes, they are supported; runnables expose async `invoke` and related methods. You can find more details about this in the langchain-core/src/runnables/base.ts file.
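For context, a sketch of the same `bind` pattern with concrete values; the `get_weather` function definition is made up for illustration:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ modelName: "gpt-3.5-turbo-1106" });

// bind() returns a new runnable that passes these kwargs on every call.
const llmWithFunctions = llm.bind({
  functions: [
    {
      name: "get_weather",
      description: "Get the weather for a city",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  ],
});

// The bound model then slots into a chain exactly like the snippet above:
// prompt.pipe(llmWithFunctions).pipe(outputParser)
```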
-
Hello langchain team, I am a newbie programmer and I need some guidance here.
Context:
I am using langchain to build a multi-model LLM pipeline in which the LLM queries an API and returns results based on the API's information. To that end, I wish to integrate function API calling (https://js.langchain.com/docs/modules/chains/additional/openai_functions/openapi) as discussed in the blog linked here.
Aim: As you can all see from the code base I am currently using to chat with the LLM, how do I integrate the function API calling functionality into the code? I cannot find any reference for doing this with RunnableSequence.
Questions:
1/ As discussed in that blog and also this one (https://js.langchain.com/docs/integrations/chat/openai), how do I build the functionality where my model talks to an API of my choice and then answers the question prompt?
2/ Should I write a function to call the API, or will an internal langchain function make the call for LLM processing?
3/ Should I make the function call separately and add the result as context for the LLM interaction?
Any help or guidance would be appreciated.
Code