Langchain, async streaming with memory. #3270
Replies: 2 comments 1 reply
-
🤖 Based on your description, it seems like you're encountering issues with memory management and asynchronous operations when making multiple API calls. For streaming, you can call the chain's stream() method, which returns an async iterable you can consume with for await. Here is an example that streams the output through a JsonOutputFunctionsParser:

const chain = prompt
  .pipe(model)
  .pipe(new JsonOutputFunctionsParser({ diff: true }));

const stream = await chain.stream({
  foo: "bears",
});

// Stream a diff as JSON patch operations
for await (const chunk of stream) {
  console.log(chunk);
}

const chain2 = prompt.pipe(model).pipe(new JsonOutputFunctionsParser());

const stream2 = await chain2.stream({
  foo: "beets",
});

// Stream the entire aggregated JSON object
for await (const chunk of stream2) {
  console.log(chunk);
}

In this code, the first chain streams incremental diffs as JSON patch operations (because of diff: true), while the second chain streams the aggregated JSON object as it is built up.

For memory management, LangChain provides memory classes such as BufferMemory. Here is an example of attaching a BufferMemory to a ConversationChain:

const chain = new ConversationChain({
  memory: new BufferMemory({ returnMessages: true, memoryKey: "history" }),
  prompt: chatPrompt,
  llm: chat,
});

In this code, the BufferMemory stores the conversation under the "history" key and returns it as chat messages, so the history placeholder in chatPrompt is filled in automatically on each call.

I hope this helps! If you have any further questions, feel free to ask.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
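To make the memory behaviour above concrete, here is a minimal runnable sketch (not from the original answer; it assumes langchain 0.0.x import paths, an ES module so top-level await works, and an OPENAI_API_KEY in the environment). The same chain is awaited twice, and BufferMemory carries the first exchange into the second call:

// Minimal sketch — assumes langchain 0.0.x import paths, an ES module
// (for top-level await) and OPENAI_API_KEY set in the environment.
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";
import { BufferMemory } from "langchain/memory";
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
  MessagesPlaceholder,
} from "langchain/prompts";

const chat = new ChatOpenAI({ temperature: 0 });

const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate("You are a helpful assistant."),
  new MessagesPlaceholder("history"),
  HumanMessagePromptTemplate.fromTemplate("{input}"),
]);

const chain = new ConversationChain({
  memory: new BufferMemory({ returnMessages: true, memoryKey: "history" }),
  prompt: chatPrompt,
  llm: chat,
});

// Each call is awaited; the second call sees the first exchange through memory.
const first = await chain.call({ input: "My name is Ana." });
console.log(first.response);

const second = await chain.call({ input: "What is my name?" });
console.log(second.response);

Because each call is awaited, turns within one conversation run in order, while separate conversations can each build their own chain (and memory) and run concurrently.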
-
Can someone give more specific advice?
-
Hi everyone,
I'm working on a Node.js application that streams LLM responses while managing conversation memory. The app works correctly for the first conversation, but subsequent API calls for new users fail until I restart the server, and the streaming does not behave asynchronously. I would greatly appreciate any guidance or suggestions on how to resolve this.
Issue Summary:
The app works fine for the first conversation.
Subsequent conversations require a system restart for the app to function.
The streaming is not asynchronous, leading to performance issues.
Everything lives in a single Express route handler:
router.post('/myAPI', async (req, res) => {
  // Initializing memory with chat history I get from my DB
  const memory = new BufferMemory({
    chatHistory: new ChatMessageHistory(pastMessages),
    returnMessages: true,
    memoryKey: "history",
  });

  // OpenAI LLM options configuration
  const openAILLMOptions = {
    modelName: modelname,
    cache: true,
    openAIApiKey: 'mykey',
    temperature: temperature,
    streaming: true,
    verbose: true,
  };

  // Chat prompt setup
  const chatPrompt = ChatPromptTemplate.fromPromptMessages([
    SystemMessagePromptTemplate.fromTemplate("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know." + modifiedContent),
    new MessagesPlaceholder("history"),
    HumanMessagePromptTemplate.fromTemplate("{input}"),
  ]);

  // Model and streaming chain initialization
  const chatModel = new ChatOpenAI(openAILLMOptions);
  const chain = new LLMChain({
    memory: memory,
    prompt: chatPrompt,
    llm: chatModel,
  });
  // Streaming function
  const stream = await chain.stream({
    handleLLMNewToken(token: any) {
      res.write(`data: ${JSON.stringify(token)}\n\n`);
    },
    handleLLMEnd() {
      res.end();
    },
  });
});
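As far as I can tell from the snippet, the object passed to chain.stream() here is a set of callback handlers, but a chain's stream()/call() methods expect the prompt's input values (e.g. { input: ... }); token callbacks belong in the call options instead. Below is a hedged sketch of one way to wire this up, assuming langchain 0.0.x import paths; userMessage, pastMessages, modelname, temperature and modifiedContent are placeholders for values the real app would load per request:

import { Router } from "express";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { LLMChain } from "langchain/chains";
import { BufferMemory, ChatMessageHistory } from "langchain/memory";
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
  MessagesPlaceholder,
} from "langchain/prompts";

const router = Router();

router.post("/myAPI", async (req, res) => {
  // Placeholders: the real app would take these from req.body and the database.
  const userMessage = req.body.input;
  const pastMessages = [];        // chat history loaded from the DB
  const modelname = "gpt-3.5-turbo";
  const temperature = 0.7;
  const modifiedContent = "";

  // Everything is built per request, so no state is shared between users.
  const memory = new BufferMemory({
    chatHistory: new ChatMessageHistory(pastMessages),
    returnMessages: true,
    memoryKey: "history",
  });

  const chatPrompt = ChatPromptTemplate.fromPromptMessages([
    SystemMessagePromptTemplate.fromTemplate(
      "The following is a friendly conversation between a human and an AI." +
        modifiedContent
    ),
    new MessagesPlaceholder("history"),
    HumanMessagePromptTemplate.fromTemplate("{input}"),
  ]);

  const chatModel = new ChatOpenAI({
    modelName: modelname,
    temperature: temperature,
    openAIApiKey: process.env.OPENAI_API_KEY,
    streaming: true,
  });

  const chain = new LLMChain({ memory, prompt: chatPrompt, llm: chatModel });

  // Server-sent-events style headers so the client can read tokens as they arrive.
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");

  try {
    // call() takes the prompt input values; streaming happens through the
    // callback handlers passed as the second argument.
    await chain.call({ input: userMessage }, [
      {
        handleLLMNewToken(token) {
          res.write(`data: ${JSON.stringify(token)}\n\n`);
        },
      },
    ]);
  } finally {
    // call() resolves after generation finishes, so the stream can be closed here.
    res.end();
  }
});

export default router;

Because the memory, model and chain are all created inside the handler, concurrent users each get their own state, which should remove the need to restart the server between conversations.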