Add code to the invoke() function of class BaseChatModel to move the finish_reason from the chatGeneration to the additional_kwargs of the AIMessage, so that the finish_reason can be used for routing when it is returned by the model in a RunnableSequence #3605
reemster123 started this conversation in Ideas
I was creating my own custom agent and wanted to write a route() function based on the model's AIMessage, but I noticed that it is not possible to determine whether the model returned an AgentFinish, since we cannot use instanceof in Node.js here. I then looked into the invoke() function of the BaseChatModel class, and indeed it still carries the comment:
"// TODO: Remove cast after figuring out inheritance" at langchain/dist/chat_models/base.cjs line 50.
I did not figure out the inheritance either, but I found another way to at least expose the "finish_reason" property without having to rework every related function or class. We could do this by adding the following lines:
const finish_reason = chatGeneration.generationInfo?.finish_reason;
chatGeneration.message.additional_kwargs.finish_reason = finish_reason;
This reads the finish_reason property from the chatGeneration object and copies it into the additional_kwargs object nested inside the AIMessage. That way we can always look in additional_kwargs to find the finish_reason, and we don't have to change the type or structure of the message that invoke() returns.
In my own project I created a CustomChatOpenAI class which extends ChatOpenAI, added these lines, and I am now able to use the finish_reason for routing. Maybe it helps to add it here too ;)
Example:
import { BaseChatModel } from "langchain/chat_models/base";
import { ChatOpenAI } from "langchain/chat_models/openai";

class CustomChatOpenAI extends ChatOpenAI {
  async invoke(input, options) {
    const promptValue = BaseChatModel._convertInputToPromptValue(input);
    const result = await this.generatePrompt([promptValue], options, options?.callbacks);
    const chatGeneration = result.generations[0][0];
    // TODO: Remove cast after figuring out inheritance
    // CUSTOM ADDED: copy the finish_reason from the chatGeneration into additional_kwargs
    const finish_reason = chatGeneration.generationInfo?.finish_reason;
    chatGeneration.message.additional_kwargs.finish_reason = finish_reason;
    return chatGeneration.message;
  }
}

export { CustomChatOpenAI };
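With the finish_reason surfaced this way, a downstream route() step can branch on it. A minimal sketch of what such a router could look like (the branch names and the specific finish_reason values checked here are my own assumptions for illustration, not part of the patch):

```javascript
// Hypothetical route() for a custom agent loop: inspect the finish_reason
// that CustomChatOpenAI copied into additional_kwargs and pick a branch.
function route(message) {
  const finishReason = message.additional_kwargs?.finish_reason;
  if (finishReason === "tool_calls" || finishReason === "function_call") {
    return "tools"; // model wants to call a tool -> continue the loop
  }
  if (finishReason === "length") {
    return "truncated"; // output was cut off by the token limit
  }
  return "end"; // "stop" (or missing) -> treat as AgentFinish
}
```

Because the field lives in additional_kwargs, this works without instanceof checks on the returned message.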