Error when using MultiPromptOptimizer with MLX model #49

Open
@tandetat

Description

Hi,

When using langmem's create_multi_prompt_optimizer with an MLXPipeline-loaded model, I get an error inside GradientPromptOptimizer because it passes two tools and tool_choice="any" to bind_tools here: https://github.com/langchain-ai/langmem/blame/10448879b8815a7c5ee286556528d78bc721f32c/src/langmem/prompts/gradient.py#L150

This line raises an error whenever more than one tool is passed together with a tool_choice:
https://github.com/langchain-ai/langchain-community/blame/bc87773064735e649cfd798185502e156d5e948a/libs/community/langchain_community/chat_models/mlx.py#L249

Am I using this wrong, or should this line also check whether tool_choice == "any", or is this limit on the number of tools outdated?
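For context, here is a simplified sketch of how I understand the guard in mlx.py to behave (check_tool_choice is a hypothetical name I'm using for illustration; the real code lives inside ChatMLX.bind_tools):

```python
from typing import Any, List, Optional


def check_tool_choice(formatted_tools: List[Any], tool_choice: Optional[Any]) -> None:
    """Simplified paraphrase of the guard in ChatMLX.bind_tools:
    if tool_choice is set, exactly one tool must be provided."""
    if tool_choice is not None and tool_choice:
        if len(formatted_tools) != 1:
            raise ValueError(
                "When specifying `tool_choice`, you must provide exactly one tool."
            )


# langmem's gradient.py binds two tools with tool_choice="any",
# which trips this guard and produces the error I'm seeing:
try:
    check_tool_choice(["tool_a", "tool_b"], "any")
except ValueError as e:
    print(e)
```

If tool_choice="any" simply means "the model must call some tool", it seems like the length-1 restriction shouldn't apply in that case, but I may be misreading the intent of the check.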
