I am using LangChain for my master's thesis on controlling a quadruped robot via LLMs. First of all, I have to say that LangChain has without a doubt saved me an incredible amount of time and is an amazing tool.
I am able to use it for tool calling through Azure OpenAI models such as GPT-4o.
I have also tried Llama-3.3-70B-Instruct deployed in Azure AI Foundry. The model itself works, but I can't get it to use any tools, even when specifically asked to. I have tried bind_tools() just as with the previous model, but it doesn't seem to have any effect. Could someone please help me figure this out?
Thank you very much.
This is my current implementation of the model:
```python
#!/usr/bin/env python3.9
import os

import dotenv
from langchain_openai import AzureChatOpenAI
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

from artaban_tools import move_robot

llm_streaming = False


def get_llm(model_type: str = "llama", streaming: bool = False):
    """A helper function to get the LLM instance.

    Args:
        model_type (str): Type of model to use ("openai" for Azure OpenAI GPT, "llama" for Azure ML).
        streaming (bool): Whether to enable streaming output.

    Returns:
        An instance of the selected Azure-based LLM.
    """
    global llm_streaming
    dotenv.load_dotenv(dotenv.find_dotenv())
    if model_type == "openai":
        # Use Azure OpenAI (GPT-4, GPT-3.5)
        llm_streaming = streaming
        return AzureChatOpenAI(
            api_version=os.getenv("AZURE_API_VERSION", "2024-02-15-preview"),
            azure_deployment=os.getenv("DEPLOYMENT_ID"),
            openai_api_key=os.getenv("AZURE_OPENAI_API_KEY"),
            azure_endpoint=os.getenv("API_ENDPOINT"),
            streaming=streaming,
            openai_api_type="azure",
        )
    elif model_type == "llama":
        # Use Azure Machine Learning (ML) deployment for LLaMA or any other model
        llm_streaming = False
        return AzureAIChatCompletionsModel(
            api_version=os.getenv("AZURE_API_VERSION", "2024-02-15-preview"),
            endpoint=os.getenv("AZURE_ML_ENDPOINT"),
            credential=os.getenv("AZURE_ML_API_KEY"),
            model_name=os.getenv("LLAMA_DEPLOYMENT_ID"),
        )
    else:
        raise ValueError(f"Unsupported model type: {model_type}. Choose 'openai' or 'llama'.")


def get_streaming():
    """Get the current streaming setting."""
    global llm_streaming
    return llm_streaming


def main():
    # Select model type (defaults to "llama")
    model_type = os.getenv("LLM_MODEL_TYPE", "llama")

    # Get the LLM instance
    llm = get_llm(model_type)
    if llm is None:
        raise ValueError("LLM instance is None. Check API keys and deployment settings.")

    # Send a test message
    try:
        messages = [
            {"role": "system", "content": "You are a helpful AI assistant."},
            {"role": "user", "content": "Can you use the tool move the robot for demonstration to move the robot forward?"},
        ]
        llm.bind_tools([move_robot])
        ai_msg = llm.invoke(messages)
        print(f"AI response: {ai_msg.content}")
    except Exception as e:
        print(f"Error during LLM invocation: {e}")


if __name__ == "__main__":
    main()
```
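One general note on `bind_tools()` that may be relevant here: in LangChain it does not modify the model in place; it returns a new runnable that carries the bound tools, so the return value has to be captured and used for the invocation. A toy stand-in class (not LangChain code, just an illustration of the immutable-binding pattern) shows the difference:

```python
class FakeChatModel:
    """A minimal stand-in for a chat model, used only to illustrate
    that bind_tools() returns a NEW object instead of mutating self."""

    def __init__(self, tools=None):
        self.tools = list(tools or [])

    def bind_tools(self, tools):
        # Returns a fresh model carrying the tools; `self` is left unchanged.
        return FakeChatModel(tools=self.tools + list(tools))

    def invoke(self, messages):
        # A real model would decide whether to emit tool calls here;
        # this stub just reports which tools it can see.
        return {"visible_tools": [t.__name__ for t in self.tools]}


def move_robot():
    pass


llm = FakeChatModel()
llm.bind_tools([move_robot])                   # return value discarded
print(llm.invoke([]))                          # {'visible_tools': []}

llm_with_tools = llm.bind_tools([move_robot])  # return value kept
print(llm_with_tools.invoke([]))               # {'visible_tools': ['move_robot']}
```

With a discarded return value, the model that is later invoked has never seen the tools, which matches the symptom of the model never calling them.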
As for the tool, we can use a hypothetical dummy function that illustrates the point:

```python
from langchain_core.tools import tool


@tool
def move_robot(linear_x: float, linear_y: float, angular_z: float) -> str:
    """
    Move the robot based on the provided linear and angular velocities.
    All values must be within the interval <-1, 1>.

    :param linear_x: The linear velocity in the x-direction (positive values move the robot forward).
    :param linear_y: The linear velocity in the y-direction (positive values move the robot to the left).
    :param angular_z: The angular velocity around the z-axis (positive values rotate the robot counterclockwise).
    :return: A success message.
    """
    print(f"Moving robot with linear_x={linear_x}, linear_y={linear_y}, angular_z={angular_z}")
    return "Success"
```
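For models that speak the OpenAI-style function-calling protocol, what `bind_tools` ultimately does with a function like this is turn its signature and docstring into a JSON schema that is sent alongside the messages. A rough stdlib-only approximation of that conversion (a hypothetical helper for illustration; LangChain's real implementation is richer, using pydantic models and docstring parsing):

```python
import inspect

# Map Python annotations to JSON-schema type names (simplified).
_TYPE_MAP = {float: "number", int: "integer", str: "string", bool: "boolean"}


def function_to_tool_schema(fn):
    """Build an OpenAI-style tool schema from a plain Python function.
    Hypothetical helper, not LangChain's actual conversion logic."""
    sig = inspect.signature(fn)
    properties = {
        name: {"type": _TYPE_MAP.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": list(properties),
            },
        },
    }


def move_robot(linear_x: float, linear_y: float, angular_z: float) -> str:
    """Move the robot based on the provided linear and angular velocities."""
    return "Success"


schema = function_to_tool_schema(move_robot)
print(schema["function"]["name"])                                  # move_robot
print(schema["function"]["parameters"]["properties"]["linear_x"])  # {'type': 'number'}
```

Whether the model then actually emits a tool call against this schema is up to the model and the provider's endpoint, which is why the same binding code can behave differently between Azure OpenAI and an Azure AI Foundry deployment.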