feat(vertex-ai): function calling: convert markdown 'samples' to python #12811


Open · wants to merge 14 commits into main
Changes from 4 commits
2 changes: 1 addition & 1 deletion generative_ai/function_calling/basic_example.py
@@ -39,7 +39,7 @@ def generate_function_call() -> GenerationResponse:
     vertexai.init(project=PROJECT_ID, location="us-central1")

     # Initialize Gemini model
-    model = GenerativeModel("gemini-1.5-flash-002")
+    model = GenerativeModel("gemini-1.5-pro-002")
@Valeriy-Burlaka (Member, Author) commented on Dec 2, 2024:
The Gemini Flash model no longer works with this code sample (see the log):

  File "/workspace/generative_ai/function_calling/test_function_calling.py", line 31, in test_function_calling
    response = basic_example.generate_function_call()
  File "/workspace/generative_ai/function_calling/basic_example.py", line 75, in generate_function_call
    function_call = response.candidates[0].function_calls[0]
IndexError: list index out of range

The actual response from the model is now:

candidates {
  content {
    role: "model"
    parts {
      text: "I cannot answer this question. The available function `get_current_weather` has no implementation."
    }
  }
  avg_logprobs: -0.17872051000595093
  finish_reason: STOP
}
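One way to avoid the IndexError above is to check `function_calls` before indexing into it, since the model may answer in plain text instead of calling a function. A minimal sketch (the helper name and the SimpleNamespace stand-ins are hypothetical, not part of the sample):

```python
from types import SimpleNamespace


def extract_function_call(response):
    """Return the first function call, or None when the model replied in text."""
    candidate = response.candidates[0]
    # `function_calls` is empty when the model returns a text part instead.
    calls = getattr(candidate, "function_calls", None) or []
    return calls[0] if calls else None


# Minimal stand-ins for a GenerationResponse, for illustration only.
text_only = SimpleNamespace(candidates=[SimpleNamespace(function_calls=[])])
with_call = SimpleNamespace(
    candidates=[SimpleNamespace(function_calls=["get_current_weather"])]
)
```

With a guard like this, the test could assert on the text answer as a fallback instead of crashing.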

Member Author:

Created a general issue to initiate a discussion about the current CI process.

Reply:

Thanks for reporting this, Valeriy-Burlaka. The Gemini Flash model appears to have been deprecated. The change to gemini-1.5-pro-002 in this pull request reflects the current, supported model. The issue you created is the appropriate place to track the broader CI implications.


    # Define the user's prompt in a Content object that we can reuse in model calls
    user_prompt_content = Content(
74 changes: 74 additions & 0 deletions generative_ai/function_calling/function_declaration_from_func.py
@@ -0,0 +1,74 @@
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

from typing import List

from vertexai.generative_models import FunctionDeclaration


PROJECT_ID = os.getenv("GOOGLE_CLOUD_PROJECT")


def function_declaration_from_func() -> FunctionDeclaration:
    # [START generativeaionvertexai_gemini_function_calling_declare_from_function]
    # Define a function. It can be a local function or one that calls an external API.
    def multiply_numbers(numbers: List[int]) -> int:
@Valeriy-Burlaka (Member, Author) commented on Dec 2, 2024:
This is almost a copy of the original function declaration. I added the type annotations to fix the 400 error that was lurking in this doc sample:

...
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
        status = StatusCode.INVALID_ARGUMENT
        details = "Unable to submit request because one or more function parameters didn't specify the schema type field. Learn more: https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling"
        debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.75.10:443 {created_time:"2024-12-02T15:20:19.676923+01:00", grpc_status:3, grpc_message:"Unable to submit request because one or more function parameters didn\'t specify the schema type field. Learn more: https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling"}"
>
...
google.api_core.exceptions.InvalidArgument: 400 Unable to submit request because one or more function parameters didn't specify the schema type field. Learn more: https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling

A unit test that sends a chat message is necessary to verify the correctness of the function declaration.

"""
Calculates the product of all numbers in an array.

Args:
numbers: An array of numbers to be multiplied.

Returns:
The product of all the numbers. If the array is empty, returns 1.
"""

if not numbers: # Handle empty array
return 1

product = 1
for num in numbers:
product *= num

return product

    multiply_number_func = FunctionDeclaration.from_func(multiply_numbers)

    '''
    multiply_number_func contains the following schema:

    name: "multiply_numbers"
    description: "Calculates the product of all numbers in an array."
    parameters {
      type_: OBJECT
      properties {
        key: "numbers"
        value {
          description: "An array of numbers to be multiplied."
          title: "Numbers"
        }
      }
      required: "numbers"
      description: "Calculates the product of all numbers in an array."
      title: "multiply_numbers"
    }
    '''
    # [END generativeaionvertexai_gemini_function_calling_declare_from_function]
    return multiply_number_func


if __name__ == "__main__":
    function_declaration_from_func()
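On the type-annotation fix discussed in the review comment above: FunctionDeclaration.from_func derives the parameter schema from the function's type hints, so an unannotated parameter yields a schema with no type field, which the API rejects with the 400 shown in the log. A quick, library-independent way to spot such parameters before calling the API (a sketch; the helper name and example functions are hypothetical):

```python
import inspect
from typing import List


def params_missing_annotation(fn):
    """List parameter names that carry no type hint."""
    sig = inspect.signature(fn)
    return [
        name
        for name, param in sig.parameters.items()
        if param.annotation is inspect.Parameter.empty
    ]


def untyped(numbers):  # schema type cannot be inferred -> API 400
    return numbers


def typed(numbers: List[int]) -> int:  # schema can be inferred
    return len(numbers)
```

Running the check on a function before passing it to from_func makes the failure mode explicit instead of surfacing as a gRPC INVALID_ARGUMENT at request time.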
15 changes: 15 additions & 0 deletions generative_ai/function_calling/test_function_calling.py
@@ -13,15 +13,18 @@
# limitations under the License.

import backoff
import pytest

from google.api_core.exceptions import ResourceExhausted
from vertexai.generative_models import GenerativeModel, Tool

import advanced_example
import basic_example
import chat_example
import chat_function_calling_basic
import chat_function_calling_config
import parallel_function_calling_example
import function_declaration_from_func


@backoff.on_exception(backoff.expo, ResourceExhausted, max_time=10)
@@ -52,12 +55,14 @@ def test_function_calling_advanced_function_selection() -> None:
)


@pytest.mark.skip(reason="Blocked on b/... ")
@Valeriy-Burlaka (Member, Author) commented on Dec 2, 2024:

Both samples that use the OpenAI client no longer work (log):

  File "/workspace/generative_ai/function_calling/test_function_calling.py", line 59, in test_function_calling_basic
    response = chat_function_calling_basic.generate_text()
  File "/workspace/generative_ai/function_calling/chat_function_calling_basic.py", line 39, in generate_text
    client = openai.OpenAI(
  File "/workspace/generative_ai/function_calling/.nox/py-3-9/lib/python3.9/site-packages/openai/_client.py", line 122, in __init__
    super().__init__(
  File "/workspace/generative_ai/function_calling/.nox/py-3-9/lib/python3.9/site-packages/openai/_base_client.py", line 825, in __init__
    self._client = http_client or SyncHttpxClientWrapper(
  File "/workspace/generative_ai/function_calling/.nox/py-3-9/lib/python3.9/site-packages/openai/_base_client.py", line 723, in __init__
    super().__init__(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'proxies'

- generated xml file: /workspace/generative_ai/function_calling/sponge_log.xml -
=========================== short test summary info ============================
FAILED test_function_calling.py::test_function_calling_basic - TypeError: __i...
FAILED test_function_calling.py::test_function_calling_config - TypeError: __...
========================= 2 failed, 5 passed in 14.85s =========================

Link to the doc section with OpenAI samples: https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/function-calling#python-openai

Member Author:

Created a general issue to initiate a discussion about the current CI process.

Reply:

The error TypeError: __init__() got an unexpected keyword argument 'proxies' in the OpenAI client indicates a version mismatch between the openai library and its httpx dependency: httpx 0.28 removed the deprecated proxies keyword argument, which older openai releases still pass to the underlying HTTP client. To resolve this, you should:

  1. Check the openai library version: ensure the version pinned in requirements.txt (and pyproject.toml, if used) is recent enough to no longer pass proxies to httpx, or pin httpx to a version below 0.28.
  2. Review proxy configuration: if you are using proxies, configure them through the mechanisms supported by your installed versions (for example, by passing a pre-configured http_client); incorrect proxy settings can also cause client initialization errors. If you are not using proxies, remove any proxy settings from your code or environment.
  3. Virtual environment consistency: use the same virtual environment or container for testing and development to avoid version conflicts.
  4. Dependency resolution: run pip install --upgrade pip and then pip install -r requirements.txt (or your equivalent package manager command) to ensure all dependencies are resolved consistently.

The issue you've created is a good first step to track this problem. Consider adding more details about your environment (Python version, openai and httpx versions, proxy configuration if applicable) to help diagnose the root cause.
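As a stopgap until the sample's dependencies are updated, the incompatibility can be pinned away in requirements.txt. A sketch (the exact version bounds are assumptions and should be verified against the sample's actual pins):

```
# Either constraint resolves the 'proxies' TypeError:
# openai >= 1.55.3 no longer passes `proxies` to httpx, and
# httpx < 0.28 still accepts the deprecated argument.
openai>=1.55.3
httpx<0.28
```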

@backoff.on_exception(backoff.expo, ResourceExhausted, max_time=10)
def test_function_calling_basic() -> None:
    response = chat_function_calling_basic.generate_text()
    assert "get_current_weather" in response.choices[0].message.tool_calls[0].id


@pytest.mark.skip(reason="Blocked on b/... ")
Member Author:

I will need help with creating a real ticket in Buganizer.

@backoff.on_exception(backoff.expo, ResourceExhausted, max_time=10)
def test_function_calling_config() -> None:
    response = chat_function_calling_config.generate_text()
@@ -85,3 +90,13 @@ def test_function_calling_chat() -> None:
def test_parallel_function_calling() -> None:
    response = parallel_function_calling_example.parallel_function_calling_example()
    assert response is not None


def test_prototype() -> None:
    func_declaration = function_declaration_from_func.function_declaration_from_func()
    tools = Tool(function_declarations=[func_declaration])
    model = GenerativeModel(model_name="gemini-1.5-pro-002", tools=[tools])
    chat_session = model.start_chat()
    response = chat_session.send_message("What will be 1 multiplied by 2?")

    assert response is not None