diff --git a/docs/core_docs/docs/integrations/chat/oci_genai.ipynb b/docs/core_docs/docs/integrations/chat/oci_genai.ipynb new file mode 100644 index 000000000000..5f891bb939e7 --- /dev/null +++ b/docs/core_docs/docs/integrations/chat/oci_genai.ipynb @@ -0,0 +1,333 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Oracle Cloud Infrastructure Generative AI" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Oracle Cloud Infrastructure (OCI) Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) that cover a wide range of use cases, and which is available through a single API.\n", + "Using the OCI Generative AI service you can access ready-to-use pretrained models, or create and host your own fine-tuned custom models based on your own data on dedicated AI clusters. Detailed documentation of the service and API is available [here](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm) and [here](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai/20231130/).\n", + "\n", + "This notebook explains how to use OCI's Generative AI models with LangChainJS." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Prerequisites\n", + "In order to use this integration you will need the following:\n", + "1. An OCI tenancy. If you do not already have an account, please create one [here](https://signup.cloud.oracle.com?sourceType=:ex:of:::::LangChainJSIntegration&SC=:ex:of:::::LangChainJSIntegration&pcode=).\n", + "2. 
Set up an [authentication method](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm) (using a [configuration file](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm) with [API Key authentication](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#apisigningkey_topic_How_to_Generate_an_API_Signing_Key_Console) is the simplest to start with).\n", + "3. Please make sure that your OCI tenancy is registered in one of the [supported regions](https://docs.oracle.com/en-us/iaas/Content/generative-ai/overview.htm#regions).\n", + "4. You will need the ID (aka OCID) of a compartment in which your OCI user has [access to use the Generative AI service](https://docs.oracle.com/en-us/iaas/Content/generative-ai/iam-policies.htm). You can either use the `root` compartment or [create your own](https://docs.oracle.com/en-us/iaas/Content/Identity/compartments/To_create_a_compartment.htm).\n", + "5. Retrieve the desired model name from the [available models](https://docs.oracle.com/en-us/iaas/Content/generative-ai/pretrained-models.htm) list (please make sure not to select a deprecated model)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Installation\n", + "The integration makes use of the [OCI TypeScript SDK](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/typescriptsdk.htm).\n", + "To install the integration dependencies, execute the following:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "vscode": { + "languageId": "shellscript" + } + }, + "outputs": [], + "source": [ + "npm install oci-common oci-generativeaiinference" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Instantiation\n", + "The OCI Generative AI service supports two groups of LLMs:\n", + "1. Cohere family of LLMs.\n", + "2. 
Generic family of LLMs, which includes models such as Llama.\n", + "\n", + "The following code demonstrates how to create an instance for each of the families.\n", + "The only two mandatory parameters are:\n", + "1. `compartmentId` - A compartment OCID in which the user you are using for authentication was granted permissions to access the Generative AI service.\n", + "2. `onDemandModelId` or `dedicatedEndpointId` - Either a [pre-trained model](https://docs.oracle.com/en-us/iaas/Content/generative-ai/pretrained-models.htm) name/OCID or a dedicated endpoint OCID for an endpoint configured on a [dedicated AI cluster (DAC)](https://docs.oracle.com/en-us/iaas/Content/generative-ai/ai-cluster.htm). Either `onDemandModelId` or `dedicatedEndpointId` must be provided but not both.\n", + "\n", + "In this example, since no other parameters are specified, a default SDK client will be created with the following configuration:\n", + "1. Authentication will be attempted using a [configuration file](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm) which should already be set up and available under `~/.oci/config`. The `config` file is expected to contain a `DEFAULT` profile with the correct information. Please see the prerequisites for more information.\n", + "2. The retry strategy will be set to a single attempt. If the first API call was not successful, the request will fail.\n", + "3. The region will be set to `us-chicago-1`. Please make sure that your tenancy is registered in this region." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "vscode": { + "languageId": "javascript" + } + }, + "outputs": [], + "source": [ + "import { OciGenAiCohereChat } from \"@langchain/community/chat_models/oci_genai/cohere_chat\";\n", + "import { OciGenAiGenericChat } from \"@langchain/community/chat_models/oci_genai/generic_chat\";\n", + "\n", + "const cohereLlm = new OciGenAiCohereChat({\n", + " compartmentId: \"oci.compartment...\",\n", + " onDemandModelId: \"cohere.command-r-plus-08-2024\",\n", + " // dedicatedEndpointId: \"oci.dedicatedendpoint...\"\n", + "});\n", + "\n", + "const genericLlm = new OciGenAiGenericChat({\n", + " compartmentId: \"oci.compartment...\",\n", + " onDemandModelId: \"meta.llama-3.3-70b-instruct\",\n", + " // dedicatedEndpointId: \"oci.dedicatedendpoint...\"\n", + "});" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## SDK client options\n", + "The above example used default values to create the SDK client behind the scenes.\n", + "If you need more control in the creation of the client, here are additional options (the options are the same for `OciGenAiCohereChat` and `OciGenAiGenericChat`).\n", + "\n", + "The first example will create an SDK client with the following configuration:\n", + "1. [Instance Principal authentication](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm#sdk_authentication_methods_instance_principaldita). Please note that this authentication method requires [configuration](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm).\n", + "2. Using the Sao Paulo region.\n", + "3. Up to 3 attempts will be made in case API calls fail." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "vscode": { + "languageId": "javascript" + } + }, + "outputs": [], + "source": [ + "import { MaxAttemptsTerminationStrategy, Region } from \"oci-common\"\n", + "import { OciGenAiCohereChat } from \"@langchain/community/chat_models/oci_genai/cohere_chat\";\n", + "import { OciGenAiNewClientAuthType } from \"@langchain/community/chat_models/oci_genai/types\";\n", + "\n", + "const cohereLlm = new OciGenAiCohereChat({\n", + " compartmentId: \"oci.compartment...\",\n", + " onDemandModelId: \"cohere.command-r-plus-08-2024\",\n", + " newClientParams: {\n", + " authType: OciGenAiNewClientAuthType.InstancePrincipal,\n", + " regionId: Region.SA_SAOPAULO_1.regionId,\n", + " clientConfiguration: {\n", + " retryConfiguration: {\n", + " terminationStrategy: new MaxAttemptsTerminationStrategy(3)\n", + " }\n", + " }\n", + " }\n", + "});" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The second example will create an SDK client with the following configuration:\n", + "1. Config file authentication.\n", + "1. Use the config file: `/my/path/config`.\n", + "1. Use the details under the `MY_PROFILE_IN_CONFIG_FILE` profile in the specified config file.\n", + "1. The retry strategy will be set to a single attempt. If the first API call was not successful, the request will fail.\n", + "1. The region will be set to `us-chicago-1`. Please make sure that your tenancy is registered this region." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "vscode": { + "languageId": "javascript" + } + }, + "outputs": [], + "source": [ + "import { OciGenAiCohereChat } from \"@langchain/community/chat_models/oci_genai/cohere_chat\";\n", + "import { OciGenAiNewClientAuthType } from \"@langchain/community/chat_models/oci_genai/types\";\n", + "\n", + "const cohereLlm = new OciGenAiCohereChat({\n", + " compartmentId: \"oci.compartment...\",\n", + " onDemandModelId: \"cohere.command-r-plus-08-2024\",\n", + " newClientParams: {\n", + " authType: OciGenAiNewClientAuthType.ConfigFile,\n", + " authParams: {\n", + " clientConfigFilePath: \"/my/path/config\",\n", + " clientProfile: \"MY_PROFILE_IN_CONFIG_FILE\"\n", + " }\n", + " }\n", + "});" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The third example will create an SDK client with the following configuration:\n", + "1. Config file authentication.\n", + "1. Use [Resource Principal](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm#sdk_authentication_methods_resource_principal) authentication.\n", + "1. The retry strategy will be set to a single attempt. If the first API call was not successful, the request will fail.\n", + "1. The region will be set to `us-chicago-1`. Please make sure that your tenancy is registered this region." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "vscode": { + "languageId": "javascript" + } + }, + "outputs": [], + "source": [ + "import { OciGenAiCohereChat } from \"@langchain/community/chat_models/oci_genai/cohere_chat\";\n", + "import { OciGenAiNewClientAuthType } from \"@langchain/community/chat_models/oci_genai/types\";\n", + "\n", + "const cohereLlm = new OciGenAiCohereChat({\n", + " compartmentId: \"oci.compartment...\",\n", + " onDemandModelId: \"cohere.command-r-plus-08-2024\",\n", + " newClientParams: {\n", + " authType: OciGenAiNewClientAuthType.Other,\n", + " authParams: {\n", + " authenticationDetailsProvider: await ResourcePrincipalAuthenticationDetailsProvider.builder()\n", + " },\n", + " }\n", + "});" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can also instantiate the OCI Generative AI chat classes using `GenerativeAiInferenceClient` that you create on your own. This way you control the creation and configuration of the client to suit your specific needs:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "vscode": { + "languageId": "javascript" + } + }, + "outputs": [], + "source": [ + "import { ConfigFileAuthenticationDetailsProvider } from \"oci-common\";\n", + "import { GenerativeAiInferenceClient } from \"oci-generativeaiinference\";\n", + "import { OciGenAiCohereChat } from \"@langchain/community/chat_models/oci_genai/cohere_chat\";\n", + "\n", + "const client = new GenerativeAiInferenceClient({\n", + " authenticationDetailsProvider: new ConfigFileAuthenticationDetailsProvider()\n", + "});\n", + "\n", + "const cohereLlm = new OciGenAiCohereChat({\n", + " compartmentId: \"oci.compartment...\",\n", + " onDemandModelId: \"cohere.command-r-plus-08-2024\",\n", + " client\n", + "});" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Invocation\n", + "In this example, we make a simple call to the OCI Generative AI service 
while leveraging the power of the `cohere.command-r-plus-08-2024` model.\n", + "Please note that you can pass additional request parameters under the `requestParams` key as shown in the `invoke` call below. For more information please see the [Cohere request parameters](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai-inference/20231130/datatypes/CohereChatRequest) (the `apiFormat`, `chatHistory`, `isStream`, `message` & `stopSequences` parameters are automatically generated or inferred from the call context) and the [Generic request parameters](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai-inference/20231130/datatypes/GenericChatRequest) (the `apiFormat`, `isStream`, `messages` & `stop` parameters are automatically generated or inferred from the call context).\n", + "\n", + "If you wish to specify the chat history for a Cohere request, the list of messages passed into the request will be analyzed and split into the current message and history messages. The last `Human` message sent in the list (regardless of its position in the list) will be used as the `message` parameter for the request and the rest of the messages will be added to the `chatHistory` parameter. If there is more than one `Human` message, the very last one will be sent to the LLM as the `message` for the current request and the others will be appended to the `chatHistory`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "vscode": { + "languageId": "javascript" + } + }, + "outputs": [], + "source": [ + "import { OciGenAiCohereChat } from \"@langchain/community/chat_models/oci_genai/cohere_chat\";\n", + "\n", + "const llm = new OciGenAiCohereChat({\n", + " compartmentId: \"oci.compartment...\",\n", + " onDemandModelId: \"cohere.command-r-plus-08-2024\"\n", + "});\n", + "\n", + "const result = await llm.invoke(\"Tell me a joke about beagles\", {\n", + " requestParams: {\n", + " temperature: 1,\n", + " maxTokens: 300\n", + " }\n", + "});\n", + "\n", + "console.log(result);" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "AIMessage {\n", + " \"content\": \"Why did the beagle cross the road?\\n\\nBecause he was tied to the chicken!\\n\\nI hope you enjoyed the joke! Would you like to hear another one?\",\n", + " \"additional_kwargs\": {},\n", + " \"response_metadata\": {},\n", + " \"tool_calls\": [],\n", + " \"invalid_tool_calls\": []\n", + "}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Additional information\n", + "For additional information, please checkout the [OCI Generative AI service documentation](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm).\n", + "\n", + "If you are interested in the python version of this integration, you can find more information [here](https://python.langchain.com/docs/integrations/llms/oci_generative_ai/)." 
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "oci_langchain", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.9" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/docs/core_docs/docs/integrations/chat/oci_genai.mdx b/docs/core_docs/docs/integrations/chat/oci_genai.mdx new file mode 100644 index 000000000000..009059a71396 --- /dev/null +++ b/docs/core_docs/docs/integrations/chat/oci_genai.mdx @@ -0,0 +1,263 @@ +--- +title: Oracle Cloud Infrastructure Generative AI +--- + +Oracle Cloud Infrastructure (OCI) Generative AI is a fully managed +service that provides a set of state-of-the-art, customizable large +language models (LLMs) that cover a wide range of use cases, and which +is available through a single API. Using the OCI Generative AI service +you can access ready-to-use pretrained models, or create and host your +own fine-tuned custom models based on your own data on dedicated AI +clusters. Detailed documentation of the service and API is available +[here](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm) +and +[here](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai/20231130/). + +This notebook explains how to use OCI’s Genrative AI models with +LangChainJS. + +## Prerequisites + +In order to use this integration you will need the following: 1. An OCI +tenancy. If you do not already have and account, please create one +[here](https://signup.cloud.oracle.com?sourceType=:ex:of:::::LangChainJSIntegration&SC=:ex:of:::::LangChainJSIntegration&pcode=). 2. 
Set up an [authentication +method](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm) +(using a [configuration +file](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm) +with [API Key +authentication](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#apisigningkey_topic_How_to_Generate_an_API_Signing_Key_Console) +is the simplest to start with). 3. Please make sure that your OCI +tenancy is registered in one of the [supported +regions](https://docs.oracle.com/en-us/iaas/Content/generative-ai/overview.htm#regions). 4. You will need the ID (aka OCID) of a compartment in which your OCI +user has [access to use the Generative AI +service](https://docs.oracle.com/en-us/iaas/Content/generative-ai/iam-policies.htm). +You can either use the `root` compartment or [create your +own](https://docs.oracle.com/en-us/iaas/Content/Identity/compartments/To_create_a_compartment.htm). 5. Retrieve the desired model name from the [available +models](https://docs.oracle.com/en-us/iaas/Content/generative-ai/pretrained-models.htm) +list (please make sure not to select a deprecated model). + +## Installation + +The integration makes use of the [OCI TypeScript +SDK](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/typescriptsdk.htm). +To install the integration dependencies, execute the following: + +```bash +npm install oci-common oci-generativeaiinference +``` + +## Instantiation + +The OCI Generative AI service supports two groups of LLMs: 1. Cohere +family of LLMs. 2. Generic family of LLMs, which includes models such as +Llama. + +The following code demonstrates how to create an instance for each of +the families. The only two mandatory parameters are: 1. +`compartmentId` - A compartment OCID in which the user you are using for +authentication was granted permissions to access the Generative AI +service. 2. 
`onDemandModelId` or `dedicatedEndpointId` - Either a +[pre-trained +model](https://docs.oracle.com/en-us/iaas/Content/generative-ai/pretrained-models.htm) +name/OCID or a dedicated endpoint OCID for an endpoint configured on a +[dedicated AI cluster +(DAC)](https://docs.oracle.com/en-us/iaas/Content/generative-ai/ai-cluster.htm). +Either `onDemandModelId` or `dedicatedEndpointId` must be provided but +not both. + +In this example, since no other parameters are specified, a default SDK +client will be created with the following configuration: 1. +Authentication will be attempted using a [configuration +file](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm) +which should already be set up and available under `~/.oci/config`. The +`config` file is expected to contain a `DEFAULT` profile with the +correct information. Please see the prerequisites for more information. 2. The retry strategy will be set to a single attempt. If the first API +call was not successful, the request will fail. 3. The region will be +set to `us-chicago-1`. Please make sure that your tenancy is registered +in this region. + +```typescript +import { OciGenAiCohereChat } from "@langchain/community/chat_models/oci_genai/cohere_chat"; +import { OciGenAiGenericChat } from "@langchain/community/chat_models/oci_genai/generic_chat"; + +const cohereLlm = new OciGenAiCohereChat({ + compartmentId: "oci.compartment...", + onDemandModelId: "cohere.command-r-plus-08-2024", + // dedicatedEndpointId: "oci.dedicatedendpoint..." +}); + +const genericLlm = new OciGenAiGenericChat({ + compartmentId: "oci.compartment...", + onDemandModelId: "meta.llama-3.3-70b-instruct", + // dedicatedEndpointId: "oci.dedicatedendpoint..." +}); +``` + +## SDK client options + +The above example used default values to create the SDK client behind +the scenes. 
If you need more control over the creation of the client, here +are additional options (the options are the same for +`OciGenAiCohereChat` and `OciGenAiGenericChat`). + +The first example will create an SDK client with the following +configuration: 1. [Instance Principal +authentication](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm#sdk_authentication_methods_instance_principaldita). +Please note that this authentication method requires +[configuration](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm). 2. Using the Sao Paulo region. 3. Up to 3 attempts will be made in case +API calls fail. + +```typescript +import { MaxAttemptsTerminationStrategy, Region } from "oci-common"; +import { OciGenAiCohereChat } from "@langchain/community/chat_models/oci_genai/cohere_chat"; +import { OciGenAiNewClientAuthType } from "@langchain/community/chat_models/oci_genai/types"; + +const cohereLlm = new OciGenAiCohereChat({ + compartmentId: "oci.compartment...", + onDemandModelId: "cohere.command-r-plus-08-2024", + newClientParams: { + authType: OciGenAiNewClientAuthType.InstancePrincipal, + regionId: Region.SA_SAOPAULO_1.regionId, + clientConfiguration: { + retryConfiguration: { + terminationStrategy: new MaxAttemptsTerminationStrategy(3) + } + } + } +}); +``` + +The second example will create an SDK client with the following +configuration: 1. Config file authentication. 1. Use the config file: +`/my/path/config`. 1. Use the details under the +`MY_PROFILE_IN_CONFIG_FILE` profile in the specified config file. 1. The +retry strategy will be set to a single attempt. If the first API call +was not successful, the request will fail. 1. The region will be set to +`us-chicago-1`. Please make sure that your tenancy is registered in this +region. 
+ +```typescript +import { OciGenAiCohereChat } from "@langchain/community/chat_models/oci_genai/cohere_chat"; +import { OciGenAiNewClientAuthType } from "@langchain/community/chat_models/oci_genai/types"; + +const cohereLlm = new OciGenAiCohereChat({ + compartmentId: "oci.compartment...", + onDemandModelId: "cohere.command-r-plus-08-2024", + newClientParams: { + authType: OciGenAiNewClientAuthType.ConfigFile, + authParams: { + clientConfigFilePath: "/my/path/config", + clientProfile: "MY_PROFILE_IN_CONFIG_FILE" + } + } +}); +``` + +The third example will create an SDK client with the following +configuration: 1. Use [Resource +Principal](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm#sdk_authentication_methods_resource_principal) +authentication. 1. The retry strategy will be set to a single attempt. +If the first API call was not successful, the request will fail. 1. The +region will be set to `us-chicago-1`. Please make sure that your tenancy +is registered in this region. + +```typescript +import { ResourcePrincipalAuthenticationDetailsProvider } from "oci-common"; +import { OciGenAiCohereChat } from "@langchain/community/chat_models/oci_genai/cohere_chat"; +import { OciGenAiNewClientAuthType } from "@langchain/community/chat_models/oci_genai/types"; + +const cohereLlm = new OciGenAiCohereChat({ + compartmentId: "oci.compartment...", + onDemandModelId: "cohere.command-r-plus-08-2024", + newClientParams: { + authType: OciGenAiNewClientAuthType.Other, + authParams: { + authenticationDetailsProvider: await ResourcePrincipalAuthenticationDetailsProvider.builder() + }, + } +}); +``` + +You can also instantiate the OCI Generative AI chat classes using a +`GenerativeAiInferenceClient` that you create on your own. 
This way you +control the creation and configuration of the client to suit your +specific needs: + +```typescript +import { ConfigFileAuthenticationDetailsProvider } from "oci-common"; +import { GenerativeAiInferenceClient } from "oci-generativeaiinference"; +import { OciGenAiCohereChat } from "@langchain/community/chat_models/oci_genai/cohere_chat"; + +const client = new GenerativeAiInferenceClient({ + authenticationDetailsProvider: new ConfigFileAuthenticationDetailsProvider() +}); + +const cohereLlm = new OciGenAiCohereChat({ + compartmentId: "oci.compartment...", + onDemandModelId: "cohere.command-r-plus-08-2024", + client +}); +``` + +## Invocation + +In this example, we make a simple call to the OCI Generative AI service +while leveraging the power of the `cohere.command-r-plus-08-2024` model. +Please note that you can pass additional request parameters under the +`requestParams` key as shown in the `invoke` call below. For more +information please see the [Cohere request +parameters](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai-inference/20231130/datatypes/CohereChatRequest) +(the `apiFormat`, `chatHistory`, `isStream`, `message` & `stopSequences` +parameters are automatically generated or inferred from the call +context) and the [Generic request +parameters](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai-inference/20231130/datatypes/GenericChatRequest) +(the `apiFormat`, `isStream`, `messages` & `stop` parameters are +automatically generated or inferred from the call context). + +If you wish to specify the chat history for a Cohere request, the list +of messages passed into the request will be analyzed and split into the +current message and history messages. The last `Human` message sent in +the list (regardless of its position in the list) will be used as +the `message` parameter for the request and the rest of the messages +will be added to the `chatHistory` parameter. 
If there is more than one +`Human` message, the very last one will be sent to the LLM as the `message` +for the current request and the others will be +appended to the `chatHistory`. + +```typescript +import { OciGenAiCohereChat } from "@langchain/community/chat_models/oci_genai/cohere_chat"; + +const llm = new OciGenAiCohereChat({ + compartmentId: "oci.compartment...", + onDemandModelId: "cohere.command-r-plus-08-2024" +}); + +const result = await llm.invoke("Tell me a joke about beagles", { + requestParams: { + temperature: 1, + maxTokens: 300 + } +}); + +console.log(result); +``` + +```text +AIMessage { + "content": "Why did the beagle cross the road?\n\nBecause he was tied to the chicken!\n\nI hope you enjoyed the joke! Would you like to hear another one?", + "additional_kwargs": {}, + "response_metadata": {}, + "tool_calls": [], + "invalid_tool_calls": [] +} +``` + +## Additional information + +For additional information, please check out the [OCI Generative AI +service +documentation](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm). + +If you are interested in the Python version of this integration, you can +find more information +[here](https://python.langchain.com/docs/integrations/llms/oci_generative_ai/). 
+ + +## Related + +- Chat model [conceptual guide](/docs/concepts/#chat-models) +- Chat model [how-to guides](/docs/how_to/#chat-models) diff --git a/libs/langchain-community/.gitignore b/libs/langchain-community/.gitignore index ae0258bd42da..b9c7b92ab78f 100644 --- a/libs/langchain-community/.gitignore +++ b/libs/langchain-community/.gitignore @@ -582,6 +582,10 @@ chat_models/ollama.cjs chat_models/ollama.js chat_models/ollama.d.ts chat_models/ollama.d.cts +chat_models/oci_genai.cjs +chat_models/oci_genai.js +chat_models/oci_genai.d.ts +chat_models/oci_genai.d.cts chat_models/portkey.cjs chat_models/portkey.js chat_models/portkey.d.ts diff --git a/libs/langchain-community/langchain.config.js b/libs/langchain-community/langchain.config.js index dca43ba96d44..f3cc9ddcce16 100644 --- a/libs/langchain-community/langchain.config.js +++ b/libs/langchain-community/langchain.config.js @@ -184,6 +184,7 @@ export const config = { "chat_models/moonshot": "chat_models/moonshot", "chat_models/novita": "chat_models/novita", "chat_models/ollama": "chat_models/ollama", + "chat_models/oci_genai": "chat_models/oci_genai/index", "chat_models/portkey": "chat_models/portkey", "chat_models/premai": "chat_models/premai", "chat_models/tencent_hunyuan": "chat_models/tencent_hunyuan/index", @@ -435,6 +436,7 @@ export const config = { "chat_models/bedrock", "chat_models/bedrock/web", "chat_models/llama_cpp", + "chat_models/oci_genai", "chat_models/portkey", "chat_models/premai", "chat_models/tencent_hunyuan", diff --git a/libs/langchain-community/package.json b/libs/langchain-community/package.json index fb0fc45c6297..ce8fd7450fd4 100644 --- a/libs/langchain-community/package.json +++ b/libs/langchain-community/package.json @@ -193,6 +193,8 @@ "neo4j-driver": "^5.17.0", "node-llama-cpp": "3.1.1", "notion-to-md": "^3.1.0", + "oci-common": "^2.102.2", + "oci-generativeaiinference": "^2.102.2", "officeparser": "^4.0.4", "openai": "*", "pdf-parse": "1.1.1", @@ -324,6 +326,8 @@ "mysql2": 
"^3.9.8", "neo4j-driver": "*", "notion-to-md": "^3.1.0", + "oci-common": "^2.102.2", + "oci-generativeaiinference": "^2.102.2", "officeparser": "^4.0.4", "openai": "*", "pdf-parse": "1.1.1", @@ -646,6 +650,12 @@ "notion-to-md": { "optional": true }, + "oci-common": { + "optional": true + }, + "oci-generativeaiinference": { + "optional": true + }, "officeparser": { "optional": true }, @@ -2031,6 +2041,15 @@ "import": "./chat_models/ollama.js", "require": "./chat_models/ollama.cjs" }, + "./chat_models/oci_genai": { + "types": { + "import": "./chat_models/oci_genai.d.ts", + "require": "./chat_models/oci_genai.d.cts", + "default": "./chat_models/oci_genai.d.ts" + }, + "import": "./chat_models/oci_genai.js", + "require": "./chat_models/oci_genai.cjs" + }, "./chat_models/portkey": { "types": { "import": "./chat_models/portkey.d.ts", @@ -3807,6 +3826,10 @@ "chat_models/ollama.js", "chat_models/ollama.d.ts", "chat_models/ollama.d.cts", + "chat_models/oci_genai.cjs", + "chat_models/oci_genai.js", + "chat_models/oci_genai.d.ts", + "chat_models/oci_genai.d.cts", "chat_models/portkey.cjs", "chat_models/portkey.js", "chat_models/portkey.d.ts", diff --git a/libs/langchain-community/src/chat_models/oci_genai/cohere_chat.ts b/libs/langchain-community/src/chat_models/oci_genai/cohere_chat.ts new file mode 100644 index 000000000000..2a8ed1088124 --- /dev/null +++ b/libs/langchain-community/src/chat_models/oci_genai/cohere_chat.ts @@ -0,0 +1,151 @@ +import { + CohereChatBotMessage, + CohereChatRequest, + CohereChatResponse, + CohereMessage, + CohereSystemMessage, + CohereUserMessage, +} from "oci-generativeaiinference/lib/model"; + +import { BaseMessage } from "@langchain/core/messages"; +import { LangSmithParams } from "@langchain/core/language_models/chat_models"; +import { OciGenAiBaseChat } from "./index.js"; + +interface HistoryMessageInfo { + chatHistory: CohereMessage[]; + message: string; +} + +interface CohereStreamedResponseChunkData { + apiFormat: string; + text: string; 
+} + +export type CohereCallOptions = Omit< + CohereChatRequest, + "apiFormat" | "message" | "chatHistory" | "isStream" | "stopSequences" +>; + +export class OciGenAiCohereChat extends OciGenAiBaseChat<CohereCallOptions> { + override _createRequest( + messages: BaseMessage[], + options: this["ParsedCallOptions"], + stream?: boolean + ): CohereChatRequest { + const historyMessage: HistoryMessageInfo = + OciGenAiCohereChat._splitMessageAndHistory(messages); + + return { + apiFormat: CohereChatRequest.apiFormat, + message: historyMessage.message, + chatHistory: historyMessage.chatHistory, + ...options.requestParams, + isStream: !!stream, + stopSequences: options.stop, + }; + } + + override _parseResponse(response: CohereChatResponse | undefined): string { + if (!OciGenAiCohereChat._isCohereResponse(response)) { + throw new Error("Invalid CohereResponse object"); + } + + return response.text; + } + + override _parseStreamedResponseChunk(chunk: unknown): string { + if (OciGenAiCohereChat._isCohereChunkData(chunk)) { + return chunk.text; + } + + throw new Error("Invalid streamed response chunk data"); + } + + static _splitMessageAndHistory(messages: BaseMessage[]): HistoryMessageInfo { + const chatHistory: CohereMessage[] = []; + let lastUserMessage = ""; + let lastUserMessageIndex = -1; + + for (let i = 0; i < messages.length; i += 1) { + const cohereMessage: CohereMessage = + this._convertBaseMessageToCohereMessage(messages[i]); + chatHistory.push(cohereMessage); + + if (cohereMessage.role === CohereUserMessage.role) { + lastUserMessage = (cohereMessage as CohereUserMessage).message; + lastUserMessageIndex = i; + } + } + + if (lastUserMessageIndex !== -1) { + chatHistory.splice(lastUserMessageIndex, 1); + } + + return { + chatHistory, + message: lastUserMessage, + }; + } + + static _convertBaseMessageToCohereMessage( + baseMessage: BaseMessage + ): CohereMessage { + const messageType: string = baseMessage.getType(); + const message: string = baseMessage.content as string; + + switch (messageType) { + case 
"ai":
+        return {
+          role: CohereChatBotMessage.role,
+          message,
+        };
+
+      case "system":
+        return {
+          role: CohereSystemMessage.role,
+          message,
+        };
+
+      case "human":
+        return {
+          role: CohereUserMessage.role,
+          message,
+        };
+
+      default:
+        throw new Error(`Message type '${messageType}' is not supported`);
+    }
+  }
+
+  static _isCohereResponse(response: unknown): response is CohereChatResponse {
+    return (
+      response !== null &&
+      typeof response === "object" &&
+      typeof (response as CohereChatResponse).text === "string"
+    );
+  }
+
+  static _isCohereChunkData(
+    chunkData: unknown
+  ): chunkData is CohereStreamedResponseChunkData {
+    return (
+      chunkData !== null &&
+      typeof chunkData === "object" &&
+      typeof (chunkData as CohereStreamedResponseChunkData).text === "string" &&
+      (chunkData as CohereStreamedResponseChunkData).apiFormat ===
+        CohereChatRequest.apiFormat
+    );
+  }
+
+  override getLsParams(options: this["ParsedCallOptions"]): LangSmithParams {
+    return {
+      ls_provider: "oci_genai_cohere",
+      ls_model_name:
+        this._params.onDemandModelId || this._params.dedicatedEndpointId || "",
+      ls_model_type: "chat",
+      ls_temperature: options.requestParams?.temperature || 0,
+      ls_max_tokens: options.requestParams?.maxTokens || 0,
+      ls_stop: options.stop || [],
+    };
+  }
+}
diff --git a/libs/langchain-community/src/chat_models/oci_genai/generic_chat.ts b/libs/langchain-community/src/chat_models/oci_genai/generic_chat.ts
new file mode 100644
index 000000000000..168652911920
--- /dev/null
+++ b/libs/langchain-community/src/chat_models/oci_genai/generic_chat.ts
@@ -0,0 +1,189 @@
+import { BaseMessage } from "@langchain/core/messages";
+import { LangSmithParams } from "@langchain/core/language_models/chat_models";
+
+import {
+  AssistantMessage,
+  GenericChatRequest,
+  GenericChatResponse,
+  Message,
+  SystemMessage,
+  TextContent,
+  UserMessage,
+  ChatChoice,
+  ChatContent,
+} from "oci-generativeaiinference/lib/model";
+
+import { OciGenAiBaseChat } from "./index.js";
+
+export type GenericCallOptions = Omit<
+  GenericChatRequest,
+  "apiFormat" | "messages" |
"isStream" | "stop" +>; + +export class OciGenAiGenericChat extends OciGenAiBaseChat { + override _createRequest( + messages: BaseMessage[], + options: this["ParsedCallOptions"], + stream?: boolean + ): GenericChatRequest { + return { + apiFormat: GenericChatRequest.apiFormat, + messages: + OciGenAiGenericChat._convertBaseMessagesToGenericMessages(messages), + ...options.requestParams, + isStream: !!stream, + stop: options.stop, + }; + } + + override _parseResponse(response: GenericChatResponse): string { + if (!OciGenAiGenericChat._isGenericResponse(response)) { + throw new Error("Invalid GenericChatResponse object"); + } + + return response.choices + ?.map((choice: ChatChoice) => + choice.message.content + ?.map((content: ChatContent) => (content).text) + .join("") + ) + .join(""); + } + + override _parseStreamedResponseChunk(chunk: unknown): string | undefined { + if (!OciGenAiGenericChat._isValidChatChoice(chunk)) { + throw new Error("Invalid streamed response chunk data"); + } + + if (OciGenAiGenericChat._isFinalChunk(chunk)) { + return undefined; + } + + return OciGenAiGenericChat._getChunkDataText(chunk); + } + + static _convertBaseMessagesToGenericMessages( + messages: BaseMessage[] + ): Message[] { + return messages.map(this._convertBaseMessageToGenericMessage); + } + + static _convertBaseMessageToGenericMessage( + baseMessage: BaseMessage + ): Message { + const messageType: string = baseMessage.getType(); + const text: string = baseMessage.content as string; + const messageRole: string = + OciGenAiGenericChat._convertBaseMessageTypeToRole(messageType); + + return OciGenAiGenericChat._createMessage(messageRole, text); + } + + static _convertBaseMessageTypeToRole(baseMessageType: string): string { + switch (baseMessageType) { + case "ai": + return AssistantMessage.role; + + case "system": + return SystemMessage.role; + + case "human": + return UserMessage.role; + + default: + throw new Error(`Message type '${baseMessageType}' is not supported`); + } + } + + 
static _createMessage(role: string, text: string): Message { + return { + role, + content: OciGenAiGenericChat._createTextContent(text), + }; + } + + static _createTextContent(text: string): TextContent[] { + return [ + { + type: TextContent.type, + text, + }, + ]; + } + + static _isGenericResponse( + response: unknown + ): response is GenericChatResponse { + return ( + response !== null && + typeof response === "object" && + this._isValidChoicesArray((response).choices) + ); + } + + static _isValidChoicesArray(choices: unknown): choices is ChatChoice[] { + return ( + Array.isArray(choices) && + choices.every(OciGenAiGenericChat._isValidChatChoice) + ); + } + + static _isValidChatChoice(choice: unknown): choice is ChatChoice { + return ( + choice !== null && + typeof choice === "object" && + (OciGenAiGenericChat._isValidMessage((choice).message) || + OciGenAiGenericChat._isFinalChunk(choice)) + ); + } + + static _isValidMessage(message: unknown): message is Message { + return ( + message !== null && + typeof message === "object" && + OciGenAiGenericChat._isValidContentArray((message).content) + ); + } + + static _isValidContentArray(content: TextContent[] | undefined): boolean { + return ( + Array.isArray(content) && + content.every(OciGenAiGenericChat._isValidTextContent) + ); + } + + static _isValidTextContent(content: unknown): content is TextContent { + return ( + content !== null && + typeof content === "object" && + (content).type === TextContent.type && + typeof (content).text === "string" + ); + } + + static _getChunkDataText(chunkData: ChatChoice): string | undefined { + return chunkData.message?.content + ?.map((message: TextContent) => message.text) + .join(" "); + } + + static _isFinalChunk(chunkData: unknown) { + return ( + chunkData !== null && + typeof chunkData === "object" && + typeof (chunkData).finishReason === "string" + ); + } + + override getLsParams(options: this["ParsedCallOptions"]): LangSmithParams { + return { + ls_provider: 
"oci_genai_generic",
+      ls_model_name:
+        this._params.onDemandModelId || this._params.dedicatedEndpointId || "",
+      ls_model_type: "chat",
+      ls_temperature: options.requestParams?.temperature || 0,
+      ls_max_tokens: options.requestParams?.maxTokens || 0,
+      ls_stop: options.stop || [],
+    };
+  }
+}
diff --git a/libs/langchain-community/src/chat_models/oci_genai/index.ts b/libs/langchain-community/src/chat_models/oci_genai/index.ts
new file mode 100644
index 000000000000..ec2b09edf265
--- /dev/null
+++ b/libs/langchain-community/src/chat_models/oci_genai/index.ts
@@ -0,0 +1,221 @@
+import { AIMessageChunk, BaseMessage } from "@langchain/core/messages";
+import { ChatGenerationChunk } from "@langchain/core/outputs";
+import { SimpleChatModel } from "@langchain/core/language_models/chat_models";
+import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";
+
+import { ChatResponse } from "oci-generativeaiinference/lib/response";
+import { ChatRequest } from "oci-generativeaiinference/lib/request";
+import {
+  DedicatedServingMode,
+  OnDemandServingMode,
+} from "oci-generativeaiinference/lib/model";
+
+import {
+  OciGenAiChatCallResponseType,
+  OciGenAiModelBaseParams,
+  OciGenAiModelCallOptions,
+  OciGenAiSupportedRequestType,
+  OciGenAiSupportedResponseType,
+} from "./types.js";
+
+import { OciGenAiSdkClient } from "./oci_genai_sdk_client.js";
+import { JsonServerEventsIterator } from "./server_events_iterator.js";
+
+export abstract class OciGenAiBaseChat extends SimpleChatModel<
+  OciGenAiModelCallOptions
+> {
+  _sdkClient: OciGenAiSdkClient | undefined;
+
+  _params: Partial<OciGenAiModelBaseParams>;
+
+  constructor(params?: Partial<OciGenAiModelBaseParams>) {
+    super(params ?? {});
+    this._params = params ??
{}; + } + + abstract _createRequest( + messages: BaseMessage[], + options: this["ParsedCallOptions"], + stream?: boolean + ): OciGenAiSupportedRequestType; + + abstract _parseResponse( + response: OciGenAiSupportedResponseType | undefined + ): string; + + abstract _parseStreamedResponseChunk(chunk: unknown): string | undefined; + + async _call( + messages: BaseMessage[], + options: this["ParsedCallOptions"] + ): Promise { + const response: ChatResponse = await this._makeRequest(messages, options); + return this._parseResponse(response?.chatResult?.chatResponse); + } + + override async *_streamResponseChunks( + messages: BaseMessage[], + options: this["ParsedCallOptions"], + runManager?: CallbackManagerForLLMRun + ): AsyncGenerator { + const response: ReadableStream = await this._makeRequest( + messages, + options, + true + ); + const responseChunkIterator = new JsonServerEventsIterator(response); + + for await (const responseChunk of responseChunkIterator) { + yield* this._streamResponseChunk(responseChunk, runManager); + } + } + + async *_streamResponseChunk( + responseChunkData: unknown, + runManager?: CallbackManagerForLLMRun + ): AsyncGenerator { + const text: string | undefined = + this._parseStreamedResponseChunk(responseChunkData); + + if (text === undefined) { + return; + } + + yield this._createStreamResponse(text); + await runManager?.handleLLMNewToken(text); + } + + async _makeRequest( + messages: BaseMessage[], + options: this["ParsedCallOptions"], + stream?: boolean + ): Promise { + const request: OciGenAiSupportedRequestType = this._prepareRequest( + messages, + options, + stream + ); + await this._setupClient(); + return await this._chat(request); + } + + async _setupClient() { + if (this._sdkClient) { + return; + } + + this._sdkClient = await OciGenAiSdkClient.create(this._params); + } + + _createStreamResponse(text: string) { + return new ChatGenerationChunk({ + message: new AIMessageChunk({ content: text }), + text, + }); + } + + _prepareRequest( 
+ messages: BaseMessage[], + options: this["ParsedCallOptions"], + stream?: boolean + ): OciGenAiSupportedRequestType { + this._assertMessages(messages); + return this._createRequest(messages, options, stream); + } + + _assertMessages(messages: BaseMessage[]) { + if (messages.length === 0) { + throw new Error("No messages provided"); + } + + for (const message of messages) { + if (typeof message.content !== "string") { + throw new Error("Only text messages are supported"); + } + } + } + + async _chat( + chatRequest: OciGenAiSupportedRequestType + ): Promise { + try { + return await this._callChat(chatRequest); + } catch (error) { + throw new Error( + `Error executing chat API, error: ${(error)?.message}` + ); + } + } + + async _callChat( + chatRequest: OciGenAiSupportedRequestType + ): Promise { + if (!OciGenAiBaseChat._isSdkClient(this._sdkClient)) { + throw new Error("OCI SDK client not initialized"); + } + + const fullChatRequest: ChatRequest = this._composeFullRequest(chatRequest); + return await this._sdkClient.client.chat(fullChatRequest); + } + + _composeFullRequest(chatRequest: OciGenAiSupportedRequestType): ChatRequest { + return { + chatDetails: { + chatRequest, + compartmentId: this._getCompartmentId(), + servingMode: this._getServingMode(), + }, + }; + } + + static _isSdkClient(sdkClient: unknown): sdkClient is OciGenAiSdkClient { + return ( + sdkClient !== null && + typeof sdkClient === "object" && + typeof (sdkClient).client === "object" + ); + } + + _getServingMode(): OnDemandServingMode | DedicatedServingMode { + this._assertServingMode(); + + if (typeof this._params?.onDemandModelId === "string") { + return { + servingType: OnDemandServingMode.servingType, + modelId: this._params.onDemandModelId, + }; + } + + return { + servingType: DedicatedServingMode.servingType, + endpointId: this._params.dedicatedEndpointId, + }; + } + + _getCompartmentId(): string { + if (!OciGenAiBaseChat._isValidString(this._params.compartmentId)) { + throw new 
Error("Invalid compartmentId"); + } + + return this._params.compartmentId; + } + + _assertServingMode() { + if ( + !OciGenAiBaseChat._isValidString(this._params.onDemandModelId) && + !OciGenAiBaseChat._isValidString(this._params.dedicatedEndpointId) + ) { + throw new Error( + "Either onDemandModelId or dedicatedEndpointId must be supplied" + ); + } + } + + static _isValidString(value: unknown): value is string { + return typeof value === "string" && value.length > 0; + } + + _llmType() { + return "custom"; + } +} diff --git a/libs/langchain-community/src/chat_models/oci_genai/oci_genai_sdk_client.ts b/libs/langchain-community/src/chat_models/oci_genai/oci_genai_sdk_client.ts new file mode 100644 index 000000000000..4e4b75029c3c --- /dev/null +++ b/libs/langchain-community/src/chat_models/oci_genai/oci_genai_sdk_client.ts @@ -0,0 +1,129 @@ +import { + AuthenticationDetailsProvider, + AuthParams, + ClientConfiguration, + ConfigFileAuthenticationDetailsProvider, + InstancePrincipalsAuthenticationDetailsProviderBuilder, + MaxAttemptsTerminationStrategy, + Region, +} from "oci-common"; + +import { GenerativeAiInferenceClient } from "oci-generativeaiinference"; + +import { + ConfigFileAuthParams, + OciGenAiClientParams, + OciGenAiNewClientAuthType, +} from "./types.js"; + +export class OciGenAiSdkClient { + static readonly _DEFAULT_REGION_ID = Region.US_CHICAGO_1.regionId; + + private constructor(private _client: GenerativeAiInferenceClient) {} + + get client(): GenerativeAiInferenceClient { + return this._client; + } + + static async create( + params: OciGenAiClientParams + ): Promise { + const client: GenerativeAiInferenceClient = await this._getClient(params); + return new OciGenAiSdkClient(client); + } + + static async _getClient( + params: OciGenAiClientParams + ): Promise { + if (params.client) { + return params.client; + } + + return await this._createAndSetupNewClient(params); + } + + static async _createAndSetupNewClient( + params: OciGenAiClientParams + ): 
Promise<GenerativeAiInferenceClient> {
+    const client: GenerativeAiInferenceClient = await this._createNewClient(
+      params
+    );
+
+    if (!params.newClientParams?.regionId) {
+      client.regionId = this._DEFAULT_REGION_ID;
+    } else {
+      client.regionId = params.newClientParams.regionId;
+    }
+
+    return client;
+  }
+
+  static async _createNewClient(
+    params: OciGenAiClientParams
+  ): Promise<GenerativeAiInferenceClient> {
+    const authParams: AuthParams = await this._getClientAuthParams(params);
+    const clientConfiguration: ClientConfiguration =
+      this._getClientConfiguration(params.newClientParams?.clientConfiguration);
+    return new GenerativeAiInferenceClient(authParams, clientConfiguration);
+  }
+
+  static async _getClientAuthParams(
+    params: OciGenAiClientParams
+  ): Promise<AuthParams> {
+    if (params.newClientParams?.authType === OciGenAiNewClientAuthType.Other) {
+      return params.newClientParams.authParams as AuthParams;
+    }
+
+    const authenticationDetailsProvider: AuthenticationDetailsProvider =
+      await this._getAuthProvider(params);
+    return { authenticationDetailsProvider };
+  }
+
+  static async _getAuthProvider(
+    params: OciGenAiClientParams
+  ): Promise<AuthenticationDetailsProvider> {
+    switch (params.newClientParams?.authType) {
+      case undefined:
+      case OciGenAiNewClientAuthType.ConfigFile:
+        return this._getConfigFileAuthProvider(params);
+
+      case OciGenAiNewClientAuthType.InstancePrincipal:
+        return await this._getInstancePrincipalAuthProvider();
+
+      default:
+        throw new Error("Invalid authentication type");
+    }
+  }
+
+  static _getConfigFileAuthProvider(
+    params: OciGenAiClientParams
+  ): AuthenticationDetailsProvider {
+    const configFileAuthParams: ConfigFileAuthParams =
+      params.newClientParams?.authParams as ConfigFileAuthParams;
+    return new ConfigFileAuthenticationDetailsProvider(
+      configFileAuthParams?.clientConfigFilePath,
+      configFileAuthParams?.clientProfile
+    );
+  }
+
+  static async _getInstancePrincipalAuthProvider(): Promise<AuthenticationDetailsProvider> {
+    const instancePrincipalAuthenticationBuilder =
+      new InstancePrincipalsAuthenticationDetailsProviderBuilder();
+    return await
instancePrincipalAuthenticationBuilder.build();
+  }
+
+  static _getClientConfiguration(
+    clientConfiguration: ClientConfiguration | undefined
+  ): ClientConfiguration {
+    if (clientConfiguration) {
+      return clientConfiguration;
+    }
+
+    return {
+      retryConfiguration: {
+        terminationStrategy: new MaxAttemptsTerminationStrategy(1),
+      },
+    };
+  }
+}
diff --git a/libs/langchain-community/src/chat_models/oci_genai/server_events_iterator.ts b/libs/langchain-community/src/chat_models/oci_genai/server_events_iterator.ts
new file mode 100644
index 000000000000..0232cdfcfc4b
--- /dev/null
+++ b/libs/langchain-community/src/chat_models/oci_genai/server_events_iterator.ts
@@ -0,0 +1,76 @@
+import { IterableReadableStream } from "@langchain/core/utils/stream";
+
+export class JsonServerEventsIterator {
+  static readonly _SERVER_EVENT_DATA_PREFIX: string = "data: ";
+
+  static readonly _SERVER_EVENT_DATA_PREFIX_LENGTH: number =
+    this._SERVER_EVENT_DATA_PREFIX.length;
+
+  _eventsStream: IterableReadableStream<Uint8Array>;
+
+  _textDecoder: TextDecoder;
+
+  constructor(sourceStream: ReadableStream<Uint8Array>) {
+    this._eventsStream =
+      IterableReadableStream.fromReadableStream(sourceStream);
+    this._textDecoder = new TextDecoder();
+  }
+
+  async *[Symbol.asyncIterator](): AsyncIterator<unknown> {
+    for await (const eventRawData of this._eventsStream) {
+      yield this._parseEvent(eventRawData);
+    }
+  }
+
+  _parseEvent(eventRawData: Uint8Array): unknown {
+    const eventDataText: string = this._getEventDataText(eventRawData);
+    const eventData: unknown =
+      JsonServerEventsIterator._getEventDataAsJson(eventDataText);
+    JsonServerEventsIterator._assertEventData(eventData);
+
+    return eventData;
+  }
+
+  _getEventDataText(eventData: Uint8Array): string {
+    JsonServerEventsIterator._assertEventRawData(eventData);
+    const eventDataText: string = this._textDecoder.decode(eventData);
+    JsonServerEventsIterator._assertEventText(eventDataText);
+    return eventDataText;
+  }
+
+  static _assertEventRawData(eventRawData:
Uint8Array) { + if (eventRawData.length < this._SERVER_EVENT_DATA_PREFIX_LENGTH) { + throw new Error("Event raw data is empty or too short to be valid"); + } + } + + static _assertEventText(eventText: string) { + if ( + eventText.length < this._SERVER_EVENT_DATA_PREFIX_LENGTH || + !eventText.startsWith(this._SERVER_EVENT_DATA_PREFIX) + ) { + throw new Error("Event text is empty, too short or malformed"); + } + } + + static _assertEventData(eventData: unknown) { + if (eventData === null || typeof eventData !== "object") { + throw new Error("Event data could not be parsed into an object"); + } + } + + static _getEventDataAsJson(eventDataText: string): unknown { + try { + const eventJsonText: string = this._getEventJsonText(eventDataText); + return JSON.parse(eventJsonText); + } catch { + throw new Error("Could not parse event data as JSON"); + } + } + + static _getEventJsonText(eventDataText: string): string { + return eventDataText.substring( + JsonServerEventsIterator._SERVER_EVENT_DATA_PREFIX_LENGTH + ); + } +} diff --git a/libs/langchain-community/src/chat_models/oci_genai/types.ts b/libs/langchain-community/src/chat_models/oci_genai/types.ts new file mode 100644 index 000000000000..2457bd2cb5b0 --- /dev/null +++ b/libs/langchain-community/src/chat_models/oci_genai/types.ts @@ -0,0 +1,65 @@ +import { + BaseChatModelCallOptions, + BaseChatModelParams, +} from "@langchain/core/language_models/chat_models"; +import { AuthParams, ClientConfiguration } from "oci-common"; +import { GenerativeAiInferenceClient } from "oci-generativeaiinference"; + +import { + ChatDetails, + CohereChatRequest, + CohereChatResponse, + GenericChatRequest, + GenericChatResponse, +} from "oci-generativeaiinference/lib/model"; + +import { ChatResponse } from "oci-generativeaiinference/lib/response"; + +export enum OciGenAiNewClientAuthType { + ConfigFile, + InstancePrincipal, + Other, +} + +export interface ConfigFileAuthParams { + clientConfigFilePath: string; + clientProfile: string; +} + 
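+
+/*
+ * Illustrative `ConfigFileAuthParams` value. The path and profile name below
+ * are common OCI CLI conventions shown only as an example; they are not
+ * defaults defined anywhere in this code:
+ *
+ *   { clientConfigFilePath: "~/.oci/config", clientProfile: "DEFAULT" }
+ */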
+export interface OciGenAiNewClientParams {
+  authType: OciGenAiNewClientAuthType;
+  regionId?: string;
+  authParams?: ConfigFileAuthParams | AuthParams;
+  clientConfiguration?: ClientConfiguration;
+}
+
+export interface OciGenAiClientParams {
+  client?: GenerativeAiInferenceClient;
+  newClientParams?: OciGenAiNewClientParams;
+}
+
+export interface OciGenAiServingParams {
+  onDemandModelId?: string;
+  dedicatedEndpointId?: string;
+}
+
+export type OciGenAiSupportedRequestType =
+  | GenericChatRequest
+  | CohereChatRequest;
+
+export type OciGenAiModelBaseParams = BaseChatModelParams &
+  OciGenAiClientParams &
+  Omit<ChatDetails, "chatRequest" | "servingMode"> &
+  OciGenAiServingParams;
+
+export interface OciGenAiModelCallOptions<
+  RequestType extends OciGenAiSupportedRequestType = OciGenAiSupportedRequestType
+> extends BaseChatModelCallOptions {
+  requestParams?: RequestType;
+}
+
+export type OciGenAiSupportedResponseType =
+  | GenericChatResponse
+  | CohereChatResponse;
+
+export type OciGenAiChatCallResponseType =
+  | ChatResponse
+  | ReadableStream
+  | null;
diff --git a/libs/langchain-community/src/chat_models/tests/chatoci_genai.int.test.ts b/libs/langchain-community/src/chat_models/tests/chatoci_genai.int.test.ts
new file mode 100644
index 000000000000..793aa48f85f6
--- /dev/null
+++ b/libs/langchain-community/src/chat_models/tests/chatoci_genai.int.test.ts
@@ -0,0 +1,67 @@
+/* eslint-disable no-process-env */
+/* eslint-disable @typescript-eslint/no-explicit-any */
+
+import { BaseChatModel } from "@langchain/core/language_models/chat_models";
+
+import { OciGenAiCohereChat } from "../oci_genai/cohere_chat.js";
+import { OciGenAiGenericChat } from "../oci_genai/generic_chat.js";
+
+type OciGenAiChatConstructor = new (args: any) => BaseChatModel;
+
+/*
+ * OciGenAiChat tests
+ */
+
+const compartmentId = process.env.OCI_GENAI_INTEGRATION_TESTS_COMPARTMENT_ID;
+const creationParameters = [
+  [
+    {
+      compartmentId,
+      onDemandModelId:
+        process.env.OCI_GENAI_INTEGRATION_TESTS_COHERE_ON_DEMAND_MODEL_ID,
+    },
+  ],
+  [
+    {
+      compartmentId,
+      onDemandModelId:
process.env.OCI_GENAI_INTEGRATION_TESTS_GENERIC_ON_DEMAND_MODEL_ID, + }, + ], +]; + +test("OCI GenAI chat invoke", async () => { + await testEachChatModelType( + async (ChatClassType: OciGenAiChatConstructor, creationParams: any[]) => { + for (const params of creationParams) { + const chatClass = new ChatClassType(params); + const response = await chatClass.invoke( + "generate a single, very short mission statement for a pet insurance company" + ); + expect(response.content.length).toBeGreaterThan(0); + } + }, + creationParameters + ); +}); + +/* + * Utils + */ + +async function testEachChatModelType( + testFunction: ( + ChatClassType: OciGenAiChatConstructor, + parameter?: any | undefined + ) => Promise, + parameters?: any[] +) { + const chatClassTypes: OciGenAiChatConstructor[] = [ + OciGenAiCohereChat, + OciGenAiGenericChat, + ]; + + for (let i = 0; i < chatClassTypes.length; i += 1) { + await testFunction(chatClassTypes[i], parameters?.at(i)); + } +} diff --git a/libs/langchain-community/src/chat_models/tests/chatoci_genai.standard.int.test.ts b/libs/langchain-community/src/chat_models/tests/chatoci_genai.standard.int.test.ts new file mode 100644 index 000000000000..31997783055d --- /dev/null +++ b/libs/langchain-community/src/chat_models/tests/chatoci_genai.standard.int.test.ts @@ -0,0 +1,79 @@ +/* eslint-disable @typescript-eslint/no-explicit-any */ +/* eslint-disable @typescript-eslint/no-non-null-assertion */ +/* eslint-disable no-process-env */ +import { test, expect } from "@jest/globals"; + +import { AIMessageChunk } from "@langchain/core/messages"; +import { BaseChatModelCallOptions } from "@langchain/core/language_models/chat_models"; +import { ChatModelIntegrationTests } from "@langchain/standard-tests"; + +import { OciGenAiCohereChat } from "../oci_genai/cohere_chat.js"; +import { OciGenAiGenericChat } from "../oci_genai/generic_chat.js"; + +type OciGenAiChatConstructor = new (args: any) => + | OciGenAiCohereChat + | OciGenAiGenericChat; + +class 
OciGenAiChatStandardIntegrationTests extends ChatModelIntegrationTests< + BaseChatModelCallOptions, + AIMessageChunk +> { + constructor( + classTypeToTest: OciGenAiChatConstructor, + private classTypeName: string, + onDemandModelId: string + ) { + super({ + Cls: classTypeToTest, + chatModelHasToolCalling: false, + chatModelHasStructuredOutput: false, + supportsParallelToolCalls: false, + constructorArgs: { + compartmentId: process.env.OCI_GENAI_INTEGRATION_TESTS_COMPARTMENT_ID, + onDemandModelId, + }, + }); + } + + async testCacheComplexMessageTypes() { + this._skipTestMessage("testCacheComplexMessageTypes"); + } + + async testStreamTokensWithToolCalls() { + this._skipTestMessage("testStreamTokensWithToolCalls"); + } + + async testUsageMetadata() { + this._skipTestMessage("testUsageMetadata"); + } + + async testUsageMetadataStreaming() { + this._skipTestMessage("testUsageMetadataStreaming"); + } + + _skipTestMessage(testName: string) { + this.skipTestMessage(testName, this.classTypeName, "Not implemented"); + } +} + +const ociGenAiCohereChatTestClass = new OciGenAiChatStandardIntegrationTests( + OciGenAiCohereChat, + "OciGenAiCohereChat", + process.env.OCI_GENAI_INTEGRATION_TESTS_COHERE_ON_DEMAND_MODEL_ID! +); + +test("ociGenAiCohereChatTestClass", async () => { + const testResults = await ociGenAiCohereChatTestClass.runTests(); + expect(testResults).toBe(true); +}); + +const ociGenAiGenericChatTestClass = new OciGenAiChatStandardIntegrationTests( + OciGenAiGenericChat, + "OciGenAiGenericChat", + process.env.OCI_GENAI_INTEGRATION_TESTS_GENERIC_ON_DEMAND_MODEL_ID! 
+); + +test("ociGenAiGenericChatTestClass", async () => { + const testResults = await ociGenAiGenericChatTestClass.runTests(); + expect(testResults).toBe(true); +}); diff --git a/libs/langchain-community/src/chat_models/tests/chatoci_genai.standard.test.ts b/libs/langchain-community/src/chat_models/tests/chatoci_genai.standard.test.ts new file mode 100644 index 000000000000..62b9bf7da8ad --- /dev/null +++ b/libs/langchain-community/src/chat_models/tests/chatoci_genai.standard.test.ts @@ -0,0 +1,54 @@ +import { test, expect } from "@jest/globals"; +import { ChatModelUnitTests } from "@langchain/standard-tests"; +import { AIMessageChunk } from "@langchain/core/messages"; +import { BaseChatModelCallOptions } from "@langchain/core/language_models/chat_models"; +import { OciGenAiCohereChat } from "../oci_genai/cohere_chat.js"; +import { OciGenAiGenericChat } from "../oci_genai/generic_chat.js"; + +class OciGenAiCohereChatStandardUnitTests extends ChatModelUnitTests< + BaseChatModelCallOptions, + AIMessageChunk +> { + constructor() { + super({ + Cls: OciGenAiCohereChat, + chatModelHasToolCalling: false, + chatModelHasStructuredOutput: false, + constructorArgs: { + compartmentId: "oci.compartment.ocid", + onDemandModelId: "oci.model.ocid", + }, + }); + } +} + +const ociGenAiCohereTestClass = new OciGenAiCohereChatStandardUnitTests(); + +test("OciGenAiCohereChatStandardUnitTests", () => { + const testResults = ociGenAiCohereTestClass.runTests(); + expect(testResults).toBe(true); +}); + +class OciGenAiGenericChatStandardUnitTests extends ChatModelUnitTests< + BaseChatModelCallOptions, + AIMessageChunk +> { + constructor() { + super({ + Cls: OciGenAiGenericChat, + chatModelHasToolCalling: false, + chatModelHasStructuredOutput: false, + constructorArgs: { + compartmentId: "oci.compartment.ocid", + onDemandModelId: "oci.model.ocid", + }, + }); + } +} + +const ociGenAiGenericTestClass = new OciGenAiGenericChatStandardUnitTests(); + +test("OciGenAiGenericChatStandardUnitTests", 
() => { + const testResults = ociGenAiGenericTestClass.runTests(); + expect(testResults).toBe(true); +}); diff --git a/libs/langchain-community/src/chat_models/tests/chatoci_genai.test.ts b/libs/langchain-community/src/chat_models/tests/chatoci_genai.test.ts new file mode 100644 index 000000000000..4374dd256c61 --- /dev/null +++ b/libs/langchain-community/src/chat_models/tests/chatoci_genai.test.ts @@ -0,0 +1,1435 @@ +/* eslint-disable @typescript-eslint/no-explicit-any */ +import { + AIMessage, + BaseMessage, + HumanMessage, + HumanMessage as LangChainHumanMessage, + SystemMessage as LangChainSystemMessage, + ToolMessage as LangChainToolMessage, + SystemMessage, + ToolMessage, +} from "@langchain/core/messages"; + +import { GenerativeAiInferenceClient } from "oci-generativeaiinference"; +import { + CohereChatRequest, + CohereSystemMessage as OciGenAiCohereSystemMessage, + CohereUserMessage as OciGenAiCohereUserMessage, + Message, + GenericChatRequest, + TextContent, + CohereMessage, + CohereChatBotMessage, + CohereSystemMessage, + CohereUserMessage, + AssistantMessage as GenericAssistantMessage, + UserMessage as GenericUserMessage, + SystemMessage as GenericSystemMessage, +} from "oci-generativeaiinference/lib/model"; + +import { MaxAttemptsTerminationStrategy } from "oci-common"; + +import { OciGenAiBaseChat } from "../oci_genai/index.js"; +import { OciGenAiCohereChat } from "../oci_genai/cohere_chat.js"; +import { OciGenAiGenericChat } from "../oci_genai/generic_chat.js"; +import { JsonServerEventsIterator } from "../oci_genai/server_events_iterator.js"; +import { OciGenAiSdkClient } from "../oci_genai/oci_genai_sdk_client.js"; +import { + OciGenAiClientParams, + OciGenAiNewClientAuthType, +} from "../oci_genai/types.js"; + +type OciGenAiChatConstructor = new (args: any) => + | OciGenAiCohereChat + | OciGenAiGenericChat; + +/* + * JsonServerEventsIterator tests + */ + +const invalidServerEvents: string[][] = [ + [{} as string], + ["invalid event data", 'data: 
{"test":5}'], + ['{"prop":"val"}'], + [""], + [" "], + [' ata: {"final": true}'], + ['data {"prop":"val"}'], + ['data: {"prop":"val"'], + ["data:"], + ["data: "], + ["data: 5"], + ["data: fail"], + ['data: "testing 1, 2, 3"'], + ["data: null"], + ["data: -345.345345"], + ["\u{1F600}e\u0301"], +]; + +const invalidEventDataErrors = new RegExp( + "Event text is empty, too short or malformed|" + + "Event data is empty or too short to be valid|" + + "Could not parse event data as JSON|" + + "Event raw data is empty or too short to be valid|" + + "Event data could not be parsed into an object" +); + +const validServerEvents: string[] = [ + 'data: {"test":5}', + 'data: {"message":"this is a message"}', + 'data: {"finalReason":"i j`us`t felt like stopping", "terminate": true}', + "data: {}", + 'data: \n{"message":"this is a message"\n,"ignore":{"yes":"no"}}', +]; + +interface ValidServerEventProps { + finalReason: string; + terminate: boolean; +} + +const validServerEventsProps: string[] = [ + `data: ${JSON.stringify({ + finalReason: "reason 1", + terminate: true, + })}`, + `data: ${JSON.stringify({ + finalReason: "this is a message", + terminate: true, + })}`, + `data: ${JSON.stringify({ + finalReason: "i just felt like stopping", + terminate: true, + })}`, +]; + +test("JsonServerEventsIterator invalid events", async () => { + for (const values of invalidServerEvents) { + const stream: ReadableStream = + createStreamFromStringArray(values); + const streamIterator = new JsonServerEventsIterator(stream); + await testInvalidValues(streamIterator); + } +}); + +test("JsonServerEventsIterator empty events", async () => { + await testNumExpectedServerEvents([], 0); +}); + +test("JsonServerEventsIterator valid events", async () => { + await testNumExpectedServerEvents( + validServerEvents, + validServerEvents.length + ); +}); + +test("JsonServerEventsIterator valid events check properties", async () => { + const stream: ReadableStream = createStreamFromStringArray( + 
validServerEventsProps + ); + const streamIterator = new JsonServerEventsIterator(stream); + + for await (const event of streamIterator) { + expect(typeof (event).finalReason).toBe("string"); + expect((event).terminate).toBe(true); + } +}); + +/* + * OciGenAiSdkClient tests + */ + +const authenticationDetailsProvider = { + getPassphrase() { + return ""; + }, + async getKeyId(): Promise { + return ""; + }, + getPrivateKey() { + return `-----BEGIN RSA PRIVATE KEY----- +MIICXQIBAAKBgQDTkUM7vYZSUYtm2bY/OmcvF9dQ37I3HMyKIKmFPck7Q4u5LqPB +qTuDNnd0tHBFfRaGpVsgcT46g1sIJwvfCnB5VFkAsheMHc8uUOBUD0DqBbkOLFGU +KI45rD0BUzOzjRW/NI5YFWUJJZGuD7tUP1gEwmr0wIvqTdpPI/CyN0pUTQIDAQAB +AoGAJzg1g3yVyurM8csIKt5zxFoiEx701ZykGjMF2epjRHY4D6MivkLWAnP1XxAY +A/m1VE6Q/wmfJI+3L2K1o6o2wSDUqbU+qW3xHVxc3U63JpUBa2MFQaupriEaA8ky +4iq5Zhs2OlRL02+A9KHvfus6MFhWWPLnkNrSx8cIaJycGgECQQDyFIuB9z76OUCU +B63TbqeRhzbBsVUc/6hErWacb4JCUtGk6s141l5V5pDNO2+w3mQ6HxqWLSct+19t +5BormrDNAkEA37uQj+OkjYBoeGEuB00PJBnlUIaQ/qHv7863aLlKcFdnFvmrzztA +A06QhjNCFBwJHwdSLz95ztDTpccmLIAxgQJBAO/Q4pOR+FWyugLryIwYpvBIXzpr +DsJ3kp7WmTyISyahHQafhYYb98BpdTGbm/4/klLx1UjI2nN2/wbCXhqsWFECQAu/ +PGLhr/UiBdo0OAd4G1Bo76pftmM4O3Ha57Re7jKh1C7Xoxa5ZK4HxPzW2iRWKIBx +kPYcHhgmzMYKg82YWYECQQCejFaH73vZO3qUn+2pdHg3mUYYYQA7r/ms7MQ7mckg +1wPuzmfsEfsAzOaMvs8SsyG5sOdBLWfsGRabFaleBntX +-----END RSA PRIVATE KEY-----`; + }, +}; + +const defaultClient = { + newClientParams: { + authType: OciGenAiNewClientAuthType.Other, + authParams: { authenticationDetailsProvider }, + }, +}; + +test("OciGenAiSdkClient create default client", async () => { + const sdkClient = await OciGenAiSdkClient.create(defaultClient); + testSdkClient(sdkClient, OciGenAiSdkClient._DEFAULT_REGION_ID, 0); +}); + +test("OciGenAiSdkClient create client based on parameters", async () => { + const newClientParams: OciGenAiClientParams = { + newClientParams: { + authType: OciGenAiNewClientAuthType.Other, + regionId: "mars", + authParams: { authenticationDetailsProvider }, + clientConfiguration: { + 
retryConfiguration: {
+          terminationStrategy: new MaxAttemptsTerminationStrategy(5),
+        },
+      },
+    },
+  };
+
+  const sdkClient = await OciGenAiSdkClient.create(newClientParams);
+  testSdkClient(sdkClient, "mars", 4);
+});
+
+test("OciGenAiSdkClient pre-configured client", async () => {
+  const client = new GenerativeAiInferenceClient(
+    { authenticationDetailsProvider },
+    {
+      retryConfiguration: {
+        terminationStrategy: new MaxAttemptsTerminationStrategy(10),
+      },
+    }
+  );
+
+  client.regionId = "venus";
+  const sdkClient = await OciGenAiSdkClient.create({ client });
+  testSdkClient(sdkClient, "venus", 9);
+});
+
+/*
+ * Chat models tests
+ */
+
+const compartmentId = "oci.compartment.ocid";
+const onDemandModelId = "oci.model.ocid";
+const dedicatedEndpointId = "oci.dedicated.oci";
+const createParams = {
+  compartmentId,
+  onDemandModelId,
+};
+
+const DummyClient = {
+  chat() {},
+};
+
+test("OCI GenAI chat models creation", async () => {
+  await testEachChatModelType(
+    async (ChatClassType: OciGenAiChatConstructor) => {
+      let instance = new ChatClassType({ client: DummyClient });
+      await expect(instance.invoke("prompt")).rejects.toThrow(
+        "Invalid compartmentId"
+      );
+
+      instance = new ChatClassType({
+        compartmentId,
+        client: DummyClient,
+      });
+
+      await expect(instance.invoke("prompt")).rejects.toThrow(
+        "Either onDemandModelId or dedicatedEndpointId must be supplied"
+      );
+
+      instance = new ChatClassType({
+        compartmentId,
+        onDemandModelId: "",
+        client: DummyClient,
+      });
+
+      await expect(instance.invoke("prompt")).rejects.toThrow(
+        "Either onDemandModelId or dedicatedEndpointId must be supplied"
+      );
+
+      instance = new ChatClassType({
+        compartmentId,
+        onDemandModelId,
+        client: DummyClient,
+      });
+
+      await 
expect(instance.invoke("prompt")).rejects.toThrow( + /Invalid CohereResponse object|Invalid GenericChatResponse object/ + ); + + expect(instance._params.compartmentId).toBe(compartmentId); + expect(instance._params.onDemandModelId).toBe(onDemandModelId); + } + ); +}); + +const chatClassReturnValues = [ + { + chatResult: { + chatResponse: { + text: "response text", + }, + }, + }, + { + chatResult: { + chatResponse: { + choices: [ + { + message: { + content: [ + { + type: TextContent.type, + text: "response text", + }, + ], + }, + }, + ], + }, + }, + }, +]; + +test("OCI GenAI chat models invoke with unsupported message", async () => { + await testEachChatModelType( + async (ChatClassType: OciGenAiChatConstructor) => { + const chatClass = new ChatClassType(createParams); + + await expect( + chatClass.invoke([ + new LangChainToolMessage({ content: "tools message" }, "tool_id"), + ]) + ).rejects.toThrow("Message type 'tool' is not supported"); + }, + chatClassReturnValues + ); +}); + +const lastHumanMessage = "Last human message"; +const messages = [ + new LangChainHumanMessage("Human message"), + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), + new LangChainHumanMessage(lastHumanMessage), + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), +]; + +const callOptions = { + stop: ["\n", "."], + requestParams: { + temperature: 0.32, + maxTokens: 1, + }, +}; + +const createRequestParams = [ + { + test: (cohereRequest: CohereChatRequest, params: any) => { + expect(cohereRequest.apiFormat).toBe(CohereChatRequest.apiFormat); + expect(cohereRequest.message).toBe(lastHumanMessage); + expect(cohereRequest.chatHistory).toStrictEqual( + removeElements(params.convertMessages(messages), [3]) + ); + expect(cohereRequest.isStream).toBe(true); + expect(cohereRequest.stopSequences).toStrictEqual(callOptions.stop); + expect(cohereRequest.temperature).toBe( + callOptions.requestParams.temperature + ); 
+ expect(cohereRequest.maxTokens).toBe(callOptions.requestParams.maxTokens); + }, + convertMessages: (messages: BaseMessage[]): Message[] => { + return messages.map( + OciGenAiCohereChat._convertBaseMessageToCohereMessage + ); + }, + }, + { + test: (genericRequest: GenericChatRequest, params: any) => { + expect(genericRequest.apiFormat).toBe(GenericChatRequest.apiFormat); + expect(genericRequest.messages).toStrictEqual( + params.convertMessages(messages) + ); + expect(genericRequest.isStream).toBe(true); + expect(genericRequest.stop).toStrictEqual(callOptions.stop); + expect(genericRequest.temperature).toBe( + callOptions.requestParams.temperature + ); + expect(genericRequest.maxTokens).toBe( + callOptions.requestParams.maxTokens + ); + }, + convertMessages: (messages: BaseMessage[]): Message[] => { + return messages.map( + OciGenAiGenericChat._convertBaseMessageToGenericMessage + ); + }, + }, +]; + +const invalidMessages = [ + [], + [ + new LangChainToolMessage("Human message", "tool"), + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), + new LangChainHumanMessage(lastHumanMessage), + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), + ], + [ + new LangChainSystemMessage({ + content: [ + { + type: "image_url", + image_url: "data:image/pgn;base64,blah", + }, + ], + }), + ], +]; + +test("OCI GenAI chat create request", async () => { + await testEachChatModelType( + async (ChatClassType: OciGenAiChatConstructor, params) => { + const chatClass = new ChatClassType(createParams); + const request = chatClass._prepareRequest(messages, callOptions, true); + params.test(request, params); + }, + createRequestParams + ); +}); + +test("OCI GenAI chat create invalid request messages", async () => { + await testEachChatModelType( + async (ChatClassType: OciGenAiChatConstructor) => { + const chatClass = new ChatClassType(createParams); + expect(() => + 
chatClass._prepareRequest(invalidMessages[0], callOptions, true) + ).toThrow("No messages provided"); + expect(() => + chatClass._prepareRequest(invalidMessages[1], callOptions, true) + ).toThrow("Message type 'tool' is not supported"); + expect(() => + chatClass._prepareRequest(invalidMessages[2], callOptions, true) + ).toThrow("Only text messages are supported"); + } + ); +}); + +const invalidCohereResponseValues = [ + undefined, + null, + {}, + { props: true }, + { text: 5505 }, + { text: ["hello "] }, + [], +]; + +test("OCI GenAI chat Cohere parse invalid response", async () => { + const cohereChat = new OciGenAiCohereChat(createParams); + + for (const invalidValue of invalidCohereResponseValues) { + expect(() => cohereChat._parseResponse(invalidValue)).toThrow( + "Invalid CohereResponse object" + ); + } +}); + +const validCohereResponseValues = [ + { + apiFormat: CohereChatRequest.apiFormat, + value: undefined, + text: "This is the response text", + }, + { + text: "This is the response text", + }, +]; + +test("OCI GenAI Cohere parse valid response", async () => { + const cohereChat = new OciGenAiCohereChat(createParams); + + for (const validValue of validCohereResponseValues) { + expect(cohereChat._parseResponse(validValue)).toBe( + "This is the response text" + ); + } +}); + +const invalidCGenericResponseValues = [ + undefined, + null, + {}, + [], + { props: true }, + { choices: 5505 }, + { choices: ["hello "] }, + { choices: null }, + { choices: {} }, + { + choices: [ + { + content: undefined, + }, + ], + }, + { + message: { + content: {}, + }, + }, + { + message: { + content: [], + }, + }, + { finishReason: {} }, + { finishReason: false }, + { + choices: [5], + }, + { + choices: [ + { + message: "bad value", + }, + ], + }, + { + choices: [ + { + message: {}, + }, + ], + }, + { + choices: [ + { + message: null, + }, + ], + }, + { + choices: [ + { + message: { + content: null, + }, + }, + ], + }, + { + choices: [ + { + message: { + content: [{}], + }, + }, + 
], + }, + { + choices: [ + { + message: { + content: [null], + }, + }, + ], + }, + { + choices: [ + { + message: { + content: [ + { + text: "some text", + }, + ], + }, + }, + ], + }, + { + choices: [ + { + message: { + content: [ + { + type: "IMAGE", + text: "some text", + }, + ], + }, + }, + ], + }, + { + choices: [ + { + message: { + content: [ + { + type: TextContent.type, + text: [1, 2, 3, 4], + }, + ], + }, + }, + ], + }, + { + choices: [ + { + message: { + content: [ + { + type: TextContent.type, + text: null, + }, + ], + }, + }, + ], + }, + { + choices: [ + { + message: { + content: [ + { + type: TextContent.type, + text: "This is ", + }, + ], + }, + }, + { + message: { + content: [ + { + type: TextContent.type, + text: false, + }, + ], + }, + }, + ], + }, +]; + +test("OCI GenAI Generic parse invalid response", async () => { + const genericChat = new OciGenAiGenericChat(createParams); + + for (const invalidValue of invalidCGenericResponseValues) { + expect(() => genericChat._parseResponse(invalidValue)).toThrow( + "Invalid GenericChatResponse object" + ); + } +}); + +const validGenericResponseValues = [ + { + choices: [ + { + message: { + content: [ + { + type: TextContent.type, + text: "This is the response text", + }, + ], + }, + }, + ], + }, + { + choices: [ + { + message: { + content: [], + }, + }, + ], + }, + { + choices: [ + { + message: { + content: [ + { + type: TextContent.type, + text: "This is ", + }, + { + type: TextContent.type, + text: "the ", + }, + { + type: TextContent.type, + text: "response text", + }, + ], + }, + }, + ], + }, + { + choices: [ + { + message: { + content: [ + { + type: TextContent.type, + text: "This is ", + }, + ], + }, + }, + { + message: { + content: [ + { + type: TextContent.type, + text: "the response text", + }, + ], + }, + }, + ], + }, +]; + +test("OCI GenAI Generic parse valid response", async () => { + const genericChat = new OciGenAiGenericChat(createParams); + + for (const validValue of 
validGenericResponseValues) {
+    expect(["This is the response text", ""]).toContain(
+      genericChat._parseResponse(validValue)
+    );
+  }
+});
+
+const invalidCohereStreamedChunks = [
+  null,
+  {},
+  {
+    ext: "this is some text",
+    prop: true,
+  },
+  {
+    ext: "this is some text",
+    message: ["hello"],
+  },
+  {
+    apiFormat: CohereChatRequest.apiFormat,
+  },
+];
+
+test("OCI GenAI Cohere parse invalid streamed chunks", async () => {
+  const cohereChat = new OciGenAiCohereChat(createParams);
+
+  for (const invalidValue of invalidCohereStreamedChunks) {
+    expect(() => cohereChat._parseStreamedResponseChunk(invalidValue)).toThrow(
+      "Invalid streamed response chunk data"
+    );
+  }
+});
+
+const validCohereStreamedChunks = [
+  {
+    apiFormat: CohereChatRequest.apiFormat,
+    text: "this is some text",
+  },
+  {
+    apiFormat: CohereChatRequest.apiFormat,
+    text: "this is some text",
+    pad: "aaaaa",
+  },
+];
+
+test("OCI GenAI Cohere parse valid streamed chunks", async () => {
+  const cohereChat = new OciGenAiCohereChat(createParams);
+
+  for (const validValue of validCohereStreamedChunks) {
+    expect(cohereChat._parseStreamedResponseChunk(validValue)).toBe(
+      "this is some text"
+    );
+  }
+});
+
+const invalidGenericStreamedChunks = [
+  null,
+  {},
+  {
+    ext: "this is some text",
+    prop: true,
+  },
+  {
+    ext: "this is some text",
+    message: ["hello"],
+  },
+  {
+    apiFormat: CohereChatRequest.apiFormat,
+  },
+];
+
+test("OCI GenAI Generic parse invalid streamed chunks", async () => {
+  const genericChat = new OciGenAiGenericChat(createParams);
+
+  for (const invalidValue of invalidGenericStreamedChunks) {
+    expect(() => genericChat._parseStreamedResponseChunk(invalidValue)).toThrow(
+      "Invalid streamed response chunk data"
+    );
+  }
+});
+
+const validGenericStreamedChunks = [
+  {
+    message: {
+      content: [
+        {
+          type: TextContent.type,
+          text: "this is some text",
+        },
+      ],
+    },
+  },
+  {
+    finishReason: "stop sequence",
+  },
+];
+
+test("OCI GenAI Generic parse valid streamed 
chunks", async () => { + const genericChat = new OciGenAiGenericChat(createParams); + + for (const invalidValue of validGenericStreamedChunks) { + expect(["this is some text", undefined]).toContain( + genericChat._parseStreamedResponseChunk(invalidValue) + ); + } +}); + +test("OCI GenAI cohere history and message split", async () => { + const lastHumanMessage = "Last human message"; + + testCohereMessageHistorySplit({ + messages: [], + lastHumanMessage: "", + numExpectedMessagesInHistory: 0, + numExpectedHumanMessagesInHistory: 0, + numExpectedOtherMessagesInHistory: 0, + }); + + testCohereMessageHistorySplit({ + messages: [new LangChainHumanMessage(lastHumanMessage)], + lastHumanMessage, + numExpectedMessagesInHistory: 0, + numExpectedHumanMessagesInHistory: 0, + numExpectedOtherMessagesInHistory: 0, + }); + + testCohereMessageHistorySplit({ + messages: [ + new LangChainHumanMessage("Human message"), + new LangChainSystemMessage("System message"), + new LangChainHumanMessage("Human message"), + new LangChainSystemMessage("System message"), + new LangChainHumanMessage(lastHumanMessage), + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), + ], + lastHumanMessage, + numExpectedMessagesInHistory: 6, + numExpectedHumanMessagesInHistory: 2, + numExpectedOtherMessagesInHistory: 4, + }); + + testCohereMessageHistorySplit({ + messages: [ + new LangChainHumanMessage(lastHumanMessage), + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), + ], + lastHumanMessage, + numExpectedMessagesInHistory: 4, + numExpectedHumanMessagesInHistory: 0, + numExpectedOtherMessagesInHistory: 4, + }); + + testCohereMessageHistorySplit({ + messages: [ + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), + new 
LangChainSystemMessage("System message"), + new LangChainHumanMessage(lastHumanMessage), + ], + lastHumanMessage, + numExpectedMessagesInHistory: 4, + numExpectedHumanMessagesInHistory: 0, + numExpectedOtherMessagesInHistory: 4, + }); + + testCohereMessageHistorySplit({ + messages: [ + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), + new LangChainSystemMessage("System message"), + ], + lastHumanMessage: "", + numExpectedMessagesInHistory: 4, + numExpectedHumanMessagesInHistory: 0, + numExpectedOtherMessagesInHistory: 4, + }); +}); + +test("OCI GenAI chat cohere _convertBaseMessageToCohereMessage", () => { + const messageContent = "message content"; + const testCases = [ + { + message: new AIMessage(messageContent), + expectedRole: CohereChatBotMessage.role, + }, + { + message: new SystemMessage(messageContent), + expectedRole: CohereSystemMessage.role, + }, + { + message: new HumanMessage(messageContent), + expectedRole: CohereUserMessage.role, + }, + { + message: new ToolMessage(messageContent, "tool id"), + expectedError: "Message type 'tool' is not supported", + }, + ]; + + testCases.forEach((testCase) => { + if (testCase.expectedError) { + expect(() => + OciGenAiCohereChat._convertBaseMessageToCohereMessage(testCase.message) + ).toThrowError(testCase.expectedError); + } else { + expect( + OciGenAiCohereChat._convertBaseMessageToCohereMessage(testCase.message) + ).toEqual({ + role: testCase.expectedRole, + message: messageContent, + }); + } + }); +}); + +test("OCI GenAI chat generic _convertBaseMessagesToGenericMessages", () => { + const testCases = [ + { + input: [], + expectedOutput: [], + }, + { + input: [new AIMessage("Hello")], + expectedOutput: [ + { + role: GenericAssistantMessage.role, + content: [ + { + text: "Hello", + type: TextContent.type, + }, + ], + }, + ], + }, + { + input: [ + new AIMessage("Hello"), + new HumanMessage("Hi"), + new 
SystemMessage("Welcome"), + ], + expectedOutput: [ + { + role: GenericAssistantMessage.role, + content: [ + { + text: "Hello", + type: TextContent.type, + }, + ], + }, + { + role: GenericUserMessage.role, + content: [ + { + text: "Hi", + type: TextContent.type, + }, + ], + }, + { + role: GenericSystemMessage.role, + content: [ + { + text: "Welcome", + type: TextContent.type, + }, + ], + }, + ], + }, + { + input: [ + new AIMessage("Hello"), + new ToolMessage("Hi", "id"), + new HumanMessage("Hi"), + ], + expectedError: "Message type 'tool' is not supported", + }, + ]; + + testCases.forEach((testCase) => { + if (testCase.expectedError) { + expect(() => + OciGenAiGenericChat._convertBaseMessagesToGenericMessages( + testCase.input + ) + ).toThrow(testCase.expectedError); + } else { + expect( + OciGenAiGenericChat._convertBaseMessagesToGenericMessages( + testCase.input + ) + ).toEqual(testCase.expectedOutput); + } + }); +}); + +test("OCI GenAI chat Cohere _isCohereResponse", () => { + const testCaseArray = [ + { + input: { + text: "Hello World!", + apiFormat: "json", + }, + expectedResult: true, + }, + { + input: null, + expectedResult: false, + }, + { + input: "not an object", + expectedResult: false, + }, + { + input: 123, + expectedResult: false, + }, + { + input: undefined, + expectedResult: false, + }, + { + input: { + foo: "bar", + apiFormat: "json", + }, + expectedResult: false, + }, + { + input: { + text: 123, + apiFormat: "json", + }, + expectedResult: false, + }, + ]; + + testCaseArray.forEach(({ input, expectedResult }) => { + expect(OciGenAiCohereChat._isCohereResponse(input)).toBe(expectedResult); + }); +}); + +test("OCI GenAI chat generic _isGenericResponse", () => { + const testCases = [ + { + input: { + timeCreated: new Date(), + choices: [ + { + index: 1, + message: { + role: "assistant", + content: [{ type: "text", text: "Hello" }], + }, + finishReason: "", + }, + ], + apiFormat: "v1", + }, + expectedOutput: true, + }, + { + input: null, + 
expectedOutput: false, + }, + { + input: "not an object", + expectedOutput: false, + }, + { + input: { + timeCreated: new Date(), + apiFormat: "v1", + }, + expectedOutput: false, + }, + { + input: { + timeCreated: new Date(), + choices: "not an array", + apiFormat: "v1", + }, + expectedOutput: false, + }, + { + input: { + timeCreated: new Date(), + choices: [], + apiFormat: "v1", + }, + expectedOutput: true, + }, + { + input: { + timeCreated: new Date(), + choices: [ + { + index: 1, + message: "not an object", + }, + ], + apiFormat: "v1", + }, + expectedOutput: false, + }, + ]; + + testCases.forEach(({ input, expectedOutput }) => { + expect(OciGenAiGenericChat._isGenericResponse(input)).toBe(expectedOutput); + }); +}); + +test("OCI GenAI chat models invoke + check sdkClient cache logic", async () => { + await testEachChatModelType( + async (ChatClassType: OciGenAiChatConstructor, parameter) => { + const chatClass = new ChatClassType({ + compartmentId, + onDemandModelId, + client: { + chat: () => parameter, + }, + }); + + expect(OciGenAiBaseChat._isSdkClient(chatClass._sdkClient)).toBe(false); + await chatClass.invoke("this is a prompt"); + await chatClass.invoke("this is a prompt"); + expect(OciGenAiBaseChat._isSdkClient(chatClass._sdkClient)).toBe(true); + }, + chatClassReturnValues + ); +}); + +test("OCI GenAI chat models invoke API fail", async () => { + await testEachChatModelType( + async (ChatClassType: OciGenAiChatConstructor) => { + const chatClass = new ChatClassType({ + compartmentId, + onDemandModelId, + client: { + chat: () => { + throw new Error("API error"); + }, + }, + }); + + expect(OciGenAiBaseChat._isSdkClient(chatClass._sdkClient)).toBe(false); + await expect(chatClass.invoke("this is a prompt")).rejects.toThrow( + "Error executing chat API, error: API error" + ); + await expect(chatClass.invoke("this is a prompt")).rejects.toThrow( + "Error executing chat API, error: API error" + ); + 
expect(OciGenAiBaseChat._isSdkClient(chatClass._sdkClient)).toBe(true);
+    }
+  );
+});
+
+test("OCI GenAI chat models invoke with no initialized SDK client", async () => {
+  await testEachChatModelType(
+    async (ChatClassType: OciGenAiChatConstructor) => {
+      const chatClass = new ChatClassType({
+        compartmentId,
+        dedicatedEndpointId,
+        client: {
+          chat: () => true,
+        },
+      });
+
+      await expect(
+        chatClass._chat(chatClass._prepareRequest(messages, callOptions, true))
+      ).rejects.toThrow(
+        "Error executing chat API, error: OCI SDK client not initialized"
+      );
+    }
+  );
+});
+
+test("OCI GenAI chat models invoke with dedicated endpoint", async () => {
+  await testEachChatModelType(
+    async (ChatClassType: OciGenAiChatConstructor, params) => {
+      const chatClass = new ChatClassType({
+        compartmentId,
+        dedicatedEndpointId,
+        client: {
+          chat: () => params,
+        },
+      });
+
+      expect(
+        async () => await chatClass.invoke("this is a message")
+      ).not.toThrow();
+    },
+    chatClassReturnValues
+  );
+});
+
+const chatStreamReturnValues: string[][] = [
+  [
+    `data: {"apiFormat":"${CohereChatRequest.apiFormat}", "text":"this is some text"}`,
+    `data: {"apiFormat":"${CohereChatRequest.apiFormat}", "text":"this is some more text"}`,
+  ],
+  [
+    `data: {"message":{"content":[{"type":"${TextContent.type}","text":"this is some text"}]}}`,
+    `data: {"message":{"content":[{"type":"${TextContent.type}","text":"this is some more text"}]}}`,
+    'data: 
{"finishReason":"stop sequence"}',
+  ],
+];
+
+test("OCI GenAI chat models stream", async () => {
+  await testEachChatModelType(
+    async (ChatClassType: OciGenAiChatConstructor, parameter) => {
+      let numApiCalls = 0;
+      const chatClass = new ChatClassType({
+        compartmentId,
+        onDemandModelId,
+        client: {
+          chat: () => {
+            numApiCalls += 1;
+            return createStreamFromStringArray(parameter);
+          },
+        },
+      });
+
+      expect(OciGenAiBaseChat._isSdkClient(chatClass._sdkClient)).toBe(false);
+      let numMessages = 0;
+
+      for await (const _message of await chatClass.stream([
+        "this is a prompt",
+      ])) {
+        numMessages += 1;
+      }
+
+      expect(numMessages).toBe(2);
+      expect(numApiCalls).toBe(1);
+      expect(OciGenAiBaseChat._isSdkClient(chatClass._sdkClient)).toBe(true);
+    },
+    chatStreamReturnValues
+  );
+});
+
+/*
+ * Utils
+ */
+
+async function testInvalidValues(
+  streamIterator: JsonServerEventsIterator
+): Promise<void> {
+  let numRuns = 0;
+
+  try {
+    for await (const _event of streamIterator) {
+      numRuns += 1;
+    }
+  } catch (error) {
+    expect((error as Error)?.message).toMatch(invalidEventDataErrors);
+  }
+
+  expect(numRuns).toBe(0);
+}
+
+async function testNumExpectedServerEvents(
+  serverEvents: string[],
+  numExpectedServerEvents: number
+) {
+  const stream = createStreamFromStringArray(serverEvents);
+  const streamIterator = new JsonServerEventsIterator(stream);
+  let numEvents = 0;
+
+  for await (const _event of streamIterator) {
+    numEvents += 1;
+  }
+
+  expect(numEvents).toBe(numExpectedServerEvents);
+}
+
+function testSdkClient(
+  sdkClient: OciGenAiSdkClient,
+  regionId: string,
+  maxAttempts: number
+) {
+  expect(OciGenAiBaseChat._isSdkClient(sdkClient)).toBe(true);
+  expect((sdkClient.client as any)._regionId).toBe(regionId);
+  expect(
+    (sdkClient.client as any)._clientConfiguration?.retryConfiguration
+      ?.terminationStrategy?._maxAttempts
+  ).toBe(maxAttempts);
+}
+
+class StringArrayToInt8ArraySource implements UnderlyingSource<Uint8Array> {
+  private valuesIndex = 0;
+
+  private textEncoder = new 
TextEncoder(); + + constructor(private values: string[]) {} + + pull(controller: ReadableStreamDefaultController) { + if (this.valuesIndex < this.values.length) { + controller.enqueue( + this.textEncoder.encode(this.values[this.valuesIndex]) + ); + this.valuesIndex += 1; + } else { + controller.close(); + } + } + + cancel() { + this.valuesIndex = this.values.length; + } +} + +function createStreamFromStringArray( + values: string[] +): ReadableStream { + return new ReadableStream(new StringArrayToInt8ArraySource(values)); +} + +async function testEachChatModelType( + testFunction: ( + ChatClassType: OciGenAiChatConstructor, + parameter?: any | undefined + ) => Promise, + parameters?: any[] +) { + const chatClassTypes: OciGenAiChatConstructor[] = [ + OciGenAiCohereChat, + OciGenAiGenericChat, + ]; + + for (let i = 0; i < chatClassTypes.length; i += 1) { + await testFunction(chatClassTypes[i], parameters?.at(i)); + } +} + +interface TestMessageHistorySplitParams { + messages: BaseMessage[]; + lastHumanMessage: string; + numExpectedMessagesInHistory: number; + numExpectedHumanMessagesInHistory: number; + numExpectedOtherMessagesInHistory: number; +} + +function testCohereMessageHistorySplit(params: TestMessageHistorySplitParams) { + const messageAndHistory = OciGenAiCohereChat._splitMessageAndHistory( + params.messages + ); + + expect(messageAndHistory.message).toBe(params.lastHumanMessage); + expect(messageAndHistory.chatHistory.length).toBe( + params.numExpectedMessagesInHistory + ); + + let numHumanMessages = params.numExpectedHumanMessagesInHistory; + let numOtherMessages = params.numExpectedOtherMessagesInHistory; + + for (const message of messageAndHistory.chatHistory) { + testCohereMessageHistorySplitMessage(message, params.lastHumanMessage); + + if (message.role === OciGenAiCohereUserMessage.role) { + numHumanMessages -= 1; + } else { + numOtherMessages -= 1; + } + } + + expect(numHumanMessages).toBe(0); + expect(numOtherMessages).toBe(0); +} + +function 
testCohereMessageHistorySplitMessage( + message: CohereMessage, + lastHumanMessage: string +) { + expect([ + OciGenAiCohereSystemMessage.role, + OciGenAiCohereUserMessage.role, + ]).toContain(message.role); + expect((message).message).not.toBe(lastHumanMessage); +} + +function removeElements(originalArray: any[], removeIndexes: number[]): any[] { + for (const removeIndex of removeIndexes) { + originalArray.splice(removeIndex, 1); + } + + return originalArray; +} diff --git a/libs/langchain-community/src/load/import_constants.ts b/libs/langchain-community/src/load/import_constants.ts index 512843703c9a..9e50b095ea4d 100644 --- a/libs/langchain-community/src/load/import_constants.ts +++ b/libs/langchain-community/src/load/import_constants.ts @@ -90,6 +90,7 @@ export const optionalImportEntrypoints: string[] = [ "langchain_community/chat_models/iflytek_xinghuo", "langchain_community/chat_models/iflytek_xinghuo/web", "langchain_community/chat_models/llama_cpp", + "langchain_community/chat_models/oci_genai", "langchain_community/chat_models/portkey", "langchain_community/chat_models/premai", "langchain_community/chat_models/tencent_hunyuan", diff --git a/yarn.lock b/yarn.lock index 1d74ba899993..db655f1890d9 100644 --- a/yarn.lock +++ b/yarn.lock @@ -8569,6 +8569,8 @@ __metadata: neo4j-driver: ^5.17.0 node-llama-cpp: 3.1.1 notion-to-md: ^3.1.0 + oci-common: ^2.102.2 + oci-generativeaiinference: ^2.102.2 officeparser: ^4.0.4 openai: "*" pdf-parse: 1.1.1 @@ -8702,6 +8704,7 @@ __metadata: mysql2: ^3.9.8 neo4j-driver: "*" notion-to-md: ^3.1.0 + oci-generativeaiinference: ^2.102.2 officeparser: ^4.0.4 openai: "*" pdf-parse: 1.1.1 @@ -8924,6 +8927,8 @@ __metadata: optional: true notion-to-md: optional: true + oci-generativeaiinference: + optional: true officeparser: optional: true pdf-parse: @@ -14182,6 +14187,13 @@ __metadata: languageName: node linkType: hard +"@types/isomorphic-fetch@npm:0.0.35": + version: 0.0.35 + resolution: "@types/isomorphic-fetch@npm:0.0.35" + 
checksum: ae57d2605d24f799fd3d838d59986e88e8a345c65c1751122f6b39b62de504e0a61cd62f04e6e157406c84cba60a6a39eb6d5d2c0a9e9ac153759d5354b5e7e9 + languageName: node + linkType: hard + "@types/istanbul-lib-coverage@npm:*, @types/istanbul-lib-coverage@npm:^2.0.0, @types/istanbul-lib-coverage@npm:^2.0.1": version: 2.0.4 resolution: "@types/istanbul-lib-coverage@npm:2.0.4" @@ -14272,6 +14284,15 @@ __metadata: languageName: node linkType: hard +"@types/jsonwebtoken@npm:9.0.0": + version: 9.0.0 + resolution: "@types/jsonwebtoken@npm:9.0.0" + dependencies: + "@types/node": "*" + checksum: c7791354ba895759524c18ba609ea04efdc576e2b660bd6f80d5b917db8dc4b01acd4d1bc115a62d35406a82e627067973d475c4b36035dabaa27862b141ae49 + languageName: node + linkType: hard + "@types/jsonwebtoken@npm:^9": version: 9.0.6 resolution: "@types/jsonwebtoken@npm:9.0.6" @@ -14290,6 +14311,13 @@ __metadata: languageName: node linkType: hard +"@types/jssha@npm:2.0.0": + version: 2.0.0 + resolution: "@types/jssha@npm:2.0.0" + checksum: 74b888f86e38d6c8366d81c5bb60fcdd13a19a2c7258147ddfcc5c32e41542efc072cec22edf796643e974767c8f235f071d42add8c77fa59dd2ad86edb8467c + languageName: node + linkType: hard + "@types/keyv@npm:^3.1.1": version: 3.1.4 resolution: "@types/keyv@npm:3.1.4" @@ -14511,6 +14539,15 @@ __metadata: languageName: node linkType: hard +"@types/opossum@npm:4.1.1": + version: 4.1.1 + resolution: "@types/opossum@npm:4.1.1" + dependencies: + "@types/node": "*" + checksum: d5d28851154b088b03068357272d5e80154265dafd074429ac1aae45b0a77aac2c4be212b8f52d7d8e90c437d04841bf165048c0265cfb7fbada6d2b0601d572 + languageName: node + linkType: hard + "@types/pad-left@npm:2.1.1": version: 2.1.1 resolution: "@types/pad-left@npm:2.1.1" @@ -14833,6 +14870,15 @@ __metadata: languageName: node linkType: hard +"@types/sshpk@npm:1.10.3": + version: 1.10.3 + resolution: "@types/sshpk@npm:1.10.3" + dependencies: + "@types/node": "*" + checksum: 
7cf44b871ce5a48ebd3514739cfb59ebf5b076a5b920faf76812fb96b4726aab0bd34cde23b2c746baf532005ec1b86bf159bbed57b5730e42bb59b21ad8ab0f + languageName: node + linkType: hard + "@types/stack-utils@npm:^2.0.0": version: 2.0.1 resolution: "@types/stack-utils@npm:2.0.1" @@ -16584,6 +16630,15 @@ __metadata: languageName: node linkType: hard +"asn1@npm:~0.2.3": + version: 0.2.6 + resolution: "asn1@npm:0.2.6" + dependencies: + safer-buffer: ~2.1.0 + checksum: 39f2ae343b03c15ad4f238ba561e626602a3de8d94ae536c46a4a93e69578826305366dc09fbb9b56aec39b4982a463682f259c38e59f6fa380cd72cd61e493d + languageName: node + linkType: hard + "assemblyai@npm:^4.6.0": version: 4.6.0 resolution: "assemblyai@npm:4.6.0" @@ -16593,6 +16648,13 @@ __metadata: languageName: node linkType: hard +"assert-plus@npm:1.0.0, assert-plus@npm:^1.0.0": + version: 1.0.0 + resolution: "assert-plus@npm:1.0.0" + checksum: 19b4340cb8f0e6a981c07225eacac0e9d52c2644c080198765d63398f0075f83bbc0c8e95474d54224e297555ad0d631c1dcd058adb1ddc2437b41a6b424ac64 + languageName: node + linkType: hard + "ast-types-flow@npm:^0.0.7": version: 0.0.7 resolution: "ast-types-flow@npm:0.0.7" @@ -17080,6 +17142,15 @@ __metadata: languageName: node linkType: hard +"bcrypt-pbkdf@npm:^1.0.0": + version: 1.0.2 + resolution: "bcrypt-pbkdf@npm:1.0.2" + dependencies: + tweetnacl: ^0.14.3 + checksum: 4edfc9fe7d07019609ccf797a2af28351736e9d012c8402a07120c4453a3b789a15f2ee1530dc49eee8f7eb9379331a8dd4b3766042b9e502f74a68e7f662291 + languageName: node + linkType: hard + "before-after-hook@npm:^2.2.0": version: 2.2.3 resolution: "before-after-hook@npm:2.2.3" @@ -18924,6 +18995,13 @@ __metadata: languageName: node linkType: hard +"core-util-is@npm:1.0.2": + version: 1.0.2 + resolution: "core-util-is@npm:1.0.2" + checksum: 7a4c925b497a2c91421e25bf76d6d8190f0b2359a9200dbeed136e63b2931d6294d3b1893eda378883ed363cd950f44a12a401384c609839ea616befb7927dab + languageName: node + linkType: hard + "core-util-is@npm:~1.0.0": version: 1.0.3 resolution: 
"core-util-is@npm:1.0.3" @@ -19846,6 +19924,15 @@ __metadata: languageName: node linkType: hard +"dashdash@npm:^1.12.0": + version: 1.14.1 + resolution: "dashdash@npm:1.14.1" + dependencies: + assert-plus: ^1.0.0 + checksum: 3634c249570f7f34e3d34f866c93f866c5b417f0dd616275decae08147dcdf8fccfaa5947380ccfb0473998ea3a8057c0b4cd90c875740ee685d0624b2983598 + languageName: node + linkType: hard + "data-uri-to-buffer@npm:^4.0.0": version: 4.0.1 resolution: "data-uri-to-buffer@npm:4.0.1" @@ -20851,6 +20938,16 @@ __metadata: languageName: node linkType: hard +"ecc-jsbn@npm:~0.1.1": + version: 0.1.2 + resolution: "ecc-jsbn@npm:0.1.2" + dependencies: + jsbn: ~0.1.0 + safer-buffer: ^2.1.0 + checksum: 22fef4b6203e5f31d425f5b711eb389e4c6c2723402e389af394f8411b76a488fa414d309d866e2b577ce3e8462d344205545c88a8143cc21752a5172818888a + languageName: node + linkType: hard + "ecdsa-sig-formatter@npm:1.0.11, ecdsa-sig-formatter@npm:^1.0.11": version: 1.0.11 resolution: "ecdsa-sig-formatter@npm:1.0.11" @@ -21347,6 +21444,13 @@ __metadata: languageName: node linkType: hard +"es6-promise@npm:4.2.6": + version: 4.2.6 + resolution: "es6-promise@npm:4.2.6" + checksum: 51eb72d480d10db55fb0d17e2adc1a09dac0a1afd18f9c37bd697be72066d1b3913f8fd5e32ed40d0cb47dcae6a9012bfc45d48d9f888b03fc6623adad218453 + languageName: node + linkType: hard + "es6-symbol@npm:^3.1.1, es6-symbol@npm:^3.1.3": version: 3.1.3 resolution: "es6-symbol@npm:3.1.3" @@ -22882,6 +22986,32 @@ __metadata: languageName: node linkType: hard +"extsprintf@npm:1.3.0": + version: 1.3.0 + resolution: "extsprintf@npm:1.3.0" + checksum: cee7a4a1e34cffeeec18559109de92c27517e5641991ec6bab849aa64e3081022903dd53084f2080d0d2530803aa5ee84f1e9de642c365452f9e67be8f958ce2 + languageName: node + linkType: hard + +"extsprintf@npm:^1.2.0": + version: 1.4.1 + resolution: "extsprintf@npm:1.4.1" + checksum: a2f29b241914a8d2bad64363de684821b6b1609d06ae68d5b539e4de6b28659715b5bea94a7265201603713b7027d35399d10b0548f09071c5513e65e8323d33 + languageName: node 
+ linkType: hard + +"faiss-node@npm:^0.5.1": + version: 0.5.1 + resolution: "faiss-node@npm:0.5.1" + dependencies: + bindings: ^1.5.0 + node-addon-api: ^6.0.0 + node-gyp: latest + prebuild-install: ^7.1.1 + checksum: 9c8ba45c004151be6e94460a30b46fdd854de5f067fd18757f388e103276bb4d479db66cd0475961c447c40c11df629612144d31af932984d4b5ca5c5276f508 + languageName: node + linkType: hard + "fast-deep-equal@npm:3.1.3, fast-deep-equal@npm:^3.1.1, fast-deep-equal@npm:^3.1.3": version: 3.1.3 resolution: "fast-deep-equal@npm:3.1.3" @@ -24041,6 +24171,15 @@ __metadata: languageName: node linkType: hard +"getpass@npm:^0.1.1": + version: 0.1.7 + resolution: "getpass@npm:0.1.7" + dependencies: + assert-plus: ^1.0.0 + checksum: ab18d55661db264e3eac6012c2d3daeafaab7a501c035ae0ccb193c3c23e9849c6e29b6ac762b9c2adae460266f925d55a3a2a3a3c8b94be2f222df94d70c046 + languageName: node + linkType: hard + "git-up@npm:^7.0.0": version: 7.0.0 resolution: "git-up@npm:7.0.0" @@ -25234,6 +25373,17 @@ __metadata: languageName: node linkType: hard +"http-signature@npm:1.3.1": + version: 1.3.1 + resolution: "http-signature@npm:1.3.1" + dependencies: + assert-plus: ^1.0.0 + jsprim: ^1.2.2 + sshpk: ^1.14.1 + checksum: e3f5a55359b4badcc089d381f4da5e6064f6e5358dab423826761c62937724c07b580504d604d4408edecd565551259a4a80993e6020e5463deafef5f768ee8e + languageName: node + linkType: hard + "http2-wrapper@npm:^2.1.10": version: 2.2.0 resolution: "http2-wrapper@npm:2.2.0" @@ -26539,7 +26689,7 @@ __metadata: languageName: node linkType: hard -"isomorphic-fetch@npm:^3.0.0": +"isomorphic-fetch@npm:3.0.0, isomorphic-fetch@npm:^3.0.0": version: 3.0.0 resolution: "isomorphic-fetch@npm:3.0.0" dependencies: @@ -27645,6 +27795,13 @@ __metadata: languageName: node linkType: hard +"jsbn@npm:~0.1.0": + version: 0.1.1 + resolution: "jsbn@npm:0.1.1" + checksum: e5ff29c1b8d965017ef3f9c219dacd6e40ad355c664e277d31246c90545a02e6047018c16c60a00f36d561b3647215c41894f5d869ada6908a2e0ce4200c88f2 + languageName: node + linkType: hard 
+ "jsdom@npm:^22.1.0": version: 22.1.0 resolution: "jsdom@npm:22.1.0" @@ -27822,6 +27979,13 @@ __metadata: languageName: node linkType: hard +"json-schema@npm:0.4.0": + version: 0.4.0 + resolution: "json-schema@npm:0.4.0" + checksum: 66389434c3469e698da0df2e7ac5a3281bcff75e797a5c127db7c5b56270e01ae13d9afa3c03344f76e32e81678337a8c912bdbb75101c62e487dc3778461d72 + languageName: node + linkType: hard + "json-stable-stringify-without-jsonify@npm:^1.0.1": version: 1.0.1 resolution: "json-stable-stringify-without-jsonify@npm:1.0.1" @@ -27887,6 +28051,18 @@ __metadata: languageName: node linkType: hard +"jsonwebtoken@npm:9.0.0": + version: 9.0.0 + resolution: "jsonwebtoken@npm:9.0.0" + dependencies: + jws: ^3.2.2 + lodash: ^4.17.21 + ms: ^2.1.1 + semver: ^7.3.8 + checksum: b9181cecf9df99f1dc0253f91ba000a1aa4d91f5816d1608c0dba61a5623726a0bfe200b51df25de18c1a6000825d231ad7ce2788aa54fd48dcb760ad9eb9514 + languageName: node + linkType: hard + "jsonwebtoken@npm:^9.0.0": version: 9.0.1 resolution: "jsonwebtoken@npm:9.0.1" @@ -27917,6 +28093,25 @@ __metadata: languageName: node linkType: hard +"jsprim@npm:^1.2.2": + version: 1.4.2 + resolution: "jsprim@npm:1.4.2" + dependencies: + assert-plus: 1.0.0 + extsprintf: 1.3.0 + json-schema: 0.4.0 + verror: 1.10.0 + checksum: 2ad1b9fdcccae8b3d580fa6ced25de930eaa1ad154db21bbf8478a4d30bbbec7925b5f5ff29b933fba9412b16a17bd484a8da4fdb3663b5e27af95dd693bab2a + languageName: node + linkType: hard + +"jssha@npm:2.4.1": + version: 2.4.1 + resolution: "jssha@npm:2.4.1" + checksum: c9e4a41922b423c211d0f05f80cd3098cc69f4a28fa0879ecfc71ed99a46b1a5bf07c61d5dd27dd3ed66ccf61f2694da8f8ff6ceb1006f6387dfd2542de42d57 + languageName: node + linkType: hard + "jsx-ast-utils@npm:^2.4.1 || ^3.0.0, jsx-ast-utils@npm:^3.3.3, jsx-ast-utils@npm:^3.3.5": version: 3.3.5 resolution: "jsx-ast-utils@npm:3.3.5" @@ -30767,6 +30962,47 @@ __metadata: languageName: node linkType: hard +"oci-common@npm:2.102.2, oci-common@npm:^2.102.2": + version: 2.102.2 + resolution: 
"oci-common@npm:2.102.2" + dependencies: + "@types/isomorphic-fetch": 0.0.35 + "@types/jsonwebtoken": 9.0.0 + "@types/jssha": 2.0.0 + "@types/opossum": 4.1.1 + "@types/sshpk": 1.10.3 + es6-promise: 4.2.6 + http-signature: 1.3.1 + isomorphic-fetch: 3.0.0 + jsonwebtoken: 9.0.0 + jssha: 2.4.1 + opossum: 5.0.1 + sshpk: 1.16.1 + uuid: 3.3.3 + checksum: 4a96e1353727578ab90381f03a256a848bdd8f0789babb4866999f5da62e348ff475ce8bc57fcf0be2405b26efcf3a7c0821e06fd91455266f14a3adedc71c21 + languageName: node + linkType: hard + +"oci-generativeaiinference@npm:^2.102.2": + version: 2.102.2 + resolution: "oci-generativeaiinference@npm:2.102.2" + dependencies: + oci-common: 2.102.2 + oci-workrequests: 2.102.2 + checksum: df8a86aeb33fcb7bfb6f360eb5ebd66c6060b00d2a2d8fac5b15b333a1dfa3a81a9acca1ef088869fe7a650d72b073ea579e2572acf7c463c4d1ae2d1283a810 + languageName: node + linkType: hard + +"oci-workrequests@npm:2.102.2": + version: 2.102.2 + resolution: "oci-workrequests@npm:2.102.2" + dependencies: + oci-common: 2.102.2 + oci-workrequests: 2.102.2 + checksum: 6cdb96157374c381adc2232f6fed39bed7835431af11ad2a0324ba92fbaf812d96939a74981221cc07e0af7a48c9d0abff7606475e2ea8e67b23a873b4e41fac + languageName: node + linkType: hard + "octokit@npm:^4.0.2": version: 4.0.2 resolution: "octokit@npm:4.0.2" @@ -31040,6 +31276,13 @@ __metadata: languageName: node linkType: hard +"opossum@npm:5.0.1": + version: 5.0.1 + resolution: "opossum@npm:5.0.1" + checksum: e86892a5ec67d3923139bec11b834912481ae8d3f91b82e8facc0055574c2c4d0dd4a6bb534daf16e538f8aafed3c2d850ec4a989ebb74ca215a2e376ba6b828 + languageName: node + linkType: hard + "option@npm:~0.2.1": version: 0.2.4 resolution: "option@npm:0.2.4" @@ -34647,7 +34890,7 @@ __metadata: languageName: node linkType: hard -"safer-buffer@npm:>= 2.1.2 < 3, safer-buffer@npm:>= 2.1.2 < 3.0.0": +"safer-buffer@npm:>= 2.1.2 < 3, safer-buffer@npm:>= 2.1.2 < 3.0.0, safer-buffer@npm:^2.0.2, safer-buffer@npm:^2.1.0, safer-buffer@npm:~2.1.0": version: 2.1.2 resolution: 
"safer-buffer@npm:2.1.2" checksum: cab8f25ae6f1434abee8d80023d7e72b598cf1327164ddab31003c51215526801e40b66c5e65d658a0af1e9d6478cadcb4c745f4bd6751f97d8644786c0978b0 @@ -35630,6 +35873,48 @@ __metadata: languageName: node linkType: hard +"sshpk@npm:1.16.1": + version: 1.16.1 + resolution: "sshpk@npm:1.16.1" + dependencies: + asn1: ~0.2.3 + assert-plus: ^1.0.0 + bcrypt-pbkdf: ^1.0.0 + dashdash: ^1.12.0 + ecc-jsbn: ~0.1.1 + getpass: ^0.1.1 + jsbn: ~0.1.0 + safer-buffer: ^2.0.2 + tweetnacl: ~0.14.0 + bin: + sshpk-conv: bin/sshpk-conv + sshpk-sign: bin/sshpk-sign + sshpk-verify: bin/sshpk-verify + checksum: 5e76afd1cedc780256f688b7c09327a8a650902d18e284dfeac97489a735299b03c3e72c6e8d22af03dbbe4d6f123fdfd5f3c4ed6bedbec72b9529a55051b857 + languageName: node + linkType: hard + +"sshpk@npm:^1.14.1": + version: 1.18.0 + resolution: "sshpk@npm:1.18.0" + dependencies: + asn1: ~0.2.3 + assert-plus: ^1.0.0 + bcrypt-pbkdf: ^1.0.0 + dashdash: ^1.12.0 + ecc-jsbn: ~0.1.1 + getpass: ^0.1.1 + jsbn: ~0.1.0 + safer-buffer: ^2.0.2 + tweetnacl: ~0.14.0 + bin: + sshpk-conv: bin/sshpk-conv + sshpk-sign: bin/sshpk-sign + sshpk-verify: bin/sshpk-verify + checksum: 01d43374eee3a7e37b3b82fdbecd5518cbb2e47ccbed27d2ae30f9753f22bd6ffad31225cb8ef013bc3fb7785e686cea619203ee1439a228f965558c367c3cfa + languageName: node + linkType: hard + "ssri@npm:^10.0.0": version: 10.0.5 resolution: "ssri@npm:10.0.5" @@ -37065,6 +37350,13 @@ __metadata: languageName: node linkType: hard +"tweetnacl@npm:^0.14.3, tweetnacl@npm:~0.14.0": + version: 0.14.5 + resolution: "tweetnacl@npm:0.14.5" + checksum: 6061daba1724f59473d99a7bb82e13f211cdf6e31315510ae9656fefd4779851cb927adad90f3b488c8ed77c106adc0421ea8055f6f976ff21b27c5c4e918487 + languageName: node + linkType: hard + "type-check@npm:^0.4.0, type-check@npm:~0.4.0": version: 0.4.0 resolution: "type-check@npm:0.4.0" @@ -38257,6 +38549,15 @@ __metadata: languageName: node linkType: hard +"uuid@npm:3.3.3": + version: 3.3.3 + resolution: "uuid@npm:3.3.3" + bin: + uuid: 
./bin/uuid + checksum: 21133d0e8a85e607f59a66913bf4f1dd79bdc4a80979a872b913f7ec75f530255edfe8bc6b69ce32017b8367f0d60a8b24ccd2af99156dc1ef2b8b9fe0ec8065 + languageName: node + linkType: hard + "uuid@npm:^10.0.0": version: 10.0.0 resolution: "uuid@npm:10.0.0" @@ -38350,6 +38651,17 @@ __metadata: languageName: node linkType: hard +"verror@npm:1.10.0": + version: 1.10.0 + resolution: "verror@npm:1.10.0" + dependencies: + assert-plus: ^1.0.0 + core-util-is: 1.0.2 + extsprintf: ^1.2.0 + checksum: c431df0bedf2088b227a4e051e0ff4ca54df2c114096b0c01e1cbaadb021c30a04d7dd5b41ab277bcd51246ca135bf931d4c4c796ecae7a4fef6d744ecef36ea + languageName: node + linkType: hard + "vfile-location@npm:^3.0.0, vfile-location@npm:^3.2.0": version: 3.2.0 resolution: "vfile-location@npm:3.2.0"