
ChatXAI returns malformed usage_metadata #8220


Open
5 tasks done
NicholasDullam opened this issue May 21, 2025 · 0 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@NicholasDullam

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

await new ChatXAI({ model: 'grok-3-mini', temperature: 0.3, reasoningEffort: 'high' }).invoke(...)

AIMessageChunk {
  ...
  "response_metadata": {
    "usage": {
      "prompt_tokens": 2645,
      "completion_tokens": 30,
      "total_tokens": 4378,
      "prompt_tokens_details": {
        "text_tokens": 2645,
        "audio_tokens": 0,
        "image_tokens": 0,
        "cached_tokens": 0
      },
      "completion_tokens_details": {
        "reasoning_tokens": 1703,
        "audio_tokens": 0,
        "accepted_prediction_tokens": 0,
        "rejected_prediction_tokens": 0
      }
    }
  },
  ...
  "usage_metadata": {
    "input_tokens": null,
    "output_tokens": null,
    "total_tokens": null,
    "input_token_details": {
      "audio": 0,
      "cache_read": 0
    },
    "output_token_details": {
      "audio": 0,
      "reasoning": 1703
    }
  }
}

Error Message and Stack Trace (if applicable)

No response

Description

When using invoke with grok-3-mini or grok-3-mini-fast under ChatXAI, the usage_metadata is not properly extracted from the model API response: input_tokens, output_tokens, and total_tokens all come back as null. The raw, correct usage counts are still visible under response_metadata.usage.

The usage_metadata should contain the correct token counts and should propagate correctly to the resulting LangSmith dashboards.
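As an interim workaround, the raw usage object under response_metadata.usage can be mapped into the usage_metadata shape by hand. The sketch below is a hypothetical helper (toUsageMetadata is not part of LangChain.js); the field names mirror the two objects shown in the report above:

```typescript
// Shape of the raw usage object xAI returns (as seen in response_metadata.usage).
interface XAIUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  prompt_tokens_details?: {
    text_tokens?: number;
    audio_tokens?: number;
    image_tokens?: number;
    cached_tokens?: number;
  };
  completion_tokens_details?: {
    reasoning_tokens?: number;
    audio_tokens?: number;
    accepted_prediction_tokens?: number;
    rejected_prediction_tokens?: number;
  };
}

// Shape LangChain.js expects in usage_metadata.
interface UsageMetadata {
  input_tokens: number;
  output_tokens: number;
  total_tokens: number;
  input_token_details: { audio: number; cache_read: number };
  output_token_details: { audio: number; reasoning: number };
}

// Hypothetical mapper: copies the raw counts into the usage_metadata layout.
function toUsageMetadata(usage: XAIUsage): UsageMetadata {
  return {
    input_tokens: usage.prompt_tokens,
    output_tokens: usage.completion_tokens,
    // Note: xAI's total_tokens also counts reasoning tokens, so it can
    // exceed input_tokens + output_tokens (2645 + 30 + 1703 = 4378 above).
    total_tokens: usage.total_tokens,
    input_token_details: {
      audio: usage.prompt_tokens_details?.audio_tokens ?? 0,
      cache_read: usage.prompt_tokens_details?.cached_tokens ?? 0,
    },
    output_token_details: {
      audio: usage.completion_tokens_details?.audio_tokens ?? 0,
      reasoning: usage.completion_tokens_details?.reasoning_tokens ?? 0,
    },
  };
}

// Applied to the usage object from the report above:
const metadata = toUsageMetadata({
  prompt_tokens: 2645,
  completion_tokens: 30,
  total_tokens: 4378,
  prompt_tokens_details: {
    text_tokens: 2645, audio_tokens: 0, image_tokens: 0, cached_tokens: 0,
  },
  completion_tokens_details: {
    reasoning_tokens: 1703, audio_tokens: 0,
    accepted_prediction_tokens: 0, rejected_prediction_tokens: 0,
  },
});
```

This only patches the message object locally; it does not fix the LangSmith propagation, which needs the upstream extraction in ChatXAI to be corrected.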

System Info

[email protected] | MIT | deps: 12 | versions: 321
Typescript bindings for langchain
https://github.com/langchain-ai/langchainjs/tree/main/langchain/

keywords: llm, ai, gpt3, chain, prompt, prompt engineering, chatgpt, machine learning, ml, openai, embeddings, vectorstores

dist
.tarball: https://registry.npmjs.org/langchain/-/langchain-0.3.26.tgz
.shasum: 5e1dec39172d1589bfee369a710b2e2459c3df8e
.integrity: sha512-W/9phB4wiAnj+PnpMWmv/ptIp7i5ygY2aK8yjKlxccHPbaNeMoy7njzFz8d0/xfcPyA3MvG4AuZnJ1j3/E2/Ig==
.unpackedSize: 2.9 MB

dependencies:
@langchain/openai: >=0.1.0 <0.6.0
@langchain/textsplitters: >=0.0.0 <0.2.0
js-tiktoken: ^1.0.12
js-yaml: ^4.1.0
jsonpointer: ^5.0.1
langsmith: ^0.3.29
openapi-types: ^12.1.3
p-retry: 4
uuid: ^10.0.0
yaml: ^2.2.1
zod-to-json-schema: ^3.22.3
zod: ^3.22.4

maintainers:

dist-tags:
latest: 0.3.26 tag-for-publishing-older-releases: 0.2.20
next: 0.3.2-rc.0

published 5 days ago by benjamincburns [email protected]

node Version: v18.20.8
pnpm version: v10.2.0

@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label May 21, 2025