DOC: <Issue related to /observability/how_to_guides/data_export>Issue Creating Bulk Export Destination for Google Cloud Storage (GCS) - "Access Denied" Despite Valid Credentials #807

Open
adamlundqvist opened this issue May 21, 2025 · 0 comments

Comments

@adamlundqvist
Dear Team,

I am encountering an issue while trying to set up a bulk data export destination to my Google Cloud Storage (GCS) bucket using your S3-compatible API endpoint: https://api.smith.langchain.com/api/v1/bulk-exports/destinations.

Problem:
When I attempt to create the destination via the API, I receive a 400 error with the detail: {'detail': 'Failed to validate S3 destination: Access denied.'}.

My GCP Setup (details generalized for privacy):

  1. GCP Project ID: [MY_GCP_PROJECT_ID] (e.g., my-company-development)
  2. GCS Bucket Name: [MY_BUCKET_NAME] (e.g., my-gcp-project-id-langsmith-traces)
    • Location: europe-west1
    • Uniform Bucket-Level Access: Enabled
    • Versioning: Enabled
  3. Service Account for Export: [MY_EXPORT_SA_EMAIL] (e.g., langsmith-export-sa@[MY_GCP_PROJECT_ID].iam.gserviceaccount.com)
  4. Service Account Permissions: This service account has been granted roles/storage.objectAdmin directly on the GCS bucket.
  5. HMAC Key: I have manually generated an active HMAC key for this service account directly from the GCS Console (Storage > Settings > Interoperability).
    • Access Key ID: [A_VALID_GCS_HMAC_ACCESS_KEY_ID]
    • (Secret Key is known and has been verified by me)

Key Diagnostic Finding:
Using the exact same manually generated HMAC Access Key ID and Secret Key, I can successfully upload files to and list objects in my GCS bucket using Google's gsutil command-line tool.

For example, this command works (with gsutil configured to use the aforementioned HMAC key):
gsutil cp test_file.txt gs://[MY_BUCKET_NAME]/test_file.txt

This successful test with gsutil strongly indicates that:

  • The HMAC key pair is valid and active.
  • My service account has sufficient permissions (storage.objectAdmin) on the bucket.
  • My GCS bucket is correctly configured for S3-compatible access using these credentials from a Google-native tool.

API Call Details (from my Python script that fails with your API):

  • Endpoint: POST https://api.smith.langchain.com/api/v1/bulk-exports/destinations
  • Headers (API Key and Tenant ID are correct and can be provided privately if needed):
    • Content-Type: application/json
    • X-API-Key: [MY_LANGSMITH_API_KEY_TYPE_BUT_NOT_VALUE] (e.g., "lsv2_pt_...")
    • X-Tenant-Id: [MY_LANGSMITH_WORKSPACE_ID_TYPE_BUT_NOT_VALUE]
  • Payload Sent (structure):
    {
      "destination_type": "s3",
      "display_name": "GCS Export for My Project", // Valid characters used
      "config": {
        "bucket_name": "[MY_BUCKET_NAME]",
        "prefix": "langsmith_exports",
        "endpoint_url": "https://storage.googleapis.com"
      },
      "credentials": {
        "access_key_id": "[A_VALID_GCS_HMAC_ACCESS_KEY_ID]",
        "secret_access_key": "[MY_CORRECT_MANUALLY_GENERATED_HMAC_SECRET_WAS_USED_HERE]"
      }
    }
  • Error Received from your API: 400 Bad Request - {'detail': 'Failed to validate S3 destination: Access denied.'}
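For completeness, the failing call can be reproduced with a short stdlib-only Python sketch; every key, ID, and bucket value below is a placeholder standing in for the redacted values above:

```python
import json
import urllib.request

# Placeholders for the redacted values in this report.
LANGSMITH_API_KEY = "lsv2_pt_EXAMPLE"
WORKSPACE_ID = "EXAMPLE-WORKSPACE-ID"

payload = {
    "destination_type": "s3",
    "display_name": "GCS Export for My Project",
    "config": {
        "bucket_name": "my-bucket",
        "prefix": "langsmith_exports",
        "endpoint_url": "https://storage.googleapis.com",
    },
    "credentials": {
        "access_key_id": "GOOG1EXAMPLEACCESSKEYID",
        "secret_access_key": "example-secret",
    },
}

req = urllib.request.Request(
    "https://api.smith.langchain.com/api/v1/bulk-exports/destinations",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-API-Key": LANGSMITH_API_KEY,
        "X-Tenant-Id": WORKSPACE_ID,
    },
    method="POST",
)

# Sending the request reproduces the error:
# urllib.request.urlopen(req)
# -> HTTP 400: {'detail': 'Failed to validate S3 destination: Access denied.'}
```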

Question:
Given that gsutil works correctly with these HMAC credentials and GCS bucket setup, could you please:

  1. Provide any specific S3 client configuration details or GCS interoperability settings that your backend uses or expects (e.g., addressing style, signing region for the global GCS endpoint, or handling of specific headers such as checksums) that might differ from a standard gsutil interaction. For context, our own awscli tests (which also failed, with SignatureDoesNotMatch) used path-style addressing, SigV4, and the us-east-1 signing region against the storage.googleapis.com endpoint, as is commonly advised for GCS S3 interop.
  2. Check your backend logs for more specific error details from GCS when your service attempts to validate my S3 destination? The generic "Access Denied" makes it difficult to pinpoint the exact cause from my end.

I suspect there might be a subtle incompatibility or configuration nuance in how LangSmith's backend S3 client interacts with the GCS S3 API for my setup.

Thank you for your time and assistance.

Sincerely,
Adam
