Dear Team,

I am encountering an issue while trying to set up a bulk data export destination to my Google Cloud Storage (GCS) bucket using your S3-compatible API endpoint: https://api.smith.langchain.com/api/v1/bulk-exports/destinations.
Problem:
When I attempt to create the destination via the API, I receive a 400 error with the detail: {'detail': 'Failed to validate S3 destination: Access denied.'}.
My GCP Setup (details generalized for privacy):
GCP Project ID: [MY_GCP_PROJECT_ID] (e.g., my-company-development)
Bucket Name: [MY_BUCKET_NAME] (e.g., my-gcp-project-id-langsmith-traces)
Bucket Region: europe-west1
Service Account for Export: [MY_EXPORT_SA_EMAIL] (e.g., langsmith-export-sa@[MY_GCP_PROJECT_ID].iam.gserviceaccount.com)
Service Account Permissions: this service account has been granted roles/storage.objectAdmin directly on the GCS bucket.
HMAC Key: I have manually generated an active HMAC key for this service account directly from the GCS Console (Storage > Settings > Interoperability).
Access Key ID: [A_VALID_GCS_HMAC_ACCESS_KEY_ID] (the secret key is known and has been verified by me)
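For anyone reproducing this setup, a quick way to sanity-check the service account's bucket-level permissions is test_iam_permissions. A minimal sketch with the google-cloud-storage Python client (assumes you are authenticated as the export service account; the bucket name is a placeholder):

```python
# Minimal sketch: sanity-check the export service account's bucket permissions.
# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a key for
# [MY_EXPORT_SA_EMAIL]; bucket name is a placeholder.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("[MY_BUCKET_NAME]")

granted = bucket.test_iam_permissions(
    ["storage.objects.create", "storage.objects.get", "storage.objects.list"]
)
print(granted)  # all three should come back for roles/storage.objectAdmin
```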
Key Diagnostic Finding: Using the exact same manually generated HMAC Access Key ID and Secret Key, I can successfully upload files to and list objects in my GCS bucket using Google's gsutil command-line tool (when gsutil is configured to use these HMAC credentials).
For example, this command works: gsutil cp test_file.txt gs://[MY_BUCKET_NAME]/test_file.txt (after gsutil is configured with the aforementioned HMAC key).
This successful test with gsutil strongly indicates that:
The HMAC key pair is valid and active.
My service account has sufficient permissions (storage.objectAdmin) on the bucket.
My GCS bucket is correctly configured for S3-compatible access using these credentials from a Google-native tool.
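To rule out a generic S3-client problem on my side, the same upload/list check can also be reproduced with a plain S3 client pointed at GCS's interop endpoint. A minimal boto3 sketch (credentials and bucket are placeholders; the settings match the common interop advice referenced in my questions below):

```python
# Sketch: replicate the gsutil check through GCS's S3-compatible XML API
# with boto3, using path-style addressing, SigV4, and the us-east-1
# signing region against the global endpoint.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.googleapis.com",
    aws_access_key_id="[A_VALID_GCS_HMAC_ACCESS_KEY_ID]",
    aws_secret_access_key="[THE_MATCHING_HMAC_SECRET]",
    region_name="us-east-1",  # GCS ignores this, but SigV4 needs a signing region
    config=Config(
        signature_version="s3v4",
        s3={"addressing_style": "path"},
    ),
)

# Same operations the gsutil test exercised: upload and list.
s3.upload_file("test_file.txt", "[MY_BUCKET_NAME]", "test_file.txt")
resp = s3.list_objects_v2(Bucket="[MY_BUCKET_NAME]")
print([obj["Key"] for obj in resp.get("Contents", [])])
```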
API Call Details (from my Python script that fails with your API):
POST https://api.smith.langchain.com/api/v1/bulk-exports/destinations
Content-Type: application/json
X-API-Key: [MY_LANGSMITH_API_KEY_TYPE_BUT_NOT_VALUE] (e.g., "lsv2_pt_...")
X-Tenant-Id: [MY_LANGSMITH_WORKSPACE_ID_TYPE_BUT_NOT_VALUE]
Error received from your API: 400 Bad Request - {'detail': 'Failed to validate S3 destination: Access denied.'}
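For reference, the failing call looks roughly like this (a sketch only: the payload field names follow my reading of the bulk export docs and may not match your schema exactly; placeholders as above):

```python
# Sketch of the failing destination-creation request. Field names in the
# JSON body are my best reading of the bulk export docs, not confirmed.
import requests

resp = requests.post(
    "https://api.smith.langchain.com/api/v1/bulk-exports/destinations",
    headers={
        "Content-Type": "application/json",
        "X-API-Key": "[MY_LANGSMITH_API_KEY]",
        "X-Tenant-Id": "[MY_LANGSMITH_WORKSPACE_ID]",
    },
    json={
        "destination_type": "s3",
        "display_name": "gcs-trace-export",
        "config": {
            "bucket_name": "[MY_BUCKET_NAME]",
            "prefix": "langsmith-exports",
            "endpoint_url": "https://storage.googleapis.com",
            "region": "europe-west1",
        },
        "credentials": {
            "access_key_id": "[A_VALID_GCS_HMAC_ACCESS_KEY_ID]",
            "secret_access_key": "[THE_MATCHING_HMAC_SECRET]",
        },
    },
)
print(resp.status_code, resp.json())
# -> 400 {'detail': 'Failed to validate S3 destination: Access denied.'}
```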
Question:
Given that gsutil works correctly with these HMAC credentials and GCS bucket setup, could you please:
1. Provide any specific S3 client configuration details or GCS interoperability settings that your backend uses or expects (e.g., addressing style, signing region for the global GCS endpoint, handling of specific headers such as checksums) that might differ from a standard gsutil interaction? A sketch of the kind of configuration I mean follows these questions. We have tried to ensure our awscli tests (which also failed, with SignatureDoesNotMatch) used path-style addressing, SigV4, and the us-east-1 signing region against the storage.googleapis.com endpoint, as is commonly advised for GCS S3 interop.
2. Check your backend logs for more specific error details from GCS when your service attempts to validate my S3 destination? The generic "Access denied" makes it difficult to pinpoint the exact cause from my end.
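To make question 1 concrete, this is the kind of client-side setting I am asking about, sketched with botocore (assuming botocore >= 1.36, whose default integrity checksums are known to trip up some S3-compatible stores; I do not know which S3 client your backend actually uses):

```python
# Hypothetical client configuration for GCS S3 interop, sketched with botocore.
# request_checksum_calculation / response_checksum_validation exist in
# botocore >= 1.36; "when_required" stops the SDK from attaching
# x-amz-checksum-* headers that some non-AWS stores reject.
from botocore.config import Config

gcs_interop_config = Config(
    signature_version="s3v4",
    region_name="us-east-1",          # signing region for the global endpoint
    s3={"addressing_style": "path"},
    request_checksum_calculation="when_required",
    response_checksum_validation="when_required",
)
```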
I suspect there might be a subtle incompatibility or configuration nuance in how LangSmith's backend S3 client interacts with the GCS S3 API for my setup.
Thank you for your time and assistance.
Sincerely,
Adam