Using VertexAI with prompt caching? #568
Unanswered · ShaharZivanOnvego asked this question in Q&A
Hello,
I have a fully functional project that uses Anthropic and relies on prompt caching to improve performance and reduce costs.
To further increase speed, I've decided to try using VertexAI.
This is how I create my SystemMessage object:
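The original snippet isn't shown above; as a hedged sketch of the pattern being described, Anthropic's documented prompt-caching format attaches a `cache_control` marker to a content block inside a list. Plain dicts stand in for the `SystemMessage` constructor so the structure is visible on its own; the text is a placeholder, not the author's actual prompt.

```python
# Assumed reconstruction, not the author's exact code.
# Anthropic's prompt caching expects system content as a list of content
# blocks; a block carrying "cache_control" marks the cache breakpoint:
system_content = [
    {
        "type": "text",
        "text": "<long, static system instructions go here>",
        "cache_control": {"type": "ephemeral"},
    }
]

# With LangChain this list would typically be passed as
# SystemMessage(content=system_content).
print(system_content[0]["cache_control"]["type"])
```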
As mentioned, this works perfectly with ChatAnthropic, and initializing the message with a dict is in fact the only way I managed to get the message to actually be cached.
However, when I try to use the exact same code with ChatAnthropicVertex, I get the following error:
I verified that if I initialize the SystemMessage with content=some_string it works, so the dict is definitely the issue.
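For contrast, a sketch of the two initializations being compared (hypothetical names; plain dicts stand in for the `SystemMessage` constructor). The string form is the one reported to work with ChatAnthropicVertex; the block-list form with `cache_control` is the one that raises the error:

```python
# Hypothetical contrast, not the author's exact code.
# String content: reportedly accepted by ChatAnthropicVertex.
works_on_vertex = {"role": "system", "content": "static instructions"}

# Block-list content with cache_control: reportedly rejected.
fails_on_vertex = {
    "role": "system",
    "content": [
        {
            "type": "text",
            "text": "static instructions",
            "cache_control": {"type": "ephemeral"},
        }
    ],
}

print(type(works_on_vertex["content"]).__name__)
print(type(fails_on_vertex["content"]).__name__)
```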
This is the code snippet that triggers the error:
I tried commenting out this part, but then I get:
So the restriction is clearly enforced on the Google side as well.
If passing the parameter this way is out of the question, how can I use prompt caching on Vertex? Is it even possible?