itsalotoffun | 6 months ago

When you discuss caching, are you talking about caching the LLM response on your side (what I presume) or actual prompt caching (using the provider cache[0])? Curious why you'd invalidate static content?

[0]: https://docs.anthropic.com/en/docs/build-with-claude/prompt-...

imsh4yy | 6 months ago

I think I need to make this a bit clearer. I was mostly referring to caching the results of tools (sub-agents) when they are pure functions. But that may be a bit too specific for the sake of this post.

i.e. you have a query that reads data that doesn't change often, so you can cache the result.
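A minimal sketch of what I mean (the tool name and data are made up for illustration): if a tool is a pure function over slowly-changing data, you can memoize it so repeated calls with the same argument skip the expensive lookup.

```python
import functools

# Hypothetical "pure" sub-agent tool: same input always yields the same
# output while the underlying data hasn't changed, so it's safe to cache.
@functools.lru_cache(maxsize=1024)
def lookup_product_price(product_id: str) -> float:
    # Stand-in for a slow database or API call; stubbed with static data here.
    prices = {"sku-1": 9.99, "sku-2": 19.99}
    return prices.get(product_id, 0.0)

lookup_product_price("sku-1")  # computed on the first call
lookup_product_price("sku-1")  # served from the cache on the second
```

When the underlying data does change, you'd invalidate with `lookup_product_price.cache_clear()` or use a TTL-based cache instead.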

adastra22 | 6 months ago

It seems very doubtful to me that every query would be literally identical (i.e. produce the same hash) if these are plain-text descriptions of the subtask.