top | item 38917815


shaial | 2 years ago

Congrats, this looks neat, and surely great to have more TS products in the ecosystem.

One plugin or feature that I'd like to see in an AI gateway: a *cache* per unique request. If I send the same request (system, messages, temperature, etc.), I'd have the option to pull it from the cache (if it was already populated) and skip the LLM generation entirely. This is much faster and cheaper, especially during development and testing.
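The exact-match version of this idea can be sketched in a few lines: hash the full request payload and use it as a cache key, so a byte-identical request skips the LLM call. This is a minimal illustration, not Portkey's implementation; `ChatRequest`, `cacheKey`, and `cachedComplete` are hypothetical names.

```typescript
import { createHash } from "crypto";

// Hypothetical request shape: everything that affects the completion
// (model, messages, temperature, ...) must be part of the cache key.
interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
  temperature?: number;
}

const cache = new Map<string, string>();

function cacheKey(req: ChatRequest): string {
  // Note: JSON.stringify is key-order-sensitive; a real implementation
  // should canonicalize the object before hashing.
  return createHash("sha256").update(JSON.stringify(req)).digest("hex");
}

// Wraps an arbitrary LLM call: return the cached completion on a hit,
// otherwise call the model and populate the cache.
async function cachedComplete(
  req: ChatRequest,
  callLLM: (r: ChatRequest) => Promise<string>
): Promise<{ text: string; cacheHit: boolean }> {
  const key = cacheKey(req);
  const hit = cache.get(key);
  if (hit !== undefined) return { text: hit, cacheHit: true };
  const text = await callLLM(req);
  cache.set(key, text);
  return { text, cacheHit: false };
}
```

One caveat with exact-match caching: requests with temperature > 0 are intentionally nondeterministic, so serving a cached completion changes observable behavior; that trade-off is usually fine for dev and test loops.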


retrovrv | 2 years ago

Thank you! We have built out the cache system -- we do both simple caching (matching request strings exactly) and semantic caching (returning a cache hit for semantically similar requests). More here - https://portkey.ai/docs/product/ai-gateway-streamline-llm-in...

The caching part isn't open source yet; it's part of our internal workers. It would be very cool to open source it!

shaial | 2 years ago

Awesome! We built the simple version in-house and hoped someone would productize it.