top | item 40923097

veb | 1 year ago

From what I understand, it's from people using these models in their workflows. Say you're using Claude but keep hitting the rate limits: you have to wait once Claude says "you have 10 messages left until 9pm", so when you hit that (or just before), you switch to ChatGPT manually.

With the router thingy, it keeps a record, so you know where you stand on every query, and it can switch to another model automatically instead of interrupting your workflow.

I may be explaining this very badly, but I think that's one use case for how these LLM routers help.
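A minimal sketch of the quota-tracking idea described above: the router counts requests per model in the current window and switches to the next model before a limit is hit. The model names and budget numbers are made up for illustration; a real router would also reset budgets when the provider's window rolls over.

```python
class QuotaRouter:
    """Toy router that tracks remaining requests per model (hypothetical)."""

    def __init__(self, budgets):
        # budgets: model name -> requests left in the current rate-limit window
        self.budgets = dict(budgets)

    def pick(self):
        """Return the first model with quota remaining, or None if all are spent."""
        for model, left in self.budgets.items():
            if left > 0:
                return model
        return None

    def record(self, model):
        # Decrement the chosen model's budget after each query
        self.budgets[model] -= 1


# Claude has 2 messages left, ChatGPT has 5; the router switches automatically.
router = QuotaRouter({"claude": 2, "chatgpt": 5})
used = []
for _ in range(4):
    model = router.pick()
    router.record(model)
    used.append(model)
# used == ["claude", "claude", "chatgpt", "chatgpt"]
```

The point is just that the switch happens inside the loop instead of the user noticing a "you're out of messages" banner and changing tabs by hand.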

discuss

order

Kiro | 1 year ago

I don't think that's a use case since you don't get rate limited when using the API.

Onawa | 1 year ago

We get rate limited when using Azure's OpenAI API. As a gov contractor working with AI, I have limited means for getting access to frontier LLMs. So routing tools that can fail over to another model can be useful.
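That failover pattern can be sketched in a few lines. The exception class and the model callables here are hypothetical stand-ins, not any real SDK's API; in practice you'd catch the provider's own 429/throttling error.

```python
class RateLimitError(Exception):
    """Stand-in for a provider SDK's 429 / throttling error (hypothetical)."""


def route(prompt, providers):
    """Try each (name, callable) provider in order; fail over on rate limits."""
    last_err = None
    for name, call in providers:
        try:
            return name, call(prompt)
        except RateLimitError as err:
            last_err = err  # this provider is throttled; try the next one
    raise last_err  # every provider was rate limited


# Example: the primary endpoint is throttled, the fallback answers.
def primary_model(prompt):
    raise RateLimitError("429 Too Many Requests")


def fallback_model(prompt):
    return "answer: " + prompt


used, answer = route("hi", [("primary", primary_model),
                            ("fallback", fallback_model)])
# used == "fallback"
```

Only rate-limit errors trigger failover here; other exceptions propagate, which is usually what you want so real bugs aren't silently retried on another model.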

kordlessagain | 1 year ago

Anthropic Build Tier 4: 4,000 RPM (requests per minute), 400,000 TPM (tokens per minute), 50,000,000 TPD (tokens per day) for Claude 3.5 Sonnet

PiRho3141 | 1 year ago

This is for applications that use LLMs or ChatGPT via API.