zswaff | 2 years ago
I don’t know of many tools offering unlimited free calls to OpenAI, which means most cool LLM-enabled features are price-gated, premium, or otherwise limited. It's a bummer to restrict that value. Our bet is that LLM pricing will follow a Moore’s-Law-style curve, at least for a while, which means we can offer better and cheaper LLM-enabled features over time. So in short, we're subsidizing some of the costs now on a longer-term bet.
That said, we can be smart about how we do things technically. We embed, compress, and omit stuff as much as possible to minimize tokens.
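The "compress and omit" part can be sketched roughly like this: keep only as much context as fits a token budget, dropping the oldest material first. This is a minimal illustration, not our actual pipeline; token counts are approximated by whitespace splitting, where a real system would use the model's tokenizer (e.g. tiktoken).

```python
def approx_tokens(text: str) -> int:
    # Rough proxy for tokenizer output; real token counts differ.
    return len(text.split())

def fit_to_budget(chunks: list[str], budget: int) -> list[str]:
    """Keep the most recent chunks that fit within the token budget."""
    kept: list[str] = []
    used = 0
    for chunk in reversed(chunks):  # walk newest context first
        cost = approx_tokens(chunk)
        if used + cost > budget:
            break
        kept.append(chunk)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Embedding-based retrieval layers on top of this: instead of keeping the newest chunks, you keep the most relevant ones, but the budget check is the same.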
Also, some things we just can't handle at all (reprioritizing a backlog of 10k tasks wouldn't work for us right now), so we do hard-cap some actions.
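A hard cap like that is just a guard in front of the expensive call. A minimal sketch, where `MAX_TASKS_PER_CALL` and the function names are illustrative assumptions, not our real limits or API:

```python
MAX_TASKS_PER_CALL = 500  # assumed limit for illustration only

class ActionTooLargeError(Exception):
    """Raised when an action would exceed our LLM cost cap."""

def reprioritize(tasks: list[dict]) -> list[dict]:
    # Refuse up front rather than burn tokens on an oversized request.
    if len(tasks) > MAX_TASKS_PER_CALL:
        raise ActionTooLargeError(
            f"{len(tasks)} tasks exceeds the cap of {MAX_TASKS_PER_CALL}"
        )
    # ... here the tasks would be sent to the LLM for reordering ...
    return tasks
```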
ftkftk | 2 years ago
zswaff | 2 years ago
- Current plan is to re-embed everything, but I'm very open to better ideas there haha. Is there a better way?
- I've heard some similar stuff but we haven't run into it yet. What are you working with?