top | item 36101153


colinnordin | 2 years ago

If you haven’t already: start storing question and answer pairs, and reuse the answer when the same question is asked multiple times.

You could also compute embeddings for the questions (they don’t have to be OpenAI embeddings), and reuse the answer if the question is sufficiently similar to a previously asked question.
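A minimal sketch of the exact-match half of this idea (the `AnswerCache` class and its normalization choices are assumptions for illustration, not any particular library's API):

```python
import hashlib

class AnswerCache:
    """Exact-match answer cache keyed on a normalized prompt (illustrative sketch)."""

    def __init__(self):
        self._store = {}

    def _key(self, question: str) -> str:
        # Collapse whitespace and lowercase so trivially different phrasings collide.
        normalized = " ".join(question.lower().split())
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def get(self, question: str):
        # Returns a cached answer, or None on a cache miss.
        return self._store.get(self._key(question))

    def put(self, question: str, answer: str) -> None:
        self._store[self._key(question)] = answer

cache = AnswerCache()
cache.put("When was Obama born?", "August 4, 1961")
```

A repeated question then skips the model call entirely; only a genuinely new prompt costs anything.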


serial_dev | 2 years ago

I'm not sure it's practical, or that it will result in any savings.

Wouldn't it be almost impossible to hit a duplicate when the users each form their own question?

Another issue I see is that these chat AIs usually have "history", so the question might be the same but the context different: the app might have received "when was he born", but in one context the user is talking about Obama and in another about Tom Brady.

If there are ways around these issues, I'd love to hear them, but it sounds like this will just add costs (cache hardware plus dedup logic) instead of saving money.

Silasdev | 2 years ago

> Wouldn't it be almost impossible to hit a duplicate when the users each form their own question?

The embeddings approach would increase the likelihood of finding the same question, even if it's phrased slightly differently.

rjtavares | 2 years ago

With embeddings you can compute distance. The questions don't have to be the same, they just have to be sufficiently close.

Regarding context, that should be a part of the input for the embeddings.
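A sketch of that distance-based lookup. The hand-made 3-d vectors in `EMBEDDINGS` stand in for a real embedding model's output, and the 0.95 cosine threshold is an assumed tuning parameter, not a recommendation:

```python
import numpy as np

# Toy 3-d vectors standing in for a real embedding model's output.
# In practice you would embed the conversation context together with the
# question, so "when was he born" keys differently per conversation.
EMBEDDINGS = {
    "when was obama born?": np.array([0.9, 0.1, 0.0]),
    "what year was obama born": np.array([0.85, 0.15, 0.05]),
    "when was tom brady born?": np.array([0.1, 0.9, 0.2]),
}

def embed(text: str) -> np.ndarray:
    v = EMBEDDINGS[text.lower()]
    return v / np.linalg.norm(v)  # unit-normalize so dot product == cosine similarity

class SemanticCache:
    """Reuse a stored answer when a new question embeds close to an old one."""

    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold
        self.entries = []  # list of (unit embedding, answer) pairs

    def store(self, question: str, answer: str) -> None:
        self.entries.append((embed(question), answer))

    def lookup(self, question: str):
        # Linear scan for the closest stored question; None if nothing is close enough.
        query = embed(question)
        best = max(self.entries, key=lambda e: float(np.dot(query, e[0])), default=None)
        if best is not None and float(np.dot(query, best[0])) >= self.threshold:
            return best[1]
        return None

cache = SemanticCache()
cache.store("When was Obama born?", "August 4, 1961")
```

At any real scale the linear scan would be replaced by an approximate-nearest-neighbor index; the thresholded cosine comparison is the essential part.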

dxhdr | 2 years ago

This removes half the magic of interacting with ChatGPT. Users will quickly realize they're interacting with a dumb database rather than an AI.

cloogshicer | 2 years ago

I don't see what the problem is if it's only on the exact same prompt.

I assume only a small percentage of users would put in the same prompt twice, and even then, why would they be upset at getting the same response?