item 42680081

llamaLord | 1 year ago

My experience observing commercial LLMs since the release of GPT-4 is actually the opposite of this.

Sure, they've gotten much cheaper on a per-token basis, but that cost reduction has come with a non-trivial accuracy/reliability cost.

The problem is, tokens that are 10x cheaper are still useless if what they say is straight up wrong.

maeil | 1 year ago

> Sure, they've gotten much cheaper on a per-token basis, but that cost reduction has come with a non-trivial accuracy/reliability cost.

This only holds for OpenAI.