Indeed, but prices have been falling dramatically over the last two years, and I think the latest, smallest GPT-4o mini is already in the "mostly don't care" ballpark.
I would be happy if we somehow got this down even more orders of magnitude, to the point where I can `npm install llm` and have it run alongside my normal code on a $5 VPS, without a GPU, while still handling a (reasonable) number of requests per minute. Yes, I know we are _very far_ from that still, but one can dream.
anonyfox|1 year ago