baalimago | 9 days ago
Jokes aside, it's very promising. For sure a lucrative market down the line, but definitely not for an 8B model. I think the parameter count for lower-level intellect is somewhere around 80B (but what do I know). Best of luck!
otabdeveloper4|9 days ago
You don't actually need "frontier models" for Real Work (c).
(Summarization, classification and the rest of the usual NLP suspects.)
SkyPuncher|9 days ago
Like, give me semantic search that can detect the difference between SSL and TLS without needing to put a full LLM in the loop.
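The usual way to get this without a full LLM in the loop is a small embedding model plus vector similarity: encode documents and the query with the same encoder, then rank by cosine similarity. A minimal sketch of the retrieval side; `embed()` is not shown because it's just whatever encoder you pick, so the toy vectors below are hand-written stand-ins for its output.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(query_vec, doc_vecs):
    # doc_vecs: {doc_id: vector}. Returns the id of the closest document.
    return max(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]))

# Toy stand-ins for embeddings of an SSL page and a TLS page; in practice
# these come from a small sentence-embedding model, not hand-written vectors.
docs = {"ssl_overview": [0.9, 0.1], "tls_overview": [0.1, 0.9]}
query = [0.2, 0.95]  # hypothetical embedding of a TLS-flavored query
print(nearest(query, docs))  # → tls_overview
```

Whether the encoder actually separates SSL from TLS is entirely down to the embedding model; the retrieval machinery itself is this simple.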
PlatoIsADisease|9 days ago
If we're going for accuracy, the question should be asked multiple times across multiple models to see whether they agree.
But I do think that once you hit 80B, it's hard to see the difference from SOTA.
That said, GPT-4.5 was the GOAT. I can't imagine how expensive that one was to run.
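The ask-several-models-and-check-agreement idea above reduces to a majority vote plus an agreement ratio. A minimal sketch; `ask(model, question)` is a hypothetical caller-supplied function standing in for whatever API each model sits behind.

```python
from collections import Counter

def majority_answer(question, models, ask):
    # ask(model, question) -> answer string (hypothetical, supplied by caller).
    answers = [ask(m, question) for m in models]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / len(answers)  # winning answer plus agreement ratio

# Stubbed models: two agree, one dissents.
canned = {"m1": "TLS", "m2": "TLS", "m3": "SSL"}
ans, agreement = majority_answer("SSL or TLS?", ["m1", "m2", "m3"],
                                 lambda m, q: canned[m])
print(ans, round(agreement, 2))  # → TLS 0.67
```

A low agreement ratio is the signal to distrust the answer (or escalate to a bigger model) rather than a reason to pick a winner anyway.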
Derbasti|9 days ago
Snarky, but true. It is truly astounding, and feels categorically different. But it's also perfectly useless at the moment. A digital fidget spinner.
anthonypasq|9 days ago
Do you have the foresight of a nematode?
edot|9 days ago