DiabloD3 | 7 days ago
Problem is, a good LLM reproduces its training data as verbatim as the prompt and quant quality allow. Like, that's its entire purpose. It gives you more of what you already have.
Most of these models are trained on unvetted inputs. They will reproduce bad inputs, and do so convincingly. They do not comprehend anything you're saying to them. They are not reasoning machines, they are reproduction machines.
Just because I can get better quality running inference locally doesn't mean it stops being an LLM. I don't want a better LLM, I want a machine that can actually reason effectively.