gens | 1 year ago
Was playing with them some more yesterday. Found that 4-bit quantization ("q4") is much worse than q8 or fp16. Llama3.1 8B is OK; internlm2 7B is more precise. And they all hallucinate a lot.
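The quality gap tracks the bits spent per weight. A rough sketch of the memory tradeoff that makes q4 attractive despite the quality hit (figures are approximations, ignoring KV cache and file overhead; real GGUF q4 variants average closer to 4.5 bits per weight):

```python
# Back-of-the-envelope weight storage for an 8B-parameter model at
# different quantization levels. These are idealized numbers, not
# exact GGUF file sizes.

def weights_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

for name, bits in [("fp16", 16), ("q8", 8), ("q4", 4)]:
    print(f"{name}: ~{weights_gb(8e9, bits):.1f} GB")
# fp16: ~16.0 GB, q8: ~8.0 GB, q4: ~4.0 GB
```

So q4 fits an 8B model into consumer-GPU VRAM where fp16 would not, which is why it's the default in many of these apps even though precision suffers.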
Also found this page, which has some rankings: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_...
In my opinion they are not really useful. Good for translations, for summarizing texts, and for jogging your memory when you've forgotten something. But they lie, so for anything serious you have to do your own research. And they're no good at all for precise or obscure topics.
If someone wants to play, there's GPT4All, Msty, and LM Studio. You can give them some of your documents to process and use as "knowledge stacks". Msty has web search; GPT4All should get it at some point.
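Those "knowledge stack" features are retrieval-augmented generation: chunk your documents, find the chunks most relevant to the question, and prepend them to the prompt. A toy sketch of the idea (the real tools use embedding models for relevance, not this word-overlap score, and the function names here are made up for illustration):

```python
# Minimal retrieval-augmented-generation sketch: chunk documents,
# retrieve the best-matching chunk for a question, and build a
# grounded prompt. Toy word-overlap scoring stands in for embeddings.

def chunk(text, size=40):
    """Split a document into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(question, passage):
    """Toy relevance score: number of shared lowercase words."""
    return len(set(question.lower().split()) & set(passage.lower().split()))

def retrieve(question, docs, top_k=1):
    """Return the top_k most relevant chunks across all documents."""
    chunks = [c for d in docs for c in chunk(d)]
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:top_k]

def build_prompt(question, docs):
    """Ground the model by putting retrieved context before the question."""
    context = "\n".join(retrieve(question, docs))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

Grounding like this is also why these tools hallucinate less on your own documents than on open-ended questions: the answer is sitting in the prompt.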
Got more opinions, but this is long enough already.