
gens | 1 year ago

In my experience the hardware requirement is roughly the model file size plus a bit more. CPU works; a GPU is a lot faster but needs enough VRAM.
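That rule of thumb is easy to sanity-check with arithmetic: file size is roughly parameters times bits per weight, and memory needed is the file plus some runtime overhead. A rough sketch (the 20% overhead figure is an illustrative assumption, not a measured value):

```python
def model_file_size_gb(n_params_billion: float, bits_per_weight: int) -> float:
    """Approximate model file size: parameters * bits per weight."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

def memory_estimate_gb(file_size_gb: float, overhead: float = 0.2) -> float:
    """File size plus 'a bit more' for KV cache and runtime buffers (assumed 20%)."""
    return file_size_gb * (1 + overhead)

# An 8B model, as discussed in the comment, at common quantization levels:
for name, bits in [("q4", 4), ("q8", 8), ("fp16", 16)]:
    size = model_file_size_gb(8, bits)
    print(f"{name}: ~{size:.1f} GB file, ~{memory_estimate_gb(size):.1f} GB memory")
# q4: ~4.0 GB file, q8: ~8.0 GB, fp16: ~16.0 GB
```

This is why q4 is attractive despite the quality loss: it's the only variant of an 8B model that fits comfortably on an 8 GB GPU.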

Was playing with them some more yesterday. Found that 4-bit quantization ("q4") is much worse than q8 or fp16. Llama 3.1 8B is OK; InternLM2 7B is more precise. And they all hallucinate a lot.

Also found this page, that has some rankings: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_...

In my opinion they are not really useful. Good for translations, for summarizing some texts, and for asking about things you've half forgotten. But they lie, so for anything serious you have to do your own research. And they're no good at all for precise or obscure topics.

If someone wants to play, there's GPT4All, Msty, and LM Studio. You can give them some of your documents to process and use as "knowledge stacks". Msty has web search; GPT4All is supposed to get it at some point.
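Those "knowledge stacks" boil down to retrieval: chunk your documents, find the chunks most similar to the question, and paste them into the model's context. A toy sketch of the matching step using bag-of-words cosine similarity (real tools use learned embeddings; the sample chunks and function names here are made up for illustration):

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Crude bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = vectorize(query)
    return sorted(chunks, key=lambda c: cosine(q, vectorize(c)), reverse=True)[:k]

chunks = [
    "The warranty covers parts and labor for two years.",
    "The battery life lasts about ten hours of mixed use.",
    "The device charges over USB-C at up to 65 watts.",
]
print(top_chunks("how long does the battery last", chunks, k=1))
# prints the battery chunk first
```

The retrieved chunks then get prepended to the prompt, which is why these tools can answer questions about your files without any fine-tuning.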

Got more opinions, but this is long enough already.


accrual | 1 year ago

I agree on the translation part. Llama 3.1 8B even at 4bit does a great job translating JP to EN as far as I can tell, and is often better than dedicated translation models like Argos in my experience.

petre | 1 year ago

I had an underwhelming experience with Llama translation; it's not comparable to Claude or GPT-3.5+, which are very good. Kind of like Google Translate but worse. I was using them through Perplexity.