top | item 38522636

Llamafile – The easiest way to run LLMs locally on your Mac

27 points | paolop | 2 years ago | ppaolo.substack.com

17 comments


wokwokwok|2 years ago

Why?

It's unsafe and it takes all the choice and control away from you.

You should, instead:

1) Build a local copy of llama.cpp (literally clone https://github.com/ggerganov/llama.cpp and run 'make').

2) Download the model version you actually want from Hugging Face (for example, from https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGU..., which clearly indicates the required RAM for each variant)

3) Run the model yourself.
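The three steps above can be sketched as a shell session. The model filename below is an assumed example of one of TheBloke's GGUF quantizations; pick whichever variant fits your RAM, and note that llama.cpp's binary names have changed across versions:

```shell
# 1) Clone and build llama.cpp from source
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# 2) Download a GGUF model from Hugging Face
#    (example filename is an assumption; the repo page lists
#    RAM requirements per quantization)
curl -L -o mistral-7b-instruct-v0.1.Q4_K_M.gguf \
  "https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf"

# 3) Run the model yourself
./main -m mistral-7b-instruct-v0.1.Q4_K_M.gguf -p "Hello" -n 64
```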

I'll say this explicitly: these llamafile things are stupid.

You should not download arbitrary user uploaded binary executables and run them on your local laptop.

Hugging Face may do its best to prevent people from taking advantage of this (heck, they literally invented safetensors), but long story short: we can't have nice things because people suck.

If you start downloading random executables from the internet and running them, you will regret it.

Just spend the extra 5 minutes to build llama.cpp yourself. It's very, very easy to do and many guides already exist for doing exactly that.

superkuh|2 years ago

It only takes away choice if you use the demo files with the models baked in. There are versions of this under Releases -> Assets that are just the OS-portable llama.cpp binaries, which you pass a model file path to as normal.
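A minimal sketch of that usage, assuming you have downloaded a bare server binary from the release assets and a GGUF model separately (both filenames here are assumptions, not the exact asset names):

```shell
# mark the release-asset binary (no model baked in) as executable
chmod +x ./llamafile-server

# pass your own model file, exactly as you would with llama.cpp's server
./llamafile-server -m ./mistral-7b-instruct-v0.1.Q4_K_M.gguf
# then open the local web UI it serves (http://localhost:8080 by default)
```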

Compiling llama.cpp is relatively easy. Compiling llama.cpp for GPU support is a bit harder. I think it's nice that these OS-portable binaries of llama.cpp applications like main, server, and llava exist. Too bad there are no OpenCL ones. The only problem was baking in the models. Downloading applications off the internet is not that weird. After all, it's the recommended way to install Rust, etc.
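For context, the GPU build is a single extra make variable. The flag names below are the ones llama.cpp used around this time and are an assumption for your checkout; newer trees have renamed them:

```shell
# CPU-only build
make

# NVIDIA GPU build via cuBLAS (requires the CUDA toolkit installed)
make LLAMA_CUBLAS=1

# Apple Silicon GPU build via Metal
make LLAMA_METAL=1

# at runtime, offload layers to the GPU with -ngl
./main -m model.gguf -ngl 35 -p "Hello"
```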

senthil_rajasek|2 years ago

See also,

Llamafile is the new best way to run an LLM on your own computer (simonwillison.net)

https://news.ycombinator.com/item?id=38489533

And

https://news.ycombinator.com/item?id=38464057

Akashic101|2 years ago

I'd like to train one of the provided LLMs with my own data. I heard that RAG can be used for that. Does anyone have any pointers on how this could be achieved with llamafiles, all locally on my server?

paolop|2 years ago

What's your experience with open source LLMs like LLaVA 1.5 or Mistral 7B?

bugglebeetle|2 years ago

The fine-tunes of Mistral 7B, OpenHermes 2.5 and OpenOrca, are good. Zephyr is underwhelming.

aldarisbm|2 years ago

Why does this keep popping up on here?

gapchuboy|2 years ago

Because people on Hacker News are more interested in the prompt engineering. Convenience and satisfaction > 5 minutes of git pull and make.