top | item 39299309


gbickford | 2 years ago

Llama.cpp is an inference engine. The author of llama.cpp designed the GGUF format. Functionary is a model that does function calling. You can download the Functionary weights in GGUF format and then run them with llama.cpp on low-end machines, using CPU, GPU, or a mix of both.
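A rough sketch of what that looks like in practice (the model filename and layer count here are illustrative, not exact; llama.cpp's CLI binary has been named `main` in older builds and `llama-cli` in newer ones):

```shell
# Hypothetical quantized Functionary gguf filename -- pick whichever
# quantization level fits your machine's RAM.
MODEL=functionary-small.Q4_K_M.gguf

# -ngl (--n-gpu-layers) controls the CPU/GPU split: 0 keeps everything
# on the CPU, higher values offload more transformer layers to the GPU.
./llama-cli -m "$MODEL" -ngl 20 -p "your prompt here"
```

The `-ngl` flag is what makes the "mix of both" possible: you offload as many layers as fit in VRAM and the remainder runs on the CPU.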


thund | 2 years ago

ok so the answer is no, thanks

behnamoh | 2 years ago

The GP educated you and you still said the answer was "no"? It's clearly "yes".