wfn | 6 months ago
> but then the shell commands were actually running llama.cpp, a mistake probably no human would make.
But in the docs I see things like `cp llama.cpp/build/bin/llama-* llama.cpp`. Wouldn't this explain that? (Didn't look too deep)
danielhanchen | 6 months ago
Yes, it's probably the ordering of the docs that's the issue :) I.e. https://docs.unsloth.ai/basics/deepseek-v3.1#run-in-llama.cp... does:
```
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli llama-server
cp llama.cpp/build/bin/llama-* llama.cpp
```
but the Ollama section, which already calls the built binaries, is above it:
```
./llama.cpp/llama-gguf-split --merge \
    DeepSeek-V3.1-GGUF/DeepSeek-V3.1-UD-Q2_K_XL/DeepSeek-V3.1-UD-Q2_K_XL-00001-of-00006.gguf \
    merged_file.gguf
```
I'll edit the area to say you first have to install llama.cpp.
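Beyond reordering the docs, one way to make the merge step fail clearly instead of confusingly is a small guard before calling the binaries. This is just a sketch, not part of the official docs; it assumes the layout used above, where the built binaries are copied into `./llama.cpp/`, and `check_llama_tools` is a hypothetical helper name:

```shell
# Guard for the merge step: verify the llama.cpp binaries exist and are
# executable before calling them. A missing binary usually means the
# build/copy steps were skipped (or run in the wrong order).
check_llama_tools() {
    dir="$1"
    if [ -x "$dir/llama-gguf-split" ]; then
        echo "ok: llama-gguf-split found in $dir"
    else
        echo "missing: build llama.cpp first (see the cmake steps above)"
    fi
}
```

Run as `check_llama_tools ./llama.cpp` before the `llama-gguf-split --merge` command; a "missing" result points the user back at the build steps rather than letting the shell print a bare "No such file or directory".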