nborwankar | 1 year ago

I didn’t take a year fully off, but last year was slow in my consulting work, so I did a deep dive into the emerging LLM area.

One recommendation: get a beefy Mac laptop so you can run LLMs locally. I got an M2 with 96 GB of RAM. It makes a huge difference to your thinking about LLMs when you can run your own and integrate them into little tasks here and there for experimenting.

Otherwise I find most people think only of centralized, closed LLMs when they think about what’s possible. Severely limiting.

/r/LocalLlama on Reddit is a great community. Beyond that, check out llama.cpp and ggml and the whole ecosystem around them.
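
To give a sense of what “integrate them into little tasks” can look like, here’s a minimal sketch using llama-cpp-python (the Python bindings for llama.cpp). The model path, quantization, and the notes.txt file are just placeholders for whatever you have locally; this is one possible setup, not a prescription.

    # Minimal sketch, assuming `pip install llama-cpp-python` and a quantized
    # GGUF model already downloaded locally. The paths below are placeholders.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/mistral-7b-instruct-q4_k_m.gguf",  # placeholder GGUF file
        n_ctx=4096,       # context window
        n_gpu_layers=-1,  # offload all layers to the GPU (Metal on Apple Silicon)
    )

    # A "little task": summarize some local notes with the local model.
    text = open("notes.txt").read()
    out = llm(
        f"Summarize the following notes in three bullet points:\n\n{text}\n\nSummary:",
        max_tokens=256,
        temperature=0.2,
    )
    print(out["choices"][0]["text"])

Once something like this runs on your own machine, swapping models or wiring it into a shell script or editor hook is a few minutes of work, which is exactly the kind of experimenting I mean.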

Cheers and good luck. Ping me on DM if you want more pointers.
