This is a walkthrough of my setup of local LLM capability on a Lenovo ThinkPad P1 Gen 4 (with an RTX A3000 6GB VRAM graphics card), using Ollama for CLI and VS Code Copilot chat access, and LM Studio as a GUI option.
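As a minimal sketch of the Ollama CLI side of that setup: the model name below is an assumption, chosen as a small quantized model that should fit comfortably in 6GB of VRAM, not necessarily the one used in the walkthrough.

```shell
#!/bin/sh
# Hypothetical quick-start for Ollama on a 6GB VRAM GPU.
# MODEL is an assumption: a small quantized model (~2GB of weights)
# that leaves headroom for the KV cache in 6GB of VRAM.
MODEL="llama3.2:3b"

if command -v ollama >/dev/null 2>&1; then
  ollama pull "$MODEL"              # download the model weights
  ollama run "$MODEL" "Say hello"   # one-shot prompt from the CLI
else
  echo "ollama not found: install it first, then pull $MODEL"
fi
```

Once a model is pulled this way, it also becomes selectable from tools that talk to Ollama's local API, which is how the VS Code Copilot chat integration picks it up.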
By Moore's Law a 4-year-old machine should have a quarter of the transistor density of what's available today; it should be struggling. Instead it's rocking. Either Moore's Law is stalling, or software efficiency is finally catching up. Either way, the "you need new hardware" argument is getting weaker every day. Long live tired silicon!
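The back-of-envelope arithmetic behind that "a quarter" claim can be checked, assuming the classic two-year doubling period (the doubling cadence is the only assumption here):

```python
def moores_law_factor(years, doubling_period=2.0):
    """Relative transistor-count factor after `years`,
    assuming density doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# A 4-year-old machine at a 2-year doubling cadence:
print(moores_law_factor(4))  # → 4.0, i.e. today's parts have ~4x the density
```

So under that assumption a 4-year-old GPU sits at roughly a quarter of current density, which is the gap the comment is pointing at.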
appsoftware|7 days ago
My Lenovo ThinkPad P1 Gen 4 is coming up for 4 years old. It is a powerful workstation, and has a good, but by no means state-of-the-art, GPU in the RTX A3000. My expectation is that many developers will have a PC capable of running local LLMs as I have set up here.
See the GitHub repository for the full walkthrough:
https://github.com/gbro3n/local-ai/blob/main/docs/local-llm-...
Ref: https://www.appsoftware.com/blog/local-llm-setup-on-windows-...