
Local LLM Setup on Windows with Ollama and LM Studio (ThinkPad / RTX A3000 GPU)

4 points | appsoftware | 7 days ago | github.com

3 comments


appsoftware | 7 days ago

This is a walkthrough of my setup of local LLM capability on a Lenovo ThinkPad P1 Gen 4 (with an RTX A3000 6GB VRAM graphics card), using Ollama for CLI and VS Code Copilot chat access, and LM Studio as a GUI option.
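
For the impatient, the core Ollama flow once it's installed is just a couple of commands. The model name below is only an example, not necessarily what's in the walkthrough; any small quantized model (~3B) should fit comfortably in 6GB of VRAM:

    ollama pull llama3.2:3b   # download a small quantized model
    ollama run llama3.2:3b    # interactive chat in the terminal
    ollama list               # show which models are installed

VS Code Copilot chat can then be pointed at the local Ollama server, which listens on http://localhost:11434 by default.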

My Lenovo ThinkPad P1 Gen 4 is coming up on 4 years old. It is a powerful workstation with a good, but by no means state-of-the-art, GPU in the RTX A3000. My expectation is that many developers will have a PC capable of running local LLMs as I have set up here.

See the GitHub repository for the full walkthrough:

https://github.com/gbro3n/local-ai/blob/main/docs/local-llm-...

Ref: https://www.appsoftware.com/blog/local-llm-setup-on-windows-...

akssassin907 | 6 days ago

By Moore's Law, a 4-year-old machine should be at a quarter of what's available today; it should be struggling. Instead it's rocking. Either Moore's Law is stalling, or software efficiency is finally catching up. Either way, the "you need new hardware" argument is getting weaker every day. Long live tired silicon!

appsoftware | 6 days ago

OLLAMA_FLASH_ATTENTION is essential for getting reasonable performance. And yes, I've done well with that laptop; it's been a trusty steed :) A refurb, too.
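
For anyone else on Windows, it's just an environment variable that has to be set before the server starts. A sketch, assuming you launch the server yourself in PowerShell rather than via the tray app (the tray app would need the variable set user-wide before it starts):

    # current PowerShell session only
    $env:OLLAMA_FLASH_ATTENTION = "1"
    ollama serve

    # or persist it for future sessions, then restart Ollama
    setx OLLAMA_FLASH_ATTENTION 1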