jdoss | 1 year ago

I have been using Ollama to run the Llama3 model. I chat with it from Obsidian using https://github.com/logancyang/obsidian-copilot, and I hook VSCode into it with https://github.com/ex3ndr/llama-coder
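If anyone is curious how these plugins hook in: as far as I understand, both just point at Ollama's local HTTP API, which listens on localhost:11434 by default. A rough sketch of the same call in Python (assuming you have already pulled the llama3 model):

    import json
    import urllib.request

    # Ollama's HTTP API listens on localhost:11434 by default; the Obsidian
    # and VSCode plugins talk to this same endpoint.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3",
            "prompt": "Summarize today's notes in one sentence.",
            "stream": False,  # one complete response instead of streamed chunks
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])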

Having the chats in Obsidian lets me save them and reference them later in my notes. When I first started using it in VSCode while programming in Python, it felt like a lot of noise: it kept generating useless recommendations. Recently, though, it has been super helpful.

I think my only gripe is that I sometimes forget to stop my ollama systemd unit and get noticeable video lag when playing games on my workstation. For my next video card upgrade, I plan to build a new home server that can fit my current NVIDIA RTX 3090 Ti and use that as a dedicated server for running ollama.
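In the meantime, a tiny wrapper makes forgetting less likely. A rough sketch, assuming the standard ollama.service that the Linux installer sets up (adjust the unit name if yours differs, and note sudo may prompt for a password):

    import subprocess

    def set_ollama(action: str) -> None:
        # Start or stop the stock ollama.service via systemctl.
        subprocess.run(["sudo", "systemctl", action, "ollama.service"], check=True)

    set_ollama("stop")   # free GPU VRAM before launching a game
    # ... gaming session ...
    set_ollama("start")  # bring the model server back up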
