mateuszbuda|1 year ago

Anyone can share experience with https://ollama.com/ ?

bovem|1 year ago

I love it. It is easy to install and containerized.

Its API is great if you want to integrate it with your code editor or create your own applications.
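
For a rough idea of what integrating it looks like: the local REST API is plain HTTP, so a minimal Python sketch (the model name and prompt here are only placeholders) is something like:

    import requests

    # Ollama listens on localhost:11434 by default; /api/generate takes a
    # model name plus a prompt and returns the completion as JSON.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama2",  # any model you have pulled locally
            "prompt": "Explain what a context window is in one sentence.",
            "stream": False,    # ask for one JSON object instead of a stream
        },
        timeout=120,
    )
    print(resp.json()["response"])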

I have written a blog post [1] on the process of deploying it and integrating it with Neovim and VS Code.

I also created an application [2] for chatting with LLMs using a PDF document as context.
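
The core idea (a sketch, not necessarily how chat-with-doc is implemented) is just to extract the PDF text and pass it along as context in the prompt, e.g. with pypdf and the same local API:

    import requests
    from pypdf import PdfReader

    # Pull the raw text out of the PDF (file name is only an example).
    reader = PdfReader("paper.pdf")
    document = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Prepend the document as context and ask a question about it.
    prompt = (
        "Answer using only the document below.\n\n"
        f"{document}\n\n"
        "Question: What is the main conclusion?"
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama2", "prompt": prompt, "stream": False},
        timeout=300,
    )
    print(resp.json()["response"])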

Update: I would like to add that, because the API is simple and Ollama is now available on Windows, I no longer have to share my GPU between multiple VMs to interact with it.

[1] https://www.avni.sh/posts/homelab/self-hosting-ollama/
[2] https://github.com/bovem/chat-with-doc

notjulianjaynes|1 year ago

Stupid easy. I was speaking with an LLM after pasting two lines into the terminal.

cgopalan|1 year ago

I use it on my 2015 MacBook Pro. It's amazing how quickly you can get set up; kudos to the authors. It's a dog in terms of response time for questions, but that's expected with my machine configuration.

Also, they have Python and (less relevant to me) JavaScript libraries, so I assume you don't have to go through LangChain anymore.
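
For example, with the ollama Python package (pip install ollama) a chat call is only a few lines; the model name is just whatever you happen to have pulled locally:

    import ollama

    # One chat turn against a locally pulled model; no LangChain involved.
    response = ollama.chat(
        model="llama2",
        messages=[{"role": "user", "content": "Summarize what Ollama does in one sentence."}],
    )
    print(response["message"]["content"])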

nipponese|1 year ago

Installing additional LLMs is a single command. I am currently loving Dolphin.
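
On the command line that is literally "ollama pull <model>" (or "ollama run <model>", which pulls on first use); the Python client exposes the same thing, sketched here with a Dolphin model name that may not match the current tags:

    import ollama

    # Download an additional model from the registry (the name is an example;
    # check ollama.com/library for the current Dolphin tags).
    ollama.pull("dolphin-mixtral")

    # It is then usable like any other local model.
    out = ollama.generate(model="dolphin-mixtral", prompt="Say hi in five words.")
    print(out["response"])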

jdwyah|1 year ago

Super easy. Fun to play with. Fast.

We screwed around with it on a live stream: https://www.youtube.com/live/3YhBoox4JvQ?si=dkni5LY3EALnWVuE...

If you're writing something that will run on someone's local machine I think we're at the point where you can start building with the assumption that they'll have a local, fast, decent LLM.

auggierose|1 year ago

> If you're writing something that will run on someone's local machine I think we're at the point where you can start building with the assumption that they'll have a local, fast, decent LLM.

I don't believe that at all. I don't have any kind of local LLM. My mother doesn't, either. Nor does my sister. My girlfriend? Nope.