
PydanticAI using Ollama (llama3.2) running locally

2 points | scolvin | 1 year ago | github.com

3 comments


eternityforest|1 year ago

So cool! I wonder what the weakest model is that can still call functions and such?

I don't have anything more powerful than an i5 other than my phone, and a lot of interesting applications like home automation really need to be local-first for reliability.

0.5b to 1b models seem to have issues with even pretty basic reasoning and question answering, but maybe I'm just Doing It Wrong.

eternityforest|1 year ago

Edit: Gemma2 2B is very slow, but it is able to do some basic tasks.