PydanticAI using Ollama (llama3.2) running locally
2 points by scolvin 7 months ago | 3 comments

- eternityforest 7 months ago: So cool! I wonder what the weakest model that can still call functions and such is?
I don't have anything more powerful than an i5 other than my phone, and a lot of interesting applications like home automation really need to be local-first for reliability.
0.5b to 1b models seem to have issues with even pretty basic reasoning and question answering, but maybe I'm just Doing It Wrong.
- scolvin 7 months ago: See https://github.com/pydantic/pydantic-ai/issues/112, where people have tried quite a few models.
Llama3.2 worked well and used less than 2 GB of RAM.
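For anyone wanting to reproduce this locally, a minimal sketch of the setup under discussion: PydanticAI pointed at a local Ollama server running llama3.2, with one toy tool registered so the model has to make a function call. Class and attribute names (OllamaModel, result.data) match the PydanticAI release current around the time of this thread and may differ in later versions; the room_temperature tool is purely illustrative.

    # Assumes Ollama is running locally and the model has been pulled:
    #   ollama pull llama3.2
    from pydantic_ai import Agent
    from pydantic_ai.models.ollama import OllamaModel

    # OllamaModel talks to the local Ollama server's OpenAI-compatible
    # endpoint (default http://localhost:11434/v1).
    model = OllamaModel(model_name='llama3.2')

    agent = Agent(
        model,
        system_prompt='You are a home-automation assistant. Use the tools to answer.',
    )

    # Toy tool, just to make the model perform a function call.
    @agent.tool_plain
    def room_temperature(room: str) -> float:
        """Return the current temperature (in Celsius) for the given room."""
        return 21.5  # a real setup would read a sensor here

    result = agent.run_sync('How warm is it in the kitchen right now?')
    print(result.data)

Trying a different small model is just a matter of changing model_name (and pulling it with ollama pull first).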
- eternityforest 7 months ago: Edit: Gemma2 2B is very slow, but it is able to do some basic tasks.