I see you are using Ollama's GGUFs. By default it downloads the Q4_0 quantization. Try `gemma3:270m-it-bf16` instead, or you can use the Unsloth GGUFs: `hf.co/unsloth/gemma-3-270m-it-GGUF:16`
We uploaded `gemma3:270m-it-q8_0` and `gemma3:270m-it-fp16` late last night, which give better results. The Q4_0 is the QAT model, but we're still looking into it as there are some issues.
simonw|6 months ago
(It did not do noticeably better at my pelican test).
Actually it's worse than that: several of my attempts resulted in infinite loops, spitting out the same text over and over. Maybe that GGUF is a bit broken?
danielhanchen|6 months ago
The recommended sampling settings are: temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0
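These settings map directly onto the `options` object of Ollama's REST API. A minimal sketch, assuming a local Ollama daemon on the default port and the `gemma3:270m-it-bf16` tag mentioned above (the network request is left commented out so the snippet runs offline):

```python
# Sampling settings recommended in the thread, expressed as Ollama API "options".
sampling_options = {
    "temperature": 1.0,
    "top_k": 64,
    "top_p": 0.95,
    "min_p": 0.0,
}

# With a running Ollama daemon one could send these along with a prompt:
# import requests
# r = requests.post(
#     "http://localhost:11434/api/generate",
#     json={
#         "model": "gemma3:270m-it-bf16",
#         "prompt": "Generate an SVG of a pelican riding a bicycle.",
#         "options": sampling_options,
#         "stream": False,
#     },
# )
# print(r.json()["response"])
```

Note that Ollama's defaults differ from these (e.g. a lower default temperature), so passing the options explicitly matters when comparing quantizations.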