top | item 44903383

0x00cl | 6 months ago

I see you are using Ollama's GGUFs. By default it will download the Q4_0 quantization. Try `gemma3:270m-it-bf16` instead, or you can also use the Unsloth GGUFs: `hf.co/unsloth/gemma-3-270m-it-GGUF:16`

You'll get better results.

simonw | 6 months ago

Good call, I'm trying that one just now in LM Studio (by clicking "Use this model -> LM Studio" on https://huggingface.co/unsloth/gemma-3-270m-it-GGUF and selecting the F16 one).

(It did not do noticeably better at my pelican test).

Actually it's worse than that: several of my attempts resulted in infinite loops spitting out the same text. Maybe that GGUF is a bit broken?

danielhanchen | 6 months ago

Oh :( Maybe the settings? Could you try

`temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0`
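For anyone wanting to try those settings against a local Ollama server, here is a hedged sketch of the request body for the `/api/generate` endpoint; the option names are Ollama's, while the model tag and prompt are placeholders.

```python
# Sketch: the sampler settings suggested above, packaged as an Ollama
# /api/generate request body. Model tag and prompt are placeholders.
import json

payload = {
    "model": "hf.co/unsloth/gemma-3-270m-it-GGUF:F16",  # assumed tag
    "prompt": "Generate an SVG of a pelican riding a bicycle.",
    "options": {
        "temperature": 1.0,
        "top_k": 64,
        "top_p": 0.95,
        "min_p": 0.0,
    },
    "stream": False,
}

body = json.dumps(payload)
# To actually send it (requires a running Ollama server):
# import urllib.request
# req = urllib.request.Request("http://localhost:11434/api/generate",
#                              data=body.encode(), method="POST")
```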

JLCarveth | 6 months ago

I ran into the same looping issue with that model.

Patrick_Devine | 6 months ago

We uploaded `gemma3:270m-it-q8_0` and `gemma3:270m-it-fp16` late last night, which give better results. The q4_0 is the QAT model, but we're still looking at it as there are some issues.
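To put the quantization levels in this thread in perspective, here is some rough size arithmetic for a ~270M-parameter model. The bits-per-weight figures are approximations I'm assuming (GGUF formats add per-block scales and metadata), not exact llama.cpp numbers.

```python
# Rough file-size estimates for a ~270M-parameter model at the quant
# levels mentioned in the thread. Bits-per-weight values are approximate.
PARAMS = 270e6
BITS_PER_WEIGHT = {"fp16": 16.0, "q8_0": 8.5, "q4_0": 4.5}

def approx_size_mb(quant: str, params: float = PARAMS) -> float:
    """Approximate weight-file size in megabytes (decimal MB)."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 1e6

for q in BITS_PER_WEIGHT:
    print(f"{q}: ~{approx_size_mb(q):.0f} MB")
```

At this scale the fp16 file is only a few hundred megabytes, which is part of why the bf16/fp16 uploads are a practical default here rather than Q4_0.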