top | item 44753511


Tokumei-no-hito | 7 months ago

Pardon the ignorance, but this is the first time I've heard of bfloat16.

I asked chat for an explanation, and it said bfloat16 has a higher range (like fp32) but less precision.

What does that mean for image generation, and why was bfloat16 chosen over fp16?


dragonwriter | 7 months ago

My fuzzy understanding (and I'm not at all an expert on this) is that the main benefit is that bf16 is less prone to overflow and underflow during calculation, which causes bigger problems in both training and inference than the simple loss of precision. So once it became widely supported, it became a commonly preferred format for models (whether image generation or otherwise) over fp16.
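A rough way to see the trade-off: bfloat16 is just float32 with the bottom 16 mantissa bits dropped, so it keeps float32's 8-bit exponent (huge range, coarse precision), while float16 spends its bits the other way (5-bit exponent, 10-bit mantissa, max value around 65504). Here is a minimal Python sketch that fakes bfloat16 by truncating float32 bits (real hardware usually rounds to nearest rather than truncating), using the standard library's half-precision `struct` format for float16:

```python
import struct

def to_bfloat16(x: float) -> float:
    """Truncate to bfloat16 precision: keep the top 16 bits of the
    float32 representation (1 sign, 8 exponent, 7 mantissa bits)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

def to_float16(x: float) -> float:
    """Round-trip through IEEE half precision via struct's 'e' format
    (1 sign, 5 exponent, 10 mantissa bits). Raises OverflowError for
    magnitudes above float16's maximum (~65504)."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

# Range: 70000 is no problem for bfloat16 (float32's exponent range)...
print(to_bfloat16(70000.0))   # 69632.0 — coarse, but finite

# ...but it exceeds float16's maximum of ~65504 and overflows.
try:
    to_float16(70000.0)
except OverflowError:
    print("float16 overflow")

# Precision: float16's 10 mantissa bits represent 1.001 more closely
# than bfloat16's 7 mantissa bits, which truncate it to exactly 1.0.
print(to_float16(1.001))      # 1.0009765625
print(to_bfloat16(1.001))     # 1.0
```

So for a model's activations, bf16 trades fine-grained precision for headroom: large intermediate values stay finite instead of blowing up to inf, which matters more in practice than the lost mantissa bits.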