davej | 1 year ago

Llama 3 is tuned very nicely for English answers. What is most surprising to me is that the 8B model performs similarly to Mistral's large model and the original GPT-4 (in English answers). Easily the most efficient model currently available.

swalsh | 1 year ago

Parameter count seems to matter mainly for the range of skills a model can cover, but these smaller models can be tuned to be more than competitive with far larger ones.

I suspect the future is going to be owned by lots of smaller, more specific models, possibly trained by much larger models (distillation; see the sketch below).

These smaller models have the advantage of faster and cheaper inference.
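
For illustration, a minimal sketch of knowledge distillation, the standard way a small model gets trained against a much larger one. Everything here (the model stand-ins, sizes, and temperature) is an illustrative assumption, not anyone's actual training setup:

    # Minimal knowledge-distillation sketch: a small "student" is trained to
    # match the softened output distribution of a large frozen "teacher".
    # Model stand-ins, dimensions, and temperature are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    teacher = nn.Linear(128, 1000)   # stand-in for a large frozen model
    student = nn.Linear(128, 1000)   # stand-in for a much smaller model
    optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
    T = 2.0  # temperature: softens the teacher's distribution

    for step in range(100):
        x = torch.randn(32, 128)     # a batch of inputs (dummy data here)
        with torch.no_grad():
            teacher_probs = F.softmax(teacher(x) / T, dim=-1)
        student_log_probs = F.log_softmax(student(x) / T, dim=-1)
        # KL divergence between teacher and student distributions, scaled by T^2
        loss = F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * T * T
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()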

theLiminator | 1 year ago

Probably why MoE (mixture-of-experts) models are so competitive now: it's basically that idea within a single model, with a router activating only a few specialized experts per token.
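
For concreteness, a minimal sketch of a top-k gated MoE layer in PyTorch. The router picks a couple of expert MLPs per token, so only a fraction of the total parameters is active on any forward pass. All names and dimensions are illustrative assumptions, not any particular model's architecture:

    # Minimal top-k gated mixture-of-experts layer: a learned router scores
    # experts per token and only the top-k experts run on that token.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MoELayer(nn.Module):
        def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
            super().__init__()
            self.top_k = top_k
            self.router = nn.Linear(d_model, n_experts)  # token -> expert scores
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                              nn.Linear(d_ff, d_model))
                for _ in range(n_experts)
            )

        def forward(self, x):  # x: (tokens, d_model)
            scores = self.router(x)                         # (tokens, n_experts)
            weights, idx = scores.topk(self.top_k, dim=-1)  # k experts per token
            weights = F.softmax(weights, dim=-1)            # normalize over chosen k
            out = torch.zeros_like(x)
            for k in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, k] == e                   # tokens routed to expert e
                    if mask.any():
                        out[mask] += weights[mask, k, None] * expert(x[mask])
            return out

    x = torch.randn(4, 512)
    print(MoELayer()(x).shape)  # torch.Size([4, 512])

With 8 experts and top-2 routing, each token touches only about a quarter of the layer's feed-forward parameters, which is where the inference-cost win comes from.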