item 47146745


nl | 5 days ago

Taalas is interesting. 16,000 TPS for Llama on a chip.

https://taalas.com/


micw | 5 days ago

On a very old model, it's more like 16,000 garbage words/s

nl | 4 days ago

Llama 3.1 8B is pretty useful for some things. I use it to generate SQL pretty reliably, for example.

They are doing an updated model in a month or so anyway, then a frontier level one "by summer".
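For what it's worth, "generate SQL reliably" with a small local model usually comes down to constraining the prompt. A minimal sketch of how that request might be built for an OpenAI-compatible local server (the endpoint shape is standard; the model name, schema, and helper are my own assumptions, not anything the commenter or Taalas specified):

```python
import json

def build_sql_request(question: str, schema: str,
                      model: str = "llama-3.1-8b") -> dict:
    """Build a chat-completion payload that asks the model for SQL only.

    `model` and the schema string are hypothetical placeholders; swap in
    whatever your local server actually serves.
    """
    system = (
        "You translate questions into a single SQLite SELECT statement. "
        "Use only these tables:\n" + schema + "\n"
        "Reply with the SQL statement only, no explanation."
    )
    return {
        "model": model,
        "temperature": 0,  # deterministic decoding helps reliability
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    }

payload = build_sql_request(
    "How many orders were placed in 2024?",
    "orders(id INTEGER, placed_at TEXT, total REAL)",
)
print(json.dumps(payload, indent=2))
```

POSTing that payload to the server's `/v1/chat/completions` endpoint and running the returned statement read-only is the usual pattern; pinning temperature to 0 and restricting the schema in the system prompt is what makes an 8B model tolerable at this.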

patapong | 4 days ago

I do wonder if there are tasks where 16k garbage words/s is more useful than 200 good words per second. Does anyone have any ideas? Data extraction, perhaps?

DeathArrow | 4 days ago

I wonder how many tokens per second they could get if they put Mercury 2 on a chip.

replete | 4 days ago

It's exciting to see, but look at the die size for only an 8B model