item 47146745

nl | 5 days ago
Taalas is interesting. 16,000 TPS for Llama on a chip. https://taalas.com/

  micw | 5 days ago
  On a very old model, it's more like 16,000 garbage words/s.

    nl | 4 days ago
    Llama 3.1 8B is pretty useful for some things. I use it to generate SQL pretty reliably, for example. They are doing an updated model in a month or so anyway, then a frontier-level one "by summer".

    patapong | 4 days ago
    I do wonder if there are tasks where 16k garbage words/s are more useful than 200 good words per second. Does anyone have any ideas? Data extraction, perhaps?

  Nihilartikel | 4 days ago
  Neat! I had been wondering if anyone was trying to implement a model in silicon. We're getting closer to having chatty talking toasters every day now!

    empath75 | 4 days ago
    "What is my purpose..." https://www.youtube.com/watch?v=sa9MpLXuLs0

  DeathArrow | 4 days ago
  I wonder how many tokens per second they could get if they put Mercury 2 on a chip.

  replete | 4 days ago
  It's exciting to see, but look at the die size for only an 8B model.
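For context on nl's text-to-SQL use case: a minimal sketch of how one might wrap a local 8B model for reliable SQL generation. The schema, the prompt template, and the EXPLAIN-based validity check are illustrative assumptions, not nl's or Taalas's actual setup; the model call itself is left as a placeholder since it depends on the local runtime.

```python
import sqlite3

# Hypothetical example schema the model is prompted with.
SCHEMA = """
CREATE TABLE orders (
    id        INTEGER PRIMARY KEY,
    customer  TEXT,
    total     REAL,
    placed_at TEXT
);
"""

PROMPT_TEMPLATE = (
    "You are a SQL assistant. Given this SQLite schema:\n{schema}\n"
    "Write one SQL query answering: {question}\n"
    "Return only the SQL, with no explanation."
)


def build_prompt(question: str) -> str:
    """Assemble the prompt sent to the local model (e.g. Llama 3.1 8B)."""
    return PROMPT_TEMPLATE.format(schema=SCHEMA, question=question)


def is_valid_sql(candidate: str) -> bool:
    """Check model output parses against the schema without executing it.

    EXPLAIN makes SQLite parse and plan the query but not run it, so a
    garbage completion is rejected cheaply before touching real data.
    """
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(SCHEMA)
        conn.execute("EXPLAIN " + candidate)
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()


# In practice the candidate would come from the model, roughly:
#   candidate = local_model.generate(build_prompt(question))  # placeholder
# and only candidates passing is_valid_sql() would be executed.
```

The guard is what makes a small model "pretty reliable" in this pattern: even at very high token rates, malformed completions are filtered before they reach the database.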