item 47087341 (no title)

kaashif | 9 days ago
If it's incredibly fast at a 2022 state-of-the-art level of accuracy, then surely it's only a matter of time until it's incredibly fast at a 2026 level of accuracy.

PrimaryExplorer | 9 days ago
Yeah, this is mind-blowing speed. Imagine this with Opus 4.6 or GPT 5.2. Probably coming soon.

scotty79 | 9 days ago
I'd be happy if they can run GLM 5 like that. It's amazing at coding.

Gud | 9 days ago
Why do you assume this? I can produce total gibberish even faster; it doesn't mean I produce Einstein-level thought if I slow down.

Closi | 9 days ago
Better models already exist; this is just proving you can dramatically increase inference speeds / reduce inference costs. It isn't about model capability - it's about inference hardware. Same smarts, faster.

andy12_ | 9 days ago
Not what he said.