
The Path to Ubiquitous AI

6 points | 2001zhaozhao | 10 days ago | taalas.com

3 comments


2001zhaozhao | 10 days ago

Saw this on /r/localllama

It's an LLM ASIC that runs one single LLM model at ridiculous speeds. The current demonstration chip runs Llama-3-8B, but they're working on scaling it to larger models. I think it has big implications for how AI will look a few years from now. IMO the crucial question is whether they will get hard-limited by model size, similarly to Cerebras.

dust42 | 9 days ago

Interesting hardware, but I wonder if it is capable of KV caching. If not, it would only be useful for applications that have short context but benefit from very low latency. Voice-to-voice applications may be a good example.
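For context on why the KV cache matters here: a toy single-head attention sketch, assuming a heavily simplified model (no projections or multi-head logic, which a real transformer has). The point is that the cache grows one row of keys and values per generated token, so long-context support depends on having memory for that growing state.

```python
import numpy as np

d = 4  # toy head dimension
rng = np.random.default_rng(0)

def attend(q, K, V):
    # q: (d,), K/V: (t, d) -> softmax-weighted sum over cached keys/values
    w = np.exp(q @ K.T / np.sqrt(d))
    w /= w.sum()
    return w @ V

# Without a cache, each new token would recompute K and V over the whole
# prefix; with a cache, we append one row per step and reuse the rest.
K_cache = np.empty((0, d))
V_cache = np.empty((0, d))
for step in range(3):
    x = rng.normal(size=d)             # new token's hidden state
    K_cache = np.vstack([K_cache, x])  # append this token's key
    V_cache = np.vstack([V_cache, x])  # ...and its value
    out = attend(x, K_cache, V_cache)

print(K_cache.shape)  # cache is (context_length, d) -> grows linearly
```

The linear growth is the issue for a fixed-function chip: a model baked into silicon still needs somewhere to put per-request cache state.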

max8539 | 9 days ago

This is crazy! These chips could make high-reasoning models run so fast that they could generate lots of solution variants and automatically choose the best one. Or you could have a smart chip in your home lab and run local models - fast, without needing a lot of expensive hardware or electricity.
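The "generate lots of variants and pick the best" idea is essentially best-of-N sampling. A minimal sketch, where `generate` and `score` are hypothetical stand-ins for a fast local model and a verifier or reward function (neither name comes from the article):

```python
import random

def generate(prompt, seed):
    # Stand-in for sampling one candidate answer from a fast model;
    # the random score stands in for the candidate's actual quality.
    random.seed(seed)
    return f"{prompt} -> candidate {seed}", random.random()

def score(candidate):
    # Stand-in for a verifier/reward model ranking a candidate.
    _, quality = candidate
    return quality

def best_of_n(prompt, n=8):
    # Sample n candidates and keep the highest-scoring one.
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)

answer, _ = best_of_n("solve the task")
```

The approach only pays off when generation is cheap relative to scoring, which is exactly the regime very fast inference hardware would create.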