top | item 47093465


luyu_wu | 10 days ago

I think this is quite interesting for local AI applications. Since this technology basically scales with parameter size, an ASIC for a Qwen 0.5B or a Google 0.3B model thrown onto a laptop motherboard would be very interesting.

Obviously not for any hard applications, but for significantly better autocorrect, local next-word prediction, and file indexing (tagging, I suppose).

The efficiency of such a small model should theoretically be great!
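A small on-device model would of course do next-word prediction far better, but the kind of interface such a feature would expose can be sketched with a toy bigram counter. This is purely illustrative (no real model or library involved; all names are made up for the sketch):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count word-pair frequencies from a whitespace-tokenized corpus."""
    model = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word, k=3):
    """Return the k words most often seen after `word`."""
    return [w for w, _ in model[word].most_common(k)]

corpus = "the cat sat on the mat and the cat ran to the door"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # most frequent followers of "the"
```

A quantized sub-1B transformer on an ASIC would replace the counting step with learned context, but the shape of the feature (context in, ranked candidate words out) is the same.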


No comments yet.