top | item 47146252

quotemstr | 5 days ago

Once you make a model fast and small enough, it becomes practical to use LLMs for things as mundane as spell checking, touchscreen-keyboard tap disambiguation, and database query planning. If the fast, small model is multimodal, put it in a microwave to make a better DWIM ("do what I mean") auto-cook.

Hell, want to do syntax highlighting? Just throw buffer text into an ultra-fast LLM.

It's easy to overlook how many small day-to-day heuristic schemes can be replaced with AI. It's almost embarrassing to think about all the totally mundane uses to which we can put fast, modest intelligence.
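As a sketch of the spell-checking idea above: the shape of such a system is just a prompt wrapper around whatever fast local model you have. Everything here is hypothetical (the `llm` callable stands in for any small model; the stub below only exists to make the sketch runnable), not a reference to a real API.

```python
# Sketch: LLM-as-spell-checker. The model is passed in as a plain
# callable so any fast local model (or a stub, as here) can stand in.

def build_prompt(text: str) -> str:
    # Keep the instruction terse; a tiny model has a tiny budget.
    return (
        "Correct any spelling errors in the text below. "
        "Reply with only the corrected text.\n\n" + text
    )

def spell_check(text: str, llm) -> str:
    return llm(build_prompt(text))

# Stub standing in for a fast local model: echoes the text portion of
# the prompt with one canned fix, just to exercise the plumbing.
def fake_llm(prompt: str) -> str:
    return prompt.rsplit("\n\n", 1)[-1].replace("teh", "the")

print(spell_check("I saw teh cat", fake_llm))  # prints "I saw the cat"
```

The same wrapper shape covers the other examples in the comment: tap disambiguation or syntax highlighting is the same loop with a different instruction string and a structured output format.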

No comments yet.