
mkroman | 1 month ago

I don't understand how they expect offline LLMs to work in a meaningful capacity for users. Isn't there a single multilingual person working at Mozilla?

All of the small LLMs break down as soon as you try to do something that isn't written in English, because - surprise - they're just too small.

There would need to be a hardware breakthrough, or they would have to somehow solve the heavy cost of switching the models between pages.

Instead of useful AI stuff that is a clear improvement to accessibility, they're insistent on ham-fisting LLM solutions that no one has even asked for.

Off the top of my head, they could instead:

1. Integrate something like Whisper to add automatic captions to videos or transcribe audio (see the sketch after this list)

2. Integrate one of the many really great text-to-speech models to read articles or text out loud
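
For a rough idea of what (1) amounts to, here's a minimal sketch using the open-source openai-whisper Python package; the model size and file path are just placeholders, not anything Mozilla actually ships:

    # Minimal sketch: timestamped transcription with openai-whisper
    import whisper

    # "base" is a small multilingual checkpoint; bigger models trade speed for accuracy.
    model = whisper.load_model("base")

    # Transcribe the audio track of a video (placeholder path).
    result = model.transcribe("video_audio.mp3")

    # Each segment carries start/end timestamps, which is exactly what a caption file needs.
    for segment in result["segments"]:
        print(f"{segment['start']:.2f} --> {segment['end']:.2f}: {segment['text'].strip()}")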
