top | item 47185404

almosthere|2 days ago

Death threats, mainly. Personally I think it would be easier if platforms ran a tiny LLM against content before it's posted: if the model determines it's a death threat, require the poster to be identified before it goes up. That would solve a lot of these problems.

TLDR: only the bad actors get identified internally, not everyone.
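The proposed gate could be sketched roughly like this. Everything here is hypothetical: `classify_threat` stands in for the "tiny LLM" (a trivial keyword check here, not a real model), and the function names and return strings are made up for illustration.

```python
def classify_threat(text: str) -> bool:
    """Placeholder for the tiny LLM classifier: flags obvious threat phrases.
    A real deployment would call a small language model instead."""
    keywords = ("kill you", "death threat", "i will hurt you")
    lowered = text.lower()
    return any(k in lowered for k in keywords)

def submit_post(text: str, user_identified: bool) -> str:
    """Gate: flagged posts require a verified identity before publishing;
    everything else goes through untouched."""
    if classify_threat(text):
        if not user_identified:
            return "blocked: identity verification required"
        return "published (identity recorded internally)"
    return "published"
```

The point of the design is that only flagged posts ever trigger the identification step, so ordinary users are never asked to de-anonymize.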

numpad0|2 days ago

That turns jokes into contracts that nobody wants. Bad idea.

Filligree|2 days ago

Maybe just don’t make “jokes” like that.

bigfishrunning|2 days ago

a "tiny large language model"? lol

reverius42|2 days ago

See https://tinyllm.org

These days the name "LLM" refers more to the architecture & usage patterns than it does to the size of model (though to be fair, even the "tiny" LLMs are huge compared to any models from 10+ years ago, so it's all relative).

almosthere|2 days ago

Yeah, a small one, because it's cheaper; they'd be processing billions of messages per year.