Death threats, mainly. Personally, I think it would be easier if platforms just ran a tiny LLM against content before it's posted: if the model determines it's a death threat, require the author to be identified before it goes up. That would solve a lot of these problems.

TL;DR: dox the evil people internally, not everyone.
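The gate described above could look something like this sketch. `classify_threat` here is just a keyword stand-in for whatever tiny model a platform would actually run, and `gate_post` is a hypothetical name for the check:

```python
def classify_threat(text: str) -> bool:
    # Stand-in for a small moderation model; a real platform would
    # call an actual classifier here instead of keyword matching.
    keywords = ("kill you", "you're dead", "i will hurt you")
    return any(k in text.lower() for k in keywords)

def gate_post(text: str, author_verified: bool) -> str:
    # If the classifier flags the post, require a verified identity
    # before letting it through; ordinary posts pass untouched.
    if classify_threat(text):
        if author_verified:
            return "posted (identity on file)"
        return "blocked: verify identity first"
    return "posted"
```

The point is that identity is only demanded when the content is flagged, so everyone else stays pseudonymous.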
reverius42|2 days ago
These days the name "LLM" refers more to the architecture & usage patterns than it does to the size of model (though to be fair, even the "tiny" LLMs are huge compared to any models from 10+ years ago, so it's all relative).