
jascha_eng | 2 days ago

Okay, fair about the mentions, but I don't think that email is a good process:

1. It puts more effort on me as a user to report spam via email: I have to open my email client, compose a message by hand, and add my reasoning. The offending user, by comparison, is probably spamming automatically. Can't we at least have a button?

2. It doesn't make the community aware of the ongoing issue. Other community members could be primed to read comments more critically right now. At the moment that seems like the only detection that somewhat works, but if I silently send an email instead of commenting here, nobody else learns of my suspicion.


tomhow|2 days ago

It’s fine to just flag things and move on. We’re considering adding additional parameters to the flag function, but till then, emailing us with “LLM?” in the subject and the comment ID/URL in the body is great, and should be faster for you than a comment and faster for us to be able to act.

The community is well aware of the issue and off-topic meta discussion has always been against the guidelines here. We’ve discussed this publicly and privately with top HN contributors and the consensus is that this is the least-worst approach.

jascha_eng|15 hours ago

The fact that this comment gathered 10 replies and full comment chains without anyone noticing it was LLM-generated tells me the community is not aware enough. It was also upvoted a lot.

I think there is significant value in making people second-guess content and look at it critically, especially at a time when it is so easy to fake expertise. We all need to train that skill for all of our online interactions these days anyway.

10 years ago it was clickbait titles we needed to learn to ignore; today it is LLM-generated content. We will get there, but by not calling it out publicly we are making it easier for adversaries to fool everyone.

And yes, I don't want to falsely accuse anyone of LLM slop either, but they can defend themselves, and making mistakes is part of the learning process for all of us. Writers and commenters will learn how not to sound like an LLM, and we will become more finely attuned to the nuance between polished human writing and AI.