pibaker|4 days ago
I would even go as far as saying that we are under more threat from bad-faith arguing by eloquent, educated actors than from what people usually blame. You know, "trolls." You notice this every time a city planning meeting gets derailed by concerned citizens "just asking questions" about the potential dangers of a children's playground. You notice this when an abusive partner goes to a therapist and suddenly has a whole high-minded vocabulary justifying their own actions. You notice this when your boss talks about opening up new opportunities and chasing new fields of business while coworkers circulate rumors of upcoming layoffs.
The entire point of bad faith is saying words you don't mean to achieve your goals. The words are always just a disposable tool secondary to the bad faith actor's true intentions. You fundamentally cannot fix bad faith by fixing someone's choice of words any more than you can sugarcoat a poisoned pill and make it safe.
NitpickLawyer|3 days ago
I think there's something here. The tool is not intended to stop bad-faith actors. You can't stop those. But you can nudge people into "being better" with a simple prompt. I can't recall the exact blog/paper now, but I remember reading that someone ran this test (Google, perhaps?) and found that showing a simple prompt before submission, "hey, this message is high on anger, did you mean to write it like this?", led roughly 30-50% of users to change their message and tone it down. It might help in that regard.
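The nudge described above is simple to sketch. Here's a minimal, hypothetical illustration: a real system would use a trained toxicity/anger classifier, but a trivial keyword score stands in for the model here, and the word list, threshold, and function names are all made up for illustration.

```python
# Hypothetical sketch of a pre-submit tone nudge.
# A keyword score stands in for a real anger/toxicity classifier.

ANGRY_WORDS = {"idiot", "stupid", "hate", "garbage", "moron"}

def anger_score(text: str) -> float:
    """Fraction of words that appear in the (illustrative) angry-word list."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in ANGRY_WORDS)
    return hits / len(words)

def presubmit_nudge(text: str, threshold: float = 0.15):
    """Return a nudge message if the comment scores high on anger,
    otherwise None (the comment goes through unchanged)."""
    if anger_score(text) >= threshold:
        return ("Hey, this message is high on anger - "
                "did you mean to write it like this?")
    return None
```

The key design point matches the study as described: the user is never blocked, only asked to reconsider before submitting.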
hockey|3 days ago
I've definitely been young and passionate and, occasionally, a bit inebriated and/or triggered by certain things when I commented on the internet (in fact, I'm usually silent unless I've had a few drinks).
Though I don't think of myself as a bad-faith actor, I've definitely written things I shouldn't have in the past. Often with good intentions, but perhaps with anger or passion clouding my judgement. Most folks have something that will trigger them to respond in a subpar way after a bad sleep or a long day.
I'd like to think that a tool to let me know I'm alienating rather than persuading the folks I'm talking to would provide benefit.
But yeah. This is a difficult one. Not everyone who is being a jerk is just having an out-of-character bad day.
atoav|3 days ago
Now a tool that gives people feedback before their comment is going out could be tremendously useful to the quality of the conversations people could have.
NickHodges0702|3 days ago
No tool will stop the determined bad actor.
tveita|3 days ago
Just a reminder of "this probably isn't worth replying to" should help a lot. But alas, it would directly reduce precious engagement.
ranger_danger|3 days ago
Not always, and we can't know people's intentions ahead of time. But I'd rather have something like this that at least tries to help people improve themselves who are open to it, rather than doing nothing.
parpfish|4 days ago
i'd prefer if the trolls in my life retained the superficial appearance of trolls to make them easier to spot [0].
[0] https://en.wikipedia.org/wiki/Sealioning
mentalgear|3 days ago
However, I think a tool like this could still have huge potential, but less for tone and more for structure.
E.g.:
- Atomicity: ensuring a comment presents a clear, self-contained core argument that can be debated in sub-comments, rather than a tautology or a pile of loosely connected arguments.
- Logical consistency: checking that the argument holds together (though whether an LLM can reliably parse logic is another question entirely!).
- Citations: checking whether the commenter provided credible sources for their claims.
- Civility: keeping the discussion from devolving into another mud battle.
- Misinformation: flagging known, debunked conspiracy theories. Instead of modifying the original comment, it could simply append a contextual banner at the top with a Snopes link when a known false claim is made.
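The banner idea in that last bullet can be sketched in a few lines. This is purely illustrative: the claim text, the lookup table, the URL, and the function name are all hypothetical, and real misinformation matching would need far more than substring checks.

```python
# Hypothetical sketch of the "contextual banner" idea: rather than
# editing a comment, prepend a fact-check note when it repeats a
# known debunked claim. The claim list and URL are made up.

DEBUNKED = {
    "the moon landing was faked":
        "https://www.snopes.com/fact-check/moon-landing/",  # illustrative URL
}

def with_context_banner(comment: str) -> str:
    """Return the comment, with a banner prepended if it matches a
    known debunked claim; otherwise return it unchanged."""
    lowered = comment.lower()
    for claim, url in DEBUNKED.items():
        if claim in lowered:
            banner = f"[Context: this claim has been debunked - see {url}]"
            return banner + "\n" + comment
    return comment
```

Note that the original comment text is never altered, which sidesteps the objection upthread that you can't fix bad faith by fixing someone's words.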
7bit|3 days ago
The kind of people you describe are much, much more evil.