(no title)
jonathanlb|6 months ago
1. Profound tone-deafness about appropriate contexts for privacy messaging
2. Intentional targeting of users who want to avoid safety interventions
3. A fundamental misunderstanding of your ethical obligations as an AI provider
None of these interpretations reflect well on AgentSea's judgment or values.
kbelder|6 months ago
VonGuard|6 months ago
Anyone with half a brain complaining about hypothetical future privacy violations on some random platform just makes me spit milk out my nose. What privacy?! Privacy no longer exists, and worrying that your chat logs are gonna get sent to the authorities seems to me like worrying that the cops are gonna give you a parking ticket after your car blew up because you let the mechanic put a bomb in the engine.
sleazebreeze|6 months ago
lurking_swe|6 months ago
To play devil's advocate for a second: what if someone who's mentally ill uses a local LLM for therapy and doesn't get the help they need? Even if it's against their will? And they commit suicide or kill someone because the LLM said it's the right thing to do…
Is being dead better, or is having complete privacy better? Or does it depend?
I use local LLMs too, but it's disingenuous to act like they solve the _real_ problem here: mentally ill people trying to use an LLM for therapy. That can end catastrophically.
LamerBeeI|6 months ago
exe34|6 months ago