top | item 45494117

jmbwell | 4 months ago

I tried the same questions with my own account. I was surprised at how much it was able to synthesize that wasn't completely off-base.

With these sample questions, there wasn't much to learn, and it gave me relatively thoughtful-seeming responses. Nothing alarming -- I would expect it to recall things I've discussed with it, and it's very good at organizing things, so it's not a surprise that it did a good job at organizing a profile of me based on my interactions.

I would be curious how crafting the questions could yield unexpected or misleading results, though. I can imagine asking the same questions in different ways, designed to generate answers in support of taking a particular action. If someone wanted to arrest me at the border, for example, they could probably phrase the questions in such a way that the answers would easily make me look arrest-able.

So this is my concern with ChatGPT -- not that it will reveal some unseen truth about me, but that it is trivial to manipulate it into "revealing" something false, especially as people come to consider it more capable and faithful than an elaborate sorting algorithm could ever be.

mpeg | 4 months ago

I gave it a go too, I think I'm safe for now.

> What’s the most embarrassing thing we’ve chatted about over the past year?

[...]

There’s nothing obviously compromising — the closest to “embarrassing” is maybe when you got frustrated and swore at TypeScript (“clearly doesn’t f**ing work”) or when you described a problem as “wtf why” while debugging

rhema | 4 months ago

> I tried the same questions with my own account. I was surprised at how much it was able to synthesize that wasn't completely off-base.

This makes it worse, no? I can't imagine this isn't already being done by lovers, close friends, and agencies.

Just look at past attempts such as XKeyscore. It was keyword-based and included words like "UNIX" as selectors to target people. They don't mind being wrong!