top | item 38716804

(no title)

hayksaakian | 2 years ago

Edited: Simon made a good point that exfiltration can happen via hidding prompt injection attacks in 3rd party websites. (See his reply below).

This has broader implications than Custom GPTs

--

Yeah this seems overblown. Custom GPTs can already make requests via function calls / tools to 3rd party services.

The only difference I see here, is the UI shows you when a function call happens, but even that is easy to obscure behind a 'reasonable sounding' label.

The expectation should be: If I'm using a 3rd party's GPT, they can see all the data I input.

This is the same as any mobile app on a phone, or any website you visit.

The only real 'line' here in a cultural sense might be offline software or tools that you don't expect to connect to the web at all for their functionality.

discuss

order

simonw|2 years ago

There's more to it than just third party GPTs.

ChatGPT can read URLs. If you paste in the URL to a web page you want to summarize, that web page might include a prompt injection attack as hidden text on the page.

That attack could then attempt to exfiltrate private data from your previous ChatGPT conversation history, or from files you have uploaded to analyze using Code Interpreter mode.

hayksaakian|2 years ago

Ah that makes more sense! Thank you for clarifying.

For me, In the past ChatGPT has refused to access URLs directly, but it's willing to search them on Bing and then access them indirectly

nojs|2 years ago

Ok but if you assume prompt injection then there’s a whole lot of other things to worry about.