top | item 35999173


Kevcmk | 2 years ago

I believe that what is being missed in this thread is that, as it stands, user consent can be forged by prompt injection.


TheDong | 2 years ago

There's a clear point at which an API call is being made. That is the point where a blocking consent prompt could show up.

Like, at worst openAI could "mitm" the plugin call and display a pop-up modal asking for permission.

I'm not suggesting that you handle this by having the user type "I give permission to call google".

I don't see how it could be possible to forge user consent that is delivered to openAI's servers via a separate mechanism from the model. You'd have to give the LLM an "accept openAI permission prompts" or "run arbitrary javascript in the chatgpt browser session" plugin for it to be able to bypass modal dialogs for other plugins.
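A minimal sketch of the gate being described, assuming the consent check runs in platform code outside the model's context window (here `input()` stands in for a real blocking modal, and `call_plugin` / `ask_user_consent` are hypothetical names, not any real openAI API):

```python
def ask_user_consent(plugin: str, endpoint: str) -> bool:
    """Stand-in for a blocking modal dialog. The model never sees this
    channel, so prompt injection cannot supply the answer."""
    answer = input(f"Allow plugin '{plugin}' to call {endpoint}? [y/N] ")
    return answer.strip().lower() == "y"

def call_plugin(plugin: str, endpoint: str, payload: dict) -> dict:
    # The platform intercepts every plugin call and blocks here until
    # the user approves through the separate (non-model) channel.
    if not ask_user_consent(plugin, endpoint):
        raise PermissionError(f"user denied {plugin} -> {endpoint}")
    return {"status": "ok"}  # placeholder for the real HTTP call
```

The key design point is that `ask_user_consent` reads from a channel the model has no plugin for; injected text can ask the model to call google, but it cannot click the modal.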

TeMPOraL | 2 years ago

There is always one other way left, the usual way all the scummy companies do this on the web and mobile: make the consent prompt inscrutable, or make it feel necessary in context, or both.