top | item 46896703

(no title)

mastermage | 25 days ago

The more interesting question I have is if such Prompt Injection Attacks can ever be actualy avoided, with how GenAI works.

discuss

order

PurpleRamen|25 days ago

Removing the risk for most jobs should be possible. Just build the same cages other apps already have. Also add a bit more transparency, so people know better what the machine is doing, maybe even with a mandatory user-acknowledge for potential problematic stuff, similar to how we have root-access-dialogues now. I mean, you don't really need access to all data, when you are just setting a clock, or playing music.

larodi|25 days ago

Perhaps not, and it is indeed not unwise from Apple to stay away for a while given their ultra-focus on security.

Ono-Sendai|25 days ago

They could be if models were trained properly, with more carefully delineated prompts.

arw0n|24 days ago

I'd be super interested in more information on this! Do you mean abandoning unsupervised learning completely?

Prompt Injection seems to me to be a fundamental problem in the sense that data and instructions are in the same stream and there's no clear/simple way to differentiate between the two at runtime.