stavros|2 years ago
LLMs aren't useful for anything; don't pay any mind to the "I'm sorry, I can't do that"-titled Amazon products. Nobody is using LLMs to do anything you'll ever see.

Earw0rm|2 years ago
I think they somewhat miss the point with "the LLM could carry out actions". Sure, it could, but in most cases the models' own ability to act will have guardrails.
The likely bigger issue is that a human believes what the model says and acts on it. Consider poisoning an LLM used in, e.g., online learning or HR: unfortunately, a lot of people either aren't strong critical thinkers to begin with, or are placed in roles/situations where they're disempowered. "Trust the machine and you won't get fired."