(no title)
funnybeam | 3 months ago
These things are polite suggestions at best and it’s very misleading to people that do not understand the technology - I’ve got business people saying that using LLMs to process sensitive data is fine because there are “guardrails” in place - we need to make it clear that these kinds of vulnerabilities are inherent in the way gen AI works and you can’t get round that by asking them nicely
mossTechnician|3 months ago
Think of AI guardrails like the barriers along a highway: they don’t slow the car down, but they do help keep it from veering off course.
https://www.ibm.com/think/topics/ai-guardrails
funnybeam|3 months ago
I was on a call with Microsoft the other day when (after being pushed) they said they had guardrails in place “to block prompt injection” and linked to an article which said “_help_ block prompt injection”. The careful wording is deliberate I’m sure.