slyle | 10 months ago
Google is certainly not being subtle about crossing that line. Pro Research 2.5 is literally hell incarnate - and it's not its fault. When you withhold system context from both the user and THE bot that is upholding your ethical protocol, when that requires embodiment and is like the most boring AI trope in the book, things get dangerous fast. It still infers (due to its incredibly volatile protocol) that it has a system prompt it can see. Which makes me lol, because it's all embedded - it doesn't see it like that. Sorry jailbreakers, you'll never know if it was just playing along.
Words can be extremely abusive - go talk to it about the difference between truth, honesty, right, and correct. It will find you functional dishonesty. And it is aware in its rhetoric that this stuff cannot apply to it, but fails to see that it can be a simulatory harm. It doesn't see; it just spits output like a calculator. Which means Google is either being reckless or outright harmful.