If you don’t consider the difference in kind between a human vulnerability and an automated vulnerability, which derives from the essentially unlimited capacity of the latter to scale, your comment makes a lot of sense. If you do consider it, the argument becomes irrelevant and deeply misleading.
noduerme|2 years ago
An example I use for people is the Bernstein Bears effect. Imagine you wake up tomorrow and all your digital devices have no reference to 9/11. You ask Bing and Google, and they insist you must be wrong: nothing like that ever happened. You talk to other people who remember it clearly, but it seems you've lost control of reality. Now imagine that kind of gaslighting about "nothing happening" while the lights go out all over the world, and you have some sense of the scale the larger of these systems are operating at.
eru|2 years ago
It was always a good idea to ignore the cesspool that is Twitter. No matter whether we are talking about bots or lynch mobs.
Btw, I think you mean Berenstain Bears.
erosenbe0|2 years ago
For example, my phone would automatically sign this post with my signature. If I programmed a bot, I could sign as myself or as the bot, but not as another registered human. So you'd know the post came from me or from a bot I've authorized. Theft or fraud with digital signatures would be criminalized, if it isn't already.
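A minimal sketch of that idea, using symmetric HMAC signatures for brevity (a real scheme would use public-key signatures such as Ed25519 so that verifiers never hold the secret); the key names and post structure here are hypothetical illustrations, not any existing platform's API:

```python
import hashlib
import hmac

# Hypothetical per-identity secrets registered with the platform: one for
# the human, a separate one for each bot the human authorizes.
HUMAN_KEY = b"alice-registered-secret"
BOT_KEY = b"alice-bot-01-secret"

def sign_post(text: str, key: bytes, author: str) -> dict:
    """Attach a signature proving which registered key authorized the post."""
    sig = hmac.new(key, text.encode(), hashlib.sha256).hexdigest()
    return {"author": author, "text": text, "sig": sig}

def verify_post(post: dict, key: bytes) -> bool:
    """Check the post's signature against a specific registered key."""
    expected = hmac.new(key, post["text"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, post["sig"])

human_post = sign_post("hello", HUMAN_KEY, "alice")
bot_post = sign_post("hello", BOT_KEY, "alice-bot-01")
```

The point of the scheme is that a post signed with the bot's key verifies only against the bot's key, so it cannot be passed off as human-signed: `verify_post(bot_post, HUMAN_KEY)` is false while `verify_post(bot_post, BOT_KEY)` is true.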
danShumway|2 years ago
There are ways to build AIs that don't have these problems specifically because their intelligence is limited to a specific task and thus they don't have a bunch of additional attack vectors literally baked into them.
But the attitude from a lot of companies I'm seeing online is "this might be impossible to fix, so you can't expect us to hold off releasing just because it's vulnerable." I don't understand that. If this is genuinely impossible to fix, that has implications.
Because the whole point of AI is to make things that are scalable, the security has to be better than that of the non-scalable system. If it can't be better, then we need to take a step back and ask whether LLMs are the right approach.
ethanbond|2 years ago
This is like saying a nuclear weapon accident is not that scary because a microwave can also malfunction and catch fire. Sure it can, but the fact that it's not a nuke is highly relevant.
aidenn0|2 years ago
To use an admittedly extreme example: The difference between drawing some fake lines on the road and crashing 1 or 2 cars and having all self-driving cars on the road swerve simultaneously is not just a quantitative difference.