ljw1001|2 years ago

If you don't consider the difference in kind between a human vulnerability and an automated one, a difference that derives from the essentially unlimited capacity of the latter to scale, your comment makes a lot of sense. If you do consider it, the argument becomes irrelevant and deeply misleading.

noduerme|2 years ago

This needs to be hammered into people's understanding of the danger of LLMs at every opportunity. Enough of the general population considers things like Twitter bots to have scaled to a dangerous point of polluting the information ecosystem. The scalability and flexibility of LLMs in germinating chaos are orders of magnitude beyond anything we've yet seen.

An example I use for people is the Bernstein Bears effect. Imagine you wake up tomorrow and all your digital devices have no reference to 9/11. You ask Bing and Google and they insist you must be wrong, that nothing like that ever happened. You talk to other people who remember it clearly, but it seems you've lost control of reality. Now imagine that type of gaslighting about "nothing happening" while the lights go out all over the world, and you have some sense of the scale at which the larger of these systems are operating.

eru|2 years ago

> Enough of the general population considers things like Twitter bots to have scaled to a dangerous point of polluting the information ecosystem.

It was always a good idea to ignore the cesspool that is Twitter, whether we are talking about bots or lynch mobs.

Btw, I think you mean Berenstain Bears.

erosenbe0|2 years ago

Would universal adoption of digital signatures issued by trusted authorities alleviate this problem to any degree?

For example, my phone would automatically sign this post with my signature. If I programmed a bot, I could sign as myself or as a bot, but not as another registered human. So you'd know the post came from me or a bot I've authorized. Theft or fraud with digital signatures would be criminalized, if it isn't already.
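
A minimal sketch of what per-post signing could look like, assuming Ed25519 keypairs and Python's `cryptography` package; the bot-delegation step and helper names are illustrative, not an existing standard:

    # Sketch: per-post signing with Ed25519 (assumes the `cryptography` package).
    # The human's public key would be attested by a trusted authority; the
    # bot-delegation scheme below is a hypothetical illustration.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    human_key = Ed25519PrivateKey.generate()   # lives on the human's phone
    human_pub = human_key.public_key()         # registered with the authority

    # The human mints a separate key for a bot they run, and signs the bot's
    # public key to record the delegation ("this bot posts on my behalf").
    bot_key = Ed25519PrivateKey.generate()
    bot_pub = bot_key.public_key()
    delegation = human_key.sign(
        bot_pub.public_bytes(Encoding.Raw, PublicFormat.Raw)
    )

    def sign_post(key, text):
        # Sign the post body so readers can tie it to a registered identity.
        return key.sign(text.encode())

    def verify_post(pub, text, sig):
        # Check a post against a claimed public key.
        try:
            pub.verify(sig, text.encode())
            return True
        except InvalidSignature:
            return False

    post = "written by me, or by a bot I authorized"
    sig = sign_post(human_key, post)
    assert verify_post(human_pub, post, sig)       # genuine post verifies
    assert not verify_post(bot_pub, post, sig)     # wrong key fails

Under this scheme, impersonating another registered human would require stealing their private key, which is exactly the theft/fraud case criminal law would cover.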

TeMPOraL|2 years ago

The difference you're talking about is only in the fact that humans don't scale like computer code. If humans were to scale like computer code, you'd still find the "vulnerability" unfixable.

danShumway|2 years ago

But that difference is a big part of why this matters. That this might be unfixable is not a strong argument for moving forward anyway; if anything, it should prompt us to take a step back and consider whether general-intelligence systems are well suited to scalable tasks in the first place.

There are ways to build AIs that don't have these problems, precisely because their intelligence is limited to a specific task and thus they don't have a bunch of additional attack vectors literally baked into them.

But the attitude I'm seeing from a lot of companies online is "this might be impossible to fix, so you can't expect us to hold off releasing just because it's vulnerable." I don't understand that. If this is genuinely impossible to fix, that has implications.

Because the whole point of AI is to make things that scale. It matters that its security be better than that of the non-scalable system it replaces. If it can't be better, then we need to take a step back and ask whether LLMs are the right approach.

ethanbond|2 years ago

Right, but humans don’t scale that way, so the threat is completely different.

This is like saying a nuclear weapon accident is not that scary because a microwave can also malfunction and catch fire. Sure it can, but the fact that it's not a nuke is highly relevant.

aidenn0|2 years ago

I think what GP (and I) are talking about is that social engineering is limited in scope because humans don't scale like computer code. A theoretical AGI, like today's LLMs, does scale like computer code.

To use an admittedly extreme example: the difference between drawing some fake lines on the road to crash one or two cars, and having every self-driving car on the road swerve simultaneously, is not just a quantitative one.