JacobAsmuth | 2 months ago

So in general you think that giving frontier AI models stronger black-hat offensive capabilities will be good for cybersecurity?


Uehreka | 2 months ago

I’m not GP, but I’d argue that “giving frontier AI models stronger black-hat offensive capabilities” is something that’s going to happen whether we want it or not, since we don’t control who can train a model. So the more productive way to reason is to accept that it’s going to happen and then figure out the best thing to do.

whimsicalism | 2 months ago

I think this is a popular rhetorical turn nowadays, but I actually don’t agree at all: relatively few actors have the ability to train top models.

abigail95 | 2 months ago

Does it shift the playing field towards bad actors in a way that other tools don't?

ACCount37 | 2 months ago

Yes. The advantage is always on the attacker's side, and these models can autonomously find and exploit unknown vulns in a way non-AI tools can't.

Sure, you can also use the same tools to find attack surfaces preemptively, but let's be honest, most wouldn't.

bilbo0s | 2 months ago

Frontier models are good at offensive work.

Scary good.

But the good ones are not open, and it's not even a matter of money. At OpenAI, for instance, I know access is invite-only. Pretty sure there's vetting and tracking going on behind those invites.

artursapek | 2 months ago

Of course. Bugs only get patched if they’re found.

tptacek | 2 months ago

People in North America and Western Europe have an extremely blinkered and parochial view of how widely and effectively offensive capabilities are disseminated.