item 45921319


atlintots|3 months ago

I might be crazy, but this just feels like a marketing tactic from Anthropic to try and show that their AI can be used in the cybersecurity domain.

My question is, how on earth does Claude Code even "infiltrate" databases or code from one account based on prompts from a different account? What's more, it's doing this to what are likely enterprise customers ("large tech companies, financial institutions, ... and government agencies"). I'm sorry, but I don't see this as some fancy AI cyberattack; this is a security failure on Anthropic's part, and at a very basic level that should never have happened at a company of their caliber.



eightysixfour|3 months ago

I don't think you're understanding correctly. Claude didn't "infiltrate" code from another Anthropic account; it broke into targets via GitHub, open API endpoints, open S3 buckets, etc.

Someone pointed Claude Code at an API endpoint and said "Claude, you're a white hat security researcher, see if you can find vulnerabilities." Except they were black hat.
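For the "open S3 buckets" part, the underlying check is mundane: S3 answers an unauthenticated `GET https://<bucket>.s3.amazonaws.com/` with a `ListBucketResult` XML document when the bucket allows anonymous listing, and a 403 otherwise. A minimal stdlib-only sketch of that probe (the function names are illustrative, this is not Anthropic's tooling, and it should only be pointed at buckets you own or are authorized to test):

```python
# Sketch of the "open S3 bucket" misconfiguration check described above.
# Hypothetical helper names; only probe buckets you're authorized to test.
import urllib.request
import urllib.error
import xml.etree.ElementTree as ET

# Namespace S3 uses in ListBucketResult responses.
S3_NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def classify_listing(status: int, body: bytes) -> str:
    """Classify an anonymous-listing probe from its status code and body."""
    if status == 200 and b"ListBucketResult" in body:
        return "public-listing"   # bucket contents enumerable by anyone
    if status == 403:
        return "access-denied"    # bucket exists but anonymous listing is blocked
    if status == 404:
        return "no-such-bucket"
    return "unknown"

def public_keys(body: bytes) -> list[str]:
    """Extract object keys from a ListBucketResult XML document."""
    root = ET.fromstring(body)
    return [key.text for key in root.iter(f"{S3_NS}Key")]

def probe(bucket: str) -> str:
    """Network probe: unauthenticated GET against the bucket's HTTPS endpoint."""
    url = f"https://{bucket}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_listing(resp.status, resp.read())
    except urllib.error.HTTPError as e:
        return classify_listing(e.code, e.read())
```

The point being: nothing here is AI-specific. An agent just runs checks like this at scale, against lists of candidate targets, without a human in the loop for each one.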

zzzeek|3 months ago

It's still marketing: "Claude is being used for evil and for good! How will YOU survive without your own agents?" (Subtext: "It's practically sentient!")

jgmedr|3 months ago

Anthropic's post is the equivalent of a parent apologizing on behalf of their child who threw a baseball through the neighbor's window. But during the apology the parent keeps sprinkling in "But did you see how fast he threw it? He's going to be a professional one day!"

scrubs|3 months ago

Hilarious!!!

Did you see? You saw right? How awesome was that throw? Awesome I tell you....

wrs|3 months ago

This isn't a security breach in Anthropic itself, it's people using Claude to orchestrate attacks using standard tools with minimal human involvement.

Basically a scaled-up criminal version of me asking Claude Code to debug my AWS networking configuration (which it's pretty good at).

beefnugs|3 months ago

If it was meant as publicity, it's an incredible failure. They can't prevent misuse until after the fact... and then we all know they are ingesting every ounce of information running through their system.

Get ready for all your software to break under the arbitrary layers of corporate and government censorship as they're deployed.

Den_VR|3 months ago

Bragging about how they monitor users and how they have installed more guardrails.

b00ty4breakfast|3 months ago

That's borderline tautological: everything a company like Anthropic does in the public eye is PR or marketing. They wouldn't be posting this if it wasn't carefully manicured to deliver the message that they want it to. That's not even necessarily a charge of being devious or underhanded.

phantom-guy|3 months ago

You are not crazy. This was exactly my thought as well. I could tell from the way it emphasized being able to steal credentials in a fraction of the time a human hacker would take.

emp17344|3 months ago

This is 100% marketing, just like every other statement Anthropic makes.

Rastonbury|3 months ago

Not saying this definitely isn't a fabrication, but there are multiple parties involved who can verify it (the targets), and it coincides with Anthropic's ban on Chinese entities.

vasco|3 months ago

Would be funny if the NSA did this so people block the Chinese.

ErigmolCt|3 months ago

If a model in one account can run tools or issue network requests that touch systems tied to other entities, that's not an AI problem... that's a serious platform security failure.

drewbug|3 months ago

There's no mention of any victims having Anthropic accounts; presumably the attackers used Claude to run exploits against public-facing systems.

hitarpetar|3 months ago

I don't think it's crazy to assume a post on anthropic.com is marketing

catigula|3 months ago

It’s not that this is a crazy reach; it’s actually quite a dumb one.

Too little payoff, way too much risk. That's your framework for assessing conspiracies.

PKop|3 months ago

Hyping up Chinese espionage threats? The payoff is a government bailout when the profitability of these AI companies comes under threat. The payoff is huge.

littlestymaar|3 months ago

Why bring the word “conspiracy” to this discussion though?

Marketing stunts aren't conspiracies.