top | item 47156145
unyttigfjelltol | 4 days ago

Techno futurist:

1. Builds tool extremely capable of mass surveillance and running autonomous warfighting capabilities.

2. Expresses shock — shock — when the Department of War insists on using the tool for mass surveillance and autonomous warfighting systems.


Thrymr | 4 days ago

I don't doubt that Claude is capable of mass surveillance, but surely it is not too much of a stretch to say it may not be suitable for automated killbots?

ozlikethewizard | 4 days ago

I assume the techs at the Pentagon know that, and it'd be used more for intelligence (equally worrying, because if there's one thing GPTs aren't, it's intelligent).

groby_b | 4 days ago

IDK, depends on how much you care about outcomes.

I don't think Drunk Pete does, very much.

diydsp | 4 days ago

1. The article points out that Claude has resisted being trained for that. AI in general could do it, but Claude cannot.

supern0va | 3 days ago

I think the biggest problem is whether Claude could be tricked into doing so. I could see how mass surveillance could be repackaged as "summarize my conversations", or autonomous killbots could be framed as playing a video game.

spidersenses | 4 days ago

Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus

EA-3167 | 4 days ago

Step 1.5 is also the one being ignored by 95% of comments here: the leverage the Pentagon is using is the lucrative contract Anthropic signed with them. The only threat here is Anthropic sucking up less money from the DoD.

unsnap_biceps | 4 days ago

The article lists three things, two of which are concerning beyond just losing some money. Granted, I have no idea how realistic the latter two are.

    These consequences are generally understood to be some mix of:
    
    - canceling the contract
    
    - using the Defense Production Act, a law which lets the Pentagon force companies to do things, to force Anthropic to agree.
    
    - the nuclear option, designating Anthropic a “supply chain risk”. This would ban US companies that use Anthropic products from doing business with the military. Since many companies do some business with the government, this would lock Anthropic out of large parts of the corporate world and be potentially fatal to their business. The “supply chain risk” designation has previously only been used for foreign companies like Huawei that we think are using their connections to spy on or implant malware in American infrastructure. Using it as a bargaining chip to threaten a domestic company in contract negotiations is unprecedented.

Balinares | 4 days ago

It's been amazing watching them cosplay ethics while twisting themselves into knots attempting to justify selling their service to Satan.

Who could have predicted that Satan would turn around and screw them, outside of everyone ever? Maybe they should have asked a person instead of Claude.

hoopleheaded | 4 days ago

Exactly: step 2 should be "sign a $200MM contract with a party obviously and extremely interested in mass surveillance and autonomous warfighting capabilities."

Then comes the shock.