
sigmar | 3 days ago

>Anthropic has repeatedly asked defense officials to agree to guardrails that would restrict its AI model... also wants to ensure Claude is not used by the Pentagon for final targeting decisions in military operations without any human involvement, one source familiar with the negotiations said. Claude is not immune from hallucinations and not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure without human judgment, the person said.

They explicitly allow it to be used in military operations, just not for killing people without a human in the loop.

source: https://www.cbsnews.com/news/pentagon-anthropic-offer-ai-unr...
