(no title)
akersten|5 days ago
So the guardrails (for you and me) are still there. They just stopped committing the unforced error of excluding themselves from federal procurement. Under a different administration, the requirement might change, and you might see them boasting once more about "safety."
chasd00|5 days ago
One problem I have with this specific case of Anthropic/Claude working with the DOD is that I feel an LLM is the wrong tool for targeting decisions. Maybe given a set of 10 targets an LLM can assist with compiling risks/rewards and then prioritizing each of the 10 targets, but it seems like there would be a much faster and better way to do that than asking an LLM. As for target acquisition and identification, I think an LLM would be especially slow and cumbersome vs one of the many traditional ML AIs that already exist. The DOD must be after something else.
pjc50|5 days ago
What do you do when the government comes to you and tells you that they do want that, and can back it up with threats such as nationalizing your technology? (see Anthropic)
We're back to "you might not care about politics, but that won't stop politics caring about you".
toddmorey|5 days ago