Anthropic specifically are the people who talk about "model alignment" and "harmful outputs" the most, and whose models are by far the most heavily censored. This is all done on the basis that AI has a great potential to do harm.
One would think that this kind of outlook should logically lead to keeping this tech away from applications in which it would be literally making life or death decisions (see also: Israel's use of AI to compile target lists and to justify targeting civilian objects).
Why do you think humans would make better life or death decisions? Have we never had innocent civilians killed overseas by the US military as a result of human error?
I hear where you are coming from, but if an AI company is going to be in this field, wouldn't you want it to be the company with as many protections in place as possible to avoid misuse?
We aren't going to stop this march forward; no matter how unpopular it is, it will happen. So, which AI company would you prefer be involved with the DoD?
Do you really not know? It's a difficult question to answer in an HN thread, because on one hand, it requires a review of the history of empire and war profiteering. But on the other hand, it's just obvious to the point of being difficult to even articulate.
If you live in the US, the taxes you pay directly fund the DoD. So if you sponsor their activities, why can't Anthropic do business with them? Which other company would you rather get their (your) money?
Genuine question, and with due regard to some of the valid concerns you have: what would your opinion on this have been in 1940-1945? What about the Cold War?
Not everyone believes defense contracts are inherently unethical, or at least that they are any more unethical than all of the other customers GenAI firms are already serving. Given that a (if not the) main business proposition for GenAI is massive reductions in employment costs (which means unemployment and massive economic disruption), this is not a business sector built on any ethical high ground.