snowwrestler | 2 days ago
These court cases would produce bad outcomes either way. If the court finds for Anthropic, future DoD leadership will find itself constrained, or at least chilled. If the court finds for the government, an expansive, permissive view of the DPA might encourage future administrations to compel tech companies to make AIs break the law in other ways — for example, by suppressing certain political points of view in output.
National defense is strongest when the military is extremely powerful but carefully judicious in the application of that power. That gives us the highest "top end" of performance capability. If military leadership insists on acting recklessly, then eventually guardrails are installed, with the result being a diminished ability to respond effectively in low-probability, high-risk moments. One of many nuances and paradoxes the current political leadership does not seem to understand.
magicalist | 2 days ago
Seems like a good outcome? The government should not be able to arbitrarily decide to make private citizens do things they aren't willing to do, whether the government thinks the action is legal or not, and it's especially egregious when the government knew about those limits ahead of time, spelled out in a fucking contract.
halJordan | 2 days ago
The bad part is the failure of the citizenry to elect moral and ethical politicians.
wpwd | 19 hours ago