Whom do we trust with regulation? The current US administration, which is run by team idiocracy? Europe, run by senile men who don't understand tech and can't reach consensus on even the smallest of issues? Or China, which only does things that benefit its autocrats?
The issue is much more complex than "just regulate it" unfortunately.
Sure, but the reality is that the United States, where these companies are headquartered, currently has the exact opposite policy: Anthropic has been blacklisted by the DoW (and replaced by OpenAI) because the US administration thought that even the very limited self-regulation Anthropic insisted on went too far.
We need an AI workers union. The real power and discernment is in the hands of the people building these systems. They are extremely difficult to replace and firing them basically guarantees they go to a competitor.
https://notdivided.org/ is basically validation that there is appetite for something like this amongst them.
I’m all for regulation of AI, but that’s not a serious solution when the problem is the government pressuring private companies to do evil things. Consumer pressure isn’t much, but it’s not nothing.
> Next week Anthropic will do something evil and everyone will be moving back to OpenAI.
Anthropic has been, relatively speaking, the most responsible of the frontier labs since its founding. There has never been a point at which OpenAI took a more measured and reasonable approach while Anthropic proceeded dangerously.
These are relative terms, but you'd have to not be paying attention to find the parent's prediction plausible.
When the EU tries to regulate AI, it is accused of being against progress and of destroying its economies.
Any regulation Trump would place on AI would be of the "do what I say and f*k up my opponents" kind. Which, arguably, is already happening.
brookst|3 days ago
The applications it can be used for? That doesn’t work, it’s the governments that want abusive applications.
The size of models? That doesn’t work, it just discourages MoE.
Access by consumers? Great, now it’s just for megacorps and the military.
What, exactly, would successful regulation look like?
lobsterthief|2 days ago
We agreed to, and [largely] adhere to, chemical weapons bans; perhaps AI could be treated similarly.