txrx0000|1 day ago
It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are Machiavellian out of necessity). This is humanity's best chance at survival.
magicalist|1 day ago
What is the why?
You never actually say that part, unless it's "it will eventually be taken from you by force," which doesn't seem applicable to this situation or this site?
bottlepalm|1 day ago
Nukes are actually a great example of something that is also gated by resources. Just having the knowledge and the plans isn't enough.
fooker|1 day ago
It costs a few hundred thousand per server: a huge expense if you want it at home, but a rounding error for most organizations.
tgma|1 day ago
Was it successful? The jury is still out.
m4rtink|1 day ago
OK, maybe someone will build a bioweapon that does that for real. :P
txrx0000|1 day ago
Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.
oceanplexian|1 day ago
We live in a free society. AI should be democratized like any other technology.
avaer|1 day ago
We shouldn't expect these people to consider how the logic breaks down one step ahead when it never made sense in the first place.
ted_dunning|1 day ago
Funding the majority of HIV prevention in Africa.
The list is long, but you knew that.
no_wizard|1 day ago
If they actually wanted to do something, they wouldn't have sat back and funded Republican political campaigns because they were pissed about the head of the FTC under Biden.
But they didn't. They gave millions to this guy, and now they're feigning ignorance or a change of heart or whatever this is.
It's meaningless. Utterly meaningless.
You get what you pay for, I suppose.
inkysigma|1 day ago
https://www.opensecrets.org/orgs/alphabet-inc/recipients?id=...
The corporation gave millions _after_ Trump had already won. If that's your criticism, it doesn't apply to the people signing.
xpe|1 day ago
Some form of US AI lab nationalization is possible, but it hasn't happened yet. We'll see. Nationalization can take different forms, not to mention various arrangements well short of it.
I interpret the comment above as a normative claim (what should happen). It implies that the threat of nationalization forces the AI labs' decision. No. I'll grant that it influences the decision, in the sense that the labs have to account for it.