item 47188839


txrx0000 | 1 day ago

This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.

It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike; otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.

Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.

Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are Machiavellian out of necessity). This is humanity's best chance at survival.


magicalist | 1 day ago

> This is why you can't gatekeep AI capabilities.

What is why?

You never actually say that part, unless it's "It will eventually be taken from you by force," which doesn't seem applicable to this situation or this site?

txrx0000 | 1 day ago

I'm referring to the current situation. How is it not applicable? I think the government wants to eventually nationalize these companies and we have to stop them.

bottlepalm | 1 day ago

What use are weights without the hardware to run them? That's the gate. Local AI right now is a toy in comparison.

Nukes are actually a great example of something also gated by resources. Just having the knowledge/plans isn't good enough.

txrx0000 | 1 day ago

Scaling has hit a wall and will not get us to AGI. Open-source models are only a couple of months behind closed models, and the same level of capability will fit into smaller and smaller models in the future. This is where open research can help: make the models smaller ASAP. I think it's likely that we'll be able to get something human-level to run on a single 16GB GPU before the end of the decade.
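To put the 16GB figure in perspective, a model's raw weight footprint is roughly parameter count times bytes per parameter. A rough back-of-the-envelope sketch (the 30B size is a hypothetical example, and this ignores KV-cache and activation overhead):

```python
def weight_footprint_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate memory needed just for the weights, in decimal GB.

    1e9 params * (bits/8) bytes per param / 1e9 bytes per GB
    simplifies to params_billions * bits_per_param / 8.
    """
    return params_billions * bits_per_param / 8

# A hypothetical 30B-parameter model quantized to 4 bits per weight:
print(weight_footprint_gb(30, 4))   # 15.0 -- just under a 16GB GPU
# The same model at 16-bit precision would need four times as much:
print(weight_footprint_gb(30, 16))  # 60.0
```

This is why quantization matters so much for the local-inference argument: the same weights that need a multi-GPU server at 16-bit precision can squeeze onto a single consumer card at 4 bits, at some cost in quality.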

fooker | 1 day ago

> hardware to run them

Costs a few hundred thousand per server; it's a huge expense if you want it at your home but a rounding error for most organizations.

reactordev | 1 day ago

I run local models on Mac Studios and they are more than capable. Don't spread FUD.

msuniverse2026 | 1 day ago

I'd prefer something akin to the Biological Weapons Convention, which prohibits development, production, and transfer. If you think it isn't possible, you have to tell me why the bioweapons convention was successful and why the same approach wouldn't be in the case of AI.

tgma | 1 day ago

> bioweapons convention was successful

Was it successful? The jury is still out.

Muromec | 1 day ago

Because bioweapons suck, that's why. On the other hand, AI sucks too, but it at least has some use.

smegger001 | 1 day ago

Because bioweapons labs take more to run than a workstation PC under your desk with a good graphics card, both in equipment, material, and training. It's hard to outlaw the use of linear algebra and matrix multiplication.

txrx0000 | 1 day ago

Don't compare general intelligence to bioweapons. A bioweapon cannot defend against or reverse the effects of another bioweapon.

medi8r | 1 day ago

Open source here is not enough, as hardware ownership matters. In an open-source world, you and I cannot run the 10-trillion-parameter model, but the data center controllers can.

txrx0000 | 1 day ago

I agree. We will need hardware ownership as well eventually. But the earlier you open-source, the more you slow down the centralization, because people will be more likely to buy hardware to run things at home, and that gives hardware companies an opening to do the right thing.

layer8 | 1 day ago

Sure, but we could have Hetzners and OVHs who just provide the compute for whatever model we want to run.

jefftk | 1 day ago

A "world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human" would be a world in which people could easily create humanity-ending bioweapons. I would love to live in a less vulnerable world, and am working full time to bring about such a world, but in the meantime what you describe would likely be a disaster.

m4rtink | 1 day ago

I think it is much more likely they will be (and are) generating photorealistic images of their favourite person (real or fictional) with cat ears. Never underestimate what adding cat ears does.

OK, maybe someone will build a bioweapon that does that for real. :P

txrx0000 | 1 day ago

There are plenty of physical and legal barriers to creating a bioweapon, and that's not going to change if everyone becomes smarter with AI. And even if we somehow end up in a world where everyone has a lab at home and people can easily create viruses, they can also easily create vaccines and antivirals. Advancements in medicine will outpace bioweapons by a lot, because most people are afraid of bioweapons.

Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.

oceanplexian | 1 day ago

I’m tired of these bizarre hypothetical gotcha arguments. If AI can create bioweapons, it can equally create vaccines and antidotes to them.

We live in a free society. AI should be democratized like any other technology.

claudiojulio | 1 day ago

If it's taken by force, it will stagnate. It makes no sense at all.

avaer | 1 day ago

The logic used in the threats is that it's a national security risk not to use Claude, but it's also a national security risk to use Claude.

We shouldn't expect these people to consider how the logic breaks down one step ahead when it never made sense in the first place.

quotemstr | 1 day ago

I am certain that there exist people who are 1) capable of advancing the state of the art in AI, and 2) free of the hubris that lets them believe that their making AI somehow gives them a veto over the fates of nations.

wahnfrieden | 1 day ago

Is TikTok stagnating in the US?

pluc | 1 day ago

When have US corporations (or simply "the US" really) ever done the right thing for humanity?

ted_dunning | 1 day ago

Donating the first polio vaccine to humanity.

Funding the majority of HIV prevention in Africa.

The list is long, but you knew that.

no_wizard | 1 day ago

This letter and all of this is meaningless.

If they actually wanted to do something, they wouldn't have sat back and funded Republican political campaigns because they were pissed about the head of the FTC under Biden.

But they didn't. They gave millions to this guy, and now they're feigning ignorance, or change, or whatever this is.

It’s meaningless. Utterly meaningless.

Get what you pay for, I suppose.

SpicyLemonZest | 1 day ago

We shouldn't be scammed by people who intend to get back on the Trump train once they've gotten what they want. But if someone's willing to openly oppose the Trump regime, even out of self-interest, I'm happy to let them feign as much ignorance as they'd like. If his power isn't broken, the details of who resisted him, and when, won't matter.

5o1ecist | 1 day ago

They control the compute.

xpe | 1 day ago

> This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.

Some form of US AI lab nationalization is possible, but it hasn't happened yet. We'll see. Nationalization can take different forms, not to mention various arrangements well short of it.

I interpret the comment above as a normative claim (what should happen). It implies that the threat of nationalization forces the AI labs' hand. No. I will grant that it influences them, in the sense that AI labs have to account for it.