Remember when the world freaked out over encryption, thinking every coded message was a digital skeleton key to anarchy? Yeah, the 90s were wild with the whole PGP (Pretty Good Privacy) encryption fight. The government basically treated encryption like it was some kind of wizardry that only "good guys" should have. Fast forward to today, and it's like we're stuck on repeat with open model weights.
Just like code was the battleground back then, open model weights are the new frontier. Think about it—code is just a bunch of instructions, right? Well, model weights are pretty much the same; they're the brains behind AI, telling it how to think and learn. Saying "nah, you can't share those" is like trying to put a genie back in its bottle after it's shown you it can grant wishes.
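To make that concrete, here is a minimal sketch (PyTorch, with a toy two-layer network standing in for a real model) showing that "weights" are nothing more than arrays of numbers that can be saved and shared like any other file:

```python
import torch
import torch.nn as nn

# A toy stand-in for a model: its "weights" are just tensors of floats.
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Inspect the weights: named arrays of numbers, nothing more.
for name, tensor in net.state_dict().items():
    print(name, tuple(tensor.shape))

# "Releasing open weights" means publishing exactly this kind of file.
torch.save(net.state_dict(), "weights.pt")
```

In that sense the analogy to code holds: both are bytes on disk that encode behaviour.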
The whole deal with PGP was about privacy, sending messages without worrying about prying eyes. Fast forward, and model weights are about sharing knowledge, making AI smarter and more accessible. Blocking that flow of information? It's like telling scientists they can't share their research because someone, somewhere, might do something bad with it.
Code lets us communicate with machines; model weights let machines learn from us. Both are about building and sharing knowledge. When the government tried to control encryption, it wasn't just about keeping secrets; it was about who gets to have a voice and who gets to listen. With open model weights, we're talking about who gets to learn and who gets to teach.
Banning or restricting access to model weights feels eerily similar to those encryption wars. It's a move that says, "We're not sure we trust you with this power." But just like with code, the answer isn't locking it away. It's about education, responsible use, and embracing the potential for good.
Innovation thrives on openness. Whether it's the lines of code that secure our digital lives or the model weights that could revolutionize AI, putting up walls only slows us down. We've been down this road before. Let's not make the same mistake of thinking we can control innovation by restricting access.
The fight against encryption continues to this day, and while HTTPS is now ubiquitous, large-scale CDNs make it somewhat of a moot point (they terminate TLS, so traffic sits decrypted on someone else's servers), and email is still largely plaintext.
I share your concerns and think you're broadly correct. I think it's worth adding some nuance though.
When you drill into specifics there are almost always exceptions. For instance, in your example about sharing research, there are certainly some types of research we shouldn't automatically and immediately make publicly available like biological superviruses or software vulnerabilities.
I think the same can be said about AI. We should aim to be as open as we can but I'd be hesitant about being an open source absolutist.
Weights are a derivative work and, as such, should follow the licensing of the original works. If those works are public domain or were appropriately licensed, distributing weights openly should be protected as free speech.
And let’s not anthropomorphize ML. Models don’t “think” or “learn”. The only party with free will and agency is whoever makes or operates them; trying to paint an unthinking tool as a human is just a means of waiving responsibility for them.
Off-topic but this user seems to be using ChatGPT or something similar for almost every single comment. Does Hacker News have a stance on this or is the thinking that it is allowed as long as the content is good?
I don't think this analogy is wholly applicable, simply given the scale and potential blast radius of certain classes of models. A more apt analogy would be nuclear technology. There are the Atomic Gardeners on the one hand, who believe only in the good and see the promise of the technology and of people's intent, and then there is the bitter struggle for power and threat which hinges on it.
In most cases, ML models have the capability to revolutionise, or at least augment/optimise, problems in predictable ways. In extreme cases, deepfake technology and the like can erode the tenuous levels of trust which hold societies together. We have seen what happens when disinformation, mistrust, and even basic levels of technology meet: look at students being lynched in India, Pakistan, and elsewhere due to WhatsApp group messages claiming blasphemy; or the practice of SWATting; or the insinuation of CSE gangs leading to a pizza parlour in the US being stormed by someone with a machine gun.
The stakes here are a lot larger than pure technologists, today's Atomic Gardeners, may perceive.
In that context, legislation is trying to provide a counterweight to the pace of change to allow for some – any – breathing room, and particularly to prevent an increasingly hostile cast of nation states from weaponising that technology, even in small ways. For example, OSINT of North Korea's operations shows that the basic parts needed for making baby powder are also capable of being used for weapons development.
AI is about destroying trust (in the short term).
Give every script kiddie, bored teenager, Mexican cartel, and scammer an AI that can mimic anyone's voice and likeness, and the world will get a lot messier.
I don't think they're the same. I wish we could put the genie back in the bottle. I think AI will make humanity less special.
I'm not convinced society will be better with AI. The benefits must push down cost of living for the masses, improve quality of life for the masses, all without destroying society with disinformation and shattering job loss.
I think LLMs have a considerably higher probability of making the world worse overall than cryptography does (the sheer volume of information bullshit they can generate for pennies is going to transform our society, and I doubt it will be for the better). Still, I don't see the point of banning open-weight models and LLMs that don't have guardrails, and I'm not sure you can realistically construct laws that would do it accurately. The genie is out of the bottle, Pandora's box is opened, etc., etc. And locking down models with guardrails is only something that corporations have to do in order to avoid having a public racist-chatbot problem and the associated headlines.
Life, all biological life with us as a kind of pinnacle, is about to go through radical change.
There is no risk-free path. It isn’t guaranteed that a single human will be alive in 100 years, whether because we failed, or even because, technologically, we succeeded.
But a degree of openness is necessary for our best ideas, our most good-faith collaborations, to have a chance.
It is more chaotic to trust each other, en masse. But I also think it is our best bet.
The dice must be rolled. Best we throw them bold.
> Just like code was the battleground back then, open model weights are the new frontier. Think about it—code is just a bunch of instructions, right? Well, model weights are pretty much the same; they're the brains behind AI, telling it how to think and learn. Saying "nah, you can't share those" is like trying to put a genie back in its bottle after it's shown you it can grant wishes.
I think telling a genie "I wish for no more wishes" is a common enough trope.
I'd agree that making weights available is basically irreversible; however making it illegal to make new sets of weights available is probably fairly achievable… at present.
-
Some of the issues with "winning" the battle for encryption include:
1) We also need it to defend normal people from attackers
2) It's simple enough to print onto a T-shirt
3) The developers recognised the value and wanted to share this
The differences with AI include:
1) The most capable models don't fit on most personal devices at present, let alone T-shirts (see the back-of-envelope sketch after this list)
2) 95% of the advantages can be had from centralised systems without needing to distribute the models directly to everyone
3) A huge number of developers have signed an open letter which is basically screaming "please regulate us! We don't want to be in an arms race with each other to make this more capable! We don't know what we're doing or what risks this has!"
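On point 1, a hedged back-of-envelope sketch of why the most capable models don't fit on personal devices (parameter counts and precisions here are illustrative assumptions, not measurements of any specific model):

```python
# Memory needed just to hold model weights, before any activations or KV cache.
illustrative_models = {"7B": 7e9, "70B": 70e9, "400B": 400e9}
precisions = {"fp16": 2.0, "int4": 0.5}  # bytes per parameter

for name, params in illustrative_models.items():
    for prec, bytes_per_param in precisions.items():
        gb = params * bytes_per_param / 1e9
        print(f"{name:>4} @ {prec}: ~{gb:,.0f} GB")

# A typical phone or laptop has on the order of 8-16 GB of RAM, so only the
# smallest, most heavily quantised models are practical on personal devices.
```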
Wow, what a horrible idea. Sounds like a monopoly in the making. I would be really interested to hear what "open" AI has commented about this - I guess they are lobbying for this with all of their billions.
If the US government really wants to do this correctly, it must also ban any API access to AI models and ban all research related to AI.
How could this even be written as law? Are universities and companies prohibited from publishing their research? Models with how many layers are forbidden from being published? All neural networks?
If that isn't enough, https://en.wikipedia.org/wiki/Illegal_number#Illegal_primes is well into ridiculous but true territory.
So AI companies take all the world's text and knowledge for free, use openly available research, take massive private funding and generate immense economic value using that, and want to make it illegal for anyone else to do the same?
I don't get the 'harm can be done by individuals' argument. Sticks and stones. Every discussion forum on the internet is moderated to some degree, and every human being has the ability to post hurtful or illegal content, yet the system works. Moderation will only get more powerful thanks to AI tools.
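As one hedged illustration of how low the barrier already is, an off-the-shelf toxicity classifier can be wired into a moderation pipeline in a few lines (unitary/toxic-bert is one openly available Hugging Face model; the model choice and threshold below are assumptions, not a recommendation):

```python
from transformers import pipeline

# Off-the-shelf toxicity classifier; swap in whatever model fits your forum.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

comments = ["Thanks, that was genuinely helpful!", "You're an idiot."]
for comment in comments:
    result = classifier(comment)[0]  # e.g. {'label': 'toxic', 'score': 0.98}
    flagged = result["label"] == "toxic" and result["score"] > 0.8
    print(f"{comment!r} -> flagged for review: {flagged}")
```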
There are almost 200 countries in the world. Even if the US and the EU and a bunch more ban open weight models, I doubt they'll succeed in convincing every country to do so. And whichever countries decide not to follow the ban, could thereby give their own AI industries a big boost. As the world becomes ever more globalised, the potential effectiveness of these kinds of policies declines.
Sure, they could try to negotiate some kind of UN convention for a coordinated global ban. But, given how fractured global diplomacy has become, I doubt the odds of something like that succeeding are particularly high.
Most of the researchers are specifically in a handful of big labs, most of which in turn are in the USA. As one who has done so, trust me when I say that relocation is harder than it seems on paper.
Also note that both California and the EU are economically dominant enough that they tend to influence regulation outside their own borders: https://en.wikipedia.org/wiki/Brussels_effect
Further note how many people signed the Pause letter: https://futureoflife.org/open-letter/pause-giant-ai-experime...
In addition, given how much training data current models need, there would be a huge impact just by a handful of governments siding with all the copyright holders suing OpenAI, Midjourney, Stability AI, etc.
And all that is without needing Yudkowsky's point that a ban isn't serious unless you're willing to escalate to performing airstrikes on data centres.
Is there a reason why this executive order / RFC received no coverage on HN (or anywhere else I'm aware of) until after the deadline had passed?
The executive order (and associated DoC/NTIA RFC) was long, and dense with both legal references and political platitudes. Not great reading (though the EO got a good amount of discussion here). It's unfortunate, but less than surprising, that it didn't make actual news outlets.
It seems complicated now, and many would like to see how things play out a little more before committing the time to deciding things. (I think that's a bad idea; regardless of the revolutionary tech, it seems wise to begin governmental thought early.)
It's kinda been buried in an otherwise heavy news cycle since the end of October of last year, AI and otherwise. I assume that wasn't intentional, but if you wanted to hide something this big, it would be hard to pick a better time.
And though I hate to say it, I suspect: apathy. Both traditional from the bottom ("too far off, unfixable, can't do anything about it"), and from the top ("once we get too big to fuck with, we just won't give a damn about any changes they try to make anyway").
OpenAI's comment to the NTIA on open model weights - https://news.ycombinator.com/item?id=39900197 - April 2024 (41 comments)
A careless or reckless president might overly depend on advisers for drafting executive orders, sidelining personal oversight. This delegation risks orders that may not align with the president's intent or could lead to adverse outcomes due to unchecked biases or agendas among advisers. Without the president's close review, policies might lack comprehensive vetting, inviting legal issues, public disapproval, or impractical implementations. Excessive reliance on advisers could also push more extreme policies under reduced scrutiny. Effective governance requires the president's informed engagement to ensure executive orders reflect their vision and serve the nation's best interest. Instead we have someone who offers his "concerns" to the public while shipping billions of dollars' worth of explosives to the enemies of humanity. Don't expect any concerns or input from the public to make any difference.
Maybe some lawyer here can explain how an administration can block publishing of open weights without violating the First Amendment, which guarantees freedom of expression for everyone.
Government is necessary in order to organize a complex society, but government is like any other organization: made up of people, many of whom are out for their own interests. The most prominent of those interests are power and money.
Whenever government proposes banning a technology, one must ask: who benefits? The LLMs, even in their current state of infancy, are powerful tools. Restricting access to those tools keeps power in the hands of wealthy corporations and the government itself. That's the power aspect.
The money aspect is even simpler: Don't doubt that some of those wealthy corporations are making donations to certain officials, in order to gain support for actions like this. Almost no one leaves the Congress (or almost any parliamentary body in any country) as less than a multi-millionaire. Funny, how that works...
Sure, but you still have to balance your "who benefits" against the harm of the thing they're proposing to ban.
In the case of these models, you can fine-tune the model all you want to be moral and not do harmful things like scam the elderly, carry out disinformation campaigns, or harass people to the point of suicide. But as soon as you release the model weights, you are giving anyone the ability to fine-tune out all of those restrictions, at orders of magnitude less cost than it took to develop the model in the first place.
Regulating AI, especially as it becomes AGI and beyond, is going to be very tricky, and if everyone has the ability to create their own unrestricted, potentially sociopathic intelligences by tweaking the safe models created under careful conditions by big labs, we're in for a lot of trouble. That assumes we put the proper regulations on the big labs, and that they have the ability to make them "safe", which is hard, yes. But as AI turns into AGI and beyond, things are going to go pretty nuts, so it's important to start laying groundwork now.
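To ground the cost claim with a hedged sketch: parameter-efficient fine-tuning methods like LoRA touch only a tiny fraction of a model's weights, which is exactly why re-tuning released weights is cheap relative to training them. A minimal example with Hugging Face's peft library (the model name and hyperparameters here are illustrative assumptions):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Any open-weight causal LM works here; opt-350m is just a small example.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora = LoraConfig(
    r=8,                                  # rank of the low-rank adapters
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # adapters on attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Prints something like: trainable params: ~0.8M || all params: ~331M || ~0.2%
# Training that sliver on a single consumer GPU is enough to shift behaviour.
model.print_trainable_parameters()
```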
Open model weight bans will likely be struck down as First Amendment violations because, at their core, model weights are a form of expression. They embody the ideas, research, and innovations of their creators, similar to how code was deemed a form of speech protected under the First Amendment during the encryption debates of the 1990s. Just as the government's attempts to control encryption software were challenged and largely curtailed due to free speech concerns, any attempts to ban open model weights will face legal challenges arguing that such bans unjustly restrict the free exchange of ideas and information, a cornerstone of First Amendment protections. The precedent set by cases involving code and free speech strongly suggests that similar principles apply to model weights, making such bans vulnerable to being overturned on constitutional grounds.
Beyond horrible. Beyond even dystopian films. Look at the evil that mega corps and governments have shown with regard to privacy and freedoms. They've been eating away at them for years, one bite at a time, never so much as to cause a revolt. Are we to depend upon the good intentions of mega-corp owners and politicians to wield AGI exclusively and for the benefit of society, when that same AGI will render any form of public organized protest impossible?
People never grow tired of the ever-repeating excuses to further reduce our freedoms, or to hide criminality, or just insult the already lacking collective IQ.
Think of the children! We need to prevent abuse! Someone's feelings might be hurt! Someone might get harmed! Seeing real breasts might cause trauma!
Or one that's not so often used, but still really effective:
We're just bulldozing this place, because it's so horrible. Instead we'll turn it into a luxury resort for rich people. People who believe it's because we're destroying any and all evidence are just conspiracy theorists. (In regards to Epstein's Island.)
This tech is risky. It could be dangerous if used without any control.
It should be kept within English-speaking culture and its close allies.
In my opinion, in our society, full of threats as it is, it is not a good idea to make it open source. (I'm not an expert on the topic.)
That could allow adversarial parties to obtain it: parties that otherwise would be unable to do so by themselves.
In my opinion, it should be kept under the control of the most able and responsible organizations, supervised by government, because who else could supervise it? Regulation of this tech is a very important topic that should be tackled by the best organizations, universities, and responsible researchers.
I would prefer a Star Wars type of society, where humans still do human activities.
I remain open to having my mind changed on this matter, but thus far, I’ve not seen a single good argument for restricting development of AI.
I mean... what you're communicating right now is "I have built a strawman in my head which I thoroughly hate, can anyone please come defend it before me"? I guess I'll try anyway.
What's your opinion on the orthogonality thesis, ie intelligence not being fundamentally correlated with altruism? The concept of mesa-optimization, that a process that tries to teach a system to optimize for X might instead make it optimize for thing-that-leads-to-X (eg evolution made us want sex because it leads to reproduction, but now we have a lot of sex with birth control). The concept of instrumental convergence, that any optimization process powerful enough will eventually optimize for "develop agency, survive longer, accumulate more power"?
There's a lot of literature written about AI safety, a lot of it from very skeptical people, but it's not like there's any single insight that can be reliably transmitted to a guy at a table with a "you're all idiots, change my mind" sign.
AI centralises power, devalues human labour and at its limit is fundamentally uncontrollable.
Personally I struggle to understand anyone who believes it's in humanity's best interest to continue developing AI systems without significant limitations on the rate of progress. There are so many ways AI could go wrong that I'd argue it's almost guaranteed something will go wrong if we continue on this trajectory.
> The only difference with AI, is that AI is immensely powerful.
Now, I keep getting stuck on this.
All of these models automate the creation of BS -- that is, stuff that kind of seems like it could be real but isn't.
I have no doubt there is significant economic value in this. But the world was awash in BS before these LLMs so we're really talking about a more cost effective way to saturate the sponge.
Anyway, on the main topic... closing models is an absurd idea, and one that cannot possibly work. I think the people who have billions at stake in these models are panicking, realizing the precarious and temporary nature of their lead in LLMs and are desperately trying to protect it. All that money bought them a technological lead that is evaporating a lot faster than they can figure out how to build a business model on it.
...Nvidia should pump a little bit of their windfall profits into counter lobbying since they have the most to gain/lose from open/closed models.
There will still be "free" as in freedom models available from China and others.
And no doubt a burgeoning resistance focused on making open source models available. It would be a disaster but I can see it making the industry stronger, with more variety and breaking the influence of some of the bigger players.
Practically, if you're concerned about this, learn more about ML/AI, not about how to use super high level frameworks but about how it actually works so when "SHTF" as survivalists say, you'll still be able to use it.
Chinese, Iranians, and Russians should now exclaim: "Poor Americans! Look at what Biden's regime does to them! Look at how their liberties get suppressed! Let's help those people fight for their rights."
Because if this had happened in any non-Western country, the mainstream news in the US and EU would've been of a similar sentiment.
The problem would be that orgs like Meta would stop publishing Llama 3/4/5/etc, which most open source models build upon. Without new foundational models, progress would stall pretty quickly, and procuring thousands of GPUs to train new foundational models would be difficult. In theory, since the US “controls” Nvidia/AMD/TSMC, they could put up roadblocks to even doing open training outside of the US. Maybe a “SETI@Home” style distributed training system could be done on consumer GPUs…
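On the “SETI@Home” idea, the core primitive would be averaging gradients across volunteer GPUs. A minimal sketch of just that step with torch.distributed (a sketch only: stragglers, untrusted workers, and consumer-grade bandwidth are the real unsolved problems):

```python
import torch
import torch.distributed as dist

def average_gradients(model: torch.nn.Module) -> None:
    """All-reduce gradients so every volunteer worker ends up with the mean."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

# Assumes each volunteer process joined a group first, e.g.:
#   dist.init_process_group("gloo", init_method="env://")
# Then each training step becomes:
#   loss.backward(); average_gradients(model); optimizer.step()
```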