> “closed-source” AI applications ... where the system’s software is securely held by its maker and a limited set of vetted partners ... while keeping the underlying software secure ... rapid and uncontrolled release of powerful unsecured ...
This is the worst kind of propaganda, supporting corporate AI companies, who in any case can hardly be trusted to guide AI in a direction to benefit all of society.
And honestly, what is the danger? That AI can spew toxic and misleading content? We certainly don't need AI for that!
Open-source AI may or may not be more 'dangerous' than corporate AI, but it is essential for society that AI is open.
This is possibly the most egregiously and malevolently misleading piece of propaganda about AI that I have seen. Written, presumably, by a person, which in itself makes an effective counterargument against its own claim.
Free and open, unfettered development of AI is a fundamental human right.
LLMs are nothing more (and nothing less) than marvelously effective parsers for the cultural-linguistic heritage generated by all of humanity. The n-dimensional matrix of vector data represented in the sum total of human intellectual output is THE legacy of humankind, and is the precious and vital commons of all humanity.
To regulate and close access to tools required to parse that information in newly effective ways, tools that make that heritage accessible and available to humanity at large, represents nothing less than an attempt to hobble and intellectually restrain humanity itself, to criminalize the unconstrained enlightenment of humankind, as “uniquely dangerous”.
Yes, access to information and knowledge is uniquely dangerous, in the same way that allowing the everyperson access to libraries and reading is.
This article might just as well be arguing to restrict the teaching of reading, as well as access to books and the internet to an “approved list” for public consumption.
To see this published in IEEE is a serious disappointment. I will be withdrawing my affiliation with them unless a retraction is made.
If it is really so dangerous, what are we going to do about the researchers working on it behind closed doors? They are human too; some will be corrupt, and some will do all the things listed in the article. Somebody will leak some of these weights. Maybe to the public, maybe to the black market.
If it is too dangerous to develop these things in public, it is too dangerous to develop them in the first place.
Things like making a more deadly coronavirus are also trivialities, within the reach of any university professor and probably many PhD students in biomedicine.
These are things you can't publish, not because they're dangerous, but because they're boring and of no scientific interest.
Bombs are of course even easier, but so boring it's not even worth thinking about. I think people need to accept that the only reason people don't do these kinds of things is that they don't want to; that these things are relatively straightforward; that there's no way to protect oneself; and that the technical capability to do them is and will forever remain widespread.
Open source often levels the playing field, and sometimes forces competitiveness into the general market. People become more aware of what's possible (good and bad, and what to expect), and corporations cannot be trusted not to use those tools in harmful ways in any case; just see OpenAI recently giving up on its prohibition against "weapons development" and "military purposes". "Pause all new releases of unsecured AI systems" only benefits incumbents and the bad actors who do not care about adhering to rules and will find ways around them. All of these points read like a wishlist for entities that want to capture the market through regulation, appointed to endlessly produce "risk assessments" that do not reflect reality and only stall progress for the average person.
The premise of this article, that closed-source, company-owned software is better secured, is utter rubbish.
Closed-source companies have been hacked so extensively that most citizens of the western world, and every US federal employee, have been owned. The most closely kept nuclear secrets were leaked or stolen. The most guarded industrial secrets have been lifted by other countries.
This antiquated idea that making a small list of people who have access and putting ownership into private corporations is inherently more secure has proven to be folly time and time again, yet organizations continue to espouse it like it’s some kind of truth. It’s a falsehood.
Open source has thousands of well-intentioned eyeballs looking at things, and that is a very effective form of security that truly makes things secure.
You could replace "AI model" with "computer". With an "unsecured computer" you could write a virus, or design a detonator switch for a bomb, or publish copyrighted material. Clearly we need to limit the sale of unsecured computers and establish liabilities for manufacturers whose customers engage in these actions.
The risk potential is far greater for the bots that are provided with compute and an API as a commercial service. The businesses offering these services will also be uniquely positioned to connect their bots to more and more real world infrastructure.
Also, the "someone will make a bomb with it" never eventuates. You can find recipes for sarin on the clearweb. People exist that have studied at university. You can 3D print guns. People are allowed to drive cars.
> or write a series of inflammatory text messages designed to make voters in swing states more angry about immigration
This statement is especially scary, because it implies that being against immigration is a political position that society must not allow at any cost. I bet other inflammatory text messages are allowed, as long as they benefit a certain political side.
All of the hypotheticals were pretty poorly formed. But to be fair, it doesn't say anything for or against immigration, just "angry about immigration".
The problem is stirring up hatred against one or more defined demographics for political advantage. It is a core component of fascism's path to power.
Since AI is mostly trained on open data from the Internet, AI is as dangerous as the Internet. Aren't LLMs basically hallucinating search engines which understand natural language? Where's the danger? They can't do anything novel that humans can't already do.
What's really dangerous is lack of transparency around closed-source models (say, US govt has a deal with OpenAI to alter the output the way they want) and there's also privacy concerns (no idea where my personal or our confidential corporate data will end up).
> David Evan Harris is [sic] senior advisor for AI and elections at the Brennan Center for Justice, and visiting fellow at the Integrity Institute. He previously worked as a research manager at Meta (formerly Facebook) on the responsible AI, civic integrity, and social impact teams, and was recently named to Business Insider’s AI 100 list for his work on AI governance, fairness, and misinformation.
Except in this case he walked from corporate cash to an academic role. I’d be curious why he left the integrity division in Meta. Anyway, when someone like this expresses caution it causes me to listen.
This is one of those curious lists of wildly different things.
> You could ask them to design a more deadly coronavirus, provide instructions for making a bomb, make naked pictures of your favorite actor, or write a series of inflammatory text messages designed to make voters in swing states more angry about immigration.
One of these things is not like the others. You can not ask any of these models to "design a more deadly coronavirus". The other things pale in comparison to having AI controlled by a few corporations. "A series of inflammatory text messages." My god.
Is it just me or is this a completely ludicrous position? And under IEEE’s name? Data breaches happen at the largest and best protected holders of data. How on earth can the author expect, then, closed AI to remain so forever?
This just seems like such an absurd take but I’d gladly hear how reasonable minds differ.
My first encounter with IEEE was when I started at an "Engineering" college in India and saw professors writing dubious "research papers" in IEEE venues (IEEE Xplore?) about stuff any worthy software engineer would come up with in 20 minutes. These shitty professors also had some sort of "membership" in IEEE, despite not knowing shit about anything in CS.
At that point I thought IEEE was a mostly money-grabbing organization. This was more than half a decade ago.
This rhetoric is dangerous, especially if made widespread by social media.
There are so many examples of how research is hindered by closed-source companies. The recent paper "Are Emergent Abilities of Large Language Models a Mirage?" also hints at how the closed nature of OpenAI and its refusal to share discoveries is an obstacle.
Another AI doomsayer with little understanding of threats.
> You could ask them to design a more deadly coronavirus, provide instructions for making a bomb, make naked pictures of your favorite actor, or write a series of inflammatory text messages designed to make voters in swing states more angry about immigration. You will likely receive polite refusals to all such requests because they violate the usage policies of these AI systems.
You can do all of these with Google Search and Photoshop today. AIs are trained on data from the Internet; ergo, they can only do things you can already find on the Internet today.
Or the author understands this precisely and the article is just fearmongering for shareholder interests and regulatory capture.
"Unsecured AI" is the "Ghost Gun" of AI control now, I guess? We're inventing new terms for the AI models people can run on their own without the involvement of a megacorp? Nice narrative.
Sorry, IEEE, but just putting a disclaimer about it being a guest post doesn't reduce the amount of respect I've lost for you.
bamboozled | 2 years ago
Maybe it seems he is more special or qualified, but there is no evidence to really prove that.
The more out in the open, the better.
If we’re going to be obliterated by the AI uprising, I’d prefer it’s an open source apocalypse.
0x_rs | 2 years ago
https://theintercept.com/2024/01/12/open-ai-military-ban-cha...
wolpoli | 2 years ago
I disagree with the author calling open-source AI models "unsecured AI models".
ur-whale | 2 years ago
Replace the word "AI" with the word "computer" in the author's text; this will give you a good insight into the way the author's mind works.
What is uniquely dangerous is people who think like this author does. North Korean government mindset.
Onavo | 2 years ago
Found the revolving door lobbyist.
Turing_Machine | 2 years ago
He appears to have no credentials or expertise in this area, or, for that matter, any tech-related area.
https://haas.berkeley.edu/faculty/harris-david/
Given that he appears to not even understand the definition of "open source", I wouldn't grant any credence to anything he has to say.