> "expectation that companies like yours must make sure their products are safe before making them available to the public."
Let's make a guess: they're going to say it's dangerous and we need regulation to prevent competitio---terrorism.
Here's what you need to do instead: get some smart people from various trustworthy agencies and have them use the OpenAI Playground to see what can be accomplished. Then show them that you can torrent Facebook's LLM right now, and that it's already on computers worldwide. The cat is out of the bag.
Then let them make policy decisions.
Hard to imagine this is anything other than a ploy for regulations and lobbying.
> get some smart people from various agencies that are trustworthy, have them use the openAI playground and see what can be accomplished
This is a punt to committee, and likely what this meeting will result in. It's as performative as it is useless.
Suggestions of pauses have always been a farce. But I’m struggling to see solutions from experts, apart from constant predictions of generic doom. (I’m in favour of a domestic registry, so we know who’s training what on which data. Maybe a copyright safe harbour in exchange for registration?)
The other side is competitiveness: what can the federal government do to make America the best place to build AI? (I'm continually drawn to the Heavy Press Program [1], the era's massive forging presses being loosely analogous to modern training costs.)
[1] https://en.wikipedia.org/wiki/Heavy_Press_Program
LLaMA is meaningfully behind the state of the art AFAIK, so the cat isn't fully out of the bag in that sense. Whether a GPT-4-or-better model has public weights in 2-3 years may in fact depend on government regulation.
1A makes regulating software basically impossible. I can't imagine what additional regulation they think they're going to implement that will survive the judiciary. The only legal barrier I can see that could influence these AI services is that their output is obviously not shielded from liability by Section 230. But that will play out in the civil courts, if at all.
I'd say the more likely outcome is the far more subversive scenario where the government simply pressures private organisations into doing its unconstitutional bidding.
It also depends on what "safe" means. Before I read your comment I assumed this was about not accidentally building Skynet, rather than about making it easy to break copyright, etc. I hope it's about AI safety, given Anthropic is there and not Midjourney.
Why are regulations bad? When there are no regulations the results are usually not pretty. Current big tech dystopia, stock market crash of 1929, UK textile mills worked by children in the Industrial Revolution...
Why should we expect AI to not repeat the same abuses and errors?
Companies are not the only ones who have something to gain from that exchange. Democratization of AI will also be used to push harsher legislation on the internet, in the guise of fighting "misinformation" (now possibly AI-generated!).
Considering Anthropic and OpenAI will be there, I think the right players are at the table. I would've liked to have seen Meta there too, since I think they're focused on generative deep learning. That said, with the administration's AI Bill of Rights top of mind, I don't have faith in the gerontocracy to regulate this sector [1].
As a jocular aside, I wonder if ChatGPT could be used to write these articles? The second-to-last paragraph in this article is identical to the second paragraph of this earlier one: [2].
[1] - https://www.whitehouse.gov/ostp/ai-bill-of-rights/
[2] - https://www.reuters.com/technology/us-begins-study-possible-...
Who would be the attendants at your dream summit on AI safety?
Public policy needs to involve decisionmakers. You can't just hand society to the engineers. Imagine your boss and hierarchy being empowered to decide for everyone.
Does anyone? Things like intelligence, consciousness, alignment, etc. are open research areas with a lot of noise but barely even agreement on the basics.
This honestly feels like a good step. I see a lot of comments here lamenting potential regulatory overreach and while that is definitely a risk there are also a lot of people calling for regulations on AI and LLMs. There are credible risks and a lot of people are concerned. At the end of the day it’s a democracy: ignoring these people will not work out. Enough people are concerned that doing nothing is not an option (numerous septuagenarians in my life have serious and legitimate concerns about this. The government has done nothing to curtail rampant text/phone scams targeting the elderly and LLMs can really amplify these scams).
The White House inviting leaders from industry to represent their position at a tentative stage feels like a measured and sensible approach to regulation. Industry is given a seat at the table and hopefully they can reach an agreement that satisfies the needs of industry while also placating the widespread fears about AI. This is a good incremental approach to crafting good laws. While they are at it I wouldn’t mind if the White House also did something about the rampant social security phone scams, but one step at a time.
Optimistically industry can help the government separate the reality from the hype and maybe identify boundaries for the technology which would lead to sensible regulation and hopefully not be too restrictive.
"In early May 1945, Secretary of War Henry L. Stimson, with the approval of President Harry S. Truman, formed an Interim Committee of top officials charged with recommending the proper use of atomic weapons in wartime and developing a position for the United States on postwar atomic policy. Stimson headed the advisory group composed of Vannevar Bush, James Conant, Karl T. Compton, Under Secretary of the Navy Ralph A. Bard, Assistant Secretary of State William L. Clayton, and future Secretary of State James F. Byrnes. Robert Oppenheimer, Enrico Fermi, Arthur Compton, and Ernest Lawrence served as scientific advisors (the Scientific Panel), while General George Marshall represented the military. The committee met on May 31 and then again the next day with leaders from the business side of the Manhattan Project, including Walter S. Carpenter of DuPont, James C. White of Tennessee Eastman, George H. Bucher of Westinghouse, and James A. Rafferty of Union Carbide."
So, should we expect the AI equivalent of Hiroshima in a couple months? An awe inspiring demonstration of raw power to silence any detractors? What would that look like?
Here's the argument that (as a USA-ian) persuades me the most: if these AI systems are weapons, then we get to have them under the 2nd Amendment. It's the same as the we-get-to-have-strong-encryption argument, eh?
The gov and the corps are not supposed to be the ultimate arbiters of authority. That was the crux of the American Revolution: throwing out the king.
Remember that e.g. Palmer Luckey and co. are busy making Skynet (Anduril Industries). The system is poised to enforce policy.
One thought about AI: testing for correct answers is not a useful metric for it.
People can learn something that is wrong as easily as something that is "less wrong", as long as it makes sense. Sometimes things that are very counterintuitive are proven correct, and our intellect has to kind of reason a way to believe it.
Also, AI doesn't need to be "human" to be very useful. The argument of birds vs. planes comes to mind.
"I think we should be cautious with AI, and I think there should be some government oversight because it is a danger to the public," Tesla Chief Executive Elon Musk said last month in a television interview
As one of the few actors whose hyperbolic statements about "AI" in a high-stakes control context have already literally gotten people killed, his authority is not as good as it could have been. Maybe Reuters should have picked another face for urging caution.
What makes you think the Republican candidate wouldn't do the same thing? I don't think we've really seen this become a campaign issue yet, if it ever will (for 2024.)
The real risk of AI actually is related to that, IMO. Vinod Khosla estimated in an event a few weeks ago that 80% of human jobs will be automated in the next 10-20 years. We’re going to be living in an era of extreme abundance, which in theory could lead to an absolute human utopia.
The reality is that our society as it is currently run requires everyone to work in some capacity to earn a living. We’re about to hit a point where that simply is not feasible, because the majority of the jobs are going to be automated.
Teaching jobs are on their way out and likely aren't coming back. But with them will go most white collar jobs generally, and most blue collar jobs; before too long self-driving will be figured out, and then transportation jobs are gone too.
I hope that this meeting is focused on the changes we need to make societally to use this abundance for good, but I know how slim the chances are of them talking about that, and slimmer still that they come up with a solution.
Most school funding is at the state level. If you'd said "Doesn't Gavin Newsom have better things to do than a campaign(?) trip to Florida," yes he does.
The article highlights the White House's efforts to engage with top AI companies and discuss concerns related to artificial intelligence. However, it's worth considering whether these meetings might serve as a double-edged sword, given the potential for the administration to manipulate the AI community for political gain. As the next election cycle approaches, there is a risk that the White House could use its influence to shape AI development in ways that benefit the incumbent administration.
For instance, the Biden administration's call for AI companies to ensure the safety of their products before releasing them to the public could be seen as a way to exert control over these influential technologies. While it is important to address the potential risks of AI, such as privacy violations, bias, and misinformation, it is crucial to ensure that the government's involvement does not lead to undue interference or censorship that could sway public opinion in favor of the ruling party.
Moreover, as AI technologies like ChatGPT gain more prominence and widespread adoption, the potential for misuse by political actors becomes increasingly concerning. The administration's interest in regulating AI systems may be well-intended, but there is a danger that such regulation could be used to manipulate the information landscape in a way that serves the interests of those in power.
In conclusion, while the White House's engagement with the AI community is a necessary step in addressing the challenges and concerns surrounding artificial intelligence, it is important to remain vigilant against the potential for political manipulation. The AI community must work together with government officials to strike a balance between addressing legitimate concerns and preserving the integrity and independence of AI development.
Note: I did prod ChatGPT in the direction of criticism with the prompt, but this is the generated response as-is. Well, I'll be damned.
kypro|2 years ago
This is like having a meeting about the risk of climate change with Greta Thunberg on one side and Exxon and Shell on the other.
visarga|2 years ago
Cap-cap-cap-cap-cap ... ture
(♫ cue in football gallery tune)
Soon we will know that only evil people have LLaMA finetunes on their desktops. Good citizens use an official provider like OpenAI.
wellthisisgreat|2 years ago
Or that AI CEOs of Google and Microsoft are having an AI pow-wow at the White House
renewiltord|2 years ago
Drops LLM into open source world, leaves without explaining. Plausible deniability through leak. No one punished.
Legend. Like handing everyone in America a nail gun.
circuit10|2 years ago
"Your point that the world contains multiple problems is a real slam-dunk argument against fixing any of them."