> This year with the introduction of ChatGPT-4 we may have seen the invention of something with the equivalent impact on society of explosives, mass communication, computers, recombinant DNA/CRISPR and nuclear weapons – all rolled into one application.
I just can't stand this kind of language. ChatGPT is quite useful, but have you tried asking it something serious that is not Twitter-worthy? We are not there yet. And in any case, this is not the first superhuman tool that humans have made. Nukes have existed for 70 years and have probably become much more accessible. Biotech could create thousands of humanity-ending viruses, today. These are fears that we live with and will forever live with, but we can't live our lives only in fear.
> Nukes have existed for 70 years and have probably become much more accessible. Biotech could create thousands of humanity-ending viruses, today. These are fears that we live with and will forever live with, but we can't live our lives only in fear.
Building nukes and bioweapons isn't as good a business model as AGI, though. The government was incentivised to take at least some precautions with nukes. Nukes can't be developed and launched by individual bad actors. AGI isn't that comparable to nukes, for numerous reasons. Bioweapons, maybe, but I wouldn't support companies researching bioweapons without regulation.
It's not a choice between living in fear and going full steam ahead. Both are idiotic positions to take. The reasonable approach here would be to publicly fund alignment research while slowing and regulating AI capability research to ensure the best possible outcomes and minimise risk.
You're basically arguing in favour of a free market approach to developing what has the potential to be a dangerous technology. If you wouldn't allow the free market to regulate something as mundane as automobile safety, then why would you trust the free market to regulate AI safety?
Companies that wish to develop state-of-the-art AI models should be required to demonstrate that they are taking reasonable steps to ensure safety. They should be required to disclose state-of-the-art research projects to the government. They should be required to publish alignment research so we can learn...
Take the examples you give - nuclear weapons and biotech - as you say, both have huge potential for harm.
However both are regulated and relatively inaccessible to the average person.
While training models like ChatGPT is still relatively inaccessible to the average person, using them is potentially not.
One of the defining features of software is the almost zero cost of copying - making proliferation much more of an issue than for nukes or custom-made viruses. [1]
ChatGPT is over-hyped, of course, but I think the genie-out-of-the-bottle problem is more real here than for military tech or biotech.
Having said all that I do think the solution is largely around applying existing laws to these new tools.
[1] OK, and if they escape, they can self-replicate....
It is better than the promises we had in the 1980s, which ended in an AI winter.
But it is going to take some time for people and corporations to figure out whether it is all hype, the next crypto, or whether there are real applications for this new technology.
Look at the cloud: S3 was launched in 2006, but you did not see much about it in Harvard Business Review until 2011. And even then, it was the potential promise of what the cloud could do. Things did not really pick up until 2016.
I asked it about how to transition from nation states to local ownership at scale and was very happy with its answer. It was better and more comprehensive than, I think, anyone around me would have answered - in 5 seconds - and it introduced me to new concepts like time banks and community currencies, which I could ask follow-up questions about.
I think it’s truly mind-blowing that a computer can now simulate some of the best conversations I’ve ever had, on a variety of topics.
> These are fears that we live with and will forever live with, but we can't live our lives only in fear.
But we can't lie to ourselves about reality in order to prevent fear either.
The opinions of everyone from Elon Musk to Sam Altman to Geoffrey Hinton, the person who started it all, are actually in line with the blog post.
Hinton even says things like: these ChatGPT models can literally understand what you tell them.
Should we call climate scientists fearmongers because they talk about a catastrophic but realistic future? I think not, and the same can be said for the people I mentioned.
I personally think these experts are right, but you are also right that "we are not there yet". Still, given the trajectory of the technology over the past decade, we have a very good chance of being "there" very soon.
AGI that is perceptually equivalent to a person more intelligent than us is now a very realistic prospect within our lifetimes.
I get the concern expressed, but the fear-mongering is getting a little much these days. Innovation can be scary, and at this point people are making assumptions based purely on things we do not know. How this will impact the future of business and technology has yet to be determined. Only time will tell.
We must be careful as we chart this scary new world of large language models and artificial intelligence and their impacts on humanity, but we need to ease off the scare tactics.
Please note I do not fault the author or anyone else for this representation of these new technologies. Nonetheless, I find it counterproductive to our discussions about setting guidelines and ensuring accountability in developing these models and their use.
Right now, it sounds more like the CRISPR discussion all over again.
There are similarities, sure. But there are also stark differences. Due to the existence of ChatGPT, the GPT-3 API, and the general viability of natural language prompting, LLMs are now essentially commoditised. They are now in the hands of orders of magnitude more people. Barring sector-specific regulations, people are free to iterate (with varying degrees of care, ethical consideration, and success) at a much faster pace compared with the field of medicine, or even academia in general, where there’s non-zero involvement of ethics committees.
At DAYJOB we already have immense domain expertise to tune GPT-3 and prove its reliability in our sector. For giggles I also implemented an incredibly naive approach to a problem we set out to solve, and still ended up with a result that’s considered very impressive, and is usually the sort of thing many companies have spent countless hours working toward.
My sector certainly won’t be an edge case. And we all know that everyone and their dog is trying to see how GPT-3 can deliver value. It’s all happening at the same time, and very quickly.
As someone who’s generally quite jaded and skeptical of new technologies, my experience in my day job has completely changed my perspective. At this stage I’m willing to go out on a limb and say that this is going to be quite disruptive to labour markets at the very least. And that alone could well be at a level that raises serious ethical and societal questions. I’ll happily eat humble pie if I’m wrong.
The experiments to determine the answers must not be in the sole purview of corporations. Executives of corporations have a fiduciary duty only to the shareholders.
So a completely liberal approach to traversing the space of pervasive AI in society, with a stated 10% probability of catastrophic results (the number is per Sam Altman), cannot be left to a decision-making process that only seeks to maximize profits.
To "be careful as we chart" decisively means it cannot be treated as a mere innovation to be subjected to market forces. That's really the only fundamental issue. This isn't a 'product', and the 'market' may happily seek a local maximum which then leads to the "10%" failed state. That's it. Address that and we can safely explore away.
So not fear mongering. Correctly categorizing.
>Innovation can be scary, and at this time, people are making assumptions based purely on things that we do not know.
Here's the thing. Before ChatGPT, it was pretty much a given that society was at more or less zero risk of losing jobs to AI.
Now, with GPT-4, that zero risk has changed to unknown risk.
That is a huge change, and one that would be highly unwise not to address.
I agree that only time will tell. But as humans we act on predictions of the future. We all have to make a bet on what that future will be.
Right now this blog post describes a scenario that, although speculative, is also very realistic. It is, again, unwise to dismiss the possibility of a realistic scenario.
I don't get the call to action at the end - a 6-month moratorium on R&D to focus on safety.
That 6-month call is driven by people who write fanfic about AI.
There's been active research in AI safety for years and years, and it hasn't been without controversy, but these groups have done far more to ensure safety in its various forms than the fanfic authors have. I think a 6-month pause on "GPT-5" doesn't accomplish anything other than further fuelling radicals who buy into the fanfic to take action that harms people who work in AI.
The National Academy of Sciences must not only take a leading role here in creating the platform for discussions tasked with advising the government and the public; it must also spearhead the creation of a national AI infrastructure for public use.
The Department of Energy already runs many high-tech national laboratories, and we need a Sandia or Los Alamos for AI, for national, public use.
I would qualify this with open-source usage of the model, the training algorithms, and the training data as well.
I'm not sure I believe we're quite there yet with GPT-4, but let's suppose that we are. All of the other potentially dangerous technologies mentioned have close government supervision surrounding them: nuclear has the Department of Energy, non-proliferation treaties, test ban treaties and much more. Biotech has the FDA and HHS and plenty of regulation like GxP, ICH, HIPAA, and much more. But what does artificial intelligence have? ITAR?
I think the party is over, fellas. It's time for a new Federal department. Let's call it the Artificial Intelligence Administration (AIA). Time to take control of this technology before it takes control of US.
There's something in my head that thinks, "This writer is out of touch" (even if they are not). I admit my logic may be faulty.
It’s really surprising to me how much doubt has been voiced over the last few weeks that a technology could possibly be dangerous.
For me the perspective is straightforward: even if ChatGPT is not it, there is the physical possibility of something that is a relatively small improvement on human intelligence, just as we’re a relatively small improvement on chimps, or on Neanderthals. That’s just simple for me to get my head around.
Along with that, there are easy-to-follow “monkey’s paw” scenarios: the easiest way to end poverty is to drive all humans extinct; the easiest way to end suffering is to extinguish life on Earth. I can’t quite formulate a straightforward way to eliminate suffering while maximizing my humanist values. This is the alignment problem.
We’ve got Yann LeCun saying that slowing down or thinking about safety would just mean the Chinese get ahead. He’s also saying we understand LLMs better than we understand airplanes.
We’ve got people completely ignoring past examples of technological destruction or technological safety like nonproliferation or Asilomar.
We’ve got people saying GPT is simultaneously revolutionary and going to change everything, so it’s critical we forge ahead… but also too dumb to change anything (it makes up info, etc.), and thus we need not be concerned with safety.
What is it about our field that is so gung ho? Are these all bad faith FOMO arguments? It’s hard to understand.
——
The one way I can make sense of it is as a religious experience. Our culture has deep, persistent roots in Christian eschatological mythology, and of course the coming of a benevolent next wave of intelligence slots into this nicely. Taleb states this clearly [0]: those who are pure of heart will be welcomed into the kingdom of heaven. Not a huge fan of this style of accidental religiosity.
[0] https://twitter.com/nntaleb/status/1642241685823315972?s=20
Great piece. Although I do not agree with "labs keeping it safe" - look at what happened with the pandemic. Or perhaps the safety measures currently in place need to be redefined. The world is in conflict; everyone is in a race. Dominance is at play. I think it is silly to even consider halting AI development. The faster we reach maximum output, the quicker we will find the breaking points.
The best thing we can do to ensure the safety of AI is to make sure users can run the models on their own hardware.
The biggest danger from AI that I see is that the models will only be able to be run by large corporations and governments, and we the users will be at their mercy with regard to what we are allowed to use them for.
Could someone not worried about AGI please explain their position?
Specifically, what makes you so confident that someone won't end up creating an AGI that's unaligned? Or alternatively, if you believe an unaligned AGI might be created, why are you confident that it won't cause mass destruction?
I guess the way I see this is: even if you believe there is only a 5-10% chance that AGI could go rogue and, say, take out global power grids, why is this a chance worth taking? Especially if we can slow capability progress as much as possible while funding alignment research?
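The expected-value arithmetic behind that question can be sketched in a few lines. This is a toy model with made-up numbers (the probabilities and costs are illustrative placeholders, not anyone's actual estimates), purely to show why even a small probability of catastrophe can dominate the decision:

```python
# Back-of-the-envelope expected-cost comparison. Every number below is an
# illustrative placeholder, not anyone's actual risk estimate.

def expected_cost(p_catastrophe: float, catastrophe_cost: float,
                  baseline_cost: float = 0.0) -> float:
    """Expected cost of a policy: a probability-weighted mix of the
    catastrophic and non-catastrophic outcomes."""
    return p_catastrophe * catastrophe_cost + (1 - p_catastrophe) * baseline_cost

# Units are arbitrary: 1.0 = the cost of the catastrophe itself.
full_steam_ahead = expected_cost(p_catastrophe=0.05, catastrophe_cost=1.0)

# Hypothetical slowdown: suppose regulation plus alignment funding halve the
# risk, at a small ongoing cost (lost progress, compliance overhead).
slow_and_regulate = expected_cost(p_catastrophe=0.025, catastrophe_cost=1.0,
                                  baseline_cost=0.01)

# Under these made-up numbers the slowdown has well under half the expected
# cost of pressing ahead, despite its overhead.
```

The point is not the specific numbers but the shape of the argument: when the catastrophic cost is large, the comparison is driven almost entirely by the probability term, so even modest risk reductions outweigh sizeable overheads.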
Seriously though, if you are interested in addressing the real, present-day harms of large language models (and the capitalists who deploy them), this letter is just the thing:
"Statement from the listed authors of Stochastic Parrots on the “AI pause” letter", by Timnit Gebru (DAIR), Emily M. Bender (University of Washington), Angelina McMillan-Major (University of Washington), and Margaret Mitchell (Hugging Face):
https://www.dair-institute.org/blog/letter-statement-March20...
They stop only just short of saying that myopia is The One True Faith.
(They literally call out "Longtermism" as elitist and the root of all evil.)
I mean, sure, one should look at one's feet from time to time to make sure one doesn't trip. But these people come across as exclusively myopic, and uncompromising in that position at that.