I've been involved with LLMs for the past 2 years, and what I can say is that we have no idea what this technology can do, and no way of monitoring when it gains new abilities. We're racing towards autonomous everything, and we're too slow and blind to even detect hidden exponential developments.
Question: what do we do if/when it gets to that point?
How would it get to that point? There's no connection between having an internet connection and ending the human race.
This is such an amazingly shortsighted, naive, and flawed way of thinking that I'm having a really hard time not sniping.
112|2 years ago
Here's a good overview to get up to speed: https://youtu.be/xoVJKj8lcNQ
than3|2 years ago
ChatGTP|2 years ago
Tech keeps advancing, and most people just seem to say, “it’s not there yet”. The entire point of the tech industry is now to get us “there”, with absolutely no idea what the consequences are. I don’t find this intelligent at all, ironically.
I like the idea of progress, but it's starting to feel like enough is enough without at least some clear idea about where we want it to end. I really, really don’t want to see terminators in my lifetime any more than I want to see human cloning, which is banned.
This, IMO, is the point where tech starts to go from cool and helpful to potential sci-fi disaster.
astrange|2 years ago
These are all sci-fi stories that come with the unexamined assumption that something "smart" and "optimal" is going to be invented that's so good at its job that you can ask it to do X and it will do completely impossible thing Y without running out of AWS credits first.
(I personally think that humans are not "optimal" and that an AGI will also not be "optimal" or else it wouldn't be "general". More importantly, I don't think AGIs are going to be great at their jobs just because they have computer brains, and this is clearly an old SF movie trope.)
satvikpendem|2 years ago
_siis|2 years ago
A large number of people are very concerned about this, and rightfully so, because so many people don't get the risks (yourself included, it seems).
These people largely aren't fearmongers either; they are experts in the field, serious engineers.
"A computer can't do this," you say... and that is how it has always been, right up until it can, and then whole ecosystems shift seemingly overnight. This will be no different.
Let me ask you: where's the risk management? When you deal with anything dangerous, you deal with risk management. Where is it for this? Can we even evaluate a problem like this? Our main form of interaction is by code; we work in seconds while it ticks in nanoseconds. By the time it receives input from us, it could have predicted and nullified our attempts to do anything, if it were sentient.
Right now, it's very simple: there is almost no risk management. You, and the smartest people in the world trying to tackle this problem, are clawing in the dark, blind. You don't know it, but they do, and the ones with true intelligence are scared shitless. That is why so many people are going on record (a thing that would normally be a career killer), trying to prevent you from driving everyone over that proverbial cliff; only it's more like a dam.
For you and most other people who don't work with this stuff, it's an out-of-context problem that will never happen, and that's fine for small things that don't cascade.
People are traditionally very bad at recognizing cascading failures before they actually happen. This is like a dam with a crack running through it that almost no one has noticed, and your home is right underneath it; in this case, everyone's home is underneath it.
What could possibly go wrong with giving someone, really anyone, who doesn't recognize the risks the ability to potentially end everything if the digital dice line up just right?
Literally everything is networked. Globally.
It doesn't even need to be a Battlestar Galactica-type apocalypse, though that's a fairly realistic pilot episode for how it might go down if it became sentient. It could also do it without even being sentient, by the slow Ayn Rand/John Galt route where societal mechanics do the majority of the work: all you need to do is disrupt the economic cycle between factor and non-factor markets to a sufficient degree, and people will do the rest. There are plenty of examples in the historic record where we were able to restart, but what about those dark areas for which we have no history? Without modern technology, we can't grow enough food to feed half the world's current population.
When the stakes are this high and the risk management is so nonexistent, everyone, including policymakers, should be scared shitless and do something about it. If you look at how something like the Manhattan Project was handled, it was done with more risk management and care, relative to its destructive potential, than either bio or cyber.
Our modern society is largely and fully dependent on technology for survival. What happens when that turns against you, or just ceases to function?
staticman2|2 years ago
How many "experts in their field" think GPT-4 can end the human race?