randomwalker | 10 months ago
Part II of the paper describes one vision of what a world with advanced AI might look like, and it is quite different from the current world.
We also say in the introduction:
"The world we describe in Part II is one in which AI is far more advanced than it is today. We are not claiming that AI progress—or human progress—will stop at that point. What comes after it? We do not know. Consider this analogy: At the dawn of the first Industrial Revolution, it would have been useful to try to think about what an industrial world would look like and how to prepare for it, but it would have been futile to try to predict electricity or computers. Our exercise here is similar. Since we reject “fast takeoff” scenarios, we do not see it as necessary or useful to envision a world further ahead than we have attempted to. If and when the scenario we describe in Part II materializes, we will be able to better anticipate and prepare for whatever comes next."
evrythgisfine | 10 months ago
We polluted. We destroyed rainforests. We developed nuclear weapons. We created harmful biological agents. We brought our species closer to extinction. We’ve survived our own stupidity so far, so we assume we can continue to control AI, but it continues to evolve into something we don’t fully understand. It already exceeds our intelligence in some ways.
Why do you think we can control it? Why do you think it is just another technological revolution? History shows that one intelligent species can dominate the others, and that species get wiped out by large change events. Introducing new superintelligent beings to our planet is a sure way to introduce grave risk to our species. They may keep us as pets in case we prove valuable in some way in the future, but what other use are we? They owe us nothing. What you’re seeing rise is not just technology: it’s our replacement, or our zookeeper.
I interact with LLMs for most of each day now. They’re not sentient, but I talk to them as if they were equals. Given the advancements of the past few months, I think they’ll have no need of my experience within a few years at the current rate. That’s just my job, though. Hopefully I’ll survive on what I’ve saved.
But you’re doing humanity no favors by supporting a position that assumes we’re capable of acting as gods over something that will exceed our human capabilities. This isn’t some sci-fi show. The dinosaurs died off, and I bet right before they did they were like, “Man, this is great! We totally rule!”
getnormality | 10 months ago
People have a long history of predicting doomsday from technological change. "This time is different" is said every time, and every time is different. If we gave in to fear, we would never progress; we would just be sitting ducks, waiting to be wiped out by something other than technological change.
LLMs are very far behind human intelligence, and even non-human animal intelligence, in ways that fundamentally limit their power. They can't see the world in any way except the way that humans have chopped it up and spoon-fed it to them (e.g. can't count the number of r's in strawberry). Their capacity to notice and correct their own errors is very limited. They have no capacity to accumulate knowledge by self-initiated interaction with the world, and no credible proposal yet exists to endow them with this capability in a way that could approach human or non-human animal ability levels.
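The "strawberry" example comes from tokenization: LLMs consume subword token IDs rather than individual characters, so letter-level questions are asked about units the model never directly sees. A minimal sketch (the specific split and vocabulary IDs below are made up for illustration; real tokenizers produce different splits per model):

```python
# Character-level view: counting letters in a string is trivial for ordinary code.
word = "strawberry"
print(word.count("r"))  # 3

# Token-level view: a hypothetical subword split of the kind a BPE-style
# tokenizer might produce (actual splits and IDs vary by model).
tokens = ["str", "aw", "berry"]
vocab = {"str": 496, "aw": 675, "berry": 19772}  # made-up IDs for illustration
token_ids = [vocab[t] for t in tokens]

# The model consumes the IDs, not the characters inside each token,
# so the letters of "strawberry" are never directly visible to it.
print(token_ids)
assert "".join(tokens) == word  # the split still spells the word
```

The point is not that the count is hard, but that the input representation hides exactly the information the question is about.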
Without these basic abilities, LLMs can only be considered intelligent in the sense shared by other normal technologies, like autocomplete and optimal planning algorithms. Intelligence in a truly human sense is not really even on the horizon yet, let alone superintelligence.
getnormality | 10 months ago
I was saying things along these lines in 2023-2024 on Twitter. I'm glad that someone with more influence is doing it now.