rocgf | 1 year ago
However, what can you really complain about? Technological progress? We can't just decide to ignore AI. Just to play devil's advocate, if there really were clear signs of this impending destruction, there could be some sort of international agreement to halt progress. Realistically, this will never happen.
> All along, we are incrementally improving AI because it is an intellectual amusement
You're contradicting yourself here. Is it just 'intellectual amusement' if this technology is as disruptive as you claim?
vouaobrasil | 1 year ago
Let me be more clear: individual programmers are improving AI for their own intellectual amusement, but organizations use it for its disruptive powers. Two different groups of people, with a bit of an intersection.
Moreover:
> Just to play devil's advocate, if there really were clear signs of this impending destruction, there could be some sort of international agreement to halt progress. Realistically, this will never happen.
Of course not, are you joking? There are clear signs of climate destruction as well with CO2 levels rising. Did international agreements work there? Nope, no flattening in the CO2 curve yet.
We are a fundamentally destructive species, one that cannot see long-term problems when there is short-term gain. The only mechanism we have for deciding what to do on a global scale is capitalistic motivation.
rocgf | 1 year ago
That's clearer, but I still take issue with it. You can say the same about any software project, or maybe even most work in general. As software engineers (I assume that's your profession too), we automate things that could have kept hundreds of people employed. AI isn't that different: as long as there is money spent on the problem, there will be people willing to work on it, especially at the forefront of technology.
---
I agree with your second point. That said, climate change is a much clearer case of destructive behavior, while also being more of a nuisance, a side effect of growth. AI has enormous potential and could, in the most optimistic outcomes, lead us to a utopia. Obviously, we all know that will not happen.
Also, the reason AI progress will not be halted: we cannot allow our adversaries to take the lead on this. It's really that simple.
kevmo314 | 1 year ago
So if AI destroys us as a species and...
> There are clear signs of climate destruction as well with CO2 levels rising. Did international agreements work there? Nope, no flattening in the CO2 curve yet.
Seems like we're in good shape?
If the endgame is destroying ourselves, it looks like our long-term "problems" are solved.
reissbaker | 1 year ago
https://ourworldindata.org/grapher/annual-co-emissions-by-re...
China has blown up emissions astronomically, though. To a lesser extent other Asian countries have as well.
I generally agree that international regulations controlling AI are unlikely to work, though, since it seems like it might be such a powerful and disruptive technology: if progress doesn't stall, it could effectively be a single-shot Prisoner's Dilemma, and when you have 193 players, someone's going to defect.
Personally though I think there are two possible outcomes:
1. Progress stalls, and it turns out getting from GPT-4-Turbo to better-than-human intelligence just doesn't pan out. LLMs are stuck as junior engineers for decades. If so, this is largely good for software engineers (and somewhat good for everyone, since it means we're more productive), but society doesn't change too much.
2. Progress doesn't stall, and we hit at least slightly superhuman intelligence within the next decade. While this would obviously be a tough shift for most knowledge workers, especially depending on how quickly it happens, I also think it would likely bring incredible medical advances, as well as advances in robotics that reduce the cost of physical labor: the price of goods drops enormously, and thanks to the medical advances we significantly increase either our lifespans or at least the quality of our lives in old age, which seems quite positive. We'd need to figure out some sort of UBI system once labor costs drop enough, but I think most people will be in favor of that, and also most stuff will just be really cheap at that point: ultimately just the cost of electricity (even "raw materials" are priced based on the cost of the labor to extract them, and that labor would be... the cost of electricity to run the robots).
There are probably some in-between scenarios, but TBH it's hard to see anything other than "stall" vs "takeoff" as being likely: either you never get past human intelligence (stall), or you do break through the wall, and then intelligence self-improves faster than before, up to some sort of information-theoretic limit that I think is a lot higher than what the average human is operating close to (consider just the variation in intelligence between individual human beings!).
Takeoff could also result in some sort of doomsday scenario, but the current LLMs haven't shown the problems that the early doomers predicted, so I think the, like, humanity-enslaving or humanity-destroying outcomes are probably just not gonna happen.