commonturtle|5 years ago
I really find it hard to understand why people are optimistic about the impact AI will have on our future.
The pace of improvement in AI has been remarkably fast over the last two decades, and I don't think that's a good thing. Compare the best text-generation models from 10 years ago with GPT-3. Now do the same for image generators. Now project these improvements 20 years into the future. The amount of investment this work attracts grows with every such breakthrough. It seems likely to me that we will figure out general-purpose, human-level AI within a few decades.
And what then? There are so many ways this could turn into a dystopian future.
Imagine for example huge mostly-ML operated drone armies, tens of millions strong, that only need a small number of humans to supervise them. Terrified yet? What happens to democracy when power doesn't need to flow through a large number of people? When a dozen people and a few million armed drones can oppress a hundred million people?
If there's even a 5% chance of such an outcome (personally I think it's higher), then we should be taking it seriously.
CuriouslyC|5 years ago
The only issue I see here is that governments will need to take a hand in mitigating capitalist wealth inequality, and access to creative tools will need to be subsidized for low-income individuals (assuming we can't bring the compute cost down a few orders of magnitude).
hntrader|5 years ago
Even if it's 0.1%, we should be taking it very seriously, given the magnitude of the negative outcome. In expected-value terms it's large. And that's not a Pascal's mugging, given the logical plausibility of the proposed mechanism.
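The expected-value point can be sketched with toy numbers (all figures hypothetical, chosen only to illustrate the arithmetic, not estimates of actual risk):

```python
# Toy expected-value calculation: a small probability of a huge harm
# can carry more expected harm than a likely-but-mild outcome.
# All numbers below are made up purely for illustration.

p_catastrophe = 0.001          # 0.1% chance of the catastrophic outcome
harm = 1_000_000_000           # stand-in magnitude for that outcome
expected_harm = p_catastrophe * harm

p_nuisance = 0.5               # a likely but mild negative outcome
nuisance_harm = 1_000
expected_nuisance = p_nuisance * nuisance_harm

print(expected_harm)      # 1000000.0
print(expected_nuisance)  # 500.0
# Even at 0.1%, the catastrophe dominates in expected-value terms,
# which is the commenter's argument.
```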
At least the rhetoric of Sam Altman and Demis Hassabis suggests that they do take these concerns seriously, which is good. However, far too many industry figures shrug off, or even ridicule, the idea that there's a possible threat on the medium-term horizon.
lbrito|5 years ago
"The singularity is _always near_". We've been here before (1950s-1970s), with people hoping/fearing that general AI was just around the corner.
I might be severely outdated on this, but the way I see it, AI is just rehashing existing knowledge/information in increasingly smart ways. There is absolutely no spark of creativity coming from the AI itself. Any "new" information generated by AI is really just refined noise.
Don't get me wrong, I'm not trying to take a leak on the field. Like everyone else, I'm impressed by all the recent breakthroughs, and of course something like GPT is infinitely more advanced than a simple `rand` function. But the ontology remains unchanged; we're just building an extremely opinionated, advanced, and clever `rand` function.
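The `rand` analogy can be made concrete. A minimal sketch (a toy bigram model, nothing like how GPT actually works): a plain `rand` picks words uniformly, while an "opinionated" `rand` samples from a distribution fitted to existing text — refined noise, in the commenter's framing.

```python
import random
from collections import defaultdict

random.seed(0)

corpus = "the cat sat on the mat and the cat ran".split()
vocab = sorted(set(corpus))

# Plain `rand`: every word is equally likely, regardless of context.
def plain_rand():
    return random.choice(vocab)

# "Opinionated rand": sample the next word from bigram counts learned
# from the corpus -- still just sampling, but weighted by patterns
# already present in the data.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def opinionated_rand(prev):
    nxts = counts[prev]
    words, weights = list(nxts), list(nxts.values())
    return random.choices(words, weights=weights)[0]

# After "the", the model can only ever emit words that followed
# "the" in the training data: "cat" or "mat".
print(opinionated_rand("the"))
```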
3pt14159|5 years ago
About a decade ago I trained a model on Wikipedia, tuned to classify documents by the branch of knowledge they belong to. Then I fed in one of my own blog posts. The second-highest-ranking concept that came back was "mereology", a term I had never even heard of, and one that was quite apt for the topic I was discussing in the post.
My own software, running on the contents of millions of authors' work, ingested my own blog post and taught me, the orchestrator of the process, something about my own work. This feedback loop is accelerating, and just because it takes decades for the irrefutable to arrive doesn't mean it never will. People in the early 1940s said atomic weapons would never happen because it would be too difficult. For some people nothing short of seeing is believing, but those with predictive minds know that this truly is just around the corner.
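The kind of classifier described above can be sketched in miniature (this is not the commenter's actual system; the categories, texts, and similarity scoring below are made up purely to illustrate ranking a document against per-category word profiles):

```python
import math
from collections import Counter

# Toy stand-in for a Wikipedia-trained classifier: rank "branches of
# knowledge" by cosine similarity between a document's bag of words
# and a per-category word profile. All data here is fabricated.
category_texts = {
    "mereology": "parts wholes composition part whole relation sum object",
    "astronomy": "star planet orbit telescope galaxy light observation",
    "economics": "market price trade supply demand money goods",
}

def vectorize(text):
    """Bag-of-words term counts for a whitespace-tokenized text."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

profiles = {cat: vectorize(txt) for cat, txt in category_texts.items()}

def rank_categories(doc):
    v = vectorize(doc)
    return sorted(profiles, key=lambda c: cosine(v, profiles[c]), reverse=True)

blog_post = "an object and its parts when does a sum of parts make a whole"
print(rank_categories(blog_post))  # "mereology" ranks first
```

A real system would use a far larger vocabulary, TF-IDF or learned weights, and actual Wikipedia category data, but the ranking mechanic is the same shape.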