top | item 25657395


commonturtle|5 years ago

This is simultaneously amazing and depressing, like watching someone set off a hydrogen bomb for the first time and marveling at the mushroom cloud it creates.

I really find it hard to understand why people are optimistic about the impact AI will have on our future.

The pace of improvement in AI has been really fast over the last two decades, and I don't feel like it's a good thing. Compare the best text generator models from 10 years ago with GPT-3. Now do the same for image generators. Now project these improvements 20 years into the future. The amount of investment this work is getting grows with every such breakthrough. It seems likely to me we will figure out general-purpose human-level AI in a few decades.

And what then? There are so many ways this could turn into a dystopian future.

Imagine for example huge mostly-ML operated drone armies, tens of millions strong, that only need a small number of humans to supervise them. Terrified yet? What happens to democracy when power doesn't need to flow through a large number of people? When a dozen people and a few million armed drones can oppress a hundred million people?

If there's even a 5% chance of such an outcome (personally I think it's higher), then we should be taking it seriously.

m12k|5 years ago

The scary thing about automation isn't the technology itself. It's that it breaks the tenuous balance of power between those who own and those who work - if the former can just own robots instead of hiring the latter, what will become of the latter? The truth is, what's scary about that imbalance of power is already true, it's just that until now, technological limitations made that imbalance incomplete - workers still had some bargaining power. That is about to go away, and what will be left is the realization that the solution to this isn't ludditism, the solution is political. As it always was.

CuriouslyC|5 years ago

That's not exactly true. A lot of (low-level) human labor will be made irrelevant, but AI tools will allow people to work productively at a higher level. Musicians will be able to hum out templates of music, then iteratively refine the result using natural language and gestures. Writers will be able to describe a plot, then iteratively refine the prose and writing style. Movie producers will be able to describe scenes, then iteratively refine the angles, lighting, acting, cuts, etc. It will be a golden age for creativity, where there's an abundance of any sort of art or entertainment you'd like to consume, and the only problem is locating it in the sea of abundance.

The only issue I see here is that government will need to take a hand in mitigating capitalistic wealth inequality, and access to creative tools will need to be subsidized for low income individuals (assuming we can't bring the compute cost down a few orders of magnitude).

hntrader|5 years ago

> If there's even a 5% chance of such an outcome, then we should be taking it seriously.

Even if it's 0.1%, we should be taking it very seriously, given the magnitude of the negative outcome. In expected-value terms it's large. And that's not a Pascal's mugging, given the logical plausibility of the proposed mechanism.
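The expected-value argument can be made concrete. With entirely assumed, illustrative numbers (the comment gives only the 0.1% probability; the cost figure here is invented), even a tiny probability times a huge cost yields a large expected loss:

```python
# Toy expected-value calculation with assumed, illustrative numbers.
p_catastrophe = 0.001          # 0.1% chance, per the comment
cost_catastrophe = 1e9         # assumed harm, in arbitrary "utility" units
expected_loss = p_catastrophe * cost_catastrophe

# A one-in-a-thousand chance of a billion-unit loss is still worth
# a million units of prevention effort in expectation.
print(expected_loss)
```

The point is not the specific numbers but the structure: as long as the mechanism is plausible (not a Pascal's mugging), the expected loss scales linearly with the stake, so small probabilities don't excuse ignoring catastrophic outcomes.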

At least the rhetoric of Sam Altman and Demis Hassabis suggests that they take these concerns seriously, which is good. However, there are far too many industry figures who shrug off, and even ridicule, the idea that there's a possible threat on the medium-term horizon.

ajnin|5 years ago

I think the points you make are very important, not only the "Terminator" scenario but also the "hyper-capitalism" scenario. But the solution is not to stop working on such research; it is political.

dontreact|5 years ago

After seeing how the tech community seems to leave political problems for someone else to solve and how that has worked out with housing in the Bay Area, it does make me quite concerned about the future.

desideratum|5 years ago

Nick Bostrom's "Superintelligence" is a sober perspective on this issue and a very worthwhile read.

commonturtle|5 years ago

Yup, that's a good recommendation. I've read it, along with some of the AI Safety work that a small portion of the AI community is doing. At the moment there seems to be no reason to believe that we can solve this.

lbrito|5 years ago

>It seems likely to me we will figure out general-purpose human-level AI in a few decades.

"The singularity is _always near_". We've been here before (1950s-1970s); people hoping/fearing that general AI was just around the corner.

I might be severely outdated on this, but the way I see it, AI is just rehashing already-existing knowledge/information in (very and increasingly) smart ways. There is absolutely no spark of creativity coming from the AI itself. Any "new" information generated by AI is really just refined noise.

Don't get me wrong, I'm not trying to take a leak on the field. Like everyone else I'm impressed by all the recent breakthroughs, and of course something like GPT is infinitely more advanced than a simple `rand` function. But the ontology remains unchanged; we're just building an extremely opinionated, advanced, and clever `rand` function.

3pt14159|5 years ago

No we're not.

About a decade ago I trained a model on Wikipedia that was tuned to classify documents by the branch of knowledge they could belong to. Then I fed in one of my own blog posts. The second-highest-ranking concept that came back was "mereology", a term I had never even heard of, and one that was quite apt for the topic I was discussing in the post.

My own software, running on the contents of millions of authors' work and ingesting my own blog post, taught me, the orchestrator of the process, about my own work. This feedback loop is accelerating, and just because it may take decades for the irrefutable to arrive doesn't mean it never will. People in the early 1940s said atomic weapons would never happen because it would be too difficult. For some people nothing short of seeing is believing, but those with predictive minds know that this truly is just around the corner.
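The commenter's actual model and training setup aren't shown; a toy stand-in for that kind of classifier is bag-of-words cosine similarity against one reference text per topic (all labels and texts below are invented for illustration):

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector: term -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Tiny made-up "corpus": one reference document per branch of knowledge.
corpus = {
    "mereology": "parts wholes composition part whole relation sum",
    "astronomy": "stars planets orbit telescope galaxy light",
}

def classify(text):
    """Return topic labels ranked by similarity, best match first."""
    doc = bow(text)
    scores = {label: cosine(doc, bow(ref)) for label, ref in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(classify("on the relation of parts to the whole"))
```

A real system (like a model trained on all of Wikipedia) would use TF-IDF weighting or learned embeddings over thousands of categories, but the surprising-label effect the commenter describes falls out of the same ranking step: the top-scoring categories can name a concept the author didn't know.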

Simon321|5 years ago

How typically cynical of human beings: a wondrous technology comes along that can free mankind of tedious work and massively improve our lives, maybe even eventually eliminate scarcity, and all people can think about is how it could be bad for us.

stevofolife|5 years ago

Regulations. That's what the government is for. You think any country is going to let someone operate millions of drones at will? Yeah, ok.

kertoip_1|5 years ago

You are assuming that AI will magically appear in one set of hands only. We can prevent that: as developers, we can make AI research open and provide AI tools to the masses in order to keep a "balance". If everyone had the same power, it wouldn't be such a big advantage anymore.

dash2|5 years ago

That's not obvious. What if everyone has the tools to create their own army of nuclear-tipped killer drones?

CuriouslyC|5 years ago

Armies of high-powered smart drones aren't going to be a thing until we figure out security, and I'm not sure that's ever going to happen. Having people in the loop is affordable, and humans are much more expensive and time-consuming to subvert.