
suchow | 4 years ago

The word "smarter" masks some of the complexity here. If you define it as "higher performance on a task widely considered to require intelligence", then we've had computers that are smarter than humans for decades at least (chess engines and plain arithmetic, say). If you define it as "higher performance on every task widely considered to require intelligence", then I'll take that bet, please.


tuatoru | 4 years ago

> higher performance on every task

Even just one task: bringing up a child to be a well-adjusted, productive adult.

PartiallyTyped | 4 years ago

We have beaten humans on every single Atari game by at least one order of magnitude, and we do that consistently, and it really only took 5 or so years since the first solution that provided tangible results. It has only been 8 or 9 years since GPGPUs were first used for ML research.
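
For concreteness, here's a minimal sketch of the loop those agents are trained in, assuming gymnasium with the Atari extras installed; the random policy is just a stand-in for the learned one, and Breakout is my arbitrary pick:

    # Minimal sketch, not a trained agent: run a random policy on Breakout
    # via gymnasium (assumes pip install "gymnasium[atari,accept-rom-license]").
    # DQN, Agent57, MuZero and friends replace the random action below with
    # a learned policy; everything else is the same loop.
    import gymnasium as gym

    env = gym.make("ALE/Breakout-v5")
    obs, info = env.reset(seed=0)
    total_reward = 0.0
    for _ in range(1000):
        action = env.action_space.sample()  # <- the part the agent learns
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        if terminated or truncated:
            obs, info = env.reset()
    env.close()
    print(f"random-policy score: {total_reward}")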

We are also seeing models that are able to generate code given prompts.
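
Something like this already works today (a minimal sketch, assuming Hugging Face transformers and the small Salesforce/codegen-350M-mono checkpoint, which are my picks for illustration):

    # Minimal sketch: prompt in, code out. The model and library are my
    # assumptions for the example, not anything specific from above.
    from transformers import pipeline

    generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")
    prompt = "def fibonacci(n):"
    result = generator(prompt, max_new_tokens=64, do_sample=False)
    print(result[0]["generated_text"])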

Given enough representational power, I don't see why a model that learns to solve games can't figure out how to generate good enough subroutines for itself.

So I am taking the other side of this bet.

We will see ML models surpass humans on every task in 30 or so years.

I will find you and buy you dinner in October of 2051.

omegalulw | 4 years ago

> We have beaten humans on every single Atari game by at least one order of magnitude

There's mechanical skill involved; it's not purely intelligence.

> We are also seeing models that are able to generate code given prompts.

This has been discussed a lot, but the generated code is nowhere close to good enough for large projects where you really need intelligence.

> Given enough representational power, I don't see why a...

Except that the scaling isn't linear. The larger NLP models consume absurdly large resources; it's not straightforward to just "get enough representational power".
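
To put rough numbers on it (a back-of-the-envelope sketch using the common ~6 * params * tokens estimate of training FLOPs and a Chinchilla-style ~20 tokens per parameter; both are rules of thumb, not exact figures):

    # Back-of-the-envelope only: training FLOPs ~ 6 * params * tokens,
    # with the token budget scaled as ~20 per parameter (Chinchilla-style).
    def train_flops(params: float, tokens: float) -> float:
        return 6 * params * tokens

    for params in [1e8, 1e9, 1e10, 1e11]:
        tokens = 20 * params  # data budget grows with model size
        print(f"{params:.0e} params -> ~{train_flops(params, tokens):.1e} FLOPs")
    # 10x the parameters -> ~100x the training compute, before you even
    # account for the cost of collecting the extra data.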

Also, most models fail to adapt to new tasks outside of their narrow training scope; that's a massive problem. Even if you make models large, you will find that getting data covering all edge cases is exponentially expensive.

YeGoblynQueenne | 4 years ago

>> We have beaten humans on every single Atari game by at least one order of magnitude, and we do that consistently, and it really only took 5 or so years since the first solution that provided tangible results. It has only been 8 or 9 years since GPGPUs were first used for ML research.

Actually, only the 57 games in the Arcade Learning Environment, not "every single Atari game". It's an impressive achievement and there's no need to oversell it.

akrasiaca | 4 years ago

I'll offer an even better deal:

If AI surpasses humans at either comedy or film (by total hours of content viewed, or some other metric you propose) by January 2050, I'll buy you a fake meat dinner.