top | item 31010286


rndphs | 3 years ago

I think the failures of people spouting hype and failing to deliver in ML have absolutely nothing to do with the real and immense progress happening in the field concurrently. I don't understand how one can look at GPT-3, DALL-E 2, AlphaGo, AlphaFold, etc. and think hmmm... this is evidence of an AI winter. A balanced reading of the season imo suggests that we are in the brightest AI summer and there is no sign of even autumn coming. At least on the research side of things.


314|3 years ago

The difference between the two views could be summarized in a textbook intro from twenty years ago: here is a list of problems that are not (now) AI. Back then it would have included chess, checkers and other games that were researched for their potential to lead to AI. In the end they all fell to specific methods that did not provide general progress. While the current progress on image related problems is great, if it does not lead to general advances then an AI winter will follow.

spupe|3 years ago

I disagree. If we find that one particular architecture is good for chess and another for image generation, then so be it. We would still have solved important problems. We are seeing both general and specific approaches improving rapidly. I don't think the AI winter was defined by a failure to reach AGI, but rather by the field reaching a plateau and producing nothing of great commercial or even intellectual value for some years, while other computer science fields thrived. I would say the situation is the exact opposite right now.

nl|3 years ago

> Back then it would have included chess, checkers and other games that were researched for their potential to lead to AI.

20 years ago (2002), Deep Blue's defeat of reigning world chess champion Kasparov was already old news.

Unsolved problems were things like unconstrained speech-to-text, image understanding, open question answering over text, etc. Playing video games wasn't a problem that was even being considered.

I was working in an adjacent field at the time, and at that point it was unclear if any of these would ever be solved.

> In the end they all fell to specific methods that did not provide general progress.

In the end they all fell to deep neural networks, with basically all progress being made since the 2012 ImageNet breakthrough, which proved it was possible to train deep networks on GPUs.

Now, all these things are possible with the same NN architecture (Transformers), and in a few cases they are done in the same NN (e.g. DALL-E 2 understands both images and text; it's possible to extract parts of the trained NN and get human-level performance on both image and text understanding tasks).
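The "same architecture for everything" point can be seen in miniature: self-attention never inspects what its input rows mean, only that they form a (sequence_length, dim) array, so the identical code processes word-piece embeddings and image-patch embeddings. A toy single-head sketch (random weights and illustrative dimensions, not any real model):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """One self-attention layer over any (seq_len, d) sequence.

    The rows of x can be text-token embeddings or image-patch
    embeddings -- the computation is identical either way.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (seq_len, seq_len)
    return scores @ v                                 # (seq_len, d)

rng = np.random.default_rng(0)
d = 16
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))

text_tokens = rng.normal(size=(12, d))    # stand-in: 12 word-piece embeddings
image_patches = rng.normal(size=(49, d))  # stand-in: a 7x7 grid of patch embeddings

out_text = self_attention(text_tokens, wq, wk, wv)    # shape (12, 16)
out_image = self_attention(image_patches, wq, wk, wv) # shape (49, 16)
```

Real models add multiple heads, residual connections, and layer norm, but the modality-agnostic core is the same.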

> While the current progress on image related problems is great, if it does not lead to general advances then an AI winter will follow.

"current progress on image related problems is great" - it's much more broad than that.

"if it does not lead to general advances" - it has.

gwern|3 years ago

A very telling example, since we now have methods like Player of Games which apply a single general method to chess, checkers, ALE, DMLab-30, poker, Scotland Yard... And the diffusion models behind DALL-E apply to generative modeling of pretty much everything, whether audio, text, image, or multimodal.
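The cross-modality claim about diffusion holds because the forward noising process is shape-agnostic: the same closed-form corruption applies whether the signal is an image tensor or an audio waveform. A minimal sketch of the standard DDPM forward step (illustrative linear beta schedule, toy random "data"):

```python
import numpy as np

def noise_step(x0, t, alphas_cumprod, rng):
    """Forward diffusion: sample x_t ~ N(sqrt(abar_t) * x0, (1 - abar_t) * I).

    Shape-agnostic -- x0 can be an image, a waveform, or anything else
    representable as an array.
    """
    abar = alphas_cumprod[t]
    eps = rng.normal(size=x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1 - abar) * eps, eps

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # standard DDPM-style schedule
alphas_cumprod = np.cumprod(1 - betas)   # abar_t, shrinks toward 0 as t grows

rng = np.random.default_rng(0)
image = rng.normal(size=(3, 64, 64))  # toy stand-in for an RGB image
audio = rng.normal(size=(16000,))     # toy stand-in for 1s of 16 kHz audio

xt_img, _ = noise_step(image, 500, alphas_cumprod, rng)
xt_aud, _ = noise_step(audio, 500, alphas_cumprod, rng)
```

Only the denoising network changes per modality; the diffusion machinery itself is identical, which is why the framework transfers so readily.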

blinding-streak|3 years ago

Crawl, walk, run. You can't go directly from crawl to run. You need the intermediate steps (pun not intended).