top | item 14217122

electronvolt | 8 years ago

> I'm getting kind of sick of this "deep learning is a black box" trope, because it's really not true anymore.

That's fair/probably true.

I think there are two things driving that: one, the lack of a widely shared deep understanding of the field[0] (plus the fact that you don't really need a deep understanding to get good results--as both you and the author pointed out), and two, the fact that it feels like cheating compared to the old ways of doing things. :P

[0] When the advice on getting a basic understanding is "read a textbook, then read the last 5 years of papers so that you aren't hopelessly behind", there just isn't going to be widespread understanding.

trevyn | 8 years ago

Fair. How about an excellent 4-minute YouTube video to get a basic understanding? :)

https://www.youtube.com/watch?v=AgkfIQ4IGaM

electronvolt | 8 years ago

I'll have to watch this later, but I'd argue the issue, at least for me, isn't really surface-level understanding. (At least, not the kind I think could plausibly be imparted in 4 minutes. :))

The basic idea of deep learning has always seemed straightforward to me[0]. However, my perception is that there's a lot of deep magic going on in the details at the level where Google/Microsoft/Amazon/researchers are doing deep learning. That's honestly true of most active research areas[1], but since those are also the results that keep getting a lot of attention, the "it's a black box" feeling makes sense to me. :)

[0] Having done some moderately high-level math and having a CS background, I feel like most ideas in CS fit this description, though. Our devil is the details.

[1] For instance: fairly recent results in weird applications of type theory are also super cool, and require some serious wizardry, but those get much less attention. (And are, I think, more taken for granted, since who doesn't understand a type system? /s)
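For what it's worth, the "straightforward basic idea" really does fit in a few lines: stack affine maps and nonlinearities, then train by writing out the chain rule. A minimal sketch in NumPy (the data, layer sizes, and learning rate here are all made up for illustration, not anything from a real system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 samples, 3 features; labels are a simple function of the inputs.
X = rng.normal(size=(8, 3))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

# Two layers of weights: affine map -> tanh -> affine map -> sigmoid.
W1 = rng.normal(size=(3, 4)) * 0.5
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)) * 0.5
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)   # hidden activations
    p = sigmoid(h @ W2 + b2)   # predicted probability per sample
    return h, p

def loss(p):
    # Binary cross-entropy, averaged over samples.
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

_, p0 = forward(X)
loss_before = loss(p0)

lr = 0.5
for _ in range(300):
    h, p = forward(X)
    # Backward pass: the chain rule, written out by hand.
    grad_logits = (p - y) / len(X)              # dL/d(output pre-activation)
    grad_h = (grad_logits @ W2.T) * (1 - h**2)  # back through tanh
    W2 -= lr * (h.T @ grad_logits)
    b2 -= lr * grad_logits.sum(axis=0)
    W1 -= lr * (X.T @ grad_h)
    b1 -= lr * grad_h.sum(axis=0)

_, p1 = forward(X)
loss_after = loss(p1)
```

That's the whole idea at toy scale; the "deep magic" is everything this sketch leaves out--architectures, initialization, optimization tricks, regularization, and scale.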