top | item 40931989

0x00_NULL | 1 year ago

Yep. I think this is right on. The anthropomorphization in their descriptions of behavior and problems is flawed.

It's precisely the analogy we learned early in our study of neural networks: the layers supposedly analyze curves, straight segments, edges, size, shape, etc. But when we actually look at the activation patterns, we see they are not doing anything remotely like that. They look like stochastic correlations, and the activation patterns are almost entirely random.
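As a minimal sketch of the kind of inspection being described (not the commenter's actual setup): capture a layer's activations in a tiny, randomly initialized NumPy MLP and look at their summary statistics, which is usually all the structure one sees at a glance.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical weights: randomly initialized, as in an untrained network.
W1 = rng.normal(size=(8, 4))   # input dim 8 -> hidden dim 4
W2 = rng.normal(size=(4, 2))   # hidden dim 4 -> output dim 2

def forward(x, record):
    h = relu(x @ W1)           # hidden-layer activation pattern
    record.append(h)
    y = h @ W2                 # output-layer activation pattern
    record.append(y)
    return y

activations = []
x = rng.normal(size=(5, 8))    # a small batch of 5 inputs
forward(x, activations)

# Inspect the captured patterns: shapes and summary statistics.
for i, a in enumerate(activations):
    print(f"layer {i}: shape={a.shape}, mean={a.mean():.3f}")
```

In a real framework one would register a forward hook instead of threading a `record` list through the call, but the point is the same: the raw activation matrices rarely resemble the tidy "edge detector" story on their own.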

The same thing is happening here, but at incomprehensible scales and with fortunes being sunk into hope.