top | item 32063217


mikolajw | 3 years ago

Clickbait title.

I wish ML researchers (EDIT: and engineers and journalists) would stop using anthropomorphizing language. The practice has decades of solid tradition behind it, but that's no excuse. Comparing a machine to a human misleads the public: machines aren't like babies, and artificial neural networks aren't like actual neural networks or brains. Machines shouldn't be given human names (PLATO is a borderline case).

I know this is like talking to a wall -- money requires hype -- but still, please stop doing that.

sailingparrot | 3 years ago

> I wish ML researchers stopped using anthropomorphizing language.

ML researchers don't write articles, journalists do.

Actual language used by the ML researchers: "Intuitive physics learning in a deep-learning model inspired by developmental psychology" [1]

[1]: https://www.nature.com/articles/s41562-022-01394-8

_gabe_ | 3 years ago

> Actual language used by the ML researchers: "Intuitive physics learning in a deep-learning model inspired by developmental psychology"

In my opinion, this is still anthropomorphizing the algorithms. The term "deep learning" is a poor representation of what actually goes on. Someone please correct me if I'm wrong, but in essence all ML does is statistical regression. It doesn't "learn" the way a person learns, and neural networks are not actually like brains (as far as we understand how the brain works).

I feel like the whole industry is inundated with aphorisms that are kind of true, but not wholly true. Evolutionary algorithms, neural networks, deep learning, DeepMind -- this stuff all reeks of anthropomorphizing fundamentally mathematical processes. I get it: it's a lot easier to convey the gist with "the computer is learning/training" than with "the computer is refining its weights and biases to try to optimize the output".
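To make the "refining weights and biases" framing concrete, here is a minimal NumPy sketch (all numbers and names are illustrative, not from any system discussed here) where "training" is nothing but gradient descent on a least-squares regression:

```python
import numpy as np

# "Learning" here is just least-squares regression via gradient descent:
# repeatedly nudge a weight and a bias to reduce prediction error.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * x + 0.5  # ground-truth line the model should recover

w, b = 0.0, 0.0  # all the "knowledge" is these two numbers
lr = 0.1
for _ in range(500):
    err = (w * x + b) - y
    # gradients of mean squared error with respect to w and b
    w -= lr * 2.0 * np.mean(err * x)
    b -= lr * 2.0 * np.mean(err)

print(w, b)  # converges to values close to 3.0 and 0.5
```

Nothing in the loop resembles a child forming a concept; it is curve fitting, which is the commenter's point.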

mikolajw | 3 years ago

You're right that journalists use anthropomorphization much more. But AI researchers also have a long history of choosing terms that are anthropomorphizing or animating. Here the name PLATO -- which evokes an image of an ancient philosopher, a human, who is by cultural tradition considered smart -- is used in the original journal article.

Terms like "neural network" and "artificial intelligence" are frequently used by AI engineers and researchers despite the obvious image they evoke. Sometimes they even call their creations "brains". Also note the name DeepMind.

It's definitely not just the journalists.

bluetwo | 3 years ago

To add to that, often EDITORS are the ones who come up with the titles, for reasons beyond clarity, such as using words that draw attention and fitting a specific space.

blt | 3 years ago

My pet peeve is when AI researchers coin new terms for objects that can be described by well-established mathematical terms. For example, saying a neural network layer has "256 units" instead of "output dimensionality of 256".

But at some point you need to name things for brevity. I understand why people say "activation function" instead of "elementwise monotonic nonlinear function".

Misuse is also rampant, like using "inference" to describe evaluating a neural network on an input even when the network isn't part of a probabilistic model.
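The terminology gap above is easy to see in code. A hedged NumPy sketch (dimensions and names are illustrative): a "layer with 256 units" is just an affine map to a 256-dimensional output followed by an elementwise nonlinearity, and "inference" is plain function evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)
in_dim, out_dim = 64, 256  # "256 units" = output dimensionality of 256

W = rng.standard_normal((out_dim, in_dim)) * 0.1
b = np.zeros(out_dim)

def layer(x):
    # affine map, then the "activation function" (elementwise ReLU)
    return np.maximum(0.0, W @ x + b)

x = rng.standard_normal(in_dim)
h = layer(x)       # "inference": just evaluating the function on an input
print(h.shape)     # (256,)
```

Seen this way, the established mathematical vocabulary (affine map, output dimensionality, elementwise nonlinearity) describes everything the jargon does.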

mdp2021 | 3 years ago

To be fair: given the high degree of interdisciplinarity and imperfect acquaintance with all the terminologies (and imperfect memory), given that we mix natural language with conventional technical language (with some continuity between them), and given that natural language itself mixes original core root meanings with later conventions -- and that even biologically the best term may occasionally be hard to find (polysemy) -- the mess is to be expected.

feral | 3 years ago

> artificial neural networks aren't like actual neural networks or brains

Just to zoom right in on neural networks:

People often say this, and I never see a solid argument.

I know very little about biological neural networks.

Clearly they are very different in some respects, for example, meat vs silicon.

But I never see a good argument that there's no perspective from which the computational structure is similar.

Yes, the low-level structure and the optimization are different, but so what? You can run quicksort on a computer made of water and wood, or vacuum tubes, or transistors, and it's still quicksort.

Are we sure there aren't similarities in how the various neural networks process information? I would be interested in an argument for this claim.

After all, the artificial neural networks are achieving useful high level functionality, like recognizing shapes.

mikolajw | 3 years ago

There are many ways one can argue for or against this comparison; it is mostly a matter of terminology. The problem, however, is that the field of AI has for many decades consistently shaped its language to evoke human-like connotations in order to boost hype. This article's title is yet another example of that.

PeterisP | 3 years ago

There are a few points where artificial neural networks conceptually diverge from biological ones for computational reasons.

One is the notion of time and connectivity loops. Overwhelmingly, ANNs use a feed-forward architecture: the network is a directed graph without loops, some input is transformed to some output in a single pass, and weights can be adjusted in a single reverse pass, which is very practical for training. We do know that biological brains have behavior that relies on signals "looping through" the neurons, and that is fundamentally different from, say, running a network iteratively (like generating text word-by-word via GPT-3). We have artificial neural network simulations that do things like this, and also simulations of "spike-train" networks (which can model other time-related aspects that glorified perceptrons can't), but we don't use them in practice: the computational overhead means that for most common ML tasks we get better performance from an architecture that's easy to compute and allows a few orders of magnitude more parameters, since size matters more.
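A rough sketch of that distinction with a toy network (sizes and names are illustrative only): a feed-forward pass is a single acyclic evaluation, and any "looping" behavior has to be imposed from the outside by feeding the output back in as the next input, the way GPT-style word-by-word generation does -- the graph itself stays loop-free:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.5

def step(x):
    # one feed-forward pass: input -> output, no internal loops
    return np.tanh(W @ x)

x = rng.standard_normal(4)
y = step(x)  # a single pass, as in most ANNs

# "Looping" approximated externally: output fed back as the next input.
state = x
for _ in range(10):
    state = step(state)
```

Biological recurrence happens inside the network as signals propagate; here the recurrence exists only in the driver loop around an acyclic function.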

mdp2021 | 3 years ago

It is not the case -- this is just biomimicry: "let us try imitating feats of a living organism". Perfectly legitimate. Nobody is being told to form undue images from it.

mikolajw | 3 years ago

"DeepMind AI learns simple physics like a baby" clearly makes an unduly image out of it. Calling it PLATO evokes an image of an ancient human philosopher. No other field uses as many bold comparisons to humans as artificial intelligence (its name alone is one).