What's Going on in Machine Learning? Some Minimal Models

239 points | taywrobel | 1 year ago | writings.stephenwolfram.com

70 comments

deng|1 year ago

Say what you will about Wolfram: he's a brilliant writer and teacher. The way he's able to simplify complex topics without dumbing them down is remarkable. His visualizations are not only extremely helpful but usually also beautiful, and if you happen to have Mathematica on hand, you can easily reproduce what he's doing. Anytime someone asks me for a quick introduction to LLMs, I always point them to this article of his, which I still think is one of the best and most understandable introductions to the topic:

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...

mebiles|1 year ago

“entropy is the log of the number of states that a system can be in that are consistent with all the information known about that system”. he is amazing at explaining things.
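
That definition can be checked with a toy example (mine, not Wolfram's): for n fair coin flips and no other information, 2^n states are consistent with what we know, so the entropy is n bits:

```python
import math

# Entropy as "log of the number of states consistent with what we know":
# for n fair coin flips and no other information, 2**n states are possible.
def entropy_bits(num_states: int) -> float:
    """Entropy in bits of a uniform distribution over num_states states."""
    return math.log2(num_states)

# 10 coin flips -> 2**10 = 1024 consistent states -> 10 bits of entropy.
print(entropy_bits(2**10))  # 10.0
```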

vessenes|1 year ago

Classic Wolfram — brilliant, reimplements / comes at a current topic using only cellular automata, and draws some fairly deep philosophical conclusions that are pretty intriguing.

The part I find most interesting is his proposal that neural networks largely work by “hitching a ride” on fundamental computational complexity, in practice sort of searching around the space of functions representable by an architecture for something that works. And, to the extent this is true, that puts explainability at fundamental odds with the highest value / most dense / best deep learning outputs — if they are easily “explainable” by inspection, then they are likely not using all of the complexity available to them.

I think this is a pretty profound idea, and it sounds right to me — it seems like a rich theoretical area for next-gen information theory. Essentially: are there (soft/hard) bounds on certain kinds of explainability/inspectability?

FWIW, there’s a reasonably long history of mathematicians constructing their own ontologies and concepts and then people taking like 50 or 100 years to unpack and understand them and figure out what they add. I think of Wolfram’s cellular automata like this, possibly really profound, time will tell, and unusual in that he has the wealth and platform and interest in boosting the idea while he’s alive.

phyalow|1 year ago

Agree. (D)NNs have a powerful but somewhat loose inductive bias. They're great at capturing surface-level complexity but often miss the deeper compositional structure. This looseness, in my opinion, stems from a combination of factors: architectures that are not optimally designed for the specific task at hand, limitations in computational resources that prevent us from exploring more complex and expressive models, and training processes that don't fully exploit the available information or fail to impose the right constraints on the fitting process.

The ML research community generally agrees that the key to generalization is finding the shortest "program" that explains the data (Occam's Razor / MDL principle). But directly searching for these minimal programs (architecture space, feature space, training space, etc.) is exceptionally difficult, so we end up approximating the search with something like GPR or circuit search guided by backprop.

This shortest-program idea is related to Kolmogorov complexity (which arises out of classical information theory), i.e. the length of the most concise program that generates a given string (because if you're not operating on the shortest program, then there is looseness/overfit!). In ML, the training data is the string, and the learned model is the program. We want the most compact model that still captures the underlying patterns.
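
A toy way to see the two-part MDL trade-off (my own sketch; the residual cost is a standard Gaussian code length, and all the numbers are made up): score a model by (bits to describe the model) + (bits to describe the data given the model), and prefer the smallest total.

```python
import math

def mdl_score(model_bits: float, residuals, noise_sigma: float = 1.0) -> float:
    """Two-part MDL code length: model description + data given the model.

    Residual cost is the Gaussian negative log-likelihood in bits,
    so larger prediction errors cost more bits to encode.
    """
    data_bits = sum(
        0.5 * math.log2(2 * math.pi * noise_sigma**2)
        + (r * r) / (2 * noise_sigma**2 * math.log(2))
        for r in residuals
    )
    return model_bits + data_bits

# A small model with moderate errors vs. a huge model that fits perfectly:
simple = mdl_score(model_bits=32, residuals=[0.5, -0.3, 0.4])
overfit = mdl_score(model_bits=10_000, residuals=[0.0, 0.0, 0.0])
print(simple < overfit)  # True: MDL prefers the shorter total description
```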

While (D)NNs have been super successful, their reliance on approximations suggests there's plenty of room for improvement in inductive bias and more program-like representations. I think approaches that combine the flexibility of neural nets with the structured nature of symbolic representations will lead to more efficient and performant learning systems. It seems like a rich area to just "try stuff" in.

Leslie Valiant touches on some of the same ideas in his book "Probably Approximately Correct", which tries to nail down some of the computational phenomena associated with the emergent properties of reality (it's heady stuff).

bob1029|1 year ago

> neural networks largely work by “hitching a ride” on fundamental computational complexity

If you look at what a biological neural network is actually trying to optimize for, you might be able to answer The Bitter Lesson more adeptly.

Latency is a caveat, not a feature. Simulating a biologically-plausible amount of real-time delay is almost certainly wasteful.

Leaky charge carriers are another caveat. In a computer simulation, you need never leak any charge (i.e. information) unless you want to. This would presumably make the simulation more efficient.

Inhibitory neurology exists to preserve stability of the network within the constraints of biology. In a simulation, resources are still constrained but you could use heuristics outside biology to eliminate the fundamental need for this extra complexity. For example, halting the network after a limit of spiking activity is met.
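
That spike-budget heuristic can be sketched with a toy integrate-and-fire loop (my own illustration; the neuron model, constants, and budget are all made up):

```python
import random

def run_with_spike_budget(num_neurons=50, steps=1000, threshold=1.0,
                          leak=0.9, spike_budget=200, seed=0):
    """Minimal leaky integrate-and-fire loop that halts on a global spike
    budget instead of relying on inhibitory neurons for stability."""
    rng = random.Random(seed)
    potential = [0.0] * num_neurons
    total_spikes = 0
    for t in range(steps):
        for i in range(num_neurons):
            # leaky integration of random excitatory input
            potential[i] = potential[i] * leak + rng.uniform(0.0, 0.3)
            if potential[i] >= threshold:
                potential[i] = 0.0        # reset after spiking
                total_spikes += 1
                if total_spikes >= spike_budget:
                    return t, total_spikes  # heuristic halt, no inhibition needed
    return steps, total_spikes

step_halted, spikes = run_with_spike_budget()
print(step_halted, spikes)
```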

Learning rules like STDP may exist because a population member's learned experiences cannot survive across generations. If you have the ability to copy the exact learned experiences from prior generations into new generations (i.e. cloning the candidates in memory), this learning rule may be more of a confusing distraction than a benefit.

captainclam|1 year ago

"Classic Wolfram — brilliant, reimplements / comes at a current topic using only cellular automata, and draws some fairly deep philosophical conclusions that are pretty intriguing."

Wolfram has a hammer and sees everything as a nail. But it's a really interesting hammer.

mjburgess|1 year ago

> And, to the extent this is true, that puts explainability at fundamental odds with the highest value / most dense / best deep learning outputs — if they are easily “explainable” by inspection, then they are likely not using all of the complexity available to them.

Could you define explainability in this context?

taneq|1 year ago

> searching around the space of functions representable by an architecture for something that works

That’s… why we’re here? How else could we characterise what any learning algorithm does?

nuz|1 year ago

I can never read comments on any Wolfram blog on HN because they're always so mean-spirited. I see a nerdy guy explaining things from a cool new perspective, and I'm excited to read through it. The comments almost always carry some grudge about him being 'self-centered' or obsessing over cellular automata (who cares, we all have our obsessions).

whalee|1 year ago

The complaint about his ego is warranted, but he also earned it. Wolfram earned his PhD in particle physics from Caltech at 21 years old. Feynman was on his thesis committee. He spent time at the IAS. When he speaks about something, no matter in which configuration he chooses to do so, I am highly inclined to listen.

leobg|1 year ago

Same here on anything Elon. HN is like an uncle who knows a lot and teaches you new things every time you hang out with him… but who also has a few really weird sore spots that you better never mention in his presence.

ralusek|1 year ago

There should be a Godwin’s Law for Stephen Wolfram. Wolfram’s Law: as the length of what he’s saying increases, the probability it will be about cellular automata approaches 1.

That being said, I’m enjoying this. I often experiment with neural networks in a similar fashion and like to see people’s work like this.

nxobject|1 year ago

...and the probability that he names something after himself approaches 1/e.

krackers|1 year ago

>Instead what seems to be happening is that machine learning is in a sense just “hitching a ride” on the general richness of the computational universe. It’s not “specifically building up behavior one needs”; rather what it’s doing is to harness behavior that’s “already out there” in the computational universe.

Is this similar to the lottery ticket hypothesis?

Also the visualizations are beautiful and a nice way to demonstrate the "universal approximation theorem"
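
The universal-approximation point can be checked with a very small experiment (a toy sketch, not from the article; the layer size, learning rate, and target function are arbitrary): one hidden layer of tanh units, trained by plain gradient descent, fits a smooth 1-D function closely.

```python
import numpy as np

# One hidden layer of tanh units can approximate a smooth 1-D function
# arbitrarily well (universal approximation); here we fit f(x) = x^2.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 64).reshape(-1, 1)
y = x ** 2

hidden = 16
W1 = rng.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (hidden, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(10_000):
    h = np.tanh(x @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    err = pred - y                    # dLoss/dpred for MSE/2
    # backward pass: chain rule, layer by layer
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    gz = err @ W2.T * (1 - h ** 2)    # gradient at the hidden pre-activation
    gW1 = x.T @ gz / len(x); gb1 = gz.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"final MSE: {mse:.5f}")
```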

DataDive|1 year ago

I find it depressing that every time Stephen Wolfram wants to explain something, he slowly gravitates towards these simplistic cellular automata and tries to explain everything through them.

It feels like a religious talk.

The presentation consists of chunks of hard-to-digest, profound-sounding text followed by a supposedly informative picture with lots of blobs, then the whole pattern is repeated over and over.

But it never gets to the point. There is never an outcome, never a summary. It is always some sort of patterns and blobs that supposedly explain everything... except nothing useful is ever communicated. You are supposed to "see" how the blobs are "everything"... a new kind of science.

He cannot predict anything; he cannot forecast anything. All he does is use Mathematica to generate multiplots of symmetric little blobs and then suggest that those blobs somehow explain something that currently exists.

I find these Wolfram blogs a massive waste of time.

They are boring to the extreme.

benlivengood|1 year ago

I think that unless Wolfram is directly contradicting the Church-Turing thesis it is ok to skip over the finite automata sections.

It is a given from Church-Turing that some automata are equivalent to some Turing machines, and while that is a profound result, the specific details of the equivalence aren't super important unless, perhaps, it becomes faster and more efficient to run the automata instead of a von Neumann architecture.

ActionHank|1 year ago

Got me feeling self conscious here.

I often explain boring things with diagrams consisting of boxes and arrows, sometimes with different colours.

wrsh07|1 year ago

Because of the computational simplicity, I think there's a possibility that we will discover very cheap machine learning techniques that are discrete like this.

I think this is novel. (I've seen BNNs: https://arxiv.org/pdf/1601.06071 makes things continuous for training, but if inference is sufficiently fast and you have an effective mechanism for permutation, training could be faster using that.)
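
A toy illustration of why binarized nets could be so cheap (my own sketch, not the BNN paper's method): with weights and activations restricted to ±1, a dot product collapses to XNOR plus popcount on packed bits.

```python
# Binarized inference sketch: ±1 vectors packed into integers, with the
# convention bit 1 = +1 and bit 0 = -1.
def bin_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two ±1 vectors of length n, packed as n-bit integers."""
    agree = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # XNOR: bits where signs match
    matches = bin(agree).count("1")              # popcount
    return 2 * matches - n                       # (#agree) - (#disagree)

# a = (+1, -1, +1, +1) -> 0b1011, w = (+1, +1, -1, +1) -> 0b1101
print(bin_dot(0b1011, 0b1101, 4))  # 0: two agreements, two disagreements
```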

I am curious what other folks (especially researchers) think. The takes on Wolfram are not always uniformly positive but this is interesting (I think!)

sdenton4|1 year ago

So, the thing is that linear algebra operations are very cheap already... you just need a lot of them. Any other 'cheap' method is going to have a similar problem: if the unit is small and not terribly expressive, you need a whole lot of them. But it will be compounded by the fact that we don't have decades of investment in making these new atomic operations as fast and cheap as possible.

A good take-away from the Wolfram writeup is that you can do machine learning on any pile of atoms you've got lying around, so you might as well do it on whatever you've got the best tooling for - right now this is silicon doing fixed-point linear algebra operations, by a long shot.

usgroup|1 year ago

Tsetlin machines have been around for some time:

https://en.wikipedia.org/wiki/Tsetlin_machine

They are discrete, individually interpretable, and can be configured into complicated architectures.
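
In plain terms, the basic unit is a two-action learning automaton: a chain of 2n states where the half you are in picks the action, rewards push you deeper into your current half, and penalties push you toward (and eventually across) the middle. A minimal sketch of just that unit (mine, not from the wiki article):

```python
class TsetlinAutomaton:
    """Minimal two-action Tsetlin automaton with 2*n states.

    States 1..n choose action 0; states n+1..2n choose action 1.
    Reward moves the state away from the middle (more confident);
    penalty moves it toward the middle and eventually flips the action.
    """
    def __init__(self, n: int = 3):
        self.n = n
        self.state = n  # start at the boundary: action 0, least confident

    def action(self) -> int:
        return 0 if self.state <= self.n else 1

    def reward(self) -> None:
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self) -> None:
        if self.action() == 0:
            self.state += 1   # toward, then across, the boundary
        else:
            self.state -= 1

ta = TsetlinAutomaton()
assert ta.action() == 0
ta.penalize()            # one penalty pushes it across the boundary...
print(ta.action())       # 1: the automaton switched actions
```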

abecedarius|1 year ago

This looks like it might be interesting or might not, and I wish it said more in the article itself about why it's cool rather than listing technicalities and types of machines. Do you have a favorite pitch in those dozens of references at the end?

taneq|1 year ago

For a wiki article, this seems almost deliberately obtuse. What actually is one, in plain language?

achrono|1 year ago

>All one will be able to say is that somewhere out there in the computational universe there’s some (typically computationally irreducible) process that “happens” to be aligned with what we want.

>There’s no overarching theory to it in itself; it’s just a reflection of the resources that were out there. Or, in the case of machine learning, one can expect that what one sees will be to a large extent a reflection of the raw characteristics of computational irreducibility

Strikes me as a very reductive and defeatist take that flies in the face of the grand agenda Wolfram sets forth.

It would have been much more productive to chisel away at it to figure out something rather than expecting the Theory to be unveiled in full at once.

For instance, what I learn from the kinds of playing around that Wolfram does in the article is: neural nets are but one way to achieve learning & intellectual performance, and even within that there are a myriad different ways to do it, but most importantly: there is a breadth vs depth trade-off, in that neural nets being very broad/versatile are not quite the best at going deep/specialised; you need a different solution for that (e.g. even good old instruction set architecture might be the right thing in many cases). This is essentially why ChatGPT ended up needing Python tooling to reliably calculate 2+2.

jstanley|1 year ago

> ChatGPT ended up needing Python tooling to reliably calculate 2+2.

This is untrue. ChatGPT very reliably calculates 2+2 without invoking any tooling.

dbrueck|1 year ago

I believe that this is one of the key takeaways for reasoning about LLMs and other seemingly-magical recent developments in AI:

"tasks—like writing essays—that we humans could do, but we didn’t think computers could do, are actually in some sense computationally easier than we thought."

It hurts one's pride to realize that the specialized thing one does isn't quite as special as previously thought.

wredue|1 year ago

Computers still aren’t writing essays. They are stringing words together using copied data.

If they were writing essays, I would suggest that it wouldn’t be so ridiculously easy to pick out the obviously AI articles everywhere.

delifue|1 year ago

> But now we get to use a key feature of infinitesimal changes: that they can always be thought of as just “adding linearly” (essentially because ε² can always be ignored relative to ε). Or, in other words, we can summarize any infinitesimal change just by giving its “direction” in weight space

> a standard result from calculus gives us a vastly more efficient procedure that in effect “maximally reuses” parts of the computation that have already been done.

This partially explains why gradient descent became mainstream.
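
The "maximal reuse" is the chain rule evaluated back-to-front: one backward sweep computes the gradient with respect to every parameter by reusing a running adjoint, instead of one forward evaluation per parameter. A toy scalar-chain sketch (mine, not the article's code):

```python
# Chain of L scalar layers y_i = w_i * y_{i-1}. Reverse mode computes the
# gradient of the output w.r.t. every w_i in ONE backward sweep by reusing
# the running adjoint, versus L separate forward evaluations.
def reverse_mode_grads(ws, x):
    # forward pass: store the intermediates
    ys = [x]
    for w in ws:
        ys.append(w * ys[-1])
    # backward pass: one running adjoint, reused for every parameter
    grads = [0.0] * len(ws)
    adjoint = 1.0                      # d(output)/d(output)
    for i in reversed(range(len(ws))):
        grads[i] = adjoint * ys[i]     # d(output)/dw_i
        adjoint *= ws[i]               # reuse: extend the chain one layer back
    return grads

print(reverse_mode_grads([2.0, 3.0, 0.5], x=1.0))  # [1.5, 1.0, 6.0]
```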

aeonik|1 year ago

This article does a good job laying the foundation for why I think homoiconic languages are so important, and why doing AI in languages that aren't is doomed to stagnation in the long term.

The acrobatics that Wolfram can do with the code and his analysis are awesome, and doing the same without the homoiconicity and metaprogramming makes my poor brain shudder.

Do note, Wolfram Language is homoiconic, and I think I remember reading that it supports Fexprs. It has some really neat properties, and it's a real shame that it's not Open Source and more widely used.

jderick|1 year ago

I'd be curious to see an example of what you are talking about wrt his analysis here.

jderick|1 year ago

It is interesting to see the type of analysis he does and the visualizations are impressive, but the conclusions don't really seem too surprising. To me, it seems the most efficient learning algorithm will not be simpler but rather much more complex, likely some kind of hybrid involving a multitude of approaches. An analogy here would be looking at modern microprocessors -- although they have evolved from some relatively simple machines, they involve many layers of optimizations for executing various types of programs.

jksk61|1 year ago

Is a TL;DR available, or at least some of the ideas covered? Because after 3 paragraphs it seems like the good old "it is actually something resembling a cellular automaton" post by Wolfram.

G3rn0ti|1 year ago

Wolfram explains the basic concepts of neural networks rather well, I think. He trains and runs a perceptron at the beginning and then a simpler network. Then he delves into replacing the continuous functions they are built from with discrete binary ones, and ends up with cellular automata that he thinks emulate neural networks and their training process.

While this surely looks interesting, all the "insight" he obtains into the original question of how exactly networks learn is that trained networks do not seem to come up with a simple model they use to produce the output we observe, but rather find one combination of parameters in a random state space that happens to reproduce the target function. There are multiple possible solutions that work equally well, so perhaps the notion of networks generalizing training data is not quite accurate (?). Wolfram links this to "his concept" of "computational irreducibility" (which I believe is just a consequence of Turing-completeness) but does not give any novel strategies for understanding trained models, or for doing machine learning any better with discrete systems. Wolfram presents a fun but at times confusing exercise in discrete automata and unfortunately does not apply the mathematical rigor needed to draw deep conclusions about his subject.

jmount|1 year ago

Wow- Wolfram "invented" cellular automata, neural nets, symbolic algebra, physics and so much more.