I actually think this is just computer science. Why? Because the first "computer scientist" - Alan Turing - was interested in this exact same set of ideas.
The first programs he wrote for the Manchester machines - the Ferranti Mark 1, a descendant of "the Baby" (the Manchester SSEM) - seem to have been focused on a theory he had about how animals get their markings.
To me (as a non-expert in these areas, reading them in a museum over about 15 minutes rather than doing a deep analysis), they look a little like a primitive form of cellular automaton. From the scrawls on the printouts, it's possible that he was playing with the space of algorithms, not just the algorithms themselves.
It might be worth going back and looking at that early work he did and seeing it through this lens.
The idea, if I understand correctly, is that pattern formation in animals depends on molecules diffusing through the growing system (the body) and reacting where the waves of molecules overlap.
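That "diffuse and react" mechanism is what reaction-diffusion models capture. As an illustration - my own sketch using the Gray-Scott model rather than Turing's original 1952 equations, with parameter values that are common demo choices, not anything from Turing's papers - a one-dimensional version fits in a few lines:

```python
import numpy as np

def gray_scott_1d(n=200, steps=2000, Du=0.16, Dv=0.08, f=0.035, k=0.060):
    """Evolve a 1D Gray-Scott reaction-diffusion system.

    u is a substrate fed in at rate f; v is an activator that consumes u
    where the two overlap (the u*v*v reaction term) and is removed at
    rate f + k. Both chemicals diffuse along the line.
    """
    u = np.ones(n)
    v = np.zeros(n)
    v[n // 2 - 5 : n // 2 + 5] = 0.5  # seed a small patch of activator
    for _ in range(steps):
        # discrete Laplacian with periodic boundaries = diffusion
        lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u
        lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
        uvv = u * v * v  # reaction where the chemical waves overlap
        u += Du * lap_u - uvv + f * (1 - u)
        v += Dv * lap_v + uvv - (f + k) * v
    return u, v
```

For suitable feed/kill rates the uniform state can destabilize, and the activator patch spreads into alternating bands of high and low concentration - a 1D analog of spots and stripes on a coat.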
I’m involved in the development of the Functional Universe (FU) framework [0], and I see some interesting intersections with Wolfram’s ruliology.
Both start from the idea that simple rules / functions can generate complex structure. Where FU adds a twist is by making a sharp distinction between possibility and history. In FU, we separate aggregation (the space of all admissible transitions - superpositions, virtual processes, rule applications) from composition (the irreversible commitment of one transition that actually enters history).
You can think of ruliology as exploring the space of possible rule evolutions, while FU focuses on how one path gets selected and becomes real, advancing proper time and building causal structure. Rules generate possibilities; commitment creates facts.
So they’re not the same thing, but I think they’re complementary: ruliology studies the landscape of rules, FU studies the boundary where possibility turns into irreversible history.
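The aggregation/composition distinction can be made concrete with a toy string-rewriting system. This is my own minimal sketch, not code from the FU repo; the rule set and the `min`-based selector are arbitrary illustrations:

```python
def successors(s, rules):
    """Aggregation side: every admissible one-step rewrite of s,
    i.e. each rule applied at each position where it matches."""
    out = set()
    for lhs, rhs in rules:
        start = 0
        while (i := s.find(lhs, start)) != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            start = i + 1
    return out

def commit_path(s, rules, steps, choose=min):
    """Composition side: irreversibly commit to one successor per step,
    turning the branching space of possibilities into a single history."""
    history = [s]
    for _ in range(steps):
        nxt = successors(s, rules)
        if not nxt:
            break
        s = choose(nxt)  # stand-in for whatever mechanism selects the branch
        history.append(s)
    return history

rules = [("A", "AB"), ("B", "A")]
branches = successors("AB", rules)           # {'AA', 'ABB'}: the possibility space
history = commit_path("AB", rules, steps=4)  # one committed path through it
```

`successors` is roughly what a multiway (ruliological) view enumerates; `commit_path` is the FU-style claim that only one of those branches ever enters history.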
Ruliology provides a powerful descriptive framework - a taxonomy of computational behavior. However, it operates at the level of external dynamics without grounding in a primitive ontology. It tells us how rules behave, not why they exist or what they fundamentally are.
This makes ruliology an invaluable cartography of the computational landscape, but not a foundation. It maps the territory without explaining what the territory is made of.
I don’t know, I’ve been involved in computer science for several decades now and cellular automata haven’t really lost their charm. Seems like a cool thing to dedicate your life to!
It's his life. I can certainly look at many other lives and say they are a lot more wasted than his. Heck, he already created Mathematica; if he wanted to go and lie on the beach for the rest of his life, it would still be a lot less wasted than what many other people have accomplished in related domains.
Yeah, I don't like his arrogance, it's very grating to read. And perhaps nothing revolutionary will come of all the play with cellular automata, but it's still interesting. Not as a rule, but sometimes discoveries in history are made by wild and wacky directions of research.
One thing I can say about the Wolfram language is that it is actually Lisp with syntax that looks weird at first sight.
However, when you look at rule processing, it's pattern matching on steroids of a kind I haven't seen in the Lisp world. It looks quite powerful and applies throughout the language (e.g. the "Query" book).
Too bad the whole language is closed and so heavily licensed.
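For readers who haven't seen it, the flavor of that rule processing can be approximated with a toy term rewriter - a rough Python sketch of what `expr /. rules` with `x_` patterns does in the Wolfram Language; the real matcher also handles sequence patterns, conditions, attributes such as orderlessness, and much more:

```python
def match(pattern, expr, binds=None):
    """Structurally match expr against pattern; strings ending in '_'
    are pattern variables (like x_ in the Wolfram Language)."""
    binds = dict(binds or {})
    if isinstance(pattern, str) and pattern.endswith("_"):
        if pattern in binds:  # a repeated variable must bind consistently
            return binds if binds[pattern] == expr else None
        binds[pattern] = expr
        return binds
    if isinstance(pattern, tuple) and isinstance(expr, tuple):
        if len(pattern) != len(expr):
            return None
        for p, e in zip(pattern, expr):
            binds = match(p, e, binds)
            if binds is None:
                return None
        return binds
    return binds if pattern == expr else None

def substitute(template, binds):
    """Fill a rule's right-hand side with the matched bindings."""
    if isinstance(template, str) and template in binds:
        return binds[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, binds) for t in template)
    return template

def replace_all(expr, rules):
    """Apply the first matching rule at each level, outermost first
    (a single pass, roughly like ReplaceAll)."""
    for lhs, rhs in rules:
        binds = match(lhs, expr)
        if binds is not None:
            return substitute(rhs, binds)
    if isinstance(expr, tuple):
        return tuple(replace_all(e, rules) for e in expr)
    return expr
```

For example, with the rule `(("plus", "x_", 0), "x_")` (read: plus(x, 0) -> x), `replace_all(("times", ("plus", "b", 0), 2), [rule])` rewrites the inner subterm and returns `("times", "b", 2)`.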
I am struggling to understand what is new here - other than the word "ruliad" - which to me seems similar to what we have in theoretical computer science when we talk about languages, sentences, and grammars.
It's just Wolfram explaining how he likes studying things that can be described by simple rules, and how complexity can emerge in spite of (or because of?) the seeming simplicity of those rules. He came up with a word for it, and while I think "ruliology" sounds a bit silly, it does do what it says on the tin.
Always found this term sounded like a half-baked one. I get that going full Greek roots with "nomology" was a dead end due to prior art. But "regularology" was probably free, or even at the time "regulogy" or "regology", though by now those are attached to different notions.
Yes, he frequently exhibits an ego the size of Jupiter. But he is very smart†, and he writes well, and this stuff that they're doing is at least interesting. I don't know if it's physics or metaphysics or something else entirely, and it may be just empty tail-chasing, but I reckon it's at least worth paying some attention to.
† and he's also built a long-term business making and selling extremely capable maths tooling, of all things, which I think is worth some respect
Yeah, I get the emotional pushback, but putting that aside, he still seems fairly well accomplished, more so than me by a long shot, and at least he is throwing nerdy ideas out there we can think about or discuss.
But exactly what is the problem here? Other than perhaps a very mechanical view of the universe (which he shares with many other authors) where it is hard to explain things like consciousness and other complex behaviors.
With Wolfram it is usually the grandstanding and taking credit for other people's work. Inventing new words for old things is part and parcel of that. He has a lot in common with Schmidhuber, both are arguably very smart people but the fact that other people can be just as smart doesn't seem to fit their worldview.
Wolfram has failed to live up to his promise of providing tools to make progress on fundamental questions of science.
From my understanding, there are two ideas that Wolfram has championed: Rule 110 is Turing machine equivalent (TME) and the principle of computational equivalence (PCE).
Rule 110 was shown to be TME by Cook (hired by Wolfram) [0] and was used by Wolfram as, in my opinion, empirical evidence to support the claim that Turing machine equivalence is the norm, not the exception (PCE).
At the time of writing of ANKOS, there was a popular idea that "complexity happens at the edge of chaos". PCE pushes back against that, effectively saying the opposite: that you need a conspiracy to prevent Turing machine equivalence. I don't want to overstate the idea but, in my opinion, PCE is important and provides some, potentially deep, insight.
But, as far as I can tell, it stops there. What results has Wolfram proved, or paid others to prove? What physical phenomena has Wolfram explained? Entanglement still remains a mystery, the MOND vs. dark matter debate rages on, and others have made progress on busy beaver numbers, topology, Turing machine lower bounds, and relations between run time and space, etc. The worlds of physics, computer science, mathematics, chemistry, biology, and most other fields continue on using classical and newly developed tools, independent of Wolfram, that have absolutely nothing to do with cellular automata.
Wolfram is building a "new kind of science" tool but has failed to provide any use cases of when the tool would actually help advance science.
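For anyone who hasn't played with it, Rule 110 itself - the cellular automaton Cook proved universal - is tiny. A minimal sketch on a periodic row of cells (my own illustration, not Cook's encoding):

```python
def rule110_step(cells):
    """One synchronous update of elementary cellular automaton Rule 110.

    The number 110 (binary 01101110) is the rule table itself: bit k is
    the next state for a (left, center, right) neighborhood reading k.
    """
    n = len(cells)
    return [
        (110 >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 30 + [1]  # a single black cell on a ring of 31
for _ in range(5):
    row = rule110_step(row)  # the pattern grows leftward step by step
```

Cook's universality result means that, with a suitably elaborate encoding of backgrounds and gliders in the initial row, iterating this one-line update can emulate any Turing machine - which is what makes Rule 110 plausible evidence for PCE.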
Sure, it's typical Wolfram, inviting the typical criticism. If you can understand what he's talking about at all then you won't be very convinced it's new. If you can't understand what he's talking about, then you also won't be interested in the puffery and priority dispute.
>Since I invented the term, I decided I should write something to explain it. But then I realized: I actually already wrote something back in 2021 when I first invented the term.
I've read all Wolfram's books and I really, really want him to succeed with a completely computational theory of everything, because it would no doubt be an incredible feat and a complete paradigm shift in science like no other in maybe the last 400 years... but holy shit he's insufferable. I hope he finishes his theory before collapsing into the black hole caused by his massive ego.
I kinda liked some of his previous articles, but come on. A picture of a plaque for a department is pretty much a guarantee it won't happen.
Not to mention: departments?! Anybody still desiring to create new departments in the bureaucracy of our modern educational institutions has a stunted imagination. The appropriate attitude with which to approach creating a new department is while holding one's nose.
The Wolfram Engine (essentially the Wolfram Language interpreter/execution environment) is free: https://www.wolfram.com/engine/. You can download it and run Wolfram code.
Isn't this his personal blog? The domain name is "stephenwolfram.com", this is his personal website. Of course there will be "I"'s and "me"'s — this website is about him and what he does.
As for falsifiability:
> You have some particular kind of rule. And it looks as if it’s only going to behave in some particular way. But no, eventually you find a case where it does something completely different, and unexpected.
So I guess to falsify a theory about some rule you just have to run the rule long enough to see something the theory doesn't predict.
gilleain|1 month ago
https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion_sys...
SideburnsOfDoom|1 month ago
https://en.wikipedia.org/wiki/Formal_system
[0] https://github.com/VoxleOne/FunctionalUniverse/blob/main/doc...
libertine|1 month ago
Isn't he well accomplished, and prolific throughout his life?
jacquesm|1 month ago
But it does exist in the FP world: Prolog, Erlang.
krackers|1 month ago
Mostly I just use it as an overpowered calculator though.
psychoslave|1 month ago
https://en.wiktionary.org/wiki/regula#Latin
https://en.wikipedia.org/wiki/Nomology
https://www.ebi.ac.uk/ols4/ontologies/ro/properties/http%253...
https://www.ycombinator.com/companies/regology
andyjohnson0|1 month ago
Respectfully, I think that is a mistake.
inimino|1 month ago
At least Wolfram's ego led him to contribute something interesting.
chvid|1 month ago
https://en.wikipedia.org/wiki/A_New_Kind_of_Science
[0] https://en.wikipedia.org/wiki/Rule_110
mvr123456|1 month ago
The rest of his stuff tagged ruliology is more interesting though. Here's one connecting ML and cellular automata: https://writings.stephenwolfram.com/2024/08/whats-really-goi...
ForceBru|1 month ago
Wolfram Mathematica (the Jupyter Notebook-like development environment) is paid, but there are free and open source alternatives like https://github.com/WLJSTeam/wolfram-js-frontend.
> WLJS Notebook ... [is] A lightweight, cross-platform alternative to Mathematica, built using open-source tools and the free Wolfram Engine.
chvid|1 month ago
https://www.wolframalpha.com/
deepsun|1 month ago
I didn't find anything on falsifiability criteria - any new theory should, at least in principle, be testable in a way that could show it to be false.
dist-epoch|1 month ago
You judge them by how useful they are.
Ruliology is a bit like that.
SanjayMehta|1 month ago
https://nedbatchelder.com/blog/200207/stephen_wolframs_unfor...