danaugrs|3 months ago
I spent several years' worth of weekends working on this, and I'm glad to see it here on Hacker News.
I started working on this problem when I learned about Lamping's algorithm for optimal lambda reduction [1]. He invented a beautiful algorithm (often referred to as his "abstract algorithm"), which uses fan-in and fan-out nodes to reduce lambda terms optimally (with optimal sharing). Unfortunately, in order to make it fully work, some fans needed to duplicate one another while others needed to cancel one another. To determine this correctly, Lamping had to extend his abstract algorithm with several delimiter node types and many additional graph reduction rules. These delimiter nodes perform "bookkeeping", making sure the right fan nodes match. I was dissatisfied with the need for these additional nodes and rules. There had to be a better way.
My goal was to try to implement Lamping's abstract algorithm without adding any delimiter nodes, and to do it under the interaction net paradigm to ensure perfect confluence. I tried countless solutions, and finally Delta-Nets was born. Feel free to ask any questions.
I recently started building a programming language on top of Delta-Nets, called Pur (https://pur.dev/).
Feel free to follow along on this journey:
https://x.com/danaugrs
https://x.com/purlanguage
[1] https://dl.acm.org/doi/pdf/10.1145/96709.96711
tromp|3 months ago
I linked to the paper in a recent comment [1]. The author has been active in the Higher Order Community Discord channel for quite a while. The Higher Order Company [2] is developing HVM (Higher Order Virtual Machine), a high-performance implementation of interaction nets for both CPUs and GPUs, and started the channel to discuss development and related topics, such as the Bend language for programming the HVM, discussed on HN at [3].
The paper manages to present previous work on Levy's optimal beta reduction in a more streamlined fashion, generalizing duplicators to so-called replicators that avoid the need for separate bookkeeping gadgets (like croissants and brackets). Its author also proves himself to be a skilled programmer in bringing his theory to life with this web-based evaluator.
The app contrasts traditional graph-based λ-calculus reduction (which replaces nodes with entire subgraphs) with interaction (which makes only local modifications and hence needs more steps), while showing that semantics is preserved.
[1] https://news.ycombinator.com/item?id=46034355
[2] https://higherorderco.com/
[3] https://news.ycombinator.com/item?id=40390287
xjm|3 months ago
I find it much easier to see what is going on when selecting λ-calculus instead of Δ-Nets. E.g. for the mandatory Y Combinator,
λf.(λx.f (x x)) (λx.f (x x))
for which the difference with
λf.(λx.f (x x)) (λx.f (x x) f)
is very clear, whereas with Δ-nets the difference is more subtle. I guess it is because the visualization has more information than with the λ-calculus.
fnord77|3 months ago
csense|3 months ago
I don't know much about any of this, and I'm far from an expert on the lambda calculus. But I skimmed the paper by OP, which is informative: https://arxiv.org/abs/2505.20314
The lambda calculus is an alternate model of computation created by Alonzo Church in the 1930s. It's basically a restricted programming language where the only things you can do are define anonymous functions, and apply those anonymous functions to input variables or the output of other functions.
The lambda calculus statement And = λp.λq.p q p is more conventionally notated as And(p, q) = p(q)(p), since application associates to the left.
You can reduce lambda expressions -- basically, this means simplifying the program as far as possible, which is what you expect a good optimizing compiler to do.
When there are multiple possible simplifications, which one should you pick? This is a complicated problem people basically solved years ago, but the solutions are also complicated and have a lot of moving parts. OP came up with a graph-based system called ∆-Nets that is less complicated and more clearly explained than the older solutions. ∆-Nets is a lot easier for people to understand and implement, so it might have practical applications making better performance and tooling for lambda-calculus-based programming languages. Which might in turn have practical benefits for areas where those languages are used (e.g. compilers).
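To make "reduce" concrete, here is a minimal single-step β-reducer sketch in TypeScript (illustrative only: the `Term` type and function names are invented for this example, and it skips the capture-avoiding renaming a real evaluator needs):

```typescript
// Minimal λ-calculus terms and a single-step β-reducer (leftmost-outermost).
type Term =
  | { tag: "var"; name: string }
  | { tag: "lam"; param: string; body: Term }
  | { tag: "app"; fn: Term; arg: Term };

// Substitute `arg` for free occurrences of `name` in `t`.
// Assumes bound variable names are globally unique (no capture handling).
function subst(t: Term, name: string, arg: Term): Term {
  switch (t.tag) {
    case "var":
      return t.name === name ? arg : t;
    case "lam":
      return t.param === name
        ? t // `name` is shadowed; stop here
        : { tag: "lam", param: t.param, body: subst(t.body, name, arg) };
    case "app":
      return { tag: "app", fn: subst(t.fn, name, arg), arg: subst(t.arg, name, arg) };
  }
}

// One β-step: rewrite the leftmost-outermost redex (λx.b) a  →  b[x := a].
function step(t: Term): Term | null {
  if (t.tag === "app") {
    if (t.fn.tag === "lam") return subst(t.fn.body, t.fn.param, t.arg);
    const fn = step(t.fn);
    if (fn) return { tag: "app", fn, arg: t.arg };
    const arg = step(t.arg);
    if (arg) return { tag: "app", fn: t.fn, arg };
  } else if (t.tag === "lam") {
    const body = step(t.body);
    if (body) return { tag: "lam", param: t.param, body };
  }
  return null; // no redex: the term is in normal form
}

// Example: (λx.x) y  →  y
const id: Term = { tag: "lam", param: "x", body: { tag: "var", name: "x" } };
const redex: Term = { tag: "app", fn: id, arg: { tag: "var", name: "y" } };
console.log(JSON.stringify(step(redex))); // → {"tag":"var","name":"y"}
```

The "which redex to pick" question in the comment above is exactly the choice this `step` function hard-codes (leftmost-outermost); optimal reduction is about making that choice without ever duplicating work.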
The linked simulation lets you write a lambda calculus program, see what it looks like as a ∆-Net graph, and click to simplify the graph.
It's only tangentially related to OP, but YouTube channel 2swap recently had an interesting video about lambda calculus that you don't have to be an expert to enjoy: https://www.youtube.com/watch?v=RcVA8Nj6HEo
layer8|3 months ago
refulgentis|3 months ago
Cosign, 10 hours in and the comments are exclusively people who seemingly know each other already riffing on top of something that's not clear to an audience outside the community, or replying to a coarser version of the request with ~0 information. (Some tells: referring to other people by first name; having a 1:1 discussion about the meaning of a fork by some other person.)
asgr|3 months ago
I see Salvadori has a fork, so they are obviously aware of it. Unsure whether they're proposing the exact same thing without reference or citation…
marvinborner|3 months ago
This is quite different. Salvadori's work aims for optimal reduction of the full lambda calculus (which requires something called "bookkeeping"/"oracle"), while HOC works on optimal/parallel reduction of a certain subset of the lambda calculus.
Both approaches have been researched for a long time now, where HOC's subset is typically referred to as the "abstract algorithm". For example, a version of the lambda calculus where any variable can be used at most once (the "affine lambda calculus") can be reduced optimally with interaction nets without requiring any bookkeeping.
The novel thing about Salvadori's work is that it develops a new (and better-explained) bookkeeping mechanism.
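To illustrate the affine restriction mentioned above, here is a small TypeScript sketch (the `Term` type and names are invented for this example) that checks whether every bound variable occurs at most once:

```typescript
// Affine check: a term is affine if every bound variable is used at most once.
type Term =
  | { tag: "var"; name: string }
  | { tag: "lam"; param: string; body: Term }
  | { tag: "app"; fn: Term; arg: Term };

// Count free occurrences of `name` in `t` (a shadowing binder cuts the scope).
function countFree(t: Term, name: string): number {
  switch (t.tag) {
    case "var": return t.name === name ? 1 : 0;
    case "lam": return t.param === name ? 0 : countFree(t.body, name);
    case "app": return countFree(t.fn, name) + countFree(t.arg, name);
  }
}

function isAffine(t: Term): boolean {
  switch (t.tag) {
    case "var": return true;
    case "lam": return countFree(t.body, t.param) <= 1 && isAffine(t.body);
    case "app": return isAffine(t.fn) && isAffine(t.arg);
  }
}

// λx.x is affine; λx.x x (self-application, as in the Y combinator) is not,
// which is why duplication/sharing is where the hard bookkeeping lives.
const idTerm: Term = { tag: "lam", param: "x", body: { tag: "var", name: "x" } };
const selfApp: Term = {
  tag: "lam", param: "x",
  body: { tag: "app", fn: { tag: "var", name: "x" }, arg: { tag: "var", name: "x" } },
};
console.log(isAffine(idTerm), isAffine(selfApp)); // → true false
```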
alan-jordan13|3 months ago
[deleted]
Andrew-Tate|3 months ago
[deleted]
qsort|3 months ago
The linked paper (https://arxiv.org/pdf/2505.20314) claims the squiggles they introduce are a model that achieves Levy-optimal parallel reduction of lambda terms.
But the author has no affiliation, it's very weird they're calling this "lambda-reduction" and it heavily smells of AI slop?
I hope I'm wrong but it doesn't look right. Can anyone with expertise in this field chime in?
danaugrs|3 months ago
Author here. Other experts in this field have also used the term "lambda reduction", including Levy himself [1] and Lamping [2], both of whom are referenced in the Delta-Nets paper. "Lambda-reduction" is clearly an abbreviation of lambda-calculus reduction.
[1] https://dl.acm.org/doi/abs/10.1145/143165.143172
[2] https://dl.acm.org/doi/pdf/10.1145/96709.96711
arethuza|3 months ago
The interactive lambda-calculus interpreter looks like it does the right thing, not that I've tried to push it too hard.
papes_|3 months ago
mrkeen|3 months ago
Can't comment on the delta-nets. If you're looking for a real person who's been plugging away at parallel & optimal reduction of lambda terms, this is where to look: https://github.com/VictorTaelin
I don't think "lambda-reduction" is a red flag. The "real" term would be "beta-reduction" (but that's the incumbent algorithm which TFA claims to replace or improve on - so why not give it a new name?)
But if I were to go sniffing for red flags:
From the first commit:
lambdacalc.ts: // The original lambda calculus introduced by Church was the 'relevant' lambda calculus which doesn't allow for weakening/erasure. This is why I add the '+' below to indicate that the lambda calculus started at 1936 but was extended afterwards.
What?
util.ts: Why is this full of Gaussian Elimination of Matrices? The paper doesn't mention it
anonnon|3 months ago
> it's very weird they're calling this "lambda-reduction"
That was my reaction as well, only ever having heard of β-reduction, α-conversion (to prevent variable collisions), and η-reduction (rewriting λx.f x to just f, provided x does not occur free in f). Sloppy use of nomenclature is absolutely a red flag.