top | item 42291724

puzzledobserver | 1 year ago

I share this experience.

Perhaps another way of asking the question is: Are there any results, either about individual programs or in PL theory as a whole, that were made simpler / clarified / generalized because of category theoretic insights?

throwaway81523|1 year ago

Sure, Haskell's type system (and type systems in general) are informed by categories:

https://en.wikibooks.org/wiki/Haskell/Category_theory

This article is also nice:

https://www.haskellforall.com/2012/08/the-category-design-pa...

It explains how Haskell's monad laws follow directly from how categories work.
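For instance (a sketch using only the standard `>=>` operator from `Control.Monad`): the monad laws are exactly the identity and associativity laws of the Kleisli category, whose arrows are functions `a -> m b`:

```haskell
import Control.Monad ((>=>))

-- Kleisli composition (>=>) makes monadic functions a -> m b the arrows
-- of a category. The monad laws are then just the category laws:
--
--   return >=> f    = f                    (left identity)
--   f >=> return    = f                    (right identity)
--   (f >=> g) >=> h = f >=> (g >=> h)      (associativity)

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- Composing in the Kleisli category of Maybe: failure propagates.
recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = safeRecip >=> safeSqrt

main :: IO ()
main = do
  print (recipThenSqrt 4)    -- Just 0.5
  print (recipThenSqrt 0)    -- Nothing
  print (recipThenSqrt (-4)) -- Nothing
```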

BoiledCabbage|1 year ago

Yes, but it's much, much faster and more productive to just learn Haskell's type system (mainly Functors, Monads and Applicatives) as 3 individual "design patterns" than it is to figure out how you can even begin to apply the Yoneda Lemma to whatever programming problem you have in front of you.
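As a sketch of that "design pattern" view, using only the standard `Maybe` type: the three abstractions are just three ways of applying functions in a failure-aware context, no Yoneda required.

```haskell
-- Functor: apply a pure function inside a context.
doubled :: Maybe Int
doubled = fmap (* 2) (Just 21)          -- Just 42

-- Applicative: apply a function of several arguments, where every
-- argument may itself be missing.
summed :: Maybe Int
summed = (+) <$> Just 40 <*> Just 2     -- Just 42

-- Monad: sequence computations where each step can fail, and later
-- steps depend on earlier results.
halved :: Maybe Int
halved = Just 84 >>= \x ->
  if even x then Just (x `div` 2) else Nothing  -- Just 42

main :: IO ()
main = mapM_ print [doubled, summed, halved]
```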

Category Theory has birthed some useful abstractions, but you don't really need any of CT to learn and use the abstractions.

The basics of Abstract Algebra on the other hand are absolutely worth the bit of time it takes to learn them.

Groups, semigroups, rings, fields, monoids - distributivity, identity, zeros, associativity, commutativity are pretty trivial to learn - most people already know the underlying concepts and they pop up all the time in programming.

Monoids are incredibly useful - most people already know them, and intuitively use them, but it's helpful to name and understand the pattern.
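For example (a sketch using base's `Data.Monoid` wrappers): one `foldMap` works for sums, flags, and strings alike, because each is just an associative combine with an identity.

```haskell
import Data.Monoid (Any (..), Sum (..))

-- A monoid is an associative (<>) plus an identity (mempty).
-- Naming the pattern means a single fold covers all of these:

total :: Int
total = getSum (foldMap Sum [1, 2, 3, 4])               -- 10

anyNegative :: Bool
anyNegative = getAny (foldMap (Any . (< 0)) [3, -1, 4]) -- True

joined :: String
joined = foldMap show [1, 2, 3]                         -- "123"

main :: IO ()
main = do
  print total
  print anyNegative
  putStrLn joined
```

Associativity is also what lets such folds be chunked or parallelized: combine within chunks first, then combine the chunk results, and the answer is unchanged.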

Category Theory just doesn't have the same return on investment for software development.

Koshkin|1 year ago

Indeed; Haskell is purely functional, and Category Theory is nothing but a purely functional language of mathematics.

bmitc|1 year ago

But what does that buy you?

pizza|1 year ago

The thing about questions like this is that the complexity of mathematical explanations is highly dependent on what each reader considers "simple." Consider two different approaches to understanding a concept, expressed in information-theoretic terms:

  H(concept) + H(existing explanation|concept) 
vs

  H(concept) + H(existing explanation|concept) + H(categorical explanation|existing explanation, concept)

This additional complexity layer of categorical framing has a nonzero cognitive cost, which varies based on the learner's intuitions and background. The investment in learning categorical thinking only becomes worthwhile when the learner can amortize its cost by envisioning its broad applicability - when they can see how categorical frameworks enable systematic problem decomposition and generalization. This implies they've already recognized the limitations and redundancies of non-categorical approaches, understanding intuitively how many concepts would need to be reinvented as the problem evolves within its conceptual framework (gestell).

However, there exists a middle path that often goes unrecognized as categorical thinking, despite embodying its essence. This approach involves incrementally discovering generalizations -- "oh, this can be generalized with this type," or "oh, if I wrap this in another type I can do something else later on," or "oh, this syntactic sugar for this particular operator overload feels quite nice."

matt-noonan|1 year ago

Yes, there are a number of them. Here are some examples off the top of my head:

- Moggi was studying the problem of equivalence of programs, and noted that the traditional approach to modeling a program as a total function Input -> Output is problematic. He pioneered the use of monads and Kleisli categories as a foundation for reasoning about equivalence of real programs, including all the real-world nastiness like non-termination, partiality (e.g. throwing an exception that kills the program), non-determinism, and so on. https://person.dibris.unige.it/moggi-eugenio/ftp/lics89.pdf

- Linear logic (and its close relative affine logic) was the inspiration behind Rust's ownership model, from what I understand. Linear logic was originally described in terms of the sequent calculus by Girard (http://girard.perso.math.cnrs.fr/linear.pdf), but later work used certain categories as a model of linear logic (https://ncatlab.org/nlab/files/SeelyLinearLogic.pdf). This answered and clarified a number of questions stemming from Girard's original work.

- Cartesian-closed categories (CCCs) form models of the simply-typed lambda calculus, in the sense that any lambda term can be interpreted as a value in a CCC. Conal Elliott pointed out that this means that a lambda term doesn't have just one natural meaning; it can be given a multitude of meanings by interpreting the same term in different CCCs. He shows how to use this idea to "interpret" a program into a circuit that implements the program. http://conal.net/papers/compiling-to-categories/

- Mokhov, Mitchell, and Jones studied the similarities and differences between real-world build systems and explained them using different kinds of categories. https://www.microsoft.com/en-us/research/uploads/prod/2018/0...

- There is a classical construction about minimizing a DFA due to Brzozowski which is a bit magical. Given a DFA, do the following process twice: (a) get an NFA for the reverse language by reversing all edges in the DFA and swapping start / accept nodes, then (b) drop any nodes which are not reachable from a start node in the NFA. The result will be the minimal DFA that accepts the same language as your original DFA! Bonchi, Bonsangue, Rutten, and Silva analyzed Brzozowski's algorithm from a categorical perspective, which allowed them to give a very clear explanation of why it works along with a novel generalization of Brzozowski's algorithm to other kinds of automata. https://alexandrasilva.org/files/RechabilityObservability.pd...

- I would also put the development of lenses in this list, but they haven't leaked very far outside of the Haskell universe yet so I don't think they are a compelling example. Check back in 5 years perhaps. Here's a blog post describing how lenses relate to jq and xpath: https://chrispenner.ca/posts/traversal-systems

- I've personally had success in finding useful generalizations of existing algorithms by finding a monoid in the algorithm and replacing it with a category, using the fact that categories are like "many-kinded monoids" in some sense. I haven't written any of these cases up yet, so check back in 2 years or so. In any case, they've been useful enough to drive some unique user-facing features.
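Of the examples above, Brzozowski's construction is compact enough to sketch in code. The representation below is illustrative, not taken from the cited paper, and step (b) is the corrected one: determinize via the powerset construction, keeping only the subset-states reachable from the start set.

```haskell
import           Data.Set (Set)
import qualified Data.Set as S

-- An NFA over a Char alphabet with states of type q. A DFA is just an
-- NFA whose start set is a singleton and whose transitions are singletons.
data NFA q = NFA
  { alphabet :: String
  , states   :: Set q
  , trans    :: q -> Char -> Set q
  , starts   :: Set q
  , accepts  :: Set q
  }

-- (a) Reverse every edge and swap the start / accept sets.
reverseNFA :: Ord q => NFA q -> NFA q
reverseNFA a = a
  { trans   = \q c -> S.filter (\p -> q `S.member` trans a p c) (states a)
  , starts  = accepts a
  , accepts = starts a
  }

-- (b) Powerset construction, keeping only subsets reachable from the
-- start set. The new states are sets of old states.
determinize :: Ord q => NFA q -> NFA (Set q)
determinize a = NFA
  { alphabet = alphabet a
  , states   = reachable
  , trans    = \qs c -> S.singleton (step qs c)
  , starts   = S.singleton (starts a)
  , accepts  = S.filter (\qs -> not (S.null (qs `S.intersection` accepts a)))
                        reachable
  }
  where
    step qs c = S.unions [trans a q c | q <- S.toList qs]
    reachable = go (S.singleton (starts a)) [starts a]
    go seen []          = seen
    go seen (qs : rest) =
      let nexts = [step qs c | c <- alphabet a]
          new   = filter (`S.notMember` seen) nexts
      in  go (foldr S.insert seen new) (new ++ rest)

-- Brzozowski: reverse and determinize, twice. The result is the minimal
-- DFA for the original language.
minimize :: Ord q => NFA q -> NFA (Set (Set q))
minimize = determinize . reverseNFA . determinize . reverseNFA

-- Example: a 3-state DFA for "strings over {a,b} ending in 'a'", where
-- states 1 and 2 are equivalent, so the DFA is not minimal.
example :: NFA Int
example = NFA
  { alphabet = "ab"
  , states   = S.fromList [0, 1, 2]
  , trans    = \q c -> S.singleton (d q c)
  , starts   = S.singleton 0
  , accepts  = S.fromList [1, 2]
  }
  where
    d _ 'b' = 0
    d 0 'a' = 1
    d _ 'a' = 2

main :: IO ()
main = print (S.size (states (minimize example)))  -- 2: the minimal DFA
```

The magic of the double pass is that the first reverse-determinize produces a DFA for the reversed language whose reachable states are all distinguishable, so the second pass cannot merge anything further.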

sevensor|1 year ago

This comment pairs very well with the recent thread on Heaviside’s operator calculus. He got ahead of theory, did groundbreaking work, and was ultimately superseded by more principled approaches. I think this illustrates a kind of intellectual leapfrog. Sometimes we do things that are right, but it’s not clear why they work, and other times we have theoretical structures that open up new realms of practice.

matt-noonan|1 year ago

I just realized I botched the description of Brzozowski's algorithm, step (b) should be "determinize the NFA using the powerset construction". Mea culpa.