
From Sets to Categories (2023)

68 points | boris_m | 1 year ago | abuseofnotation.github.io | reply

47 comments

[+] khazhoux|1 year ago|reply
I spent many hours many years ago studying CT and I was left very frustrated. Endless definitions without any meaningful results or insights. My (admittedly cartoonish) takeaway from Category Theory was: "We can think of all math as consisting of things and relationships between things!"

Like, ok?

It seems to appeal to computer science folks who want to level up their math chops, enjoy an exclusive "boutique" branch, and long for grand unification schemes. But IMHO it simply doesn't deliver. It's a whole lotta talk to say nothing.

I always compare with abstract algebra, where you start with a basic definition of groups and within a few pages you reach Lagrange's Theorem and now you understand something amazing about primes and sets, and you can keep squeezing and twisting the basic definitions for ever-more complex and brain-punishing results.

[+] will-burner|1 year ago|reply
I've been saying this elsewhere in this thread, but Grothendieck used the language of category theory to revolutionize algebraic geometry. All modern algebraic geometry is written in the language of category theory. The best example of a specific problem solved using category theory is Grothendieck's proofs of the Weil conjectures. In particular, the proofs use étale cohomology, which relies on category theory in its very definition.

https://en.wikipedia.org/wiki/Weil_conjectures https://en.wikipedia.org/wiki/%C3%89tale_cohomology

If you can get to the point of understanding this stuff, which admittedly takes a long time, I promise you that you'll appreciate category theory, lol. I didn't understand the significance of the Weil conjectures until my 4th year in a math PhD program, where I studied number theory.

The problem (at least partly) is that it's possible to read about and understand the statements in category theory long before you can understand any of the elegant and great uses of it, great uses of it in pure mathematics at least. I'm not familiar with the uses of category theory in computer science.

[+] Chinjut|1 year ago|reply
Categories are very similar to groups. If you can see how groups come up often, so that it's useful to think about the concept of a group in the abstract, then it's exactly the same with categories.

A group models the invertible maps from one structure to itself. A groupoid models the invertible maps between a number of structures. A monoid models the maps from one structure to itself (without presuming invertibility). A category models the maps between a number of structures (without presuming invertibility).
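The last case above can be sketched in code. Here's a minimal, illustrative model (the names are mine, not standard) where a category's data is objects plus arrows with a domain and codomain, and composition is defined only when the types line up:

```python
# Illustrative sketch of "a category models maps between a number of
# structures": arrows carry a domain and codomain, and composition is
# a partial operation, defined only when cod(f) == dom(g).
class Arrow:
    def __init__(self, dom, cod, fn):
        self.dom, self.cod, self.fn = dom, cod, fn

    def then(self, g):
        # Composition g . f, read left to right; fails if not composable.
        assert self.cod == g.dom, "arrows not composable"
        return Arrow(self.dom, g.cod, lambda x: g.fn(self.fn(x)))

f = Arrow("int", "str", str)   # an arrow from int to str
g = Arrow("str", "int", len)   # an arrow from str to int
h = f.then(g)                  # composite arrow from int to int
print(h.fn(1234))              # → 4 (len("1234"))
```

No invertibility is assumed anywhere, which is exactly what separates this from the group/groupoid cases.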

[+] xanderlewis|1 year ago|reply
Have you tried learning something like algebraic topology, or representation theory, or anything that actually uses it (and for which it’s truly indispensable)?

I think if you do, you'll discover quickly why category theory is so nice — and useful! Its origins aren't in staring at a blank sheet of paper and trying to come up with needlessly complicated notations and ideas; it came only after centuries of observation and gradual pattern recognition.

Conversely: if you don’t, I don’t think you’ll ever get it. At least not without much difficulty.

It’s a bit like set theory: basically everyone uses it because it’s such a useful paradigm; hardly anyone studies it for its own sake. Those who do can tell you ‘theorems’ of set theory, and those who do the same for category theory can tell you ‘theorems’ of category theory. The rest of us use it as a convenient (and mind-expanding) language.

By the way, you shouldn't consider it to be some kind of ontological theory (or even foundation) for mathematics. It makes no attempt to tell us what there is, or even particularly what is true; it gives us a way of simultaneously generalising thousands of definitions and facts, allowing us to see commonalities between concepts that are a priori only distantly related, and therefore to spot further generalisations more easily.

[+] cryptonector|1 year ago|reply
It's just a lot of words for saying that you use [fill in the blank with your language of choice's words, like classes or interfaces] to wrap your classes, such that the wrappers provide methods that let you do a variety of things:

  - associate immutable contextual stuff with your data
    (think configuration, in C you'd add a context object
     pointer argument to all your functions, but in Haskell
     you'd just wrap your objects in monads)
  - associate "mutable" contextual stuff with your data
    (think state, in C you'd add a context...)
  - abstract out I/O so you can write fully deterministic
    code that's easy to test
  - etc. (e.g., tracing, logging, transactions,
    backtracking, and whatever else you can imagine)
along with standard methods that allow you to chain "statements" into a single expression while still looking like it's "statements".
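The first bullet can be sketched concretely. Here's a hypothetical Reader-style wrapper in Python (illustrative only, not any particular library's API): the immutable config is threaded through the chain automatically instead of being an extra argument to every function:

```python
# Hypothetical Reader-monad sketch: wraps computations that read an
# immutable config, chained without passing the config by hand.
class Reader:
    def __init__(self, run):
        self.run = run  # run: config -> value

    def bind(self, f):
        # f: value -> Reader; the same config is fed to both stages.
        return Reader(lambda cfg: f(self.run(cfg)).run(cfg))

    @staticmethod
    def unit(x):
        # Wrap a plain value; ignores the config.
        return Reader(lambda cfg: x)

greeting = Reader(lambda cfg: cfg["name"]).bind(
    lambda name: Reader.unit(f"hello, {name}"))
print(greeting.run({"name": "world"}))  # → hello, world
```

The `bind` here is the "standard method that lets you chain statements" from the comment above; in C you'd instead pass `cfg` explicitly to every function.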
[+] TwentyPosts|1 year ago|reply
How much of a background do you have in abstract algebra? I don't think category theory makes much sense or is very satisfying unless you can already list a bunch of categories you interact with regularly, and have results which can be expressed in terms of categories.
[+] grumpyprole|1 year ago|reply
You won't get much insight from a study period measured in hours. CT has actually been incredibly beneficial for computer science. Monads were first used to give denotational semantics to imperative languages. CT has also influenced Haskell's design, giving many general and consistent abstractions across its libraries. Looking ahead, CT is teaching us how to build software in total languages.
[+] xelxebar|1 year ago|reply
It has struck me for a while that associativity can be seen as a higher-order commutativity. Specifically, for the real numbers, associativity just says that you can commute the order in which you apply the multiplication maps.

To make that more explicit, consider the space of left-multiplication maps L_r(x) = rx for real numbers r and x, and similarly the right-multiplication maps R_r(x) = xr. Then the associativity rule a(xc) = (ax)c can be rewritten as (L_a ∘ R_c)(x) = (R_c ∘ L_a)(x), which is precisely a commutativity relation.
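The identity is easy to check numerically; a quick sketch over random reals:

```python
import random

# Check that L_a and R_c commute as maps on the reals: the equation
# L_a(R_c(x)) == R_c(L_a(x)) is exactly associativity a*(x*c) == (a*x)*c.
def L(a):
    return lambda x: a * x  # left-multiplication by a

def R(c):
    return lambda x: x * c  # right-multiplication by c

a, c = 3.5, -2.0
for _ in range(5):
    x = random.uniform(-10.0, 10.0)
    assert abs(L(a)(R(c)(x)) - R(c)(L(a)(x))) < 1e-9
print("L_a and R_c commute")
```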

The above obviously generalizes beyond the reals (any associative multiplication will do), but I'm curious if there's more depth to this idea of associativity as "higher-order commutativity" than just what I've said here.

[+] hughesjj|1 year ago|reply
I don't know if there's more depth to it but fwiw I think that's a really cool insight regardless.
[+] sesm|1 year ago|reply
Is there any category theory tutorial that illustrates that we need this apparatus to solve some real problem? For example, for group theory there is an excellent book, 'Abel's Theorem in Problems and Solutions', that does exactly that.
[+] js8|1 year ago|reply
> that illustrates that we need this apparatus to solve some real problem?

I think your question is wrong, in a sense. Category theory is one of several languages of mathematics, and there are analogies between them. It's kind of like asking, "Is there a computer program that has to be written in C?"

So I think there is an element of taste whether you prefer category theory to some other (logical) language. That being said, just like in programming, it's still useful to know more than one language, because they potentially have different strengths.

[+] gremgoth|1 year ago|reply
Modern algebraic topology, especially homological algebra, more or less requires category theory... intro textbooks such as Rotman's will contain primers on category theory for this reason.
[+] will-burner|1 year ago|reply
It's annoying that you need so much math to understand the utility of Category Theory. I learned a bunch of Category Theory before I ever saw it used in a useful way.

Grothendieck wrote modern algebraic geometry in the language of category theory. That was the first time I saw category theory really used in a useful way. Grothendieck's proofs of the Weil conjectures are, I would say, a good example of using category theory to solve a famous problem. Category theory is used to define and work with étale cohomology, and étale cohomology plays a fundamental role in Grothendieck's proofs of the Weil conjectures.

https://en.wikipedia.org/wiki/Weil_conjectures

[+] GrantMoyer|1 year ago|reply
I found the series of category theory lectures by Bartosz Milewski[1] extremely helpful and approachable. It introduces the abstract concepts of category theory while giving concrete examples of those concepts and tying some key concepts back to properties of types in programming languages.

[1]: https://www.youtube.com/playlist?list=PLbgaMIhjbmEnaH_LTkxLI...

[+] mrkeen|1 year ago|reply
I haven't dug far into CT. I'm slowly making my way through Modern Foundations of Mathematics (Richard Southwell) [1] that was posted here recently.

That said, two comments:

1) The definition of a category is just objects, arrows, and composition. If you're looking for more features, you might be disappointed. (If you've grown up with 'methods' rather than arrows, then you don't necessarily have composition.)

Writing your logic with objects and arrows is just damn pleasant. If I have bytes, and an arrow from bytes to JSON, then I have JSON. If I also have an arrow from JSON to a particular entry in the JSON, then by the property of composition, I have an arrow from bytes to that entry in the JSON.
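As a tiny illustration (Python standing in for any language with first-class functions; the helper names are mine), the bytes-to-entry arrow is literally just the composite of the two smaller arrows:

```python
import json

# Composition of arrows: bytes -> JSON and JSON -> entry compose
# into a single arrow bytes -> entry.
def compose(f, g):
    return lambda x: g(f(x))

parse = lambda raw: json.loads(raw.decode("utf-8"))  # bytes -> JSON
entry = lambda doc: doc["user"]["id"]                # JSON -> entry

bytes_to_id = compose(parse, entry)                  # bytes -> entry
print(bytes_to_id(b'{"user": {"id": 42}}'))          # → 42
```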

2) The various CT structures are re-used over and over and over again, in wildly different contexts. I just read about 'logict' on another post. If you follow the link [2] and look under the 'Instances' heading, you can see it implements the usual CT suspects: Functor, Applicative, Monad, Monoid, etc. So I already know how to drive this unfamiliar technology. A few days ago I read about 'Omega' on yet another post - same deal [3]. What else? Parsers [4], Streaming IO [5], Generators in property-based-testing [6], Effect systems [7] (yet another thing I saw just the other day on another post), ACID-transactions [8] (if in-memory transactions can count as 'Durable'. You don't get stale reads in any case).

They're also widespread in other languages: famously LINQ in C#; Java 8 Streams, Optionals, CompletableFutures; Rx/Observables. These are more monad-like or monad-inspired than literal implementations of a Monad interface, but you can still understand them and know how to drive them even if you don't know all the implementation details.

However, what's lacking (compared to Haskell) is the library code targeting monads generically. For example, I am always missing something in Java Futures which should be right there: an arrow I can use to get from List<Future<T>> to Future<List<T>>. In Haskell that code ('sequence') belongs to List (in this case 'Traversable' [9]), not Future, as it can target any Monad. This avoids an n×m implementation problem: List and logict don't need to know about each other, vector and Omega don't need to know about each other, etc.

  [1] https://www.youtube.com/playlist?list=PLCTMeyjMKRkqTM2-9HXH81tvpdROs-nz3
  [2] https://hackage.haskell.org/package/logict-0.8.1.0/docs/Control-Monad-Logic.html#g:2
  [3] https://hackage.haskell.org/package/control-monad-omega-0.3.2/docs/Control-Monad-Omega.html
  [4] https://hackage.haskell.org/package/parsec-3.1.17.0/docs/Text-Parsec.html#t:ParsecT
  [5] https://hackage.haskell.org/package/conduit-1.3.5/docs/Data-Conduit.html#g:1
  [6] https://hackage.haskell.org/package/QuickCheck-2.15.0.1/docs/Test-QuickCheck-Gen.html#g:1
  [7] https://hackage.haskell.org/package/bluefin-0.0.6.1/docs/Bluefin-Eff.html#g:1
  [8] https://hackage.haskell.org/package/stm-2.5.3.1/docs/Control-Monad-STM.html
  [9] https://hackage.haskell.org/package/base-4.20.0.1/docs/Data-Traversable.html#t:Traversable
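For intuition, here is roughly what 'sequence' does, transplanted to Python with the Maybe/Optional monad modeled as "value or None" (an illustrative sketch, not a library function):

```python
# Sketch of sequence : List[Optional[T]] -> Optional[List[T]],
# analogous to Haskell's `sequence` turning [m a] into m [a].
def sequence(xs):
    out = []
    for x in xs:
        if x is None:      # one failure fails the whole computation
            return None
        out.append(x)
    return out

print(sequence([1, 2, 3]))     # → [1, 2, 3]
print(sequence([1, None, 3]))  # → None
```

In Haskell the same one definition also covers Future-like monads, parsers, and so on; here we'd have to rewrite it per wrapper type, which is exactly the n×m problem described above.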
[+] kidintech|1 year ago|reply
Why does every category theory primer use this exact formulation: "(.*) is just an [arrow|object] in the category of (.*)"?

Every undergrad course, office hour, research paper, and manual that I've ever seen spams it.

[+] cryptonector|1 year ago|reply
Because arrows are functions/mappings, and everything we do in programming involves arrows, even in languages where arrows aren't used as notation.

The common formulation is that a "monad is just a monoid in the category of endofunctors", which isn't saying much, just with big words, and the joke lies in understanding what it's saying. Bartosz Milewski has a lecture series on YouTube that's all about explaining that joke, and I highly recommend it because it's actually a wonderful CS lecture series.
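For the list monad you can see the content of the joke concretely. In this hedged Python sketch, the monoid "multiplication" is join (flatten one layer) and the "unit" wraps a value, and the monoid laws become the monad laws:

```python
from itertools import chain

# List monad as a monoid: join is the multiplication, unit the identity.
def join(xss):
    return list(chain.from_iterable(xss))  # flatten exactly one layer

def unit(x):
    return [x]                             # wrap a value in a list

# Associativity: flattening inner-first or outer-first agrees.
xsss = [[[1], [2, 3]], [[4]]]
assert join(join(xsss)) == join([join(xs) for xs in xsss])

# Identity: unit is a left and right identity for join.
xs = [1, 2, 3]
assert join(unit(xs)) == xs == join([unit(x) for x in xs])
print("monoid laws hold for the list monad")
```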

[+] GrantMoyer|1 year ago|reply
As I understand it, it's a bit of an inside joke to minimize the complexity of mathematical structures. Its frequent use is along the same lines as the frequent use of "* Considered Harmful" in CS.
[+] keithalewis|1 year ago|reply
A semigroup is a set M and a function m: M x M -> M that is associative, m(a,m(b,c)) = m(m(a,b),c). Writing ab for m(a, b) this can be written a(bc) = (ab)c. A monoid is a semigroup with an identity e satisfying em = m = me for every m in M. This can be used for map-reduce and pivot tables.

A (small) category is a partial monoid. The domain of m is the set of composable arrows.
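The map-reduce remark can be made concrete. In this small sketch (the helper name is illustrative), associativity lets each chunk be reduced independently and the identity handles empty chunks:

```python
from functools import reduce

# A monoid (associative op + identity e) is what map-reduce needs:
# associativity means chunks can be reduced in any grouping, and the
# identity e is the correct result for an empty chunk.
def mreduce(op, e, chunks):
    partials = (reduce(op, chunk, e) for chunk in chunks)  # per-chunk
    return reduce(op, partials, e)                         # combine

add = lambda a, b: a + b
print(mreduce(add, 0, [[1, 2], [3], []]))        # → 6
print(mreduce(add, "", [["a", "b"], ["c"]]))     # → abc
```

The same `mreduce` works for any monoid: integers under +, strings under concatenation, etc., which is the pivot-table point above.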

[+] Xcelerate|1 year ago|reply
As a non-mathematician, I'm confused how category theory relates back to formal systems. Is it its own formal system, or is it built on top of ZFC or type theory? If it's built on top of something else, what sort of algorithm would tell me whether the DAG that constitutes a proof from a set of axioms to a theorem involves category theory or not?
[+] Chinjut|1 year ago|reply
The answers to these questions are exactly the same as for group theory or graph theory or linear algebra or any other such thing.