As someone with a maths degree, yet who admittedly hasn't looked into category theory beyond some basic notions, I still don't quite understand why anyone would want to learn category theory before e.g. abstract algebra or even just fundamental mathematical reasoning (definition, theorem, proof).
Maybe I'm missing something, but it seems to me that you can study monads in programming languages without having to also learn the category-theory abstraction of them. The same goes for semigroups, monoids, folds, functors, ...
I'm sure there's a benefit in category theory once you want to unify all these abstractions and talk about all of them together, but I see so many people who think they have to learn CT before they can understand monads in e.g. Haskell, and I don't really understand why.
I agree with you. CT seems mostly useful as a way of devising entirely new abstractions, but once those abstractions are developed, you don't need CT to use them.
For example, if it were 2008 and you wanted to be the inventor of Applicative functors for use in Haskell, then knowing lax monoidal functors from CT might be helpful. But if you just want to use Applicative functors, you don't need to learn lax monoidal functors first.
So programmers probably don't need to learn CT because they can just let the computer scientists devise their abstractions for them. But if you want to be a computer scientist and one day devise your own abstractions[0], then maybe CT would be helpful.
Do you have any recommended reading for learning CT from the perspective of an engineer who does want to make their own abstractions?
Your description is the single best sales pitch for learning it that I've ever heard. I'm legitimately interested now — in a way that I simply wasn't before your comment.
Everyone else who tries to hype up CT is always like, "Whoa, bro, don't you know that addition is actually a monoid endofunctor over the set of all hyper-reals?" (Or something.) I guess that sort of breathless mathematical revelation is supposed to be so mind-blowing that anyone who hears it gets instantly hooked. My usual response was always just, "So what?"
But you're telling me I can become a better engineer? A meta-engineer, who can engineer things to engineer with? Yeah, I'd definitely be up for that.
My method was to spend 25 years listening to colleagues mumble about Category Theory and slowly picking up the basics. Even then, I didn't really use Category Theory in my abstraction work. It's just that after months of effort to crack my problem, I showed my pages of work to the Category Theory folks and they were like "oh, it's simply Yoneda this and Yoneda that, and your proof can be done in 4 lines".
That said, if I had to guess at what would be effective for getting up to speed without spending 25 years, it would be to check out https://github.com/hmemcpy/milewski-ctfp-pdf (Category Theory for Programmers). Milewski was one of those people who were like "oh, it's simply Yoneda this and Yoneda that", and he figured it all out himself in parallel, without seeing my proof.
But I doubt it will be a case of "knowing Category Theory gives you superpowers for abstraction design". Rather, it will be a matter of having enough mathematical tools at your disposal, plus the right inspiration at the right time, to realize that one of those tools happens to solve the abstraction problem you're facing at that moment, in a way that is not obvious, if you are lucky. In fact, you likely have to first guess at what the right abstraction is and then fall back on Category Theory to verify the sanity of your guess.
Well, you'll wind up being less tensor than your peers to the point of being tagged as a tuple when you walk in; but still "all about the blocks" and cache on hand.
Independent of the mathematical numerical systems used, everything gets loaded at hoursX0000. Everything else after that is just bracket arrangements.
If you let things stack up, then it's all about what can be accomplished before zeroing out.
(author here) I found the Corecursive podcast episode about "portal abstractions" with Sam Ritchie quite inspiring. It's about monoids from the algebraic point of view, but the mindset is very similar, I think.
> As someone with a maths degree, yet who admittedly hasn't looked into category theory beyond some basic notions, I still don't quite understand why anyone would want to learn category theory before e.g. abstract algebra
Because people think “category theory” means “abstract math” in general, due to cargo-culting in and around the Haskell community.
I mean absolutely no disrespect to anyone doing serious work in CT or in Haskell - and I'm actually very interested in Haskell as a language per se and have been exploring it more in recent months - but having spent some time in Haskell communities, I have to agree.
There's a lot of empty posturing by people who don't really seem to understand a lot of mathematics but still seem to have very strong convictions about certain aspects of it (constructivism is another such thing).
I've only really seen constructivism talked about by people who actually have a strong math background. Because it is hard to speak out against, say, the classical notions of existence that say that there are more real numbers than rational ones unless you actually understand why the classical proofs don't work constructively. And not just as, "We don't allow that proof."
To make that concrete, let's use computable analysis as a foundation for constructivism. In short, we represent Cauchy sequences as computer programs about which we can prove things in our favorite axiom system. You can build up a version of real analysis from that. It is easy to attempt Cantor's diagonal argument. You'll get a concrete program. But it will only represent a real number in our system if our axiom system can prove that every program it certifies actually works as certified. This immediately brings up consistency. And so, by Gödel, showing that this program represents a computable number in our system would imply that our axioms are inconsistent. (Bad axioms! Bad axioms!) And therefore Cantor's proof fails to produce a number in our system.
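(To make "a real number is a program" fully concrete, here's a minimal Haskell sketch of my own; the precision convention is chosen arbitrarily, and n is assumed nonnegative:)

  -- a computable real: ask it for precision n, get a rational within 2^-n
  type CReal = Integer -> Rational

  -- sqrt 2 by interval bisection over [1, 2]; after n+2 halvings the
  -- midpoint is within 2^-n of the true value
  sqrt2 :: CReal
  sqrt2 n = go 1 2 (n + 2)
    where
      go lo hi 0 = (lo + hi) / 2
      go lo hi k
        | mid * mid <= 2 = go mid hi (k - 1)
        | otherwise      = go lo mid (k - 1)
        where mid = (lo + hi) / 2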
Of course classically we would say that if the axioms are consistent, then the program will compute a Cauchy sequence. And so it really does represent a real number even though we couldn't verify it. But whether we accept this alternate argument is a question of philosophy, not logic.
> I've only really seen constructivism talked about by people who actually have a strong math background.
Which I don't mind. If you know your maths, your logic and ideally even your philosophy it's perfectly fine to work in constructivism or even prefer it.
But I've definitely seen some dumb, uninformed hot takes about how maths is just this big conspiracy that doesn't allow dissenting opinions because they've seen some NJ Wildberger video where he shittalks the real numbers.
So it's not even just constructivism per se, more this mix of unprincipled finitism + constructivism + no axiom of choice (or any mixture thereof), as opposed to anything grounded in some rigorous methodology. Basically the "I don't understand set theory or logic but I've heard that Banach Tarski is weird, so mainstream mathematics must be full of shit".
You'd think people wouldn't have hot takes about mathematics, but somehow they do.
I'm glad I haven't seen that then. But conversely it does annoy me to see classical mathematicians dogmatically claim as true statements that depend on their philosophy. And that I have seen a lot of.
As for the specific complaint, once you have Constructivism, Banach Tarski falls apart (even in versions with the axiom of choice). So if you don't like the weirdness, there is no need to accept it. (But you won't get advanced degrees in math unless you can at least temporarily pretend to accept it. Zorn's lemma is just too widely used.)
But can we agree to both dislike people who argue that 0.999... is not 1?
You can account for and "do" classical math perfectly fine as a constructivist+finitist. It just means that any statements about non-constructive existence or non-decidable choice must be phrased as negative statements in the logic; positive disjunction or existentials are reserved for anything that's decidable/constructible.
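(A standard worked example of that phrasing, for illustration: the Gödel-Gentzen translation renders a classical existence claim ∃x. P(x) as the negative statement ¬∀x. ¬P(x), i.e. "it is contradictory that P fails everywhere", which carries no obligation to construct a witness.)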
And uses of the axiom of choice are viewed with suspicion even by many who are doing perfectly classical math, so there's nothing wrong with pointing them out as assumptions ("if we admit that choice applies to X, we have Y").
In theory, maybe. In practice you have to work to reconcile the constructive "all functions are continuous" with classical results like "a strictly increasing function can be discontinuous on a dense set." (The two disagree on what a function is.)
As for choice, try doing much functional analysis without Zorn's lemma.
There are parts of math that you can do constructively. But most of it you really can't. And choice is embedded in a lot more math than you'd guess.
> In theory, maybe. In practice you have to work to reconcile the constructive "all functions are continuous" with classical results like "a strictly increasing function can be discontinuous on a dense set." [T]he two disagree on what a function is.
Well yes, it's still a different kind of math. What's not true however is this common notion that a constructivist can only ever view all "classical" math as pure nonsense. In many ways, it ought to be a lot easier for someone committed to constructivist semantics to grok a classical development than the converse - because decision procedures, computations etc. are way more of an afterthought to mainstream mathematicians.
That's why for me, constructivism - or in general mathematics with different axioms - are additions to classical mathematics, not replacements. And in that sense, they're fine for me. You'll definitely find people online defending the position that all classical maths is bullshit - I'm not saying that actual researchers behave like that, though.
But I'm also a strict formalist; or, more accurately, I think whichever axioms happen to be useful should be used, without much need for an epistemological commitment.
Yeah when I read all of this "category theory expands your mind" thinking, it makes me think that folks were looking for mathematical maturity more than category theory. Work through any good book on algebra and analysis and you'll learn and prove a bunch of stuff about objects and relations.
I also have a math background but am now a software engineer, and I agree. One can learn category theory all by itself, but it's relatively unimportant in software and computer science as far as I can tell. It's really only powerful as a unifying, abstracting, and concept-relating tool in upper mathematics, where the divisions of mathematics begin to blur.
author here, for context: I do have a reasonable background in maths (algebra, analysis, statistics) at a CS-master-ish level (self-taught and a long time ago, though), I spent quite some time with programming language / type theory when I was younger, and I use monads quite a bit in my day-to-day programming. In fact, the fundamental algebra concepts (rings, groups, etc...) as well as fundamental CS theory (grammars, proof theory, semantics, ...) were the only maths I felt reasonably comfortable with, compared to, say, calculus or statistics, which really did my head in and ultimately caused me to drop out of university.
It's a common observation that many mathematicians and maths students tend to fall into either the algebraic or the analytic camp, although of course, there are more nuances and some people are genuinely good at both.
I myself find beauty in some parts of analysis, but algebra definitely speaks more to me (e.g. I find a proof of Lagrange's Theorem more beautiful than one of the Intermediate Value Theorem). So I can relate.
No way? It's a pretty even split between analysts and algebraists from what I can tell. Really depends on what you're researching. Only the internet and online CS-adjacent communities have these crazy algebra-dominated communities.
How do you know analysis at a "CS master-ish level" (not sure what that means given it's a mathematical subject) but calculus did your head in? I'm not being judgmental but am just genuinely confused given that analysis is the foundation for calculus. :)
Calculus (at least our exams and lectures) was so much about "calculating", applying rules and clever tricks to complex functions in multiple dimensions, which just... didn't work for me. I had a lot of other things going on, so this recollection is probably off. Calculus and diff eqs started making sense to me when I started working in Mathematica and SystemModeler and realized what a brilliant tool it is. It probably also came from the way it was taught, which was completely devoid of any historical or practical context. It was just a lot of "compute the Laplacian of e^(i*sin(phi))" or whatever.
I found groups, limits, continuity, fields to be much more amenable to my thinking. It felt way more similar to the things I really enjoyed (building software in scheme and C and writing exploits in assembly).
Oh, I see. So it seems it was more about the presentation and interest rather than the concepts. You may like Calculus by Michael Spivak if you're interested in it again. It is much more analytical than computational.
That's one of the books I used when I got back into it. These days, there is so much great material on youtube and edx and coursera (plus maths stackexchange and discords) that it's absolutely fantastic to learn maths, as fantastic instructors are immediately available.
For me, I realized that I need to write the code to accompany the maths, because it helps me disambiguate mathematical notation and put "types" on things.
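(A trivial illustration of what I mean, my own toy example in Haskell: the summation notation hides types that the code makes explicit.)

  -- <x, y> = sum_i x_i * y_i, with the "types" spelled out
  dot :: Num a => [a] -> [a] -> a
  dot xs ys = sum (zipWith (*) xs ys)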
It helped me a lot to work alongside real maths PhDs and realize that they have a very "intuitive" approach to maths, and don't necessarily yeet lemmas and proofs around all day, instead spending a lot of time brainstorming on the whiteboard and discussing things out, very similarly to software design. It was one of them telling me that my thinking was very mathematical that helped me "deconstruct" the fear of maths that I had built up.
I agree that using programming to understand a domain is very useful; it's something I'm trying to do a lot of these days. Although sometimes there needs to be some simplification to make that work, which maybe skirts around the meat of the matter. But as always, a diverse approach is more powerful than a singular one.
Yeah, [edit: I doubt this]. I have a degree in math and cs. It sounds like parent comment doesn't really know analysis. The exercises in Rudin 1 are way harder than memorizing some rules to evaluate integrals. [Edit: at least for me; and I've never heard someone say the opposite before.]
At least for American university-level calculus vs. American university-level analysis/topology over metric spaces.
> self-taught
Ah, that makes more sense. I am also mostly self-taught for most math. From my experience, I find it hard to believe that, after you did the exercises in PoMA, calculus was harder.
That's how I remember it. I know I had to take calculus twice, but passed analysis the first time. To be fair, this is 20+ years ago and my memory sucks at the best of times.
Looking at it now, I think maybe I just found it too boring to remember the tricks, and too tedious to not make mistakes, while I found coming up with proofs more "interesting" and requiring less rote memorization.
Ah, for sure pure math is much more enjoyable to learn. I'm glad you find math interesting. Especially as a writer; that's really cool. Sorry for my previous comment.
I wish universities changed the calculus curriculum. I didn't find it particularly useful to learn how to integrate, e.g., inverse trig functions, given that WolframAlpha exists. There are lots of other examples of teaching memorization of silly tricks like that. I guess they probably do it because it's easy to test.
No offense taken; I totally understand that it can come across as odd, and I hope my clarification makes more sense. Once I learned more about, say, why Leibniz and Newton came up with their formulations, and what, say, the inverse of the square root of the determinant of the Jacobian actually allows one to express (how a space "dilates" when a continuous distortion is applied to it, for example), calculus (applied calculus) became a much more interesting topic for me.
How much do you think your predilection for algebra affected your interests? Most folks who do CT as a hobby that I know are much weaker in analytical concepts than any working mathematicians that I knew.
Honestly I'm pretty weak in both, but they seem to reflect my intuitions, if that makes sense? I'm nowhere near a working mathematician, and proofs usually bore me. I prefer writing code.
If you're looking for an alternate perspective, I would suggest taking an advanced undergraduate analysis book and working through it, proofs and all. It'll teach you the same kinds of stuff that you're getting out of CT but will give you "working" (lol) knowledge of how to apply it. A lot of mathematical work is about creating math objects and relations and proving properties of these objects through their relations. The exercises in a good algebra book will exercise this muscle and make your software organization better, at least IMO. Learning the definitions just isn't a substitute.
I can see why categories appeal if you're just looking at definitions, but CT often just abstracts/standardizes an approach that folks were already working with when working with rings, groups, topologies, etc. Going through the proofs will teach you when to apply what.
Aluffi (Algebra: Chapter 0) makes a great book that teaches the basics of CT and brings it up while teaching regular algebra. I highly recommend it.
I studied CS in Germany in 1999, and we had "Analysis 1+2+3", which I think was a fairly rigorous treatment of analysis; I definitely did more than my fair share of proofs. I think that foundation accompanied me through all those years of programming and helped me read and pick up concepts related to type theory, algebra and functional programming.
In parallel I developed this very "visual" way of thinking about structure that makes me really resonate with CT. In a way I'm less interested in learning new maths, more so than learning more about how people approach (complex) structures I already know with these very rudimentary concepts, and seeing how their formalizations match my "folk mathematics".
I'm having a lot of fun!
edit: thanks for the book recommendation! that looks like my kind of maths book
I skimmed through Aluffi, and it is indeed exactly the material I had at university in analysis and algebra, augmented with the category theory angle. This is great, thanks a lot. Now I have to find an affordable copy somewhere.
I also like the focus on quotient groups and Galois fields, as well as using graphs as recurring examples, which cover the only times I really put the pedal to the maths metal in my programming (for some crypto and error correction stuff).
To me it's part of the "stretch your mind" philosophy. Plenty of exercises that add some suppleness and help with all-round problem solving. It's one reason I keep up with Lisp even though nobody cares much to learn or pay me to program in it these days. Any programme of discrete and abstract maths surely has the same benefits, with a little bit of different theory areas; number, group, sets, operator... all healthy stuff unless taken to extremes :)
author here. Finding higher abstractions ultimately is about being able to draw parallels among a wider variety of concepts. It doesn't change the concrete thing / the lower levels of abstraction in the least, and it often makes sense to stay at that lower level when communicating or getting things done. Plus, I personally enjoy it, and I feel it gives me a much wider array of things to draw from when designing / thinking. It broadens my toolbox, even if at the end of the day all I do is hammer nails.
Yeah, I'm unsure what the point is. The motivation for CT in programming seems weak.
E.g. the stuff about types and endofunctors feels like trivial examples using the jargon just to use it. OTOH, discovering the Top -> Grp functor in algebraic topology is kind of unexpected and revealing.
> or even just fundamental mathematical reasoning (definition, theorem, proof).
Logic and proof theory is actually one of the areas in math where category theory is most clearly useful. Intuitively, this is because reflexivity ("A leads to A") and transitivity ("A leads to B and B leads to C, hence A leads to C") are well-known rules of any kind of proof writing or indeed of argumentation in general, and they exactly describe a category.
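In Haskell terms, that pair of rules is literally the Category class; here is the standard Control.Category interface as a sketch (the fine print, associativity of (.) and id being its unit, is exactly the category laws):

  import Prelude hiding (id, (.))

  class Category cat where
    id  :: cat a a                        -- reflexivity: A leads to A
    (.) :: cat b c -> cat a b -> cat a c  -- transitivity: chain two steps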
How exactly is it useful? Reflexivity and transitivity are concepts independent from category theory. How does category theory, specifically, help here?
As someone familiar with algebraic objects and relations etc., I don’t see what category theory is adding in terms of practical usefulness in software development.
I know a fair bit about logic and none of it involves any CT. That's not to say you can't find it in there somewhere as a unifying concept, but the fundamental theorems such as completeness and compactness of FOL, basic model theory, Gödel's theorems, etc. do not require any CT.
Also, reflexivity, transitivity and symmetry just define an equivalence relation, no need for CT there.
I concur; the theory is not necessary to understand how to use it. Perhaps it is counter-intuitive and one would like to hone their intuition. Still, there are much better approaches to developing an adequate intuition than the theory; this is why traditional math textbooks rely on exposition first, to establish the context of the theory...
I would be willing to drink the kool-aid if I saw it being used in a practical way. I always feel these posts are filled with category theory jargon without ever explaining why any of the jargon is relevant or useful. I’ve even watched some applied category theory courses online and have yet to feel I’ve gained anything substantive from them.
However, as I started off with, I’m always willing to try something out or see the reason in something. Can anyone give me a practical applied way in which category theory is a benefit to your design rather than just creating higher level jargon to label your current design with?
Category theory as it applies to programming is not dissimilar to learning design patterns. Some people go their whole careers never touching them and write perfectly fine programs. The same is true for category theory.
Those who learn either or both have a larger set of mental abstractions that largely transcend any one programming language. They can say: this thing and that thing share a common salient property through a certain abstract pattern. There is a certain sense of mathematical beauty in recognizing common patterns and mechanisms. There is also the choice of subtly or drastically altering one's coding style. With each, a programmer can use just the right design pattern in just the right place and perhaps simplify or codify a section so that it's easier to understand the next time around (or decide not to!), or they can go completely wild and over-pattern their code. The same applies to category theory. One can recognize that some section is actually a closed reduction that accepts any semigroup, and then decide whether to codify this into the program or not.
I tend to be a collector of ideas, so learning category theory and its application in programming gives me all these little gems and ways to re-understand code. Sometimes I use them and sometimes I don't, but my life is more joyful just knowing them, regardless.
For me, I use the simple stuff (Semigroup, Monoid, Monad, Functor, ...) the most. Oftentimes I'll be reasoning about a problem I'm working on in Haskell and realize it is a monad, so I can reuse all of the existing monadic control structures. It is also helpful the other way around: when you start working with someone else's code, seeing that something is, e.g., a Monad immediately tells you a lot of concrete info about the structure, where in a less structured language you might need to read through a bunch of docs to understand how to manipulate some objects. The "killer app" is all of that extra structure shared throughout all of the codebases.
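A small illustration of that reuse, with nothing but standard library code: one combinator, written once against the Monad interface, drives completely different structures.

  import Control.Monad (replicateM)

  -- replicateM n m: run m n times, collecting results, in ANY monad
  -- replicateM 3 getLine    :: IO [String]  -- read three lines
  -- replicateM 3 [0, 1]     :: [[Int]]      -- all 3-bit strings
  -- replicateM 3 (Just 'x') :: Maybe String -- Just "xxx"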
Same here. I drank the kool-aid a few years back and started using fp-ts in my frontend projects, hoping to use algebraic data types regularly. But today all I use is the Monad. I can't find any motivation to write abstract algebra to build UI widgets.
> I can't find any motivation to write abstract algebra to build UI widgets
This made me chuckle, because I am at this very moment trying to apply the "tagless final style" described here[0] to a custom GUI in a personal-for-fun-and-learning ocaml project : )
Category theory does provide new algorithms if you unroll all of its definitions. For example, it reveals that SQL conjunctive queries can be run both "forward" and "backward" - an algorithm that is invisible in traditional SQL literature because that literature lacks the vocabulary to even talk about what "running backward" even means - it isn't an 'inverse' relationship but an 'adjoint' one. We also use it to provide a new semantics for data integration. Link: https://www.categoricaldata.net
The way to run conjunctive SQL queries forward and backward is described in this paper, https://www.cambridge.org/core/journals/journal-of-functiona... , (also available on the arxiv), where they are referred to as query 'evaluation' and 'co-evaluation', respectively. We never would have been able to discover co-evaluation if not for category theory! The previous link includes this paper, and many others.
Bi-directional data exchange has many uses. For example, given a set of conjunctive queries Q, because coeval_Q is left adjoint to eval_Q, the composition coeval_Q o eval_Q forms a monad, whose unit can be used to quantify the extent to which the original query Q is "information preserving" on a particular source (so query/data quality). As another example, we use the technique to load data into OWL ontologies from SQL sources, by specifying an OWL to SQL projection query (tends to be easy) and then running it in reverse (tends to be hard). But no doubt more applications await!
Can you point to some examples of owl/sql transforms being flipped? I have trouble believing that an invertible transformation is hard (presumably each step is invertible, right), and certainly "never would have been able to discover" seems inconceivable to me.
Looking at the paper it is very dense and abstract, also 50 pages long.
Edit: on reflection I am doing a bit of sealioning which was not my intention but it does look that way. I'll try to read your paper but if you assure me cat theory really allowed you to do those things you claim, I'll accept you at your word.
You might try pages 8-16 of this presentation: https://www.categoricaldata.net/cql/lambdaconf.pdf . The examples are relational to relational and simplistic but they do illustrate running the same transformation both forward and backward, as well as show the "unit" of such a "monad". We implemented everything in public software, so hopefully the software is even better than my word! As for loading SQL to RDF specifically, I'd be happy to share that technique, but it isn't public yet- please ping me at ryan@conexus.com.
Could this possibly be explained to the average programmer, who doesn't have the foggiest notion what conjunctive queries, coevaluation, or monads are?
In Ruby and many other languages, you have this idea of string concatenation:
"foo" + "bar" -> "foobar"
"foo" + "" -> "foo"
That makes it a monoid: "+" is associative and "" is the identity. Instead of talking about OOP patterns, knowing that the "+" operator forms a monoid on string objects lets us write code that is composable.
Some language platforms will implement the first kind of concatenation, but not the second. Without the second, you couldn't do things like accept an empty user input. You would have to write guard code like (illustrative Ruby) result = input.empty? ? base : base + input everywhere.
This applies to things like hashes, or even more complex objects, such as ActiveRecord named scopes, even if they don't use a "+" operator. (But maybe Hash objects could be "+"-ed together instead of calling merge().)
This is all applicable to how you can design libraries and DSLs that seem intuitive and easy to use. Instead of just thinking about bundling a number of methods together with a data structure, you can make sure that a single method within that bundle works the way you expected across a number of objects. You might even be using these ideas in your code, but don't realize it.
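(For comparison, in a language where this contract is explicit, the two properties above are just the monoid operation and its identity; a GHCi session:)

  ghci> "foo" <> "bar"   -- (<>) is the monoid operation for strings
  "foobar"
  ghci> "foo" <> mempty  -- mempty is "" for strings
  "foo"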
I see no more than groups in your comment. Which are very useful! Also, having commutative operations makes for Abelian groups, which lets operations be reordered and makes for implicit parallelism.
I have no idea. I don't think I have really yet to grok the ideas around categories and morphisms. I'm very slow at learning this stuff.
I do know that each of the small number of intuitions I have gained has sharpened how I reason about my code, that once I grokked a single intuition, they seem both simple and profound, and that there are even more that I don't yet understand.
So for example, when you wrote "Also having commutative operations makes for Abelian Groups, which enable operations to be reordered, and makes for implicit parallelism", I never thought about that. Initially, I could almost feel like something coming together. After I let that sink in for a bit, and then it starts opening doors in my mind. And then I realize it's something that I have used before, but never saw it in this way.
I was mainly answering the parent comment about why someone might want to drink the kool-aid, as it were. I suppose my answer is: even though I know there are some powerful ideas in CT itself, the intuitions along the way each have many applications in my day-to-day work.
Actually, it is weird to see groups mentioned in this discussion at all. Non-commutative (generally) groups are useful in dealing with symmetries, but what symmetries can we talk about here? Abelian groups (and modules in general), on the other hand, are completely different beasts (seemingly only useful in homological algebra, algebraic geometry, and topology).
Strings are a (free) monoid with respect to concatenation, sure, but it is easier to learn what a monoid is using strings as an example, rather to try and "learn" about strings by discussing monoids first. Why this is deemed useful by some is beyond me.
In category theory, everything has to be defined in terms of objects and morphisms, which can be a bit challenging, but it means that you can apply those concepts to anything that has "an arrow". It's of course just playing with abstractions, and a lot of the intuitive thinking in CT goes back to using the category of sets. For me, however, seeing the "arrow and composition" formulation unlocks something that allows me to think in a broader fashion than the more traditional number / data structure / set way of thinking.
It's always hard to point out when some very abstract knowledge has guided your intuition, but I apply monads very often to compose things that go beyond "purely functional immutable data function composition": for example, composing the scheduling of PLC actions so that error handling / restarts can be abstracted away. The more recent insight of learning how to model things with just morphisms is broadening my concept of a functor. I used to think of a functor as some kind of data structure that you can map a function over, but in CT a functor is a morphism between categories that maps objects and morphisms while preserving identity and composition. This means I can start generating designs by thinking "I wonder if there is a morphism between my database schema category and my category of configuration files or whatever." Maybe something useful comes out, maybe not.
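(For reference, "preserving identity and composition" is just two laws; in Haskell notation:

  -- a lawful Functor satisfies:
  --   fmap id      == id
  --   fmap (g . f) == fmap g . fmap f

The "mappable data structure" view is the special case where the functor goes from the category of types to itself.)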
Morphisms are not a concept that is specific to category theory. Are there any lemmas or theorems of category theory that are useful in software development?
that's interestingly enough not something I'm interested in (formally deriving things and doing "proper" maths as part of my software work). I care much more about developing new intuitions, and seeing the category theoretical definition of monoid https://en.wikipedia.org/wiki/Monoid_(category_theory) inspired me a lot.
In fact I think that one of the great things about software is that you don't have to be very formal; you can just bruteforce your way through a lot of things. Do your database calls really compose? Or can pgbouncer throw some nonsense in there? Well, maybe it breaks the abstraction, but we can just not care, put a CloudFormation alert in there, and move on.
Mostly it confirms that drawing things as boxes and arrows (even though I implement them as folds/monoids: state machines, binary protocol decoders), and getting the comonoid by "reversing the arrows" (to do log reconstruction when debugging), has a name and is thus a fruitful practice for me to continue, instead of being "those weird drawings I keep noodling".
Nothing is rocket science, and you can totally get there without even the traditional monoid definition, but seeing the same diagrams I kept drawing before porting something to more "traditional" code patterns was quite validating.
This is a completely rough sketch that tries to match stuff I did on a PS2 protocol decoder a while ago. Being able to invert state machines that use time ticks to advance requires adding additional counters to make them work properly, IIRC. As you can tell, none of this is formal; in fact I worked on this before starting my CT joint, but you can maybe see why I resonate more with the CT formulation of things.
The bigger picture here is that when I do embedded dev, I only have so much memory to keep an event log, so I need to be able to work backwards from my last known state, instead of traditionally walking forward from an initial state and incoming events. That this whole concept of "swapping the arrows around" is something that reputable people actually study was super inspiring.
What language only allows concatenating a non-empty array?
And it still doesn't explain why string concatenation being a monoid is useful in ruby. It's useful in Haskell because implementing one of their category-theory typeclasses means that type now inherits a stdlib-sized API that works for any monoid, not concrete types. But even Haskell hasn't proven that it's worth the jargon or abstraction after all these years; every other language remains productive without designing APIs at such a high level of abstraction.
> Instead of just thinking about bundling a number of methods together with a data structure, you can make sure that a single method within that bundle works the way you expected across a number of objects. You might even be using these ideas in your code, but don't even realize it.
I think this was GP's point. They complain that no-one explains why the category theory _jargon_ is useful. As you point out, the jargon is not needed in order to use the concept. Someone might even come up with the concept independently of the jargon.
Most of us can go our whole careers without the "insight" that string concatenation is a "monoid". I don't know any languages that would balk at "foo" + "" or [a, b].concat([]). This all seems academic beyond belief.
For most of my years in primary education, math was easy. Until it was not. I ran headlong into the idea of "instantaneous rate of change", and I was confronted with the existential crises that there is nothing inherently concrete or "real" about the idea of instantaneous rate of change. My mind just got stuck on there, and I nearly failed the course in high school.
Everything after that point was not easy, and somewhere, there was a belief forming that maybe I was not so good at math.
Two things I encountered changed my perspective on that. The first was reading a math graduate student's experience where they talk about struggling with concepts, like groping about in a dark room until one day, you find the light switch and you see it all come together. The other is Kalid Azad's "Math, Better Explained" series. His approach is to introduce the intuition first, before going into the rigor. Between those two, I realized that I had never really learned how to learn math.
Once I started with the intuition, I also realized why people get stuck on things. There are people who don't get negative numbers. They keep looking for something concrete, as if there were something intrinsic to reality that makes negative numbers "real". And there isn't. And yet, in my day-to-day life, I work with negative numbers. I am far better able to reason and navigate the modern world because I grok the intuition of negative numbers.
Then there are the cultures where there isn't even a notion of natural numbers. Their counting system is, "one", "two", and "many". In their day to day life, to echo your sentiment, they can go on their whole life without the "insight" that things can be countable. Many of them probably won't even care that they don't have the intuition of countable things. And yet, in this modern culture, I find myself introducing the idea to my toddler.
Ultimately, it's up to each individual's decision how far they want to go with this. Each culture and civilization has a norm of a minimum set of intuitions in order for that person to navigate that civilization. Category Theory is outside that norm, even among the software engineer subculture. Perhaps one day, it will be the norm, but not today. Beyond that, no one can make you grok a mathematical intuition, as useful as they are once grokked.
I am not great at math. But I learned about complex numbers for fun. It took a bit to make them “real” for me, 3B1B helped a lot as did asking myself how I find negative numbers real (negative integer: if something in the future will exist, it won’t and the negative integer will be incremented, aka a debt to the future existence of whatever the number represents).
Complex numbers: the number line is just a metaphor. What would happen if you make it 2D? Oh you can solve negative roots. Whenever a number needs to spin or have a negative root, it’s a useful tool. Numbers are tools. Cool, that’s “real” enough for me.
I know I will never ever use it. Or at least, not in my current state of how I live my life. I liked learning it, there’s something to be said for what is beyond your horizon. But I do think just like complex numbers, category theory needs a motivation that is compelling. In retrospect, learning complex numbers is most likely not useful for me.
Oh wait a second, complex numbers helped me to understand trig. I never understood it in high school but 3B1B made it intuitive with complex numbers
Nevermind, I’ll just let this all stand here. It’s fun to be open and convinced by myself to the other side while writing my comment :’)
I’m sure if I would have learned category theory, I would have written a similar comment as I think there are a lot of parallels there.
Azad has this description of imaginary numbers that really tickled me: “numbers can rotate”.
I remember my high school teacher teaching imaginary numbers in the trig class, and one of the other kids asked what they could be used for. This was an honors class and the kid skipped a math grade. Our math teacher couldn’t talk about it off the bat, and unconvincingly told us about applications in electrical and electronic engineering. We still went through it, but none of use knew why any of it was relevant.
I think if he had said "numbers can rotate", maybe some of us (maybe not me!) might have been intrigued by the idea. It would have been a way to describe how EM waves rotate.
My own personal motivation for pursuing CT has to do with working with paradigms, and how are they related, and how they are not. Categories and morphisms seem to talk about the kind of things we can talk about with paradigms, yet much more precisely.
For example, your language can implement sum(Array<X>) for any monoid X. And the sum of an empty list of integers can automatically return 0, while the sum of an empty list of strings can automatically return "". This sounds simple, but many programming languages make you learn special commands and notation for strings. A compiler could also leverage the fact that length() is a homomorphism, so length(a + b) = length(a) + length(b).
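In Haskell this generic sum already exists as mconcat; a quick sketch (Sum is the newtype that picks addition as the monoid for numbers):

  import Data.Monoid (Sum(..))

  total  = getSum (mconcat [Sum 1, Sum 4, Sum 23])  -- 28
  joined = mconcat ["foo", "", "bar"]               -- "foobar"
  none   = mconcat ([] :: [String])                 -- ""

  -- and the homomorphism mentioned above:
  --   length (a ++ b) == length a + length b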
Monoids are really simple, so you probably won't get a lot more examples there. But monads are a different story. Once you know the pattern, you really miss them. For example, Promises in Javascript are like painful versions of monads. There is a lot of special-case language support and reasoning around Promises that only apply to Promises. In a language with native support for monads, you could use the same code or libraries to handle patterns arising when using promises, as patterns arising with other monads like I/O.
In summary, if you never try it, you may never know or care what you're missing, but that doesn't mean you're not missing anything...
I find I get a lot of value out of monads even if the language doesn't expose them.
For example, it allows me to quickly draw a single arrow in a big architecture diagram because I know I'll be able to take a list of promises and turn it into a promise of a list, and then take a function that takes a list and returns more lists, and flatmap it on top of that. Even if Promise is "not really" a monad, and I will probably have to write the machinery manually.
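(Those two arrows have standard names; the classic Haskell signatures, slightly generalized in the modern Prelude, are:

  sequence :: Monad m => [m a] -> m [a]            -- [Promise a] -> Promise [a]
  (>>=)    :: Monad m => m a -> (a -> m b) -> m b  -- flatMap

so the arrow in the diagram is backed by library code the moment the language cooperates.)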
It's what I mean when I say "you can bulldoze your way through a lot of broken abstractions when programming", but at the conceptual stage and when designing, you can get a lot of mileage out of the theoretical constructs.
I can think of hundreds of “exceptions” to the category theoretic approach.
So, for example, anything you can do to the "string" type that isn't based on human language, you should be able to do to an array of bytes, integers, or "well-behaved" objects. (Also "escaped" strings, or strings in various encodings such as UTF-16 or MBCS.)
Show me a language that implements all of those functions with interchangeable and uniform syntax!
What does it even mean when I say “well behaved”? It means that those objects need to have some traits in common with characters, such as: copyable, comparable, etc…
To implement regex for objects would also require a “range” trait and perhaps others. Category Theory lets us talk about these traits with a formal language with mathematically proven strict rules.
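For the simplest slice of this, the "comparable" trait, here is a sketch of what explicit bounds can look like (illustrative Haskell; searchAll is a made-up name):

  import Data.List (isPrefixOf, tails)

  -- one implementation serves strings, byte lists, object lists...;
  -- the only requirement is the Eq bound ("comparable")
  searchAll :: Eq a => [a] -> [a] -> [Int]
  searchAll needle haystack =
    [ i | (i, suffix) <- zip [0 ..] (tails haystack)
        , needle `isPrefixOf` suffix ]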
With C++ template meta programming it’s possible to get most of the above, but not all, and it’s not used that way in most code. It’s also weakly typed in a sense that C++ can’t express the required trait bounds properly. Rust and Haskell try as well but still fall short.
Category Theory shows us what we should aspire to. Unfortunately the tooling hasn’t matured enough yet, but now that GATs have made it into Rust I’m hopeful there will be some progress!
Strings/lists under concatenation do not form a group, since concatenation is not invertible. (In the sense that there is no list "-xs" that you can concatenate onto xs to get the empty list.)
Again, I don't see how this piece of information is useful for a programmer. And if it is, I'll bet it's 10x more useful if expressed in more ordinary language.
This is useful, for example, when implementing compiler optimizations or concrete string implementations. It means that you can reduce "foo" + "bar" + foo to at least "foobar" + foo, and that you can intern "foobar". Would that work the same without calling it a monoid? Of course; monoid is just a name. But that name allows you to go shopping in a wide variety of other fields and in work other people have done, and mine them for ideas or even concrete implementations.
A vast, overwhelming majority of programmers will never implement a concrete string type or compiler optimizations. Can you show me how this kind of theory is practical for a working programmer who does things like batch processing jobs, web applications, embedded software, event-driven distributed systems, etc?
Consider the monoid abstraction in the context of batch processing. Anywhere you have a monoid, you have a corresponding “banker’s law” that says you can find the generalized sum of a collection of items by partitioning the items into groups, computing the sum over each group, and then taking the sum of the sums as your final result. This idea has many applications in batch processing.
For example, in the MapReduce framework, this idea gives rise to “Combiners” that summarize the results of each map worker to massively lower the cost of shuffling the results of the Map stage over the network prior to the Reduce stage.
Another example: In distributed database systems, this idea allows many kinds of addition-like aggregations to be performed more efficiently by computing the local sum for each group under the active GROUP BY clause before combining the groups’ subtotals to yield the wanted gobal totals.
Basically, in any situation in which you have to compute a global sum of some kind over a collection of items, and some of those items are local to each worker, you can compute the global sum faster and with fewer resources whenever you can take advantage of a monoidal structure.
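A minimal sketch of that law in code (chunksOf is from the "split" package; each inner mconcat stands in for one worker's local sum, and associativity is what licenses the regrouping):

  import Data.List.Split (chunksOf)

  gather :: Monoid m => Int -> [m] -> m
  gather groupSize = mconcat . map mconcat . chunksOf groupSize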
Which is exactly the concept behind optimizing string concatenation and interning, ultimately. Sure, you can make do with the "algebraic" definition of a monoid, or entirely without it, but that doesn't mean the abstraction isn't there to inform your thinking and your research.
One point that really stuck with me is how people who apply category theory (say, people at the Topos Institute) use the concepts as little tools, and every time a problem crosses their way, they try out all the little tools to see if one of them works (similar to how Feynman describes carrying problems around in his head until one day a technique unlocks something).
Having more general abstractions just allows them to be applied to more problems.
my personal favourite: seeing the foldable (a monoid with two types, kind of) in event-driven systems means you can model a lot of your embedded software / web applications as a set of functional state/event-reducing functions, and build a clean system right from the get-go. Seeing the functors in there tells you which parts of the state can be split, parallelized or batched for performance or modularity.
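(The shape I mean, stripped to nothing, with hypothetical names:

  -- the whole "clean system" is a fold of pure reducers over events
  replay :: (state -> event -> state) -> state -> [event] -> state
  replay = foldl

Everything else, snapshotting, batching, parallelizing, hangs off that signature.)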
Again, these are all very possible without knowing category theory, but category theory studies what the abstractions behind this could be. I find that the huge amount of intuition I developed (driven by some formalism picked up along the way, but even more so by just writing a lot of code) is reflected in what I see in category theory. It's why I genuinely think that web development is just like embedded development: https://the.scapegoat.dev/embedded-programming-is-like-web-d... which seems to be way more controversial than I thought it would be.
I once wrote a unifying parent class for several client-specific reports we had. Think the parent class implementing the top-level control flow/logic and the child implementations having methods that it calls into, sometimes once (e.g. for the header) and sometimes per-row.
Specific reports needed various kinds of customization. Some needed to call the client's web API to get some extra data for each row (and since this was across the internet, it needed to be async). Some needed to accumulate statistics on each row and sum them over the whole report at the end. One needed to query an ancillary database for each row, but all those queries had to be part of the same transaction so that they would be consistent.
Now in theory you can do ad-hoc things for each of those cases. You could make the per-row method always async (i.e. return Future), so that you can override it to do a web API call sometimes. You could stash the statistics in a mutable variable in the report that accumulates them, remembering to do locking. You could use a session on the ancillary database bound to a threadlocal to do the transaction management (most database libraries assume that's how you're doing things anyway), and as long as your returned Futures are never actually async then it would probably work. But realistically it would be very hard to do safely, and I'd never have spotted the underlying symmetry that let me pull out the high-level logic. More likely we'd've stuck with three distinct copy-pasted versions of the report code, with all the maintenance burden that implies.
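Sketched in Haskell-ish terms, the symmetry was roughly something like this (names and types are illustrative only, not the actual code):

  -- the high-level report logic is written once, generic in the
  -- per-row effect f, and each report picks its own f
  runRows :: Applicative f => (row -> f line) -> [row] -> f [line]
  runRows perRow = traverse perRow

  -- f = an async/Future type for the web-API report,
  -- f = a Writer-like accumulator for the statistics report,
  -- f = a transaction monad for the ancillary-database report.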
In principle an abstraction never tells you something you didn't already know - whatever properties you're using, you could've always worked them out "by hand" for that specific case. Like, imagine programming without the concept of a "collection" or any idea of the things you can do on a collection generically (such as traverse it) - instead you just figure out that it's possible to traverse a linked list or a red-black tree or an array, so you write the code that works on all of them when you need it. That's absolutely a way that you can program. But if you don't have this vocabulary of concepts and patterns then realistically you miss so many chances to unify and simplify your code. And if you have the rigorous category-theory concepts, rather than fuzzier design patterns, then you have quick, objective rules for figuring out when your patterns apply - and, even more importantly, when they don't. You can use library code for handling those concepts with confidence, instead of e.g. wondering whether it's ok for your custom iterator to expect the library to always call hasNext() before it calls next(). It's a huge multiplier in practice.
That's correct. But understanding that string concatenation forms a monoid is quite nice, because it means that libraries can offer you this as an interface and you can choose the type you want to use.
Sorry for the wall of text, but I think to maybe help you understand why people (like me) like to work with it a bit more explicitly, I'll have to make it more concrete and give lots of examples. The good stuff comes at the end.
So let's say you have a list or array type. You want to aggregate all the things inside. Let me write pseudo code from here
let balances = List(1, 4, 23, 7)
let overallBalance = ???
How do you calculate that? Well it's simple - use a for-loop or call .reduce on it or maybe your language even has a builtin sum() function that works for lists of numbers right?
let overallBalance = sum(balances)
Now what happens if you want to concatenate strings instead? Can you reuse sum()? Probably not - you will have to hope that your language / std lib has another function for that. Or you have to fall back to implementing it yourself.
Not so if monoids are explicitly supported. Because then, it will be exactly(!) the same function (which has been implemented only once) - no type-overloading or anything.
let overallBalance = combineAll(balances)
let concatenated = combineAll(listOfStrings)
Okay, seems a little bit helpful but also doesn't seem super readable, so I guess that's maybe not very convincing. But the reason why I personally love to work with monoids as an explicit concept is the thing that comes next.
Let's say you got bitten because you used plain numbers (or even proper money types) for the balances, but at some point in the code you mixed up two things and added a balance to the user's age (or something like that) because you used the wrong variable by accident.
You decide to make Balance an explicit type
class Balance( private innerValue of type number/money )
So the code changes
let balances = List(Balance(1), Balance(4), Balance(23), Balance(7))
let overallBalance = sum(balances)
But the second line stops compiling. The std lib sum function doesn't support your custom balance class for obvious reasons. You will have to unwrap the inner values and then wrap them again (both for the sum() method or in your handwritten for-loop).
In case you use a duck-typed language where you can just "delegate" the plus-operation to the inner value: congratulations, you are already using monoids without calling it like that. Unfortunately, there is no protection against problems, such as that + might mean different things on different types and they can be used with sum() but cause unexpected results (read: bugs).
In case you use a language that has good supports for monoids, you essentially have to add just one line:
a monoid exists for class Balance using Balance.innerValue
That's it. You can now do
let balances = List(Balance(1), Balance(4), Balance(23), Balance(7))
let overallBalance = combineAll(balances)
And, magically, "overallBalance" will be of type Balance and be the aggregated balance.
In case you think that it can't work as easy as that, I'm happy to show you some runnable code in a concrete language that does exactly that. :)
On top of that, it does not end here.
Let's take it a step further. Let's say don't only have the account-balances of a single person (that would be the example above). You have that for multiple people.
And you want to calculate the overall combined balance. Now it gets interesting. Even in a duck-typed language, you can't use sum() anymore, because the inner lists don't support that. You will have to fall back to a manual two-step process, such as summing each inner list first and then summing the results:

let overallBalance = sum(listOfBalances.map(sum))
But with monoids it's different. Since we know how to combine balances in a monoidic way, we also automatically know how to do that for a list that contains balances. In fact, we can do so for any list that contains something that we know of how to combine it. That means, without any other code changes required, you can simply do
let overallBalance = combineAll(combineAll(listOfBalances))
This is recursive and goes as far as you want. And it does not only work with lists, but also Maps and other structures. Imagine you have a map with keys that are strings and values that are of any type but constrained to be a monoid. E.g. Map("user1" -> Balance(3), "user2" -> Balance(5)). Or Map("user1" -> List(Balance(2), Balance(3)), "user2" -> List(Balance(5))). Or even maps of maps.
Now if we have two of those and we know that the values are monoids, we can combine them as well, using again the same function, no matter what is inside. E.g.:
let map1 = Map("user1" -> Balance(3), "user2" -> Balance(4))
let map2 = Map("user2" -> Balance(5), "user3" -> Balance(6))
let aggregated = combine(map1, map2)
This is such a powerful concept and makes a lot of things so convenient that I'm always crying when I work in a language that does not support it and I have to handroll all of my aggregations.
One note at the end: all of this can be absolutely typesafe in the sense that if you try to call combine/combineAll on something that isn't combinable (= is not a monoid) it will fail to compile and tell you so. This is not theory, I use this every day at work.
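Here's one possible rendering of the sketch above in runnable Haskell, if you want to try it (the language I use at work may differ; map merging is spelled out with unionWith):

  import qualified Data.Map as Map

  newtype Balance = Balance Int deriving (Show, Eq)

  instance Semigroup Balance where
    Balance a <> Balance b = Balance (a + b)  -- "using Balance.innerValue"

  instance Monoid Balance where
    mempty = Balance 0

  -- combineAll is mconcat; nesting works by mapping it inward
  one  = mconcat [Balance 1, Balance 4, Balance 23, Balance 7]        -- Balance 35
  many = mconcat (map mconcat [[Balance 1], [Balance 4, Balance 23]]) -- Balance 28

  -- maps combine per key with the values' monoid
  agg = Map.unionWith (<>)
          (Map.fromList [("user1", Balance 3), ("user2", Balance 4)])
          (Map.fromList [("user2", Balance 5), ("user3", Balance 6)])
  -- fromList [("user1",Balance 3),("user2",Balance 9),("user3",Balance 6)]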
I think the real insight of category theory is you’re already doing it, eg, whiteboard diagrams.
Category theory is the language of diagrams; it studies math from that perspective. However, it turns out that category theory is equivalent to type theory (ie, what computers use). So the reason why diagrams can be reliably used to represent the semantics of type theory expressions is category theory.
My workflow of developing ubiquitous language with clients and then converting that into a diagram mapping classes of things and their relationships followed by translating that diagram into typed statements expressing the same semantics is applied category theory.
The “category theory types” are just names for patterns in those diagrams.
Trying to understand them without first getting that foundational relationship is the path to frustration - much like reading GoF before grokking OOP.
> However, it turns out that category theory is equivalent to type theory (ie, what computers use).
Just to elaborate on this a little, a program of type B in context of variables of types A1, A2, ..., An, can be modeled as an arrow (A1 * A2 * ... * An) -> B. (More elaborate type theories get, well, more elaborate.) The various ways you might put programs together (e.g. if-expressions, loops, ...) become various ways you can assemble arrows in your category. Sequential composition in a category is the most fundamental, since it describes dataflow from the output side of one program to the input site of another; but the other ways of composing programs together appear as additional structure on the category.
Categories take "semicolon" as the most primitive notion, and that's all a category really is. In fact, if you've heard monads called "the programmable semicolon", it's because any monad induces a related category whose effectful programs `a ~> b` are really pure programs `a -> m b`, where `m` is the monad.
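For example, a small sketch of that induced category in Haskell, using the Maybe monad (the function names are made up for illustration); Kleisli composition (>=>) is the "semicolon":

    import Control.Monad ((>=>))

    -- (>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
    -- It composes effectful programs a ~> b, which are pure functions a -> m b.

    safeRecip :: Double -> Maybe Double
    safeRecip 0 = Nothing
    safeRecip x = Just (1 / x)

    safeSqrt :: Double -> Maybe Double
    safeSqrt x
      | x < 0     = Nothing
      | otherwise = Just (sqrt x)

    -- Sequential composition of the two effectful programs, failure and all:
    recipThenSqrt :: Double -> Maybe Double
    recipThenSqrt = safeRecip >=> safeSqrt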
I’m a math major. I learned category theory in school.
I think of category theory as an easy way to remember things. If some concept can be expressed as category theory, it can often be expressed in a very simple way that’s easy to remember. However, if you try to teach math using category theory from the beginning, it feels a little like trying to teach literature to someone who can’t read yet.
Anything directly useful from category theory can be expressed as something more concrete, so you don’t NEED category theory, in that sense. But category theory has the funny ability to generalize results. So you take some theorem you figured out, express it using category theory, and realize that it applies to a ton of different fields that you weren’t even considering.
The effort to reward of category theory is not especially high for most people. It did give us some cool things which are slowly filtering into day to day life—algebraic data types and software transactional memory are two developments that owe a lot to category theory.
Almost every field you can study in mathematics can be thought of as a “category”, and when you take a math class in undergrad, it usually focuses on one or two specific categories. Category theory is the study of categories in general. As in, “What if, instead of studying a specific field of math, you studied what these different fields had in common, and the relationships between them.”
The part where it gets wacky is when you realize that “categories” is, itself, a category.
> I always feel these posts are filled with category theory jargon
The 80/20 rule really applies here. Most of the time there's only a few key type classes that people use. Pretty much just Monad, Monoid, Applicative, Functor. If you grok those, then what people say makes a lot more sense.
> Can anyone give me a practical applied way in which category theory is a benefit to your design
Monad is probably the best example, because so many different, important libraries use it. Once you know how to code against one of them, you kinda know how to code against all of them!
The following is code written in the STM monad. It handles all the locking and concurrency for me:
    transfer x from to = do
      f <- getAccountMoney from
      when (f < x) abort
      t <- getAccountMoney to
      setAccountMoney to (t + x)
      setAccountMoney from (f - x)
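(For reference, a runnable variant of the sketch above against the actual stm package, where TVar Int accounts and readTVar/writeTVar/retry stand in for the hypothetical helper functions:)

    import Control.Concurrent.STM
    import Control.Monad (when)

    -- retry blocks the transaction until a balance changes,
    -- playing the role of 'abort' above.
    transfer :: Int -> TVar Int -> TVar Int -> STM ()
    transfer x from to = do
      f <- readTVar from
      when (f < x) retry
      t <- readTVar to
      writeTVar to (t + x)
      writeTVar from (f - x)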
This same code could be in the State monad (excluding the line with 'abort' on it). Or it could be in the ST monad. Or the IO monad. Each of those is doing radically different things under the hood, which I don't have the brainpower to understand. But because they expose a monadic interface to me, I already know how to drive them using simple, readable, high-level code.
As well as the 4 monads mentioned above, parsers are monadic, same with Maybe and Either, List. The main concurrency library 'async' is also monadic (though it just uses the IO monad rather than defining an Async monad).
Often the people writing your mainstream libraries know this stuff too. If you're calling flatMap (or thenCompose), you're calling monadic code. The writers just shy away from calling it monadic because they're worried people will suddenly not understand it if it has a scary name.
Looks like the hard part is "handl[ing] all the locking and concurrency" with a pretty interface; that it satisfies the definition of a monad is a convenience because the `do` keyword requires it, but so would it be if it satisfied any other one, wouldn't it?
1. Because the underlying concept for handling composing locks (STM, which I don't know much about, but I'm going to assume is a way to wrap operations within a transaction) is sound
2. it is easy to define the corresponding monad
3. which makes it usable in a do notation
4. producing readable and future-proof code
Step 2 is almost trivial; 1 is where the thinking is at. The do notation is nice to have, but not that important (to me at least). I have no problem writing this in CPS style when using JavaScript or C++.
If you have 24 minutes, George Wilson's talk about propagators is really interesting and demonstrates how we can find solutions to problems by looking at math.
I feel like another application which is maybe not talked about all that much is that knowing category theory gives you the power to name some design pattern, google it, and tap into the vast mathematical knowledge that humanity has already discovered. This becomes incredibly valuable once you become aware of how much you don't know. Or maybe just write the bare minimum code that works, idc.
Oh, and also: when you recognize your design as something from CT, it's probably quality. Shit code can't be described with simple math (simple as in all math is simple, not as in math that is easy to understand).
Category theory is to functional/declarative programming what design patterns are to OO programming.
Patterns for decomposing problems and composing solutions.
To ask “How is it useful?” is to ask “How are design patterns useful in OO?”.
But you already know the answer…
Is there an advantage over just inventing your own vocabulary? Yeah! You don’t have to reinvent 70+ years of Mathematics, or teach your language to other people before you can have a meaningful design discussion.
>What do those 70 years of mathematics do for me as a software engineer?
You mean other than inventing the entire industry in which you work? It's a weird question that I have no idea how to answer for you.
What is 70+ years of computer science doing for you as a software engineer? Are you standing on the shoulders of giants, or are you inventing/re-discovering everything by yourself from first principles?
>It's the other way around. Most engineers will have no clue what you are saying.
So how do engineers understand each other if we all invent our own meta-language to speak about software design in the abstract?
Miscommunication is what happens by default unless you intentionally become part of a community and learn the meta-language. Mathematics is one such community, which has existed for millennia and benefits from the network effect. It's an anti-entropy thing.
There are many parallels here to the Tower of Babel.
I'm not sure if this is supposed to be sarcastic, but taking it at face value, mathematics is the underpinning of both computer hardware and computer science. Since we are talking about more abstract mathematics, it is what gave us the lambda calculus, complexity analysis of algorithms, type theory, relational algebra, and distributed systems.
More pragmatically, libraries like Redux and React are heavily influenced by concepts from functional programming, Rust has novel concepts from type theory, and data engineering in the cloud age leverages a lot of algebraic concepts to achieve massive data throughput. We have parser combinators, lambdas, stronger typing, map and flatMap in most languages these days. These all come directly from mathematics.
Oh, I thought your point was that mathematics wasn't bringing anything to computers. As far as I understand it, category theory is more about finding commonalities across mathematical fields (or indeed, scientific / engineering fields), more so than solving more concrete problems.
What does that give you? For me, I think it gives me an easier way to see common abstractions across different problems I work on.
I am at the beginning of my CT journey myself, but a layman's understanding of monads, functors and applicatives gets me really far, writing pretty much the same code whether I do bare-metal embedded or frontend JavaScript. The point is not that I couldn't write bare-metal code or frontend code without it; it's that I am much quicker at seeing "my JavaScript promises are like a monad" and "my coroutines are like a monad", and able to shrink my cognitive load.
>Functional programming concepts don't require learning category theory
This is such a reductionist world-view. Programming concepts don't require you to learn the theory of computation either, but having a theoretical/abstract grounding for what computation is disconnected from any particular programming language/model of computation helps. A lot.
>Type theory isn't category theory
It depends on what you mean by "isn't".
There is a 1:1 correspondence between type theory and category theory constructs.
"I’ve even watched some applied category theory courses online and have yet to feel I’ve gained anything substantive from them."
To understand category theory's appeal in a programming context, you must understand the duality of power in programming languages. Some people call a language "powerful" when it lets you do whatever you want in the particular scope you are currently operating in, so, for instance, Ruby is a very powerful language because you can be sitting in some method in some class somewhere, and you can reach into any other class you like and rewrite that other class's methods entirely. Some call a language powerful when the language itself is capable of understanding what it is your code is doing, and further manipulating it, such as by being able to optimize it well, or offer further features on top of your language like lifetime management in Rust. This type of power comes from restricting what a programmer is capable of doing at any given moment.
If your idea of "power" comes from the first sort of language, then category theory is going to be of very little value to you in my opinion. You pretty much have to be using something as deeply restrictive as Haskell to care about it at all. I am not making a value judgment here. If you never want to program in Haskell, then by all means keep walking past category theory.
Haskell is very much a language that is powerful in the second way. It restricts the programmer more so than any other comparable language; the only languages that go farther essentially go right past the limit of what you could call a "general purpose language", and I daresay there are those who would assert that Haskell is already past that point.
Category theory then becomes interesting because it is simultaneously a set of deeply restricted primitives that are also capable of doing a lot of real work, and category theory provides a vocabulary for talking about such restricted primitives in a coherent way. There is some sense in which it doesn't "need" to be category theory; it is perfectly possible to come to an understanding of all the relevant concepts purely in a Haskell context without reference to "category theory", hence the completely correct claim that Haskell in no way necessitates "understanding category theory". You will hear terminology from it, but you won't need to separately "study" it to understand Haskell; you'll simply pick up on how Haskell is using the terminology on your own just fine. (Real mathematical category theory actually differs in a couple of slight, but important ways from what Haskell uses.)
So, it becomes possible in Haskell to say things like "You don't really need a full monad for that; an applicative can do that just fine", and this has meaning (basically, "you used more power than was necessary, so you're missing out on the additional things that can be built on top of a lower-power, higher-guarantee primitive as a result") in the community.
This mindset of using minimal-power primitives and "category theory" are not technically the same thing, but once you get deeply into using minimal-power primitives pervasively, and composing them together, you are basically stepping into category theory space anyhow. You can't help but end up rediscovering the same things category theorists discovered; there are a hundred paths to category theory and this is one of the more clear ones of them, despite the aforementioned slight differences. I'd say it's a fair thing to understand it as this progression; Haskell was the language that decided to study how much we could do with how little, and it so happens the branch of math that this ends up leading to is category theory, but as a programmer it isn't necessary to "study" category theory in Haskell. You'll osmose it just fine, if not better by concretely using it than any dry study could produce.
To be honest, I'm not even convinced there's that much value in "studying" category theory as its own thing as a programmer. I never studied it formally, and I wrote Haskell just fine, and I understand what the people using category theory terms are talking about in a Haskell context. If nothing else, you can always check the Haskell code.
There is a nearby parallel universe where the Haskell-Prime community in the late 1990s and early 2000s was otherwise the same, but nobody involved knew category theory terminology, so "monad" is called something else, nobody talks about category theory, and nobody has any angst over it, despite the language being otherwise completely identical other than the renames. In this universe, some people do point out that this thing in Haskell-Prime corresponds to some terms in category theory, but this is seen as nothing more than moderately interesting trivia and yet another place where category theory was rediscovered.
Then it seems fair to say that, since Haskell is the most restrictive language (or at least most restrictive in anything like general use), it had to have the most powerful theory/vocabulary/techniques for dealing with the limitations. (Or, more negatively, nobody else willingly put themselves in a straight-jacket so tight that they had to use monads to be able to do many real-world things, like I/O or state.)
> But I bet their Monad-Prime tutorials still suck.
Funniest thing I've read today. Thank you for that gem.
> Or, more negatively, nobody else willingly put themselves in a straight-jacket so tight that they had to use monads to be able to do many real-world things, like I/O or state.
I want to emphasize how true this is. In the Haskell 1.2 report, before Haskell Monads existed, the type of `main` was more or less:
type Dialogue = [Response] -> [Request]
main :: Dialogue
The main function took a lazy list of, let's call it OS responses, and issued a lazy list of requests to the OS, whose response you would find at some point on the input list. Your task was to keep your lazy generation of `Request`s in sync with yourself as you consumed their `Response`s in turn. Read ahead one too many `Response`s and your `main` would deadlock. Forget to consume a `Response` and you would start reading the wrong `Response` for your request.
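To illustrate, here is roughly what a program echoing one line looked like in that style; this is a sketch from memory of the 1.2 report, with constructor names approximate:

    -- Hypothetical sketch in the old stream-I/O style: the nth element
    -- of resps answers the nth Request, and laziness is what keeps this
    -- from deadlocking (resps !! 0 is only forced after ReadChan is issued).
    main :: Dialogue
    main resps = [ ReadChan "stdin"          -- answered by resps !! 0
                 , AppendChan "stdout" line  -- answered by resps !! 1
                 ]
      where Str input = resps !! 0
            line      = head (lines input)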
The suggestion was to use continuation passing style (something that later we see conforms to the Continuation monad interface) to keep things in check, but this was not enforced, and by all accounts was a nightmare.
So yes, Haskell's laziness straight-jacket (Haskell was basically invented to study the use of lazy programming without having to purchase Miranda) basically means they had to invent the use of Monads for programming, or at the very least invent the IO monad.
They describe composable patterns with a clear hierarchy. The jargon is useful, since it is the language in which they are described. I mean Turing machine, filesystem, socket and pipe are also jargon.
A good example is LINQ, which uses monads and functors. They just gave them other names.
It's not kool-aid, it's applied maths. Just because you don't already know and understand it (and hence don't see the benefit of it) doesn't mean there's no clear and obvious benefit. But you do have to learn before you understand.
To be fair though, there is a wide spectrum of "usefulness" for pure mathematics concepts in applied mathematics. For less contemporary examples: you don't need to know about the Lebesgue measure to do practical numerical quadrature for applications, and you don't need to know that rotation matrices form the group SO(3) to do computer graphics. Sometimes the abstractions are powerful and sometimes they are kind of a distraction (or even make communication a problem). There typically needs to be a super compelling application to motivate a shift in baseline knowledge.
Apologies for the jargon, but there isn't room in a comment box for a detailed explanation.
"The Essence of the Iterator Pattern"[0] posed an open problem in type theory as to whether or not "lawful traversals", of type `∀ F:Applicative. (X -> F X) -> (B -> F B)` could appear for anything other than what is called a "finitary container"[1], B := ∃S. X^(P S) where the (P S) is a natural number (or equivalently a polynomial functor B := ∃n. C_n * X^n). Or rather, there was an open question as to what laws are needed so that "lawful traversals" would imply that B is a finitary container.
Eventually I came up with a fairly long proof[2] that the original laws proposed in "The Essence of the Iterator Pattern" were in fact adequate to ensure that B is a finitary container. Mauro Jaskelioff[3] rewrote my proof in the language of category theory to be a few lines long: use the Yoneda lemma twice and a few other manipulations.
Anyhow, the upshot of reframing this as a simple argument in Category Theory meant that, with the help of James Deikun, I was able to generalize the argument to other types of containers. In particular we were able to devise a new kind of "optic"[6] for "Naperian containers" (also known as representable functors), which are of the form B := P -> X (aka containers with a fixed shape).
Since then people have been able to come up with a whole host of classes of containers, optics for those classes, representations for those optics, and a complete characterization of operations available for those classes of containers, and how to make different container classes transparently composable with each other[4].
For example, we know that finitary containers are characterized by their traversal function (i.e the essence of the iterator pattern). Naperian containers are characterized by their zip function. Unary containers are characterized by a pair of getter and setter functions, something implicitly understood by many programmers. And so on.
I don't know if all this means that people should learn category theory. In fact, I really only know the basics, and I can almost but not quite follow "Doubles for Monoidal Categories"[5]. But nowadays I have an "optical" view of the containers in, and forming, my data structures, slotting them into various named classes of containers and then knowing exactly what operations are available and how to compose them.
This is a lot to digest, but I'm responding to this one because I think it attempts to answer my question, and because of your 4th reference. That being said, I need to ask some questions.
Would I be correct in saying — there are less complex objects, which are defined in category theory, that are useful in a generic way in combination with one another?
If the answer to that question is yes, is there a place I look to find the mappings of these software topics/designs to these category theory objects such that I can compose them together without needing the underlying design? If that were the case, I could see the use in understanding category theory so that I could compose more modular and generically useful code.
I have to admit though, I’m still fairly speculative and my thinking on this is very abstract and probably very flawed.
I think your understanding is a little off the mark, at least with respect to my not-so-well-written comment, which was just meant as a rambling story about how category theory influenced my and others' development of (functional) programming "optics" over the last decade.
The new "optics" people are have designed in reference 4 are not themselves "elements of category theory", but category theory provides the theoretical framework that these composable optics live in. Once that framework is identified, it becomes much easier to say "oh, this new optic definition I've just identified, fits into this framework, and so does this other optic, and so forth"
To make a loose analogy, one could imagine identifying triangles and cubes and such that have these rotational and reflective symmetries, noticing how you can compose those operations, and then seeing how performing specific shuffles of cards is also composable and has some properties similar to rotating cubes, in that repeated operations have a certain period and other such things. Then someone introduces you to group theory and says yes, both the cube rotations and the card shuffling can be seen as groups, and group theory can be used to generically answer your questions about these otherwise seemingly very different operations. And then you start seeing that lots of things are groups, like that Rubik's cube puzzle you have, or lines through an elliptic curve. Coming up with new optics is analogous to noticing various things are groups, a task that is much easier when you have group theory and know how to recognize a group when you see it.
That said, I think a better answer to your question may be in one of my later replies https://news.ycombinator.com/item?id=33806744: Category Theory has been useful for devising an entirely new (functional) programming abstraction, the theory of "optics" in my case. But for day-to-day programming you don't normally need to invent entirely new programming abstractions, you can just use the ones that have already been developed by the computer scientists.
I’m curious if this is an answer that others would agree with. I could see your reasoning being valid on this, but I tend to see others responding that it has much more broad utility than that.
Typically when something is confusing to me and I see nothing to grasp onto, it usually means something is flawed or poorly communicated. In other words, when no one can explain why something has utility (in everyday programming) and there are a fair number of responses, then the chance of bad communication goes down and the chance of bad reasoning (non-answers) goes up. I have a feeling it is not a coincidence that you have the opinion it is not utilitarian in everyday programming and that I responded to your initial post.
I'm not entirely sure if that is what you are asking, but for example pure functions from a type A to a type B can be composed with pure functions from type B to a type C, to form a pure function from A to C. A lot of programmer-oriented CT takes place in the category of types, with morphisms being functions from one type to another. Programming in a purely functional style makes composition of functions... trivial, compared to functions that have side effects due to mutation.
Now of course this sounds trivial, but you can build much more refined concepts out of that simple idea. For example, creating product and sum types.
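For instance, a small Haskell sketch of that progression; `pair` and `copair` are illustrative names for the universal maps that products and sums come with:

    -- Composition of pure functions is the categorical primitive:
    compose :: (b -> c) -> (a -> b) -> (a -> c)
    compose g f = g . f

    -- The universal map into a product: build (a, b) from two arrows out of c.
    pair :: (c -> a) -> (c -> b) -> (c -> (a, b))
    pair f g x = (f x, g x)

    -- The universal map out of a sum: handle both sides of Either a b.
    copair :: (a -> c) -> (b -> c) -> (Either a b -> c)
    copair = either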
Here are some of the more practical insights for medium/large-scale software design that have stuck with me. They apply regardless of whether I'm working in a language that lets me easily express categorical-ish stuff in the type system:
- Triviality
The most trivial cases are actually the most important. If I build a little DSL, I give it an explicit "identity" construction in the internal representation, and use it liberally in the implementation, transformation phases, and tests, even though obviously its denotation as a no-op will be optimized away by my little DSL compiler, and it may not even be exposed as an atomic construct in the user-facing representation.
- Free constructions
When it's difficult to separate a model from an implementation, when simple interfaces are not enough because the model and implementation are substantially complex, multilayered systems on their own, often the answer is to use a free construction.
Free constructions are a killer tool for allowing the late-binding of rather complex concerns, while allowing for reasoning, transformation, and testing of the underlying components without worrying about that late-bound behavior. "Late-bound" here does not have to mean OO-style dynamic dispatch -- in fact now you have more expressive power to fuse some behaviors together in a staged manner with no runtime indirection, or to make other behaviors legitimately dynamic, differing from one call to the next. Tradeoffs in the flexibility/performance space are easier to make, to change, and to self-document.
This is particularly useful when you need to start addressing "meta" concerns as actual user-facing concerns -- when your project matures from a simple application that does Foo into a suite of tools for manipulating and reasoning about Foo and Fooness. Your Foo engine currently just does Foo, but now you want it to not just do Foo, but return a description of all the sub-Foo tasks it performed -- but only if the configuration has requested those details, because sometimes the user just wants to Foo. So you embed the basic, non-meta-level entities representing the actual component tasks in a free construction, compose them within that free construction, and implement the configurable details of that composition elsewhere.
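To make this concrete, here is a minimal free-monad sketch in Haskell (all names are made up for the example): the Foo engine's tasks become plain data, and whether a run is quiet or also returns a description of the sub-tasks performed is late-bound in the interpreter.

    {-# LANGUAGE DeriveFunctor #-}

    -- The DSL's single operation, as data:
    data FooOp k = DoFoo String k deriving Functor

    -- The free monad over any functor f:
    data Free f a = Pure a | Impure (f (Free f a))

    instance Functor f => Functor (Free f) where
      fmap g (Pure a)    = Pure (g a)
      fmap g (Impure fa) = Impure (fmap (fmap g) fa)

    instance Functor f => Applicative (Free f) where
      pure = Pure
      Pure g    <*> x = fmap g x
      Impure fg <*> x = Impure (fmap (<*> x) fg)

    instance Functor f => Monad (Free f) where
      Pure a    >>= k = k a
      Impure fa >>= k = Impure (fmap (>>= k) fa)

    doFoo :: String -> Free FooOp ()
    doFoo s = Impure (DoFoo s (Pure ()))

    program :: Free FooOp ()
    program = doFoo "step 1" >> doFoo "step 2"

    -- Two interpreters for the same program, bound late: one just runs,
    -- the other also returns a description of the sub-tasks performed.
    runQuiet :: Free FooOp a -> a
    runQuiet (Pure a)             = a
    runQuiet (Impure (DoFoo _ k)) = runQuiet k

    runTraced :: Free FooOp a -> ([String], a)
    runTraced (Pure a)             = ([], a)
    runTraced (Impure (DoFoo s k)) = let (ss, a) = runTraced k in (s : ss, a)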
- "Sameness"
Making clear distinctions between different notions of sameness is important. Identity, equality, isomorphism, adjointness -- sure, you might never explicitly implement an adjunction, but these are important notions to keep separate, at least conceptually if not concretely in source code entities, as a piece of software grows more complex. If you treat two things as fundamentally the same in more ways than they really are, you reach a point where you now can no longer recover crucial information in a later phase of computation. In an indexed/fibered construction, this X may be basically the same as that X, but this X, viewed as the image of a transformation applied to Y is quite different from that X, viewed as the image of a transformation applied to Z, although the images are equal. So can I just pass my X's around everywhere, or do I need to pass or at least store somewhere the references to Y and Z? What if computationally I can only get an X as the result of applying a function to Y, but conceptually the functional dependency (that is, as a mathematical function) points the other way around, from X to Y? Paying attention to these easy-to-miss distinctions can save you from code-architecting yourself into a corner. You may discover that the true reserve currency of the problem domain is Y when it looks like X, or vice versa.
Knowing category theory helps you better design interfaces. Exactly like the interfaces in OOP.
A lot of the generic interfaces you design in OOP end up being useless: never re-used and pointless. You never needed to make these interfaces in the first place.
In category theory, you will be able to create and identify interfaces that are universal, general, and widely used throughout your code. Category theory allows you to organize your code efficiently via interfaces and thus enables maximum reuse of logic.
When creating an interface you largely use your gut. Category theory allows you to draw from established interfaces that are more fundamental. Basically, type classes in Haskell.
If you want to see the benefit of category theory, you really need to use haskell. Monads for example are a pattern from category theory.
Also a lot of what people are talking about in this thread is exactly what I'm saying^^. I'm just saying it in less complicated jargon.
Or Scala. They're the only two common-use languages I'm aware of whose type systems are expressive enough to define what a monad (or a category, or a ...) is internally. Of course you can identify particular monads (or ...) in any language, but you can't talk about them inside most languages.
I strongly recommend this presentation: John Baez, "Symmetric Monoidal Categories: A Rosetta Stone"[0]. It is easy to get lost in the ocean of theorems and definitions and overlook some powerful core concepts.
> Category theory is really the mathematics of abstraction
Mathematics is the mathematics of abstraction. That's all you're doing in math, from the beginning to the end. What's the same between "I have two sheep in this pen and five in that one" and "I have two apples in this basket and five in that one"? Hmm, you can abstract out 2+5=7, since it works the same in both contexts.
Everything in math is creating and exploring abstractions. The good ones tend to stick around, the bad ones tend to be forgotten.
Category theory is the mathematics of composition. How much knowledge can you throw away and create reusable patterns of composition? How many interesting things can you find by adding back the smallest bits of additional structure on top of that minimum? How well do those things compose together, anyway?
> Category theory is the mathematics of composition.
I don't know why you're being downvoted (tone?) but you're right. Categories formalize exactly and only the notion of composition; its power is that this notion appears everywhere when you know what to look for. Most mathematical abstractions have composition baked in at one level or another, so I think the author should be pardoned for their phrasing; but I find yours more enlightening.
algebra of typed composition, a discipline for making definitions, study of universal properties, theory of duality, formal theory of analogy, mathematical model of mathematical models, inherently computational, science of analogy, mathematical study of (abstract) algebras of functions, general mathematical theory of structures, ...
Bartosz's teaching/explanation style does not work for me.
I have no formal critique of him, or even concrete things to point at to say why it doesn't work for me, but I wanted to add my comment in case someone else was also feeling discouraged by not taking much away from the writings/lectures of an often and highly recommended person.
I generally learn very well from lectures/essays, but that hasn't been true for me re: his work.
I really like the "Programming with Categories" lectures because you basically get 3 teaching styles for the same material. Brendan Fong is very straightforward "definition lemma definition lemma", Spivak is slightly more example-oriented, and then Bartosz is... Bartosz. The juxtaposition helps to unstick me. I use Eugenia Cheng's Catsters videos to complement as well. If I had to choose one personal favourite, it would be her for sure.
As a programmer and hobbyist math reader, I found category theory to be very unrewarding (and I gave up on it) because of the lack of interesting theorems and lemmas. My takeaway was that there's Yoneda lemma and really nothing interesting before you reach that. Like, CT describes a set of rules but very little emerges from those rules.
My complaint has nothing to do with whether CT is useful or practical. By contrast, abstract algebra can be taught (e.g., in Herstein's books) as pure abstraction (very dry and without presenting any real-world connections), but you reach Lagrange's theorem right away -- a simple-but-awesome result that will wake up your brain. You reach Cayley's theorem and others quickly, and each is more exciting than the last. And this is all while still in the realm of purely-abstract math.
> As a programmer and hobbyist math reader, I found category theory to be very unrewarding (and I gave up on it) because of the lack of interesting theorems and lemmas. My takeaway was that there's Yoneda lemma and really nothing interesting before you reach that. Like, CT describes a set of rules but very little emerges from those rules.
The fact that it's so rare to find a counterintuitive fact in CT (you rarely find yourself proving something; you mostly spend your time constructing effective tools) is a feature, not a bug!
McBride's "Don't touch the green slime!" is a great paper/saying about this principle. Work with the compiler, not against it. The compiler is very dumb so it understands only trivial things.
There's a legacy pseudo-machist culture of 'hard is good' in mathematics (really, it's transversal to science) which rewards working hard over working smart.
> As a programmer and hobbyist math reader, I found category theory to be very unrewarding (and I gave up on it) because of the lack of interesting theorems and lemmas. My takeaway was that there's Yoneda lemma and really nothing interesting before you reach that. Like, CT describes a set of rules but very little emerges from those rules.
This is correct. The only somewhat interesting result in Category theory is the Yoneda Lemma, everything else is machinery for diagram chasing. It's ubiquitous in the same way that short exact sequences are ubiquitous -- and about as interesting as short exact sequences.
I think most hobbyists or those who are intellectually curious would be better off studying physics or engineering first, and then picking up the math with applications along the way. For example, you can study multivariable calculus and then complex analysis, which naturally leads to questions of topology as you try to solve integral equations, and then obstruction theory can come up. Lots of cool stuff to study that is actually interesting. I would never foist category theory on someone who finds math to be interesting and enjoyable -- that would be like throwing a match that just caught flame into the rain.
My formulation is "if it's not trivial, it's probably not good" when I implement the necessary functions for a "type class" (to take a Haskellism) to work. If your monad's `bind` implementation doesn't look like it could have been written by someone who just followed the function types, it's probably not right. Thanks for the link to the McBride-ism.
> As a programmer and hobbyist math reader, I found category theory to be very unrewarding (and I gave up on it) because of the lack of interesting theorems and lemmas. My takeaway was that there's Yoneda lemma and really nothing interesting before you reach that
One of the most interesting things in CT is adjoints. They happen literally everywhere. For example, static analysis via abstract interpretation is an example of an adjoint (which in this case is called a Galois connection). Free structures also give rise to adjoints.
Since there's a lot of discussion in this thread, my opinion about category theory and programming is:
- You don't need category theory to be a programmer (or good or 10x or 100x programmer)
- It makes sense to learn category theory because it's beautiful, just as it makes sense to study art, literature or philosophy. Not because it's useful, but because it gives you pleasure.
as with every abstraction, its value is in the value it brings you. That sounds a bit tautological, but I think that's the crux of the matter. If you like abstraction, and abstractions help you think, you are going to find value in them. If you feel they bring you nothing, then there is no need for you to use them.
My problem with category theory (my limited study of it, several years ago) was that it describes and defines a list of properties, but those properties don't combine to reveal any unexpected, exciting results.
Again, with my abstract algebra example from above: after just a couple of basic abstract algebra definitions, you learn about subgroups. Simple enough, and not particularly exciting so far. But then you quickly reach Lagrange's Theorem, which shows that in a finite group, the order of a subgroup divides the order of the parent group. And that means... if the group has a prime number of elements, then it can't contain any (non-trivial) subgroups at all! That's super cool and not at all obvious in the original definitions of groups and subgroups. And it keeps going from there, with a brain-punishing amount of results that all just emerge from the basic definitions of a group, ring, and field.
In contrast, category theory just felt empty. This is definition of a functor. This is a monad. Here's different kinds of morphisms. Etc.
Dunno, maybe I just needed to keep reading. But my sense from flipping forward through my CT books is that it was mostly just more concept definitions.
>My problem with category theory (my limited study of it, several years ago) was that it describes and defines a list of properties, but those properties don't combine to reveal any unexpected, exciting results.
IMO, the main value of category theory is unifying existing math knowledge in one theory. I.e. it helps you see connections between seemingly unrelated areas of math. I.e. in some sense it's a pure abstraction.
- An operation may be "functorial", meaning that it preserves more structure than perhaps originally thought. For instance, the "fundamental group" operation is indeed functorial, which means that it acts on continuous functions in a nice way as well as topological spaces. Other examples are tensor-product, vector-space-duality, forming function-spaces in certain categories, etc.
- Two categories may be isomorphic, but this is trivial.
- Two categories may be equivalent, which while a weaker notion than isomorphism, is sufficient for most things which can be expressed in categorical language to be true for both categories. This is helpful when one category is well-understood and the other one is an object of present interest. (One application is showing that the category of representations of a fixed quiver Q is Krull-Schmidt, by showing that it's equivalent to another category with the Krull-Schmidt property).
- A functor between two categories may admit a left-adjoint. It then immediately preserves all limits (it's "continuous") which immediately means that a great deal of structure gets preserved by it.
- A functor between two categories may preserve all limits. It may (under some circumstances, expressed by the "adjoint functor theorem") therefore admit a left adjoint. This may be a non-trivial fact of interest in its own right. It's related to dualities in optimisation and game theory.
- There's isolated results like the Seifert Van-Kampen Theorem (which states that the Fundamental-Group functor preserves pushouts) which would be difficult to express without categorical language.
Ultimately, Category Theory appears to be a language for compressing complicated facts about structure-preservation of operations.
Category theory is helpful in advanced algebra, and helpful too in advanced topology, and is in its absolute element in any area which combines the two, like algebraic topology and algebraic geometry. In the latter two areas, you've got lots of functors between algebraic categories, lots of functors between topological categories, and even functors going between algebraic categories and topological categories.
There's also categorical logic, which is where the CS-adjacent stuff seems to be found. But this is of little interest to everyday programming, and is very forbidding for people who lack the requisite mathematical maturity. Only the most dedicated should enter its harsh plains, and should expect to gain nothing without tremendous efforts and sacrifice.
Where do adjoint functors occur in CS? They occur in advanced algebra, and they occur in topology, but where else? And indeed, the fact that they preserve limits/colimits may help speed up communication and thinking. But I'm not seeing CS connections here.
>The first example is trivial, and serves as an illustrative but useless example (which I don't need).
Yep. Most of the examples of adjoints are trivial if you know the field of math where they are used. The interesting part is why it happens almost everywhere.
Another one is the free monoid, with its pair of forgetful/free functors. It sounds a bit mathematical, but for a type T the free monoid is List<T> in a programming language with generics.
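A tentative Haskell sketch of why lists are the free monoid: unit embeds a generator, and foldMap is the universal property that extends any map into a monoid.

    -- Every function from generators into a monoid m extends uniquely
    -- to a monoid homomorphism out of the free monoid [a]:
    unit :: a -> [a]
    unit x = [x]

    extend :: Monoid m => (a -> m) -> ([a] -> m)
    extend = foldMap
    -- extend f . unit == f, and extend f is a monoid homomorphism.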
As a general (e.g. React) programmer, there's not a lot of value. However, as I said, if you work in some areas, e.g. programming languages or logic, it might even be a requirement to be productive.
I became very interested in CT from a biological perspective. The work originates from Robert Rosen, who describes how biological/complex systems are not (completely) reducible to formal systems (aka any model of a natural system will always be incomplete + it is impossible to explicitly list out all the rules governing the system) because they "contain a model of themselves" with which they anticipate the future and act towards their benefit.
Many relate his work as closely tied to Gödel's incompleteness theorem but for biology. The purpose of using category theory is that it is general enough to not talk about specific parts of an organism's construction, but rather how their general functional parts relate to each other.
This crowd may not want to hear this, but as a former mathematician, the DS/Algorithms knowledge I gleaned from Leetcode grinding has been far more useful in my day-to-day work than any of the category theory I once used on a regular basis.
I think learning the essentials of another field can be more valuable than expected, whether it's a mathematician learning to program or a software engineer learning patterns of abstraction. When you've gone deep on one field, going even deeper has diminishing returns.
Software engineers and (non-categorical) mathematicians are both forced to deal with patterns of abstraction on a regular basis, so they naturally pick up a lot of what category theory concerns itself with. This can make category theory sound like quite a lot of words for very little gain. But I do think that formalizing one's intuition -- taking something understood implicitly and giving it explicit form -- can be a useful tool in its own right.
Most of the material out there on category theory attempts to formalize the algebraist's intuition. Software engineers then have to learn algebra just to learn category theory. I don't think that's essential to category theory, and I think we're starting to see more and more "elementary" category theory that doesn't take some other whole edifice as given. The fact that so many software engineers exist who do find value in a categorical perspective (it's more than you'd expect!) suggests that we'll get there eventually.
I find CT highly fascinating, worked through parts of Seven Sketches in Compositionality, and have a functional programming background. I see the appeal, but I came to the conclusion that my time is better spent observing, learning about and designing with abstractions like monads, applicatives and so on, rather than learning the theory behind them.
There seems to be a tiny handful of people who can use category theory as a resource to craft something relevant to software (the stereotypical example in my mind being Edward Kmett of Haskell fame), but I am certainly not one of them, and that is not something that would change with learning more category theory (whatever that might mean: proving some theorems, discovering more categories, ...).
To the author: I am looking forward to a retrospective at a later time. I wish you a good journey, happy diagram-chasing!
> Category theory is literally the mathematics of boxes (objects), arrows (morphisms), and composition.
As I've learned more about category theory (and its applications), I've found string diagrams to be a really nice tool. But string diagrams flip the above around -- morphisms are boxes, and objects are wires. A wire is just a point-to-point link; it doesn't do anything on its own, it just connects a port of one type with a matching port on another site. It's the graphical realization of the composition operation itself.
Meanwhile, morphisms are the things that actually have behavior and real identities, so we draw them as boxes with inputs and outputs and give them names. This lines up with my understanding that in category theory, it's the morphisms that matter; the objects are no more than labels governing how you can stick the morphisms together. You may as well call the morphisms "widgets" or "components", and call the objects "interfaces", because that's literally correct.
This ends up really nice in the context of distributed systems, where concurrency gives you a monoidal category. In string diagrams, concurrency is quite literally the result of putting two diagrams side-by-side, rather than connecting them end-to-end. It's lovely, and some of the research I'm doing right now is actually founded on formalizing the message-passing causal diagrams distsys researchers and practitioners use every day, and using them as a vehicle for building programs that are more amenable to being proven correct. (Distributed systems are hard to verify for both humans and computers; I'd like to think that what I'm doing will be just as good for giving human practitioners a good framework for convincing themselves their code is correct -- or better, making bugs more obvious.)
Honestly, this is a waste of time and is not going to help as much as the author thinks it is. There is no secret from category theory that will make one's design 10x better. Not everything is even applicable.
>I hope that category theory will allow me to formalize my intuitive understanding of abstraction, put words to it, and use those words (or, better, words others have written) to explain my thinking.
Note how this goal doesn't include anything about improving the author's software engineering ability. The goal is to be able to use category theory to describe existing concepts. Why not just describe them in ways that 99% of software engineers already understand? Why not learn good software design directly instead of through a subset of category theory which you have to translate?
Author here. The way I see it, nothing about my software per se will change. I do indeed like to write things in a "plain" fashion instead of littering my code with high-level constructs that only a few people understand.
However, it is "generative" knowledge. Having a good intuition for monads is what allows me to break down a fair amount of the mundane tasks I have to do (say, breaking down and migrating infrastructure configuration) in steps that work and stand the test of time. That's the thing about say, monad. An applied monad just looks utterly trivial, because, really, they are. If they weren't they wouldn't be worth using.
It also allows me to mine the literature or reuse things I've worked out in the past, for example exploiting monoidal structure to optimize computation by leveraging associativity.
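As a hedged sketch of what leveraging associativity buys: since (<>) is associative, an aggregation can be chunked and recombined in any grouping with the same result (chunksOf is from the split package), so the chunks could just as well be computed in parallel or incrementally.

    import Data.List.Split (chunksOf)

    -- Associativity guarantees the chunked fold equals the plain fold:
    -- chunkedFold n xs == mconcat xs, for any chunk size n > 0.
    chunkedFold :: Monoid m => Int -> [m] -> m
    chunkedFold n = mconcat . map mconcat . chunksOf n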
Also, this is very much what works for me. I love to think in "structure" and patterns, and category theory diagrams and the way concepts are formulated brings me "joy", if that makes sense. It resonates with my cognition in the way say, common lisp, does, and that makes it quite fun and enjoyable.
>It also allows me to mine the literature or reuse things I've worked out in the past, for example exploiting monoidal structure to optimize computation by leveraging associativity.
This is an algebraic concept, and it is possible to learn this idea directly in a software engineering context rather than a mathematical one.
This is a bit unfair. The author never claimed it would make them a 10x engineer.
Learning almost anything is useful in some capacity. Is everything you learn equally important? Maybe, maybe not, but you never know in advance how things might end up useful.
Of all the things one can study, math seems to be a pretty safe bet in terms of ROI.
From reading the article, he justified his motivation for learning category theory as helping him express himself better. I feel he is looking to category theory for some knowledge he is missing.
I am not against learning category theory because you find it interesting, but one should be honest that it isn't going to benefit one's software engineering ability, at least compared to studying good design directly.
>math seems to be a pretty safe bet in terms of ROI.
Ask most people how much of the math they learned at university or high school they have actually used. There are niches of math that are useful to niches. Study the wrong niche of math while working in a niche that doesn't use it, and it will be a poor investment.
I feel that learning about concepts from category theory is... studying good design. I wouldn't know how to measure my software engineering ability anyway, but I do think I do write better software with that knowledge.
Understanding and learning to identify more and more monads was a big threshold for me. I never write "monad" in my code, because that's not how I apply this knowledge. It's knowing how to write, say, an interrupt handler so that it will compose cleanly with 3 more interrupt handlers, or in such a way that I can easily simulate my embedded system with 5 lines of JavaScript.
Category theory is very helpful when looking at programming from a metadata / multi-domain-interactions perspective (interaction between different abstraction layers / non-linear scip).
Although, applying CT to Lisp is a bit more handy (to quot(.) or quote()).
I second this recommendation; Cheng's book is probably the most approachable of the recent wave of category theory texts which does not assume the whole edifice of abstract algebra as a starting point.
Most CT texts introduce categories around page 1, and the Yoneda lemma a handful of pages in. Cheng's book builds up intuitions until chapter 8, where categories are defined over the course of several pages, and supposedly (I'm not there yet) ends the book with the Yoneda lemma.
You might think this means the book is mostly fluff. First: if you read it and think that, you're probably a more advanced reader than she's targeting. Second: goodness no, it's packed to the brim with categorical intuitions; there's a whole way of thinking that she's trying to motivate. Categories are just a formalization of this way of thinking; if you're not onboard with the thinking, the formalization is going to be hollow to you no matter what.
I worked through a popular Introductory Category theory text. Did all the problems and made sure I understood everything in there really. It became clear by the end why some mathematicians like category theory. It lifts some results in certain areas to a higher level of generality.
As for me, it helped me understand some of what was being discussed on r/haskell. It didn't help one iota in getting me a Haskell job though; I've failed every Haskell code pairing I've done.
I recommend "how to bake pi" by Eugenia Cheng for a high level introduction, followed by her in-depth (but still approachable) book "The Joy of Abstraction"
I am also in the process of learning category theory. I don't have a CS background; I ended up working as a full-stack developer because it was economically more rewarding. I recommend that programmers get a good understanding of set theory, binary relations and order theory (linear orders, partial orders, lattices); it gives you very powerful abstractions that let you write new algorithms for anything.
"Functor" has been used in multiple ways in different fields. It's apparently a term of art in linguistics; we call C++ classes implementing `operator()` "functors"; Prolog terms have parts called "functors" (apparently imported from linguistics); and of course category theory has them.
As far as mathematics is generally concerned, I think the category theory concept is the origin. Monoids, on the other hand, did exist before category theory; the modern conception as one-object categories is nice, but it's definitely an import.
They're concepts in category theory? Since category theory's goal is to find common abstractions behind concepts from other fields (mathematics or otherwise), it makes sense some of those concepts appear in other fields. Their definition in a category theory context is different, usually, since you can only express them using morphisms between objects in a category.
Yes, these words do have a meaning in category theory. But for a Haskell programmer, a functor is a type with a map operation; for an OCaml programmer, it's a module parameterised by a module signature. A monoid is a type with a function T x T -> T. That's it.
You can know these things and still know nothing about category theory.
Category theory? Learning something when you don’t absolutely have to in order to write code? Sure it might be fun and expand your mind but I’ll have none of it you understand! Bah humbug!
To obviously debunk any and all FP autism coming your way.
To give credit where credit is due, most of the time some application functionality being described through FP jargon gives a better spec of its behavior than the same through OOP jargon.
Author here, I actually think so. The rendering of the virtual DOM as well as hooks are fairly interesting mathematical constructs. A pattern I use a lot in web applications is the state reducer, which is really a fold over events and state. Seeing the functional nature of it (and of reactive programming in general) can make for quite composable React code, whereas otherwise it is easy to make a verbose mess of prop-drilling or do a lot of tricky context mixing.
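A tiny sketch of the reducer-as-fold observation, with a made-up event type:

    -- A state reducer (state, action) => state is literally the function
    -- you fold over the event stream:
    data Event = Increment | Decrement | Reset

    reduce :: Int -> Event -> Int
    reduce n Increment = n + 1
    reduce n Decrement = n - 1
    reduce _ Reset     = 0

    finalState :: [Event] -> Int
    finalState = foldl reduce 0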
I'd love to see some examples of your diagrams. Are they all hand-written or do you use online tools for them? I do quite a bit of graph diagramming but not for detailed planning of implementation. Best example I've seen along those lines are the xState tools for state charts (https://stately.ai/viz).
This looks very mundane but I do think about it very mathematically as well: user interaction with a search engine. The "encode" arrow, for example, is very much about NLP and tokenizing, which is a functor from the "category" of natural language to the "category" of Lucene tokens, which then has a functor to Lucene queries. This is of course the very mundane typing of functions as:
function parseQuery(query: NaturalLanguageString): LuceneQuery
nothing spectacular, but the abstract approach means I know I can cache/batch/precompute/distribute/pipeline/modularize it.
Similar mathematical concepts apply to all the other arrows, even if some are a bit wild (how does a search result influence a person's ideas?) But it means I can try to model the "wildness", and create say a probabilistic model to exercise and understand how my search engine actually performs (see for example click models and probabilistic graphical models)
I love xstate! I do use PlantUML a lot for sketching, but most is done on paper or whiteboards and is quite transient in nature. I must have hundreds if not thousands of sketchbook pages of diagrams like that.
I gave an Elm talk years ago at the shared office space I used to work at. A few devs there later told me the talk helped them better understand React.
Of course Elm is closely related to Haskell, which is a playground for category theory. I think learning the why behind it all can be useful in subtle ways.
Elm is a pure functional language where all data is immutable and functions are pure, meaning they are guaranteed to have no side effects.
Having no side effects in JS is easy (just don't do it!) but immutability takes some effort.
React requires immutability so that if it sees the reference to an object again, it knows that it contains the same data. If it promised to work when mutating objects it would continuously need to deep search inside them to see what changed.
In JS, some array operations mutate the array, some copy it, you have to know specifically what operation you are using. In Elm, nothing mutates objects. All built in functions and functions you create will not do this.
In short - you can do (state, action) => state in any programming language, but mistakes caused by mutations are impossible in Elm by design.