Dualities in functional programming (dicioccio.fr)
72 points by prathyvsh 16 days ago | 24 comments



I like this and have a similar frame of mind. I wish more languages would make their sum types true duals of their structs, with a single payload datum per variant. Instead we have tuple variants, struct variants, and so on, complicating matters to save at best a character of syntax (and careful design could eliminate even that).
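
To illustrate the symmetry (a minimal Haskell sketch, not from the article): a product type has one constructor and two projections, while its dual sum has two constructors, each carrying a single payload, and one eliminator.

    -- A product type: one constructor, two projections (holds an 'a' AND a 'b').
    data Pair a b = Pair a b

    first :: Pair a b -> a
    first (Pair x _) = x

    second :: Pair a b -> b
    second (Pair _ y) = y

    -- Its dual: two constructors, each with a single payload, one eliminator
    -- (holds an 'a' OR a 'b').
    data Sum a b = InL a | InR b

    match :: (a -> c) -> (b -> c) -> Sum a b -> c
    match f _ (InL x) = f x
    match _ g (InR y) = g y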

The author may be interested in coeffects as the dual of effects. They are pretty much as stated: coeffects are the contextual environment that the program executes in. This link has some prototypes of languages that let you create coeffects in your code: https://tomasp.net/coeffects/
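
One way to see the duality in types (a rough Haskell sketch in the spirit of Uustalu and Vene's comonadic account of context; Writer and Env are hand-rolled here for illustration): an effectful function produces extra output alongside its result, while a coeffectful function consumes extra context alongside its input.

    -- An effectful computation produces extra output alongside its result:
    -- a simplified Writer monad (the "effect" is the accumulated log).
    newtype Writer w a = Writer (a, w)

    tell :: w -> Writer w ()
    tell w = Writer ((), w)

    -- Its dual: a coeffectful computation consumes extra context alongside
    -- its input (the Env comonad; flip Writer's arrows).
    newtype Env e a = Env (e, a)

    extract :: Env e a -> a
    extract (Env (_, x)) = x

    ask :: Env e a -> e
    ask (Env (e, _)) = e

    -- An effectful function has type  a -> Writer w b  (result plus effect);
    -- a coeffectful one has type  Env e a -> b  (input plus required context).
    stamp :: Env Integer String -> String
    stamp w = extract w ++ " at t=" ++ show (ask w)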

I was going to bring up covariance vs contravariance but the author mentions they were skipped for brevity.

Finally, I think simplifying to the point of ignoring the idea of consumed vs. produced values is unfortunate. Consider how "inout" parameters aren't much different from a mutable reference (though the change in binding may be more explicit in a pure functional language). From the article it's pretty clear why inout/reference parameters need to be invariant (they would have to be both covariant and contravariant at once). And there is a whole lot that this kind of reasoning can bring to understanding Rust's borrow checker, and possibly how to create something simpler with the same strengths. But for that we'd need to model immutable references, unique mutable references, and dare I say also volatile references (the environment IS volatile, whether it's your physical peripherals or your application database, and IMO not having a language construct for that makes life harder than it needs to be).
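
The invariance point can be made concrete even without subtyping (a Haskell sketch with invented Source/Sink/Ref names): a read-only handle maps covariantly, a write-only one contravariantly, and a read-write handle needs functions in both directions, i.e. its parameter is invariant.

    import Data.IORef (IORef, readIORef, writeIORef)

    newtype Source a = Source (IO a)              -- only produces 'a': covariant
    newtype Sink   a = Sink   (a -> IO ())        -- only consumes 'a': contravariant
    data    Ref    a = Ref    (IO a) (a -> IO ()) -- does both: invariant

    mapSource :: (a -> b) -> Source a -> Source b
    mapSource f (Source get) = Source (fmap f get)

    mapSink :: (b -> a) -> Sink a -> Sink b
    mapSink f (Sink put) = Sink (put . f)

    -- A read-write reference needs an isomorphism: functions both ways.
    mapRef :: (a -> b) -> (b -> a) -> Ref a -> Ref b
    mapRef f g (Ref get put) = Ref (fmap f get) (put . g)

    fromIORef :: IORef a -> Ref a
    fromIORef r = Ref (readIORef r) (writeIORef r)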


The best practical understanding of Petricek's work I gained was from using ClojureScript's re-frame library. In re-frame, coeffects represent the data that an event handler needs from the outside world to do its job (like the current time, random numbers, or local-storage values). This aligns with the academic concept of tracking contextual dependencies, just applied practically in a front-end framework context.

The main difference is that re-frame uses a more pragmatic, simplified version focused specifically on event handling in web applications, rather than the full mathematical formalism of the academic work.
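
The shape of a re-frame handler translates to a few lines of Haskell (a sketch with invented names, not re-frame's actual API): the handler is a pure function from a bundle of coeffects and an event payload to a description of effects, and only the surrounding machinery touches the world.

    import Control.Monad (foldM)
    import Data.Time.Clock (UTCTime, getCurrentTime)

    -- Coeffects: everything the handler needs from the outside world.
    data Coeffects = Coeffects { now :: UTCTime, appState :: Int }

    -- Effects: a pure description of what should happen, interpreted later.
    data Effect = SetState Int | Log String

    -- The handler is pure: coeffects and event payload in, effects out.
    handleTick :: Coeffects -> Int -> [Effect]
    handleTick cofx delta =
      [ SetState (appState cofx + delta)
      , Log ("ticked at " ++ show (now cofx))
      ]

    -- Only the framework-side loop touches the world: it injects the
    -- coeffects before the handler runs and interprets the effects after.
    dispatch :: Int -> Int -> IO Int
    dispatch st delta = do
      t <- getCurrentTime
      foldM run st (handleTick (Coeffects t st) delta)
      where
        run _ (SetState s') = pure s'
        run s (Log msg)     = putStrLn msg >> pure s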

Learning re-frame was a light-bulb moment for me. I highly recommend giving it a try, even if there's never a plan to use Clojure or ClojureScript in practice.


Erlang/OTP handles the issue of data-state reconciliation in a rather robust manner. However, this does not span all possible use cases efficiently.

Have a great day =3


There is a similar duality between "require" and "provide" that I've been trying to work more with recently. The idea is that we can require some behavior, verified via a test, and also provide that behavior via a fake. That might mean the two can be unified (a test is the dual of a fake), so maybe you can get tests and fakes from the same code; that's the hope, anyway.
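
A hedged Haskell sketch of that hope (all names invented): state the behavior once as an interface, then the same spec exercises a fake in unit tests and a real implementation elsewhere.

    import qualified Data.Map as Map
    import Data.IORef (newIORef, readIORef, modifyIORef)

    -- The required behavior, stated once as a record of operations.
    data KeyValue = KeyValue
      { put :: String -> Int -> IO ()
      , get :: String -> IO (Maybe Int)
      }

    -- A fake *provides* the behavior with an in-memory map.
    fakeKV :: IO KeyValue
    fakeKV = do
      ref <- newIORef Map.empty
      pure KeyValue
        { put = \k v -> modifyIORef ref (Map.insert k v)
        , get = \k -> Map.lookup k <$> readIORef ref
        }

    -- The same spec *requires* the behavior of any implementation:
    -- point it at the fake in unit tests or at the real store elsewhere.
    spec :: KeyValue -> IO Bool
    spec kv = do
      put kv "answer" 42
      r <- get kv "answer"
      pure (r == Just 42)

    main :: IO ()
    main = fakeKV >>= spec >>= print   -- prints True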

Identifying a duality means that there might be some opportunity to transfer or unify concepts, but the concepts might not be strong enough, or the duality pure enough, to be very useful. The obvious dual of a precondition is not an effect but a postcondition (like require/provide, import/export, in/out, etc.). So I'm having a hard time seeing the fruit of calling precondition and effect duals.


Things can have two dual dimensions, resulting in four concepts & four relationships.

Perhaps the fourth concept is "event", giving us:

Precondition <-> Postcondition, the relationship between a condition before & after a computation.

Event <-> Side effect, the relationship between the environment activating a computational change vs. a computation activating an environmental change.

Precondition <-> Event, the relationship of an environmental change triggering a computational change.

Side effect <-> Postcondition, the relationship of a computational change triggering an environmental change.

---

If we think of the loop as initiated and consumed either by the environment or by the computation, we get dual loops through all four steps:

We have environment changes interpreted as an event by computation, which responds with an appropriate side effect, which in turn changes the environment. Environment to environment, through computation.

Or the loop can start with a side effect, causing an environmental change, whose event lets the computation react to the environment's reaction to the initial side effect. Computation to computation, through environment.
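
One way to render the four arrows as types (just a sketch; Event, Effect, and the state types are placeholders): the same four functions support both loops, depending on which side you start from.

    -- Placeholders for the four corners.
    data Event  = Tick            -- environment -> computation
    data Effect = Beep            -- computation -> environment
    type St  = Int                -- program state
    type Env = Int                -- world state

    -- The computation's half: an event triggers a state change plus effects.
    step :: Event -> St -> (St, [Effect])
    step Tick s = (s + 1, [Beep])

    -- The environment's half: an effect changes the world and may raise events.
    world :: Effect -> Env -> (Env, [Event])
    world Beep e = (e + 1, [Tick | e < 3])

    -- Both dual loops run through the same four arrows; which loop you see
    -- depends on whether you start from an event or from an effect.
    run :: St -> Env -> [Event] -> (St, Env)
    run s e []         = (s, e)
    run s e (ev : evs) =
      let (s', effs)      = step ev s
          (e', evs')      = foldr push (e, []) effs
          push f (w, acc) = let (w', new) = world f w in (w', new ++ acc)
      in  run s' e' (evs ++ evs')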


that sounds very interesting


Besides those mentioned in the article, most everything in functional programming has a dual, usually by prefixing "co-". Cofunctor, coapplicative, comonoid, comonad, &c. All things with reversed arrows and varying degrees of usefulness.
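
The recipe is mechanical: reverse every arrow. For instance, Monad's return and join flip into Comonad's extract and duplicate (standard definitions, written out here with primed names to keep the sketch self-contained):

    -- Monad: introduce and flatten structure.
    class Functor m => Monad' m where
      return' :: a -> m a            --      a    -> m a
      join'   :: m (m a) -> m a      --   m (m a) -> m a

    -- Comonad: the same signatures with every arrow reversed.
    class Functor w => Comonad' w where
      extract   :: w a -> a          --   w a ->    a
      duplicate :: w a -> w (w a)    --   w a -> w (w a)

    -- A concrete comonad: an infinite stream focused on its head.
    data Stream a = Cons a (Stream a)

    instance Functor Stream where
      fmap f (Cons x xs) = Cons (f x) (fmap f xs)

    instance Comonad' Stream where
      extract (Cons x _)      = x
      duplicate s@(Cons _ xs) = Cons s (duplicate xs)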


That looks like category theory rather than functional programming.


That's just another path to understanding and gaining practical insights about things. While these mathematical concepts can provide rigorous foundations for understanding software patterns, they're not really essential for applying the practical knowledge of:

- How to handle failures (null vs defaults)

- How to structure APIs (separated vs combined)

- How to organize services (monolith vs microservices)

Most developers are already using those category theory concepts without even realizing it:

The functor pattern is just mapping over sequences; applicatives sequence independent operations; monads are promise chaining in JS; comonads are context management (e.g., with-open on a db connection), etc.
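
In Haskell notation, those everyday versions look like this (Promise.then in JS plays the role of >>= here):

    import Control.Applicative (liftA2)

    -- Functor: mapping over a sequence.
    doubled :: [Int]
    doubled = fmap (* 2) [1, 2, 3]              -- [2,4,6]

    -- Applicative: combining independent computations.
    summed :: Maybe Int
    summed = liftA2 (+) (Just 1) (Just 2)       -- Just 3

    -- Monad: chaining dependent computations (the promise-chain shape).
    chained :: Maybe Int
    chained = lookup "a" [("a", 1)] >>= \x ->
              lookup "b" [("b", 2)] >>= \y ->
              pure (x + y)                      -- Just 3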


The two are pretty much the same when you get down to it


In my experience, what computer scientists (in particular programming language researchers) consider to be "category theory" is very different from what mathematicians (in particular those working in algebraic geometry, algebraic topology, homological/homotopical algebra, ...) consider to be the important parts of category theory.

In my very biased and unfair perspective, the "computer science perspective" on category theory amounts to applying the first 50 introductory pages of a decent category theory textbook, while for mathematicians, category theory only starts to become somewhat interesting from, say, page 150 on, once a lot of additional mathematical concepts that actually motivate (or even necessitate) the much more complicated category theory topics have been introduced.


But why is that a problem?

Category theory is an API for mathematics that was developed with specific applications in mind, which the API seeks to unify and make easier to think about. Those application domains are algebraic geometry, algebraic topology, and homological/homotopical algebra. Every API comes with trade-offs: typically an API makes one domain easier at the cost of making other domains harder. Example: CSS is Turing complete, and I think CSS is really good at helping with styling webpages, but I would not want to write a compiler in CSS.

Computer scientists, like myself, who read from page 150 onwards have just found the API stylised for algebraic geometry, algebraic topology, homological/homotopical algebra, ... not that useful for applications in computer science. Unlike the first 50 pages, which have been very useful. More specifically, we found the cost of using purely categorical APIs not worth the benefits in many application domains. Maybe we are missing something, maybe we overlooked something. But, given the investments of computer science into category theory since the 1990s, I'd like to see more evidence for it!

To conclude with a concrete example: why would I write a compiler using an API for homotopical algebra?


Who told you that?


Don't think anybody told me that, but I do believe it to be the case


Why?


Category theory is equivalent to type theory.


It's probably more accurate to say that they are complementary rather than equivalent.

- Category theory emphasizes morphisms and composition

- Type theory emphasizes terms and computation

There are important correspondences between them (like the Curry-Howard-Lambek correspondence), but they have different strengths and different ways of expressing concepts.
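
A tiny illustration of the three-way reading (my sketch, not a claim about the full correspondence): one Haskell definition is simultaneously a term, a proof that A ∧ B implies A, and the first projection out of a categorical product.

    -- Type theory: a term of type (a, b) -> a.
    -- Logic:       a proof that A ∧ B implies A.
    -- Categories:  the projection pi1 out of the product a × b.
    proj1 :: (a, b) -> a
    proj1 (x, _) = x

    -- The product's universal property: any pair of arrows into a and b
    -- factors through an arrow into the product.
    pairing :: (c -> a) -> (c -> b) -> (c -> (a, b))
    pairing f g z = (f z, g z)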

Sorry, I couldn't resist the temptation of being a pedantic formalist.


There exists a rich theory about duality in computation, a forgotten twin of lambda calculus: sequent calculus https://ps.cs.uni-tuebingen.de/publications/ostermann22intro... I recommend you check it out if you are at least a little curious about duality in programming.

I've been thinking about duality at the core of programming language design for a while now, motivated by asynchronous computation and mainly focused on the question: how do you build programming abstractions from input-output duality? It is fascinating to see the common abstractions just naturally evolve from there.

As a preliminary result I wrote a theory of computation: http://perma-curious.eu/e3lli/core/ It describes how to go from input-output-duality to an advanced lisp dialect.


That author has a bit of a misunderstanding about Lisp:

"Lisp evaluation works by mutual recursion of eval and apply. Eval looks at an expression and if it is a function application it calls apply. Apply in turn calls eval on the arguments and invokes its function on them. This is call-by-value."

In classical Lisp and its descendants, apply is a function. Indeed, the paradigm is call by value, and therefore that function receives all of its arguments already evaluated.

When eval determines that its input form is a function call, it recurses first on eval to evaluate the argument expressions to a list of values. It then uses apply, a function that takes two arguments: the function to be applied, and a list object of values to be the arguments. apply doesn't do any evaluating, just the binding of the function to the arguments. eval needs this API, without which it has no way to pass a dynamically constructed argument list to a function.

It's not clear if there is a meaningful duality there; apply is a service required by eval. eval is not required by apply. apply can be written in terms of eval, but that eval cannot then use that apply. Also, it's mildly ugly, because apply has to carefully add quotes to the argument material to prevent eval from evaluating it again, so that just the desired effect is obtained of a function application and no other evaluations.
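
That division of labour fits in a few lines. A toy evaluator sketch (in Haskell rather than Lisp, and not a faithful classical Lisp): eval evaluates the operand expressions itself and hands finished values to apply, which only binds and invokes.

    data Expr = Lit Int | Var String | Lam String Expr | App Expr Expr
    data Val  = Num Int | Closure String Expr [(String, Val)]

    eval :: [(String, Val)] -> Expr -> Val
    eval _   (Lit n)   = Num n
    eval env (Var x)   = maybe (error ("unbound " ++ x)) id (lookup x env)
    eval env (Lam x b) = Closure x b env
    eval env (App f a) =
      -- eval itself evaluates operator and operand (call by value)...
      apply (eval env f) (eval env a)

    -- ...so apply receives only finished values: it binds, it does not
    -- evaluate argument expressions (it re-enters eval only for the body).
    apply :: Val -> Val -> Val
    apply (Closure x body env) v = eval ((x, v) : env) body
    apply (Num _)              _ = error "not a function"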


I tried digging through the article, SICP, and your comment, and here are my thoughts: I think both statements are correct in their context.

The perma-curious.eu article is right about the eval-apply cycle as presented in SICP. The SICP metacircular evaluator presents eval and apply as two mutually recursive procedures that form the core of a Lisp interpreter: eval evaluates the arguments before calling apply, and apply recurses back into eval to evaluate the procedure's body, so each genuinely depends on the other. The mutual recursion between eval and apply is indeed fundamental to Lisp, and it's clearly explained in the "wizard book". The call-by-value evaluation strategy is also correctly described - arguments are evaluated before being passed to functions. This eval-apply cycle forms the essence of Lisp's execution model.

But your comment is also valid, your correction is more precise:

- apply is indeed a function that takes already-evaluated arguments

- eval does the evaluation of arguments before calling apply

- apply simply binds and executes the function with its arguments

- The relationship isn't truly dual - eval depends on apply, but not vice versa

I think the confusion came from conflating the built-in apply function with the apply procedure used in implementing an interpreter.


SICP is not the horse's mouth in this matter, and doesn't purport to be. Its presentations are designed to suit its pedagogical aims, not to build up a historically accurate Lisp.

Last I heard, they supposedly rendered the book and course into Python.


A formal account of what the author frames as effects and preconditions: https://arxiv.org/abs/1001.1662


The reason null values are a problem is that most compilers do not track the difference between a nullable and a non-nullable type. If they did, then null values would indeed be similar to Nothing in the Maybe type.
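
In Haskell that tracking is exactly what the type system gives you (a minimal sketch): String has no null inhabitant, absence is spelled Maybe, and the compiler forces the case analysis.

    -- 'String' has no null inhabitant; absence must be spelled out.
    greet :: Maybe String -> String
    greet Nothing     = "hello, stranger"
    greet (Just name) = "hello, " ++ name

    -- Forgetting the Nothing case is caught at compile time,
    -- not as a runtime null-pointer error.
    main :: IO ()
    main = mapM_ (putStrLn . greet) [Nothing, Just "ada"]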


The tag in tagged union is an implementation detail.



