It may be worth mentioning that a 'Functor' in OCaml is a different kind of beast from the 'Functor' explained in this article (which is roughly the Haskell Functor type class).
And perhaps add something about (OCaml) modules in that case?
The whole list seems to be mostly Haskell-inspired terminology translated to JavaScript(?), so OCaml-style modules and thus functors might make less sense to describe thoroughly. (Even though the OCaml people have a better claim to be using Functor in something resembling its mathematical meaning.)
> Even though the OCaml people have a better claim to be using Functor in something resembling its mathematical meaning.
I think that is not accurate. Haskell's notion of a functor is basically taken from category theory. Same with applicative functor. It is easy to draw commutative diagrams for things like `Maybe` or `Either a` to prove it!
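To make that concrete, here is a minimal Haskell sketch (standard Prelude only; `functorLawsHold` is just an illustrative name) checking the functor laws on a couple of `Maybe` values:

    -- The two functor laws:
    --   fmap id      == id
    --   fmap (g . f) == fmap g . fmap f
    functorLawsHold :: Bool
    functorLawsHold =
      fmap id (Just 3) == Just 3                       -- identity
        && fmap ((+ 1) . (* 2)) (Just 3)
             == (fmap (+ 1) . fmap (* 2)) (Just 3)     -- composition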
In contrast, in OCaml functors are module functions. Do modules form categories? Maybe? I haven't seen anyone talking about a category of OCaml modules, perhaps in some papers. But even if they do, I wouldn't call OCaml's claim better. When one says Functor in Haskell it is generally understood as "the category theory thing". When one says functor in OCaml I think most people, myself included, would think "module function", whether it satisfies the category-theory requirements or not.
Also, it's worth pointing out that OCaml too has applicative functors. Those are functors that, when applied to the same modules, produce the same types. Whether that fits with the category-theoretic notion of an applicative functor... I haven't thought about that! Maybe?
Basically, with

    module M1 = Make(F)(G)
    module M2 = Make(F)(G)

if `M1.t` is the same type as `M2.t` for some `t`, then we have an applicative functor. The other side of the coin is generative functors: each time you apply them they mint new types:

    module M1 = Make(F)(G)()
    module M2 = Make(F)(G)()

Here `M1.t` /= `M2.t`. Generative functors were introduced in OCaml 4.02.
Basically anything not straightforwardly isomorphic to category Hask is considered to be not quite functional programming, so Haskell terminology will dominate.
As shakna wrote in another comment, the term 'functor' has different meanings across fields of mathematics. The meanings in Haskell and OCaml originate from different definitions.
I don't think they originate from different definitions. Concepts in category theory are very abstract on purpose, because it focuses on the main essence of the idea, divorced from any particular application of it. Which will greatly vary, but have a common set of properties.
In Haskell, functors are endofunctors on the category Hask. Endofunctors are the special case of functors where the source and target category are the same, and Hask is the category of Haskell types. So essentially Haskell functors are all Hask -> Hask transformations. For example, applying `fmap show` to a `Maybe Int` produces a `Maybe String`, both of which live in Hask.
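A quick GHCi-flavoured illustration (standard Prelude only):

    -- `show :: Int -> String` gets mapped to `Maybe Int -> Maybe String`,
    -- never leaving Hask:
    fmap show (Just 42)               -- Just "42"
    fmap show (Nothing :: Maybe Int)  -- Nothing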
In OCaml you can treat modules of the same signature as categories. If you look at [1] and its example for intervals, you have one category of Comparable modules and another of Interval modules. The Make_interval functor maps objects from Comparable to Interval.
So it really is the same concept, just different applications, but they follow the same theoretical framework and properties.
I assume @hemanth is happy for the project README to appear on the blog of a recruitment agency. Along with the rest of the blogs that seem to be copy pasted from the original location. Should I guess functional-works obtained permission from all the original copyright holders?
Yeah, we reached out directly to Hemanth and got written consent to re-publish as long as the attributions were visible enough - he is credited as the author, plus there is a link to the original source at the bottom.
As with all articles on our blog that are not written by us, the authors can reach out to us at any time if they want to make any changes to the article and we'll be happy to comply. As well, they can make the changes by themselves at any point.
Regarding the MIT license, we didn't think of mentioning it on the blog - but thank you for the heads-up, we'll look into it.
> We have a platform that gives visibility to the great work that’s being done in the functional space.
Wow, that's disingenuous. That's nothing like what you do. You have a platform that promotes your brand by copying and pasting other people's content into a worse presentation with your logo in a sticky navbar stuck to the top of the page.
You guessed correctly. We ask each author for prior permission to republish, although around 50% of the posts you see have been contributed directly by the authors, which we then approve before they go live on our site. We actively curate posts we feel our audience of 50,000 functional programmers will find interesting, but only publish those we have permission for.
I think the jump from 'Category' to 'Value: Anything that can be assigned to a variable.' was a somewhat jarring change in abstraction level.
Also, trying to explain Monad, Comonad, or Applicative as 'jargon' is probably a step too far IMO. They are not important for getting started with FP, and describing them without their equational properties is kind of meaningless.
That being said, I liked a lot of the inclusions: partial application, closure, composition. I think the collection is probably slightly guilty of trying to be too clever by including advanced concepts.
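For reference, the "equational properties" in question are things like the monad laws, stated here in Haskell notation:

    -- Left identity:   return a >>= f   ==  f a
    -- Right identity:  m >>= return     ==  m
    -- Associativity:   (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)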
These terms provide a concise way to communicate more complex concepts in a technical, specialized context: practically the definition of "jargon."
But that's also the reason I agree with you as regards their utility in most programming. Simply knowing the "what" of the definitions is barely a start. Those who know the definitions and are comfortable enough with the material to use them effectively would find little value in lists of this sort. For most (almost all) programming efforts outside a very narrow niche of academic applications it's not even necessary.
I think it makes sense to include some of the advanced jargon in a list of this kind because they're words that people are likely to encounter in online discussions about functional programming.
I don't think this is strictly true. Sure, there's liftA2 that does that, but as a general concept you can lift things of various kinds into a monad (or another structure like it). For example, `pure`/`return` (`of` in fantasy-land) lifts a value into an applicative/monad.
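A quick illustration of both kinds of lifting in Haskell (`liftA2` comes from Control.Applicative):

    import Control.Applicative (liftA2)

    -- Lifting a plain value into an applicative/monad:
    liftedValue :: Maybe Int
    liftedValue = pure 3                      -- Just 3

    -- Lifting a binary function over two applicative values:
    liftedSum :: Maybe Int
    liftedSum = liftA2 (+) (Just 1) (Just 2)  -- Just 3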
I'm not sure if lifting is just application of a coalgebra but I'm still learning too. :)
AFAIK lifting is informal and specifically refers to lifting functions. Confusion arises when this is accomplished via Applicative which... erm "lifts" currying itself.
> "Since these rules govern composition at very abstract level, category theory is great at uncovering new ways of composing things."
I understood this much already. What I'd like next are at least a handful (a half-dozen or so) of concrete and truly compelling examples of such "uncoverings" ;)
Monads are so pervasive that multiple languages have determined that they deserve special syntax (see: Haskell's do-notation, OCaml's let-syntax). Those syntaxes play nicely with a ton of seemingly-different types, such as promises, options (maybes/nullables), lists, monadic error handling (either, or_error), etc.
On a similar note, recognizing the structure of monads has in the past allowed me to abstract things in an interesting way. For example, say I have a dict lookup function (takes a key, returns a value) and I want to build a bunch of useful functions on top of that. But we want to use them in many different contexts - the dict might be remote or locally in memory (promises vs. not) and we may or may not be in a test context (monadic vs exception-based error-handling). By recognizing that we can just abstract over monads, we can easily handle these 4 cases (promise, promise+error monad, error monad, unit monad).
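A rough sketch of what that looks like in Haskell terms (names like `findBoth` and `lookupFn` are made up for illustration; IO, `Either e`, `Maybe`, and `Identity` can stand in for the four contexts):

    -- One helper written against an abstract monadic lookup.
    -- Instantiate m as IO for a remote dict, Either e or Maybe for
    -- monadic error handling, or Identity for a plain in-memory dict.
    findBoth :: Monad m => (k -> m v) -> k -> k -> m (v, v)
    findBoth lookupFn k1 k2 = do
      v1 <- lookupFn k1
      v2 <- lookupFn k2
      pure (v1, v2)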
I failed to find F# computation expressions, probably because I searched for the word "monad" which they seem to be avoiding, but that's another great example.
Loved the concise explanations of the What. A nice extension of this would be the Why, something I usually struggle to understand. Take currying as an example: I'm none the wiser as to why it's a preferable way of composing a function and its arguments, or when either is applicable.
Currying is a form of higher order function, and it's not necessarily preferable, but often just makes sense.
I think it is in some way overused in Haskell for example. When your function takes five different parameters, it's unlikely that the curried style really makes sense, other than that it's just convenient and idiomatic in Haskell—in many cases you would be better off using a record type for the parameters.
But if you consider for example the map function, it really makes sense to consider it a higher order function in a "curried" way. That is to say, map f makes sense on its own, because map takes a function and transforms it into a function that works on lists.
So the type

    map :: (a -> b) -> ([a] -> [b])

is really nice, and nicer than the uncurried alternative:

    map :: (a -> b, [a]) -> [b]

Of course they're isomorphic, but the curried one seems to definitely win in terms of elegance, and you avoid ever having to type

    \x -> map (f, x)

or similar.
I think if Haskell had really convenient named parameters, people would use currying less, but we'd still want to use the curried style for many functions that actually make sense as higher-order functions.
> Currying as an example, I'm none the wiser as to why it's a preferable way of composing a function and it's arguments, or when either is applicable.
The primary advantage of curried functions is that they're trivially partially applicable.
With an uncurried function, if you have a function of two arguments and you want to apply the first but "hold off" on the second, you have to wrap the thing in a new function with a single argument which will do the application later (the language may also provide a helper). With curried functions, you can just apply one of the parameters:

    let add5 = (a) => a.map((n) => n + 5)

versus

    let add5 = map (\n -> n + 5)
This means you can very easily create "small specialisations" of very generic functions.
Languages with curried functions also tend to provide features like the ability to treat operators as functions ("sections"):
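For instance, in Haskell:

    -- An operator with one argument already supplied becomes a
    -- one-argument function (a "section"):
    map (+ 5) [1, 2, 3]      -- [6, 7, 8]
    map (2 *) [1, 2, 3]      -- [2, 4, 6]
    filter (> 0) [-1, 2, 3]  -- [2, 3]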
I considered that quite a lot while contributing to the repo but it's super easy to go overboard and turn a single definition into many pages of explanation. I feel that this glossary should be just a taste of the concepts in a shallow, practical, and hand-wavy way.
To be honest, each concept like "partial application" or "applicative functors" deserves an hour-long lecture or more, but if the content went that deep I think it'd lose its current audience.
Let's say you have a validation function that matches a string with a given regex, and you need to use that same logic in several places, using a different regex. Currying allows you to write something like this:

    validate :: Regex -> String -> T
    validate r s = ...

    validateIpAddr :: String -> T
    validateIpAddr = validate ipRegex

    validateZipCode :: String -> T
    validateZipCode = validate zipCodeRegex
And so on and so forth. It can reduce boilerplate, and lets you do stuff like map partially applied functions over functors.
It's `validateIpAddr(String)`: the first argument is fixed. Suppose you have a list of strings, and you want to extract the ones that are IP addresses. Then you can just write

    filter (validate ipRegex) listOfStrings

instead of

    filter validateIpAddr listOfStrings
      where validateIpAddr string = validate ipRegex string
It's interesting how simple and understandable these concepts become with JS. I couldn't have imagined that a language written in 10 days would end up with the ability to handle all this.
The original JS borrowed heavily from Scheme (a Lisp-1 where everything is an expression) and Self (the prototype model of inheritance), and then was covered with a bunch of Java-like nonsense, which (IMO) just made things more complex.
There's been a nice functional language buried in JS all this time. It's still buried in there someplace.
Modern JS is just syntactic sugar on top of concepts introduced in the original JS. It's so flexible you have to appreciate the beauty; you can have getters/setters or classes, but at the core it's such a simple concept; you can mold it any way you like.
Those are all just particular cases of the same concept. A "map", well, maps values in a domain to values in a codomain. A mathematical function is a map. A key/value data structure can be viewed as a partial function on a non-contiguous, discrete domain. A hashmap is just an implementation detail of a key/value structure. A map over an array is again just a function mapping an array in one domain to another array in a codomain.
While that's true in a sense, I think it sweeps important details under the rug, which confuses things more than it enlightens. You can iterate over the domain of a map data structure, but not (in general) a map function.
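A small Haskell sketch of that distinction (Data.Map is from the containers package; `table` is an arbitrary example):

    import qualified Data.Map as Map

    table :: Map.Map Int String
    table = Map.fromList [(1, "a"), (2, "b")]

    -- The data structure viewed as a partial function:
    asFunction :: Int -> Maybe String
    asFunction k = Map.lookup k table

    -- We can enumerate the domain of the data structure...
    domain :: [Int]
    domain = Map.keys table
    -- ...but there is no general way to enumerate the domain of an
    -- arbitrary function Int -> Maybe String.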
For purposes of education and communication, I think it's unfortunate that we use the same name for these things.
Looks like the article on Works Hub was copied & pasted from the original source on GitHub, as others have mentioned in subthreads. Would be great if we could update the article link to the primary source to give the many contributors credit.
Yeah, I saw that and disliked that they copied the entire, very long post and then buried the attribution in one small line at the very bottom, which most people will never read down to. Without putting that up front, the site that copied it is playing in a gray area of plagiarism, where readers will presume it's original content from the top of the post.
Nice idea - it should be moved to the top to avoid confusion. However, all posts are approved by the author before they go live, so it's really down to the writer to structure the post how they wish.
I know this is off topic, but I really wish pages wouldn't hide content behind JavaScript. The page I see would work just as well as just HTML with a little CSS styling.
Indeed, the links to the sections don't work for now and we're looking into it.
Regarding the JS stuff, the site is built as an SPA and the blog is only one feature; the others are job postings and a personal dashboard where a signed-in user can pin their favorite jobs to keep them for further reference.
It's also built using ClojureScript, Reframe, Clojure & GraphQL, plus a revamped version launch is planned for mid-December, so I'm afraid the JS is still going to be there :)
Yeah, when I was young I found my comments reworded under my forum posts every so often too, with attempts to retain the meaning. That looked silly. There is always something you can learn as you grow up. (No pressure.)
Correction: Haskell jargon. The continued conflation of Haskell and functional programming as a whole annoys me greatly. (The majority of functional programming is not statically typed! Clojure and the other lisps are far more widely used than Haskell and the MLs, not even to mention that basically every language has first-class functions now.)
"Constant" is a term almost never spoken at Haskell circles, and "referential transparency" appears much more often within lispers and Clojure practitioners than between haskellers.
It is a stretch to suggest that Haskellers don't discuss referential transparency as much as Lispers do. It's a fundamentally important concept in writing and reasoning about Haskell programs.
There's a large "fish don't talk about the water" effect. When talking about Haskell, people talk about referential transparency, but when writing Haskell, it's very rare.
So you won't have to spend an eternity reverse engineering the syntax of a language you don't know. A "Learn Chinese" book written in Chinese would be a bit counterproductive if intended for Westerners.
It has always puzzled me why JavaScript people would spend an eternity picking up a new syntax. Is it the afterpain of learning the precedence or []+{} rules? It is not that bad in other languages, and no, it is not Chinese, believe me.
Apparently-idiomatic (I think—I can't tell) Haskell reads like line noise mixed with pig latin to me. I've bounced off it harder than any other programming language I've ever taken more than 15 seconds to try to understand. It makes me feel the way I assume dyslexic people feel.
It's not "Haskell syntax" - the language itself doesn't have that many syntactic constructs. The "line noise" feeling comes from the ability to define new operators with custom precedence and associativity and the fact that people use the custom operators for everything. I don't know why it is so - OCaml also allows you to define your own operators, but OCaml programmers tend to use that ability much less frequently.
Understanding Haskell syntax requires a lot more than learning the precedence of []+{}. Plus, if you already have a very chewy and hard topic to learn, adding a new PL syntax on top of it certainly won't help.
I can usually get into a new syntax fairly fast, but with languages like Haskell, where the syntax and the concepts are both new, the complexity just rises exponentially for me and I have too many questions with too few answers, so I lose interest.
Yes, you can do it in C++. But it's not a good medium for teaching those concepts. (The excellent article you linked to explains them in Haskell first, and then shows how to do them in C++ only afterwards.)
However much it might pain you, JavaScript is the most popular programming language on the planet. Most developers who are interested in learning functional programming are JavaScript developers. It would be boneheaded to use anything other than JavaScript.
> Most developers who are interested in learning functional programming are JavaScript developers.
Whatever they believe, they will probably gain much more from learning a second language (any second language) than by learning about functional programming jargon.