What’s so great about functional programming anyway? (jrsinclair.com)
402 points by redbell on Nov 16, 2022 | 569 comments



I love softcore FP (immutability, first-class functions, etc), but I always get the impression that hardcore FP, like in this article, is about reimplementing programming language concepts on top of the actual programming language used.

The pipe feels like a block of restricted statements, `peekErr` and `scan` recreate conditionals, the Maybe monad behaves like a try-catch here, etc.

Of course there are differences, like `map` working on both lists and tasks, as opposed to a naive iteration. That's cool. But this new programming language comes with limitations, like the difficulty of peeking ahead or behind during list iteration, or of reusing data between passes, not to mention the lost tooling, like stack traces and breakpoints.

I've written many (toy) programming languages, the process is exactly like this article. And it feels awesome. But I question the wisdom of presenting all this abstract scaffolding as a viable Javascript solution, as opposed to a Javascripter's introduction to Haskell or Clojure, where this scaffolding is the language.


I agree 100%.

I've worked on some big TypeScript code bases that make heavy use of the sort of FP the article promotes; dealing with stack traces and debugging the code is incredibly painful. Jumping around between 47 layers of curried wrapper functions and function composers that manage all state mostly in closures is a real drag.

Until the tooling is better, I can't recommend these idioms for real work.

TBF, there is a kind of beauty to the approach and as you really dig into it and understand it, it can feel like a revelation. There's something addictive about it. But any feeling of joy is obliterated by the pain of tracing an error in a debugger that isn't equipped to handle the idiom.


As a complement to what you said: a far better paradigm than everything-must-be-purely-functional is "write as much of your program as is practical in the functional style, and then for the (hopefully small) remnant, just give it the side-effects and mutation".

This leads to fewer errors than the imperative/object-oriented paradigms, and greater development velocity (and quicker developer onboarding, and better tools...) than the 100%-purely-functional strategy.

Hopefully, over time, we'll get functional programming techniques that can easily model more and more real-world problems (while incurring minimal programmer learning curve and cognitive overhead), making that remainder smaller without extra programmer pain - but we may never eliminate it completely, and that's ok. 100 lines of effectful code is far easier to examine and debug than the entire application, and our job as programmers is generally not to write purely functional code, but to build a software artifact.

The above applies to type systems, too, which is why gradual typing is so amazing. Usually 99.9% of a codebase can be easily statically typed, while the remaining small fraction contains logic complex enough that you have to contort your type system to address it, and in the process break junior developers' heads - often better to box it, add runtime checks, and carefully manually examine the code than resort to templates or macros parameterized on lifetime variables or something equally eldritch.

(and, like the above, hopefully as time goes on we'll get more advanced and easier-to-use type systems to shrink that remainder)


"softcore FP" is a great way to put it.

JS with Ramda (and an FRP library, if needed) is the sweet spot for me. I use a lot of pipes() but usually don't break them down into smaller functions until there is a reason to; but FP makes it trivial to do so.


It looks nice in code, but it's a hell to debug.


Yep, debugging Ramda code is a terrible experience.

We maintain a service which is making heavy use of Ramda. It seemed like the right tool for the job, because the service is mostly doing data transformation, and the code ends up "clean" and terse. However, we found that onboarding people who are not familiar with FP is time consuming and often people outright refused to learn it. We decided to ditch Ramda, rewrite the service in vanilla JS, preferring immutability where possible. We're about halfway done and it was definitely the right decision. Sure, `R.mapObjIndexed(doSomething, someObject)` is simpler compared to `Object.fromEntries(Object.entries(someObject).map(doSomething))` and now there's a ton of multi-level object spreads, but at least it's simple enough to understand for anyone familiar with JS.
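To make the comparison concrete (using a hypothetical `doSomething` that just uppercases values), note the two callbacks don't even receive the same shape of argument: Ramda passes (value, key), while the entries version passes [key, value] pairs:

    // Ramda: the callback gets (value, key)
    R.mapObjIndexed((value, key) => value.toUpperCase(), someObject);

    // Vanilla JS: the callback gets [key, value] pairs and must return a pair
    Object.fromEntries(
      Object.entries(someObject).map(([key, value]) => [key, value.toUpperCase()])
    );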

We also came up with a `pipe` function that handles Promises. It makes chaining (async) functions very convenient:

    // Compose (possibly async) functions left to right; each step's result
    // is awaited before being passed to the next.
    const pipe = (...functions) => (input) =>
      functions.reduce(
        (chain, currentFunction) => chain.then(currentFunction),
        Promise.resolve(input)
      );
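For example (the step functions here are invented for illustration), synchronous and async steps can be mixed freely, since `then` lifts plain return values into promises:

    // Hypothetical usage: mixes a sync and an async step.
    const fetchUser = (id) => Promise.resolve({ id, name: 'Ada' }); // pretend HTTP call
    const addGreeting = (user) => ({ ...user, greeting: `Hello, ${user.name}!` });

    const processUser = pipe(fetchUser, addGreeting);
    processUser(42).then(console.log); // { id: 42, name: 'Ada', greeting: 'Hello, Ada!' }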


> I love softcore FP (immutability, first-class functions, etc),

I couldn't come up with a succinct way to say exactly this, since this is very much how I am with it right now. Thanks for that.

I will add I'm comfortable with, and prefer, map/select/reduce over for loops where provided.


I've found this too. Once I got comfortable with map/reduce/filter in JS, the thought of using an imperative while or for loop feels as backwards as writing a goto/label.
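A toy example of the switch (made-up data):

    const numbers = [1, 2, 3, 4, 5];

    // Imperative
    const squaresOfEvens = [];
    for (const n of numbers) {
      if (n % 2 === 0) squaresOfEvens.push(n * n);
    }

    // Declarative
    const squaresOfEvens2 = numbers.filter((n) => n % 2 === 0).map((n) => n * n);
    // both: [4, 16]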


I think that's sort of true; to me a lot of the value of fancier FP is that you can do things "in userspace", with plain values and functions, rather than needing magical language keywords. Magical language keywords do have some advantages because they can have tight integration with the language implementation, like stack traces as you mention. But they're also hard to understand and reason about, especially when it comes to how they interact with each other. (E.g. in languages that have both builtin async and builtin exceptions, the interaction between the two tends to be complex and confusing; whereas where async and error handling are implemented by libraries, it's easy to see exactly what's happening in any scenario, since it all just follows the normal rules of the language).


I don't know Clojure, but stack traces and breakpoints are nearly unusable in Haskell also, for exactly the reason you say. It's a weakness of functional programming, not the language.

That said, you need these things less when you don't have mutable state and you do have referential transparency, so you can zoom in on the critical path of side-effecting code.


Hmm ... What's your problem with ghci debugging? IIRC it does not work any worse than e.g. pdb.

The only problem I encountered regularly was that bugs would only manifest in the optimized builds, not in the debugged ones.


> It's a weakness of functional programming, not the language.

I would strongly object to this.

Have you seen ZIO?

https://zio.dev/


FWIW stack traces and break points work fine in Clojure. See eg https://calva.io/debugger/ for some animated gifs showing this in action.

(Without tooling the stack traces can be noisy.. but that's incidental in this discussion context)


Sorry, I wouldn't dream of implying that Clojure is inferior to Javascript. My dig was at the article's implementation of half of Clojure[1], which does have tooling problems.

[1] https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule


The only thing stopping Functional Programming from taking off are functional programmers.

It took me years (and Elixir) to get past the "here are some words, some mean what you think, some mean something completely different to what you know, some mean almost the same thing. Also here's 30 overloaded operators that you have to juggle in your head, and we won't introduce things like variables for another 4 pages." without running back to something imperative to actually build something.

Functional programming's great when you have immutability to drop an entire class of bugs and the mental energy that accompanies them, when you can pass around and compose small functions easily (which makes writing, testing, and figuring things out much simpler), and when your brain clicks that "it's all data" without trying to overcomplicate it by attaching methods to it.

Honestly I think that's 90% of what the royal You needs to understand to get FP and actually try it. Selling people on "it's all stateless, man" or other ivory-tower tripe is such garbage because (as the tutorials always point out) a program with no state is useless, and then the poor schmuck has to learn about monads just to add a key to a map and print something, at which point they're asking "how was this better again?"


I think that's a bit harsh. It's frustrating to translate an imperative task to an FP language, but it's also frustrating to translate a Bash command into Python, or Prolog semantics into Go, or any program full of mutations and global state into (safe) Rust.

I think a lot of the friction you mentioned comes from learning imperative programming first. Our vocabulary, tools, and even requirements come with an implicit imperative assumption.

PS: I didn't downvote you, though your tone is harsher than I prefer for HN.


I would argue that command sequences that mutate global state are far more intuitive to humans (food recipes, furniture assembly manuals, the steps to take to fix a flat tire on your bike, etc.) than functions. So it’s not just what we learn first in a pedagogic sense. We’re already wired for imperative programming. “Thread the state of the world through the food recipe instructions” is a ridiculous concept to a normal person.


I agree in principle. However, if you work imperatively, you'll use _much_ more (global) state than if you worked with a pragmatic functional language like Clojure. Which in turn leads to normal people not understanding what's going on, since humans are built to keep, say, 10 things (atoms) in mind, not 100.


> “Thread the state of the world through the food recipe instructions”

Oh, that's how it works? That actually makes sense. Thanks!


I think it is very dependent on the problem at hand.

Some traditional CS algorithm? Mutable it is. But a compiler with different passes is a great fit for the FP paradigm.

But even if we go to pure math, you will have constructive proofs beside traditional ones.


I strongly disagree, it is definitely the teachers and advocates of FP which are lacking.

There are plenty of programs and libraries written in "imperative" languages like Python with lots of functional style applied. That should be the starting point, not sermonizing about whether a monad can even be defined with English words.


Of course you can use English words. To quote:

"A monad is just a monoid in the category of endofunctors, what's the problem?"

/s


Haskell is particularly, egregiously bad for this.

None of the concepts are particularly complex. They're unfamiliar, sort of, but they're not insanely obscure.

But then there's the labelling. If an ordinary for-loop was called a yarborough and a function call a demicurry it really wouldn't help with the learning.

I realise the concepts are supposed to map[1] to category theory, but category theory stole words like monad from elsewhere anyway.

[1] Ha.


This comment only makes sense in the monolingual USA, where learning a new language is an arduous task and not something people do all the time.

German, Polish, and Chinese programmers all learn what "for" means.

And anyway, Java calls functions "methods", yet somehow we survived.


That is somewhat true, but in practice, not really so.

Imagine you're a Chinese programmer. Obviously a lot of open source documentation is in English, so you have to learn basic English anyways (I'm bilingual from an early age, but most people struggle if they start the process even in their teens).

Then you see English-speaking programmers talking about yarborough and demicurry as if they were English words. Now you're majorly confused. Worse, an encyclopedia of philosophy tells you that it means something (see eg. https://plato.stanford.edu/entries/leibniz/#MonWorPhe ). What's the actual connection? When you ask, people casually tell you to read up on category theory, which is apparently something they teach to advanced Math majors. Remember we're struggling to learn basic English here!!!


Tangent: what I like about (the) Haskell (ecosystem) is that most often you'll find the exact right tool for a job.

Eg: in Python you get `dict`... In Haskell you get very specialized versions if you need them. Do you have integers as keys? `IntMap` is your friend. Hashable keys? `HashMap`! Don't know anything about your keys except that one key is different from another? You still have functions that treat a list of tuples as a map.


> I think a lot of the friction you mentioned comes from learning imperative programming first. Our vocabulary, tools, and even requirements come with an implicit imperative assumption.

Yes that's exactly my point and why no one cares about how great a monad is when trying to learn FP.


How is that a problem of functional programming? When you learn a language you don’t start with “Well in English we say…” you learn language with its idioms and quirks.


It's not. It's a problem with people trying to sell FP to non FP programmers. A non FP programmer doesn't care about monads. They care about making what they already do simpler and clearer.


Fair enough. I guess it's the same as when you're trying to teach imperative programming starting from classes and OOP.


> When you learn a language you don’t start with “Well in English we say…” you learn language with its idioms and quirks.

Idioms and quirks come after rote memorization of basic vocabulary and sentence structure, which are then used to learn the idioms and quirks. I think Bongo is saying FP-ers are skipping the basic vocabulary part and jumping straight to the idioms and quirks.

It was interesting in 2015/2016 to see co-workers learn some of that basic vocabulary of functional programming without realizing they were touching on it, using map/fold/etc when learning React.


s = 1; s = 2;

is the same as

unit(1).chain(add(1))

That's the beauty of immutable data.
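A minimal sketch of what `unit`/`chain` might look like here (assuming, as `chain` conventionally requires, that the chained function returns another wrapped value):

    // Minimal identity-style wrapper (names follow the comment above).
    const unit = (value) => ({
      value,
      chain: (fn) => fn(value), // fn must return another wrapped value
    });
    const add = (x) => (y) => unit(x + y);

    unit(1).chain(add(1)).value; // 2 -- each step builds a new wrapper; nothing is reassigned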


You have to admit it’s not a great example, though, and it comes off as verbose for no reason.


1. it's not

2. the beauty of immutable data isn't in hiding meaning behind meaningless piles of functional abstractions


> Functional programming's great when you have immutability to drop an entire class of bugs and the mental energy that accompanies them, when you can pass around and compose small functions easily (which makes writing, testing, and figuring things out much simpler), and when your brain clicks that "it's all data" without trying to overcomplicate it by attaching methods to it.

A hundred percent. FP, for me, is "let's stop gluing state and functions together for no reason" and getting all the benefits above. I understand pure functions, avoiding state, closures, and using first-class functions as the basic building blocks of creating higher-level structures.

When it becomes "a functor is a monoid of the schlerm frumulet" - à la the typical FP advocacy blog / latest React state-management manifesto masquerading as library documentation - I zone out. The odd thing is, I don't feel I've lost anything by doing so.


I agree that Elixir is the ideal gateway into FP. It's also quite a good argument that "FP" at its core is something more fundamental than what is talked about in this article. Elixir doesn't really use many of the category theory derived approaches at all, it has a convenient way of doing method chaining and some higher order function stuff and that's about it. And the results are excellent code.

The two main FP things that you need to learn to do Elixir well (IMO) are thinking about data structures first, and purity. Choose the right representation of the data at each stage and then what functions do you need to move between each. Make those functions side effect free (but not obsessively). Then you put your stateful code in GenServers and you have useful real code and most of it was incredibly easy to test.


Pretty much this. I've been learning Common Lisp and the way I try to design my programs is to have a somewhat clear understanding of how the data should transform from input to output. Then you write the functions.

Let's say your data is a log file and you want to load everything in memory. You write the code that returns the data, which you either store in a variable or compose with other functions. At the end, you compose all of these functions in your program. Everything is an expression, meaning that you can extract things easily into functions. This is how you make things readable. `load-log-from-file` and `auth-failure-count` are better than keeping everything together in a single function.


One takeaway from functional programming that I have incorporated into my Java (and C#) code: if/when possible, never (ever ever!!!) re-assign a local variable. (Rare exception: classic C-style for-loop iteration with a mutable `i` integer variable.) Really, really, I am not trolling / not a joke.

How? All primitive parameter values must be final, e.g., `final int level`, and local variables must always be final.

Why? Final vars are significantly easier to "reason about" when reading code, as compared to non-final (mutable) vars. If you are using containers, always, always, always use immutable containers from Google Guava unless you have an exceptionally good reason. Yes, the memory overhead will be higher, but big enterprise corps do not care about adding 4/8/16 extra GB RAM to host your Java / C# app. Yes, I know that any `Object` in Java is potentially mutable. As a work-around, try to make all `Object`s immutable where possible.

On the surface, the whole exercise feels like a terrible Computer Science nightmare. In practice, it becomes trivial to convert code from single-threaded to multi-threaded because most of your data classes are already "truly" final.
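In the thread's home language, JS, the same discipline looks roughly like this (an illustrative sketch: `const` plays the role of `final`, and Object.freeze is only a shallow stand-in for Guava's immutable containers):

    const rawUsers = [{ name: 'Ada', active: true }, { name: 'Bob', active: false }];
    const users = Object.freeze([...rawUsers]); // shallow-frozen container

    // Derive new values instead of mutating existing ones:
    const activeUsers = users.filter((u) => u.active);
    console.log(activeUsers); // [ { name: 'Ada', active: true } ]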


That's interesting. On the other hand, I've seen code go to great lengths to not do a double assign.

I doubt it was easier to reason about at all. The code set an error var on a number of if / else conditions. They then used a separate flag, so that a double write wouldn't be necessary, i.e.:

    int flag = 0;
    int err; // notice no initializer

    if (stuff) { flag = 1; err = 1; } else {}
    if (other_stuff) { flag = 1; err = 2; } else {}

    if (flag) { LogError(err); }

I follow the "avoid double write" convention too, but in some cases, it can become very difficult to reason about whether a local is uninitialized; this is a pretty terrible class of bug, because you work with garbage data off of the stack.

Setting to zero at declaration makes it much clearer that it will never be uninitialized, but it's essentially meaningless and arguably a form of dead code.

Maybe someone will suggest a redesign? More functions, smaller functions? It seems that there's a strong (though unpopular) argument that you should use medium-sized functions, and some teams insist on doing this. In some cases, it can make it easier to find the relevant code and verify it against specifications.

Edit: On second thought I can see we didn't avoid the double write, just did it in a different variable. So I don't understand what the author's point is, lol.


I've seen some imperative code in the wild that looks like this:

    bool go = true;
    if (go) doA(&go);
    if (go) doB(&go);
    if (go) doC(&go);
As a functional thinker, this is "just" an encoding of the Maybe monad, so I actually quite like it. If you care about small efficiency wins, though, it's not great -- an early bail means checking all the remaining conditions. It's cute though!
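For comparison, here's a sketch of that same flow as an explicit Maybe-ish chain (all names invented): each step returns the next value, or null to bail, and a bail-out skips the remaining work, though each step still gets checked:

    const andThen = (value, fn) => (value === null ? null : fn(value));

    const stepA = (x) => x + 1;
    const stepB = (x) => (x > 10 ? null : x * 2); // bails on large values
    const stepC = (x) => x - 3;

    andThen(andThen(stepA(5), stepB), stepC);  // 9
    andThen(andThen(stepA(15), stepB), stepC); // null -- stepC's body never runs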

I'm generally pretty okay with similar patterns over accumulators, as long as it's clear that the purpose of that variable is to be an accumulator. If we're just overwriting a variable because we don't need the old value anymore, or something like that, I'm much less charitable.

A big benefit of functional programming, for me, is to learn the safe roads as explicitly as possible, so that you can then identify (and use) them when they aren't so well signposted.


When I have a flag that is bouncing through a bunch of conditionals only to return at the end, I prefer to change it to multiple return paths instead of multiple spots to modify the flag that is being returned.

If you are doing a lot of work inside those branches it may be worth a refactor to simplify the branching logic.
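A small sketch of the two styles (invented example), with the flag version on top:

    // Flag bouncing through conditionals...
    function signFlag(x) {
      let result = 'positive';
      if (x < 0) result = 'negative';
      if (x === 0) result = 'zero';
      return result;
    }

    // ...versus multiple return paths:
    function signReturns(x) {
      if (x < 0) return 'negative';
      if (x === 0) return 'zero';
      return 'positive';
    }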


It wasn't THAT bad, but still annoying to figure out; and I would argue that this is never definitive. At least by basic inspection it's so easy to set an error.

The fact that it set off the static analyzer is very annoying too. Now I have to somehow debug the static analyzer, or just accept the risks. I've had some uncomfortable experiences assuming the analyzer is wrong and ignoring or suppressing it... theoretically, the static analysis will do a much better job than a human, so I can't help but doubt, and feel I missed something.

Just as an aside, some teams try to avoid multiple return paths. I believe there are some studies indicating a higher incidence of bugs... for a solid code-quality reason you could theoretically deviate.


I've never heard arguments against multiple return paths from a bug standpoint. It's usually from a performance-optimisation standpoint.

I don't really see how it could increase bugs over setting a flag. With a function that sets a flag there's always a risk that you change the flag accidentally after you already hit the value you want. Which is the same risk as returning earlier than you want, I suppose.

I just find the multiple return paths easier to read, debug and understand, rather than stepping through to trace the value of the flag, you just figure out which return is firing.

The more I write, the more I lean into immutability though. Bugs happen when values can change that shouldn't.


You wrote: <<multiple return paths>>

From experience, I have never once heard anyone from higher-level languages (above C & C++) complain about "multiple return paths". That said, for C, I definitely understand the need for reduced return paths due to lack of auto-free / destructor mechanics.


> If you are using containers, always, always, always use immutable containers from Google Guava unless you have an exceptionally good reason.

I actually prefer pcollections: https://github.com/hrldcpr/pcollections

AtomicReference + immutable data types is a really nice way to program in Java, and is basically the way most Clojure programs are written.


Read the example about “producers” - if I chain a whole bunch of producers together across many different Pcollections, I’m essentially forming a graph behind the scenes, since no copying is happening?

Surely this would cause some strange performance outcomes with, say, a gradually built up immutable list.


What do you mean by “strange performance outcomes”?


It's much more important to make global variables (object fields) final, not locals. Locals are much, much easier to test and reason about, and functional languages like Haskell and Clojure have great support for mutable locals (ST / transient).


Java 11+ has factory methods for unmodifiable collections: https://docs.oracle.com/en/java/javase/11/docs/api/java.base...


There's a short list of reasons to factor out a method. The most popular one is "if you see the need for inline comments for a block of code, maybe the code shouldn't be inline", but feeling the need to redefine a variable, especially in the middle of a function, is often a good indicator of a new scope starting. Sending variable foo with an increment, a merge, or a concatenation to another function is fairly readable. Having foo and foo' inline is quite a bit harder on working memory.


I agree with the other comments about it often being a hint to break apart a method, but there's still valid reasons to reassign.

I'd rather focus my energy on adding unit testing, and refactoring to be testable. If you have easy-to-read unit tests that cover all possible edge cases, I stop caring (as much) about how the actual method is written. Be as optimized/clever/concise as you want. I don't care if you reassign your local variables.


I rarely reassign local variables too. (Other than the "set to null as a fallback, then set again in a branch" pattern.)

The only excuse is when I am doing something algorithmic and that would improve the performance.

By extension I prefer local variables over class state and static state and other “self managed lifetime” stuff like that.

And I am a GC blub programmer, I don’t use Rust or C.


You wrote: <<GC blub programmer>> This is a great expression! Hat tip to Paul Graham for his Blub programming language paradox. I know the feeling. When you read too much HN, you think the whole world is writing C & Rust.


Thanks… I was being tongue in cheek of course.


> Yes, the memory overhead will be higher, but big enterprise corps do not care about adding 4/8/16 extra GB RAM to host your Java / C# app.

Stuff like this is the reason why functional programming is synonymous with inefficiency and why Real Programming™ is still done in languages like C.


It is interesting that you wrote <<why functional programming is synonymous with inefficiency>>. Most people would say <<why GC blub languages [Java, C#, etc.] are synonymous with inefficiency>>. I actually agree with my rewrite, but I deliver value much slower with non-GC languages. The IDE uplift from Java & C# is just so crazy good. It is hard to beat for many enterprise scenarios. Plus, the developers are much cheaper than C, C++, and Rust programmers.


Agreed, I see this as a code smell. It's already hard from a semantic point of view: you then have the same name for two different meanings.


> If/When possible, never (ever ever!!!) re-assign a local variable.

Neat rule, but it won't work. Firstly, as an example: even in Scala, which allows you to declare a read-only val, you still see vars used. There is no way you're going to get any traction enforcing this across the developer spectrum.

The benefits of these concepts are realized when they are the only option, hence FP, and why mixed-paradigm languages are half-assed. Java isn't even mixed-paradigm. You're wishing.


> Neat rule, but, it won't work.

You'd get the benefit of easier to reason about code everywhere you used it, even if others within the codebase don't. Using that argument, we would argue that it's never worth trying to find a cleaner way to implement something because maybe some intern some day will do something weird in a different part of the code base. We don't have to drop down to the lowest common denominator for code quality and we benefit every time we simplify things, even if not everyone does.


> Scala which allows you to declare a read only val, you still see vars used.

You see them very rarely, and usually with a narrow scope.

I agree that you probably can't enforce an absolute rule of no mutable variables. But making it the exception rather than the rule (e.g. require it to be justified in code review) makes a huge difference.


Local mutability is perfectly fine, and depending on the algorithm can be much more readable.

Use the best tool for the job.


Can you provide an example algorithm?


I'm sure this is a wonderful intellectual exercise in computer science and math, but if this is to be the advertisement in favor of functional programming, I wouldn't consider myself a customer.

You start out with code that will not win any beauty awards but it gets the job done. It's easy to understand by programmers of any seniority, it's simple to debug and reasonably easy to put in an automatic unit test.

Next, you add all kinds of vague, poorly named meta utilities that don't seem to solve any tangible real world problem. You even mess with built-in JS methods like map. The code is now harder to understand and more difficult to debug.

A massive pain is added for some theoretical purity that few even understand or will benefit from. I'll double down on my rebellion by stating that the portability of code is overrated.

Here we're writing code to format notification objects. In our overengineering mindset, we break this down into many pieces and for each piece consider reusability outside this context to be of prime importance.

Why though? Why not just solve the problem directly without these extra self-inflicted goals? The idea that this is a best practice is a disease in our industry.

Most code that is written prematurely as reusable, will in fact never be reused outside its original context. And even if it is reused, that doesn't mean that outcome is so great.

Say that the sub logic to format a user profile link turned out to be needed outside this notification context. Our foresight to have made it reusable in the first place was solid. Now two completely different functional contexts are reusing this logic. Next, the marketing department goes: well actually...you need to add a tracking param specifically for links in the notification context, and only there.

Now your "portability" is a problem. There's various ways to deal with it, and I'm sure we'll pick the most complicated one in the name of some best practice.

After 20 years in the industry, you know how I would write the logic? A single method "formatNotification". It wouldn't weirdly copy the entire object over and over again, it would directly manipulate the object, one piece at a time. Error checking is in the method as well. You can read the entire logic top to bottom without jumping into 7 files. You can safely make changes to it and its intuitive to debug. Any junior would get it in about 2 minutes.

Clear, explicit code that requires minimum cognitive parsing.


You can apply some FP notions to any code base:

1. Don't use globals. Zero global variables should be the goal; that way, you avoid "spooky action at a distance" where some code here changes a global that changes the behavior of code over there. A function that avoids global variables is easier to deal with than one that doesn't. If you feel you need to use a global, think about why you need one before adding it. Maybe you can avoid it.

2. Parameters to functions are immutable. That way, you won't have to worry about data changing during a function call. If the parameter can be mutated, can you signal the intent in the function name or signature, so the programmer knows what to expect? Generally, try to avoid changing parameters in a function.

3. Separate I/O from processing. Do input, do your processing, then output. God, I wish the major component I worked on did that (concurrent DB requests)---it would make testing the business logic so much easier as it can be tested independently from the I/O.

Those three things can be done in any language, and just by doing those three things, you can get most of the benefit of FP without the mathematical jargon (Monads? How do they work?). There's no need for over-engineered code, and you can be as explicit as you like while keeping the cognitive aspects to a minimum.
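As a rough sketch of point 3 in JS (the data source here is hypothetical):

    // Pure core: trivially unit-testable, no DB or network required.
    const summarize = (orders) => ({
      count: orders.length,
      total: orders.reduce((sum, order) => sum + order.amount, 0),
    });

    // Impure shell: does the I/O, delegates the thinking to the pure core.
    async function report(db) {
      const orders = await db.fetchOrders(); // input (hypothetical API)
      const summary = summarize(orders);     // processing
      console.log(summary);                  // output
    }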


Well done. Your comment shows actual FP best practices and their immediate, significant advantages. And you did so succinctly.


Gods I hate the term monads. I've read the wiki at least 20 times in the past year and it sounds like such bloated rubbish.

I love your answer. And I love how you can use it in every language.


You cannot really understand monads just by reading about them. Or I couldn’t, at least.

This is a very good article about the problems with monad tutorials: https://byorgey.wordpress.com/2009/01/12/abstraction-intuiti...

> “Of course!” Joe thinks. “It’s all so simple now. The key to understanding monads is that they are Like Burritos. If only I had thought of this before!” The problem, of course, is that if Joe HAD thought of this before, it wouldn’t have helped: the week of struggling through details was a necessary and integral part of forming Joe’s Burrito intuition, not a sad consequence of his failure to hit upon the idea sooner.

TL;DR There’s no beautiful explanation that will help you understand, you need to get your hands dirty with code.


Functional Programming Jargon is great at explaining these concepts. Monad: https://github.com/hemanth/functional-programming-jargon#mon...

Purists will say it's not entirely correct, but we don't care about purism :)


My totally informal definition of monads is: "a monad is an island of stateful code in a sea of stateless FP code."

Someone correct me if I'm wrong.


It is indeed wrong.

For example, the Maybe monad has no state. It sequences operations that can fail. The Error monad aborts the computation and returns an error. Similarly, Min, Max, Product, Sum, and Dual don't have any state.
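A rough JS sketch of that sequencing-of-failures idea (not Haskell's actual Maybe, just the shape of it; no state anywhere):

    const Just = (value) => ({ isNothing: false, value });
    const Nothing = { isNothing: true };
    const chain = (m, fn) => (m.isNothing ? Nothing : fn(m.value));

    const safeDiv = (a, b) => (b === 0 ? Nothing : Just(a / b));

    chain(safeDiv(10, 2), (x) => safeDiv(x, 5)); // Just(1)
    chain(safeDiv(10, 0), (x) => safeDiv(x, 5)); // Nothing -- later steps never run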


Code readability should be the topmost priority when writing code. Even more than portability and performance.

Anyone can rewrite a piece of good, succinct, clear code. Even if its performance is poor.

That cannot be said for clever hacker code that -was- fast until business requirements changed over time and now the guy that used to understand the code is long gone.


You're really fighting the wrong fight here. FP is not about prematurely writing reusable code.

It's about recognizing patterns in code that exist all over the place and reusing those existing patterns for the new code you're writing. This way you can break your new feature into say 20% newly written domain specific code and 80% reusing existing patterns. Patterns that help with composition, enforcing invariants, simplifying writing test-cases, and more clearly signaling intent.

For example, if you provide a monad instance for your "formatting notification objects" I will probably understand how to use your library without even reading a single line of implementation at all. Just by scanning a few type signatures.

This way you and your team mates have a huge head start in understanding your new addition to an existing code base. This is a great win for everyone!


I'm not fighting, only offering an opinion. We simply disagree at a fundamental level.

My first and main point is that 80% of software developers, the silent majority of so-called "bread programmers", do not grasp these concepts. So even if the concept is great, it doesn't matter.

Second, and this is purely based on experience, I do not subscribe to the idea that your envisioned way of writing a new business feature is very realistic or useful. It leads to dependency hell and difficult-to-understand code, which is difficult to debug. I don't even subscribe to the idea (anymore) that if you're asked to implement a business feature, it is implied that you should write it as if it can be torn apart and used for some imagined other feature. Nobody asked for that, we just made that up.

One can say that I've become a DRY infidel. I don't expect you to agree, very few would agree. Yet I've arrived here for a reason: 20 years of headaches looking at "best practice" codebases.


> very few would agree

I, for one, would agree.

IMHO, DRY is only useful when the underlying problem structure has a fundamental overlap with an existing problem. Incidental overlaps should be kept separate. That's where the premature-abstraction problem comes from. Often, in times of changing requirements, you can't predict where things will lead.

I suspect it's something to do with how OOP is taught. Universities start with abstract concepts and a priori principles (eg. encapsulation, polymorphism, etc.). So juniors fresh out of school tend to feel compelled to make up abstractions to be a "good citizen" so to speak.

OTOH I actually started by encountering issues with duplicate code and repeated patterns (which was a problem mainly because I had to copy and paste and retype too much code), and only afterwards I learned about how OOP techniques could help solve these problems. For me personally they were just practical tools to solve practical problems, instead of a moral imperative. I happily write very specific code to do specific things instead of pretending there's some a priori abstraction waiting to be discovered.

IMHO there's a humility in admitting that there's more than one way to look at (i.e. perceive in abstract manner) things.


> My first and main point is that 80% of software developers, the silent majority of so-called "bread programmers", do not grasp these concepts. So even if the concept is great, it doesn't matter.

Right… but is that not the point of these articles? To teach those programmers these concepts so they can get the benefits the GP commenter is talking about?

Your comment there seems to presuppose that they can never learn, unless I’m misunderstanding!


I too have become a DRY infidel. You are not alone, and I don't think few would agree.

There is nothing I love more when editing a feature than having 10 files open. :)

Or having to later "de-normalise" previously DRY work because feature X doesn't need what feature Y now does, but they share code, so now I've got to dance.


...except that has nothing to do with FP and everything to do with... actually reading code.


One of the things that you imply is another issue: inheriting someone’s functional code is brutal. I’d rather inherit mediocreville php than decipher the “clever” code that someone who got excited by scalaz polished for weeks.


I think this is key to maintainability. Provably maintainable does not equal maintainable in the real world. Unfortunate? For some. When I wrote software in company X I wrote code simply with old idioms so that anyone could understand, step through, change and fix. Code that can be inspected, stepped through and debugged line-by-line by a graduate is infinitely more useful to a business than an opaque one-line piece of beauty. Unfortunate? For some. But provably correct code is not as important as maintainability in a world of quickly changing requirements. Unfortunate? Not really an issue for the people who pay you.


Thank you for writing this. I've come to exactly the same conclusion after a decade of building and delivering complex tangible applications which others have to maintain after I'm gone.


Thanks for the support.

I used to live by all the best practices, but over time learned that the number one enemy in a codebase is complexity. 90% of software development is maintenance. People doing maintenance have low productivity because the vast majority of their time is spent trying to understand how things even work, debugging, and making changes in a way that is safe.

The reasons code is complex are abstractions and dependencies. Many of them are of questionable or negative value and never deliver their imagined benefits. Hence, instead of "abstract by default" and treating DRY as a religion, I abstract when the value is real, proven, and outweighs the massive downsides of doing so.

Imagine that we'd need to write a recipe. Sane and pragmatic people would write it as a sequential list of instructions. If a software engineer were to write one, just to follow the recipe you'd need 23 additional papers, each describing a sub-step referred to from the main instructions. Next, we would abstract each substep into "Motion" and "Activity" generalizations. This way, you can reuse "Motion" when you walk your dog later that day. Quite obviously, motions can fail, so we extend Motion with a MotionError, which is a subclass of Error. MotionError has a Logger interface. These 3 super abstractions are useful: instead of just directly saying that you failed a recipe step, we can now apply failure wherever we want, even onto life itself. Since recipes are written in a language, we should abstract Language, itself a composition of sentences, words, characters and glyphs with a Dictionary interface, which itself is an array of Name and Value objects, that we might as well generalize as Thing.

Anyway, those were recipe steps. Next, let's do ingredients, also known as immutable multivariant singletons. But not really, because as we follow the instructions, the ingredients morph into something else, which we'll solve with a Morph object, which holds an interface StateChange, consisting of two State objects, which are Things. A ThingsIterator takes care of handling multiple things, but not in a way you'd expect; we'd override map() because I really feel my Thing is special. Thinking of recipes, they can potentially come from any source, so we'll implement Verbal (an abstraction of Language), Database and the jungle that comes with it, all neatly wrapped into MoreThings.

Next we redo all of this by writing unit tests, so that our software, which by now implements most of the universe, is qualitative. It's math.

Or...we can just write the damn recipe.


It’s funny because I completely agree with your principles but completely disagree with, what I think is, your conclusion, that functional programming should be eschewed. On the contrary, pure FP, specifically in Haskell, is the only way I’ve ever found to reliably put your principles into practice.


I honestly didn't even realize that I had principles. I find them too abstract.


Your beef appears to be with over-abstraction and not functional programming. FP generally makes debugging much easier because the state of the system isn't changing in unexpected ways underneath you, so you only need to look at local state.

A "functional" recipe is just as simple and straightforward as an "imperative" recipe.

Over-abstraction on the other hand, can be introduced with imperative object oriented programming just as easily as with functional programming.


agreed with your first post, much less so with this one, because it is based on analogy. recipes are really not like programming. when i try a new recipe, i normally read the description to see what i am going to get out of it, and if it appeals, give it a bash, without any accuracy.

for example, elizabeth david (never one for accurate measurements) has a great recipe for beef with olives, which comes down to onions, garlic, bay leaves (other herbs to taste), olive oil, lemon peel, beef, black olives, red wine. then just cook them. any experienced cook will have no probs with this - the interesting thing is the combination of beef and olives. all the quantities are somewhat irrelevant and can be adjusted to taste.

whereas a computer program must be pin-point accurate in all details

analogies == bad. imho

sorry i went on a bit there


Tell me you've never cooked for your mother-in-law without saying you've never cooked for your mother-in-law

:)


Hahaha, this is delightful :)


> It wouldn't weirdly copy the entire object over and over again, it would directly manipulate the object, one piece at a time.

I think this is the only part I disagree with. Immutable data types make it much easier to understand code, because everything is local. You don't have to worry that some other part of the code base might have a reference to this object, and could be manipulating it out from under you.


In practice this doesn't really happen in the context of a function call.

Like, calling do_something(object) might call something inside it that changes the object, but it's not likely to be unknown to you.

Now if you're talking about threading: honestly, I've worked as a C++, Java, Rails and now Elixir dev for > 20 years, and the times I've built something threaded are not all that many. It just doesn't come up all that often.


In all likelihood, the web frameworks you're using in those languages use concurrency in order to serve different requests at the same time. In some languages, that will mean some sort of event loop, but more often than not it will also be done via threads.

Just because you didn't create them yourself doesn't mean they aren't there.


I agree - the problem is loads of people think they should write a framework and not a simple CRUD app.

I remember one meeting where clueless business person told developers (including me) that we need an *abstract* solution for a problem.

It took me quite some hours to explain to the other devs that for this business person an "abstract solution" is not what they think it is. The business person wanted a CRUD app where they could configure names for database entries, whereas the developers wanted to make some fancy data types to be used with parametric polymorphism.


You nailed it! Language is different in different parts of an organisation or life. A programmer says 'or' to a lay person. A lay person thinks 'xor'.


Sounds like Go is your ideal language.


False. The best language is logo turtle: http://blog.core-ed.org/files/2014/08/logo_turtle-21.png


"Instead, the most intelligent coders I knew tended to take it up; the people most passionate about writing good code."

Ah, yes, the most intelligent people tended to take it up, therefore if I take it up I, too, am intelligent.

That single line in the article encapsulates everything wrong with FP zealots' approach to their advocacy.


People often confuse intelligence with complexity.

Yes, you have to be intelligent to understand complex problems and complex solutions. But just because you are intelligent enough to understand a complicated solution does not mean the solution is "intelligent".

True intelligence is about coming up with solutions where complexity is minimized.

Even though an intelligent person can understand a complicated solution, the complexity of a solution strains even the most intelligent programmer. It takes time to assure yourself that the solution is correct. Whereas with a simple solution, that is easy. It takes intelligence to find a solution that is simple and thus easy.


Completely agree. My favorite example of this is Douglas Adams' description of flight (or orbital mechanics): "Throw yourself at the ground and miss."


That’s a great take, but I have to mention that I disagree with a statement I may have only read into the thread: FP is not more complex just for the sake of it.


I think FP is simple, uniform and very powerful. Because of that it becomes easy to express very complicated things in it, and thus write complicated programs. Like Y-Combinator. FP is simple but FP programs (often) are not.

FP is a bit like Assembler. The building blocks are very simple. The resulting programs are not. In a more conventional language the building blocks are not so uniform; there are many different things you build your program out of. Think COBOL. Not very abstract code at all. But easy to read.

Conventional languages are more like "Domain Specific Languages" where the domain is building business etc. software. FP gives you the full (?) power of mathematics, it is fully general, not domain specific. Yet DSLs are often a better fit for the task at hand.


I think FP's niche popularity is primarily because of its complexity. The complexity of it makes certain types of people feel smart, so they love it.


Oddly, I don't find FP any more complex than OOP. It's not that I'm all that smart; it's just that FP and OOP govern how you pass around data, not much else.

I do tend to find FP does attract people who've really embraced jargon.


Completely disagree. Once you finish unlearning the patterns you're used to from other languages, there's nothing simpler than FP patterns. The hard part is not learning the thing, it's unlearning what you've already learned.


Do you mean that talent is required to leverage FP effectively?

Kind of like how a faster car in the hands of a less talented driver won't make them faster, but might just cause them to crash? Whereas a talented driver would pick a faster car and actually benefit from it?

Personally I think it's not so much about intelligence or talent, but passion. I think when developers demonstrate that they've learned and tried more development techniques and styles, that shows a passion and interest towards software development, engineering, and comp-sci. That passion is often key to success and long-term performance.

There's probably a cutoff between talent and passion, where passion alone probably has some limits, but also where talent alone has limits as well. If you find someone with both though, that's a pretty good sign.

The question is, are talented developers drawn to FP, or is it passionate developers that are? Or those that have both?

And the other question is, can FP make someone more talented? Can it make them more passionate?


Isn't that a bit of an ad hominem?

You dismiss or disagree, because you don't like the author's attitude?


Eh, maybe. But when the argument itself is a bit of an ad hominem what is one left with? The usual implication of the “argument” is that if one doesn’t think FP is all that great, it’s because they don’t understand it (and, usually, that they don’t understand it because they are not intelligent).


I don't think that's what the author wanted to say... And in any case it's certainly not a very generous interpretation of what the author wrote.

IMO, the author makes an observation of familiarity bias and resistance to change. These are just how humans in general tend to behave, and it's not unreasonable to think that these factors are in play.

Moreover, there is an overlap between intelligence and curiosity / openness to new experiences.

If someone is not particularly curious about FP, they shouldn't take it personally: nobody is suggesting they're not intelligent-- that's not what's being said here.


The gazillion of lines of imperative C/C++ code out there that every OS runs on, are probably written by less than intelligent people.


Like Linus Torvalds. Lispers tend to make similar claims. So does Alan Kay about Smalltalk. Yet most of the world runs on top of C/C++, or COBOL (or Java) for financial transactions and Fortran for heavy duty numerical calculations. Not that FP, Lisp and ST don’t have their advantages. They clearly do. But their supremacy is overstated when it comes to code that runs the world.


My whole career I've worked with companies that used FP approaches and consistently beat their competition by doing so. If you look at how programming languages have evolved over the last ten or twenty years, the direction is clearly towards FP, because while the benefits take a while to trickle down, everyone recognises them. Fewer and fewer people are using C/C++ (and yet it's still responsible for most security bugs); people are indeed still using "Java", but the Java of today is closer to the Haskell of 1996 than it is to the Java of 1996.


That doesn't mean the imperative languages are "better", though, just more popular. I recommend the following video titled "Why isn't Functional Programming the Norm?": https://www.youtube.com/watch?v=QyJZzq0v7Z4


If you want to say that popular isn't better, you have to say that a lot of people are either making bad choices or lack agency.

Neither of which makes sense when you look at something like Git or Linux, where someone decided to make a whole new thing without dependencies, and it displaced the previous thing, and the people who use it don't need to care what language it was written in.


I want to say popular isn't _necessarily_ better. Your argument would be a counterexample only if I had said (which I didn't) that more popular is _always_ worse.

In the world of programming (and I guess elsewhere, too), there are simply many more aspects at play when "choosing" the programming language than purely how good the language itself is. The easiest example is Javascript, where it's simply widespread because it was the only language available in the browser. I recommend again that you watch the video :-).


Sure, I'm skeptical that any paradigm or PL is superior in general, it probably all just depends on various factors.


Ah yes, OS development, a domain famous for attracting stupid people.


FP is only part of the equation when working with meaningfully-complex systems. Having a clear data + relational model is the other big part.

I would strongly recommend reviewing the functional-relational programming model described in this paper - http://curtclifton.net/papers/MoseleyMarks06a.pdf

The overall theme goes something like: FP is for handling all your logic, and RP is for arranging all of your data. Some sort of relational programming model is the only way to practically represent a domain with high dimensionality.

For real-world applications, we asked ourselves: "Which programming paradigm already exhibits attributes of both FP and RP?" The most obvious answer we arrived at is SQL. Today, we use SQL as a FRP language for configuring our product. 100% of the domain logic is defined as SQL queries. 100% of the domain data lives in the database. The database is scoped per unit of work/session, so there are strong guarantees of determinism throughout.

Writing domain logic in SQL is paradise. Our tables are intentionally kept small, so we never have to worry about performance & indexing. CTEs and such are encouraged to make queries as human-friendly as possible. Some of our most incredibly complex procedural code areas were replaced with not even 500 characters worth of SQL text that a domain expert can directly understand.


Oh god, no! Please no!

Okay maybe it works for you, but I've worked on one of these systems for a financial engine with over 10m SQL LoC. This was a big product that was ~15 years old and used by dozens of companies having bespoke feature flags that changed significant parts of the calculation engine. Everyone except a couple grey beards who'd joined straight out of university left that place after a few years because of how insane it was to work on and we all became way too interested in good architecture design from that experience. My friends from that time who I still keep in touch with are almost entirely working on FP-based environments now.


When done poorly that is the result. And it is very easy to do poorly.

But I think the basic premise of combining functional and relational is valid. At the very least it avoids the object relational impedance mismatch.


Nearly every system I worked on that had significant business logic in SQL turned out to be a maintenance nightmare. There used to be a lot of this in the 90s and early 2000s, when DB vendors encouraged it for obvious reasons.

SQL isn't built with sensible defaults. Forget a where clause on a delete/update? Whoops. Lots of traps like that. Also, it's not easily testable, and domain concepts don't always map easily.


Part of the reason is that SQL is simply a bad language (the JavaScript of databases).

Part of the reason is also the thinking and philosophy behind DBMSs as interactive systems. Especially with regards to code.

A function doesn't just exist. You create a function and it lives in the database until you alter it or drop it.

This creates a different (temporal?) impedance mismatch with standard code management tools like version control. The result is most often a maintenance nightmare.


A few points:

- I can understand why this might appear as hell to many. SQL is unwieldy and doesn't play nice with many tools we use to make life easier.

- I agree with the fundamental premise that a combination of functional and relational principles can be highly effective for data heavy scenarios

- I disagree that pure SQL is the solution to achieve this. If using MS SQL Server, there are great opportunities to leverage the CLR and F# (with caveats, because F# gets less love than C#). You can write functional logic once and use it both outside of the database and inside the database. PostgreSQL has extensions for other languages.


Does SQL CLR support .NET Core/.NET 5+ or just framework?


Unfortunately, for now, it is just .Net Framework, even on SQL Server on Linux. This is another caveat.

I am sure in the future, .Net Core and Standard will be supported.


SQL is not functional. But otherwise I support the desire for an FP and relational language.


> SQL is not functional

I agree that you are technically correct. It is simply declarative in nature. I've got a habit of conflating these terms. Functions aren't first-class citizens in SQL. We also have some UDFs that are impure (because the real world unfortunately exists).

I'd be perfectly happy to rename this a "Declarative-Relational" programming model if that sits better with everyone.


(Pure) SQL is a Set-theoretic language.


GOOD: having large chunks of code that runs (mostly) without side-effects. As projects get bigger global state can trip you up, so try to keep it in check. That's where the real value of functional programming is.

BAD: copying data needlessly and turning trivial stuff into a functional pipeline that is impossible to debug. You want your code to read top to bottom like the problem you're trying to solve, if at all possible. If your call stack looks like map, fold, reduce you've introduced a ton of complexity, and why exactly?

Every programmer should understand functional programming. Higher order functions can be super useful. Immutable objects are invaluable for preventing subtle bugs. But if you feel the need to transform simple linear code into a map of higher order functions because of some abstract benefits you're probably doing stuff that's too easy for you and you should get your mental challenge from solving a more difficult problem instead.


I would rather inherit someone's stateful JavaScript or Java than a complicated codebase by a seasoned Clojure, Haskell, Scala developer.

I stand a better chance of understanding the sequential code than a complicated recursive (edit: nested) monad, depending on how it was written.

I can probably understand Kafka pipelines or a functional map/reduce pipeline.

My parser for my programming language which I'm building is a recursive Pratt parser but it's written in Java.

I want to understand functional programming better. I am interested in software transactional memory and parallelism. I am aware Erlang and Go use message passing but only Erlang is functional.

In theory functional code can be parallelised due to immutability, but you need some kind of communication to indicate transference of values. Imagine a pipeline where you need to share a value with a few other threads. I'm aware Haskell has software transactional memory.


> I would rather inherit someone's stateful JavaScript or Java than a complicated codebase by a seasoned Clojure, Haskell, Scala developer.

Isn't that simply because you're a Java developer (as I understand from the rest of your post)?


Agree. The first version of the example (the one using only `map`) is way easier to understand and maintain than the rigmarole the author ended up with (writing their own functors, pipes, peekErr and what not). It's like the author didn't find the first version "complex" enough, so they had to make it more complicated for the sake of "functional programming".


Also all "functional" programs have their state somewhere.. just usually much stricter isolated and manipulated - so no difference in statefulness.

> I would rather inherit someone's stateful JavaScript or Java than a complicated codebase by a seasoned Clojure, Haskell, Scala developer. I stand a better chance understanding the sequential code than a complicated recursive monad. Depending how it was written.

True, but you add the "complicated" attribute here deliberately; usually it is the other way round. Functional code is simpler to read and reason about (and usually doesn't involve many recursive monads), while the supposedly simple sequential program has many "concurrent" paths (even if there is no real concurrency, just through the many hidden ways its widely distributed state is manipulated), no?


Your comment is very useful to me.

It spurred a chain of events that you might find interesting.

I am interested in parallel software and multithreading, so that's the kind of software I enjoy writing.

You can think of the branches of single-threaded software as potential future interleavings.

Your post inspired me to think of programs as "circular" or not. That is, they return to a steady state and can function properly no matter how many times they run and whether or not the execution is interrupted at any point.

A program that is interrupted can leave its state invalid and unrecoverable.

So I started writing a parser to parse an assignment language. I plan to track objects' identities and membership over time and see if objects get stuck in processing - that is, they stop moving.

  program {
      available = {};
      assigned = {};
      main {
        program = programs {
           current_available = program.available {
                available += (current_available, 1);
                assigned - current_available;
            }
            item = program.requests {
                available (=) item {
                    program.answers += item;
                    available - item;
                    assigned + (item, 1);
                }
            }
        }
      }
    }

  program {
      requests = [];
      requests += (add, 1);
      main {
        item = answers {
            new_requests = [(add, 1)];
            available += (item, 1);
            new_request = new_requests {
                requests += new_request;
            }
          
        }
      
        answers -= answers;
      }
  }


Functional programmers are just as likely to make overly complicated code the same as a developer in a conventional language.

I'd rather inherit a complicated Java program than a complicated Scala program. But I'd take a well written Scala program over a well written Java program any day of the week.


Is there even such a thing as a “recursive monad”? That sounds made up.


No there is no such thing as a recursive monad. A monad is a type for which strictly defined operations are provided. It can’t be recursive by definition.


But hey, any strawman in the storm to make FP look bad.


I've just left a startup company whose entire backend was written in a functional style using TypeScript by an agency. The only reason I can fathom is 'why not, functional programming is cool, right?'. A new dev team was created to take over and it was a disaster. It was an absolute mess of piped, mapped, reduced functions everywhere, completely unreadable and unmaintainable. I remember getting lost in a hierarchy of about 30+ piped (non-DB-framework) functions just to write a JS object to a database. I didn't stay long.

Since I quit, the entire new engineering team quit and it looks like the company is going under. Functional programming is a big mistake for some real-world code.


> Functional programming is a big mistake for some real-world code.

Generalizing much? I write C# for a living and Elixir in my free time; I would take the Elixir codebase any day of the week. If you try to write C# in a strictly functional fashion it's going to be a shitshow as well. Moderation is the key: using immutable data structures, getting rid of side effects where possible, etc.

You had a bad experience because you and/or people you worked with simply tried to fit a square peg into a round hole, you didn't get it in, threw a tantrum and now you're blaming all the squares for being bad.

I, on the other hand, after learning a functional language, have trouble looking at most code written by pure 'OOP developers': most of it is a spaghetti shitshow of class hierarchies, dependencies, and jumping across 20 different files of factories and providers because DUHH sOlId and ClEaN. That doesn't mean that OOP is a 'mistake for real-world code'.


> I've just left a startup company who's entire backend was written ... by an agency. ... A new dev team was created to take over and it was a disaster.

I honestly think these are the important bits of this story. The startup outsourced their backend to an agency, then tried to replace the agency with a brand new internal dev team.

There's no way that story ends well, regardless of the paradigm the agency chose or how skilled they might have been (probably not very). Every codebase is unmaintainable when the team that built it is suddenly replaced.

Peter Naur had a lot to say about this in Programming as Theory Building [0]:

> The extended life of a program according to these notions depends on the taking over by new generations of programmers of the theory of the program. For a new programmer to come to possess an existing theory of a program it is insufficient that he or she has the opportunity to become familiar with the program text and other documentation. What is required is that the new programmer has the opportunity to work in close contact with the programmers who already possess the theory, so as to be able to become familiar with the place of the program in the wider context of the relevant real world situations and so as to acquire the knowledge of how the program works and how unusual program reactions and program modifications are handled within the program theory.

[0] https://gist.github.com/onlurking/fc5c81d18cfce9ff81bc968a7f...


You can make code in any paradigm suck. You can do horrible unmaintainable things in any language in any paradigm: Java with twenty layers of abstractions, Python with immense amounts of spaghetti, C with its hard to control safety. You can also do awful, abysmal imperative or OOP code in Typescript. So I just don't really see how you can single out FP here at all. Your codebase sucked, and whoever was hired to write it in a FP style just sucked at doing so. Sorry.


> So I just don't really see how you can single out FP here at all.

Not OP but imho, it’s because FP is "sold" as the perfect solution for readability and code maintainability. Just use FP and nothing can go wrong. That’s at least the impression I get when I read about FP.

The fact that one can write abysmal OOP code is nothing new.


> Just use FP and nothing can go wrong. That’s at least the impression I get when I read about FP.

To be fair that was the kind of nonsense that was being talked about OOP in the late 90's and early 2000's.

There are no silver bullets, and anyone who claims otherwise is flat wrong.

However most techniques have their advantages, when used well - and I'd say FP has more to offer than OOP in this context.


I generally agree with you. That said I do think the GP has a point w.r.t. Java and OOP. It's somewhat analogous in the sense that Java's main selling point was its ability to overdo OO abstractions.

What I don't agree with the GP is specifically the claims on other languages:

""" Python with immense amounts of spaghetti, C with its hard to control safety, abysmal imperative or OOP code in Typescript ""

These are not sold as advantages of the respective languages. Python doesn't say it's easier for the programmer to write spaghetti. C's unsafe constructs are a known side effect of being closer to the metal (and optimizing for speed at the expense of safety). Typescript's selling point is type checking, which is orthogonal to abysmal code.

But FP is just sold as making it easier to make abstractions all the way down. And for many of us FP-skeptics, that's not a selling point, that's a turn off. (And to reiterate my original point - this is the same for Java, OOP and OOP-skeptics)


> it’s because FP is "sold" as the perfect solution for readability and code maintainability. Just use FP and nothing can go wrong. That’s at least the impression I get when I read about FP.

That’s because it is. FP is not immune to incorrect implementation. Both statements are true.


> Java with twenty layers of abstractions

Only twenty? That'd be a reduction.

If your stack trace doesn't have at least 30 calls of "thingHandler.invoke()", you're not abstract enough.


> Functional programming is a big mistake for some real-world code.

The emphasis on "some" should be stronger in your comment, otherwise it reads, on a quick pass, as a broad dismissal of functional programming.

Functional programming concepts and ideas have been steadily incorporated into most mainstream languages in the last 10+ years. However, when people move past the language's functional programming primitives, that's when the project enters potentially dangerous territory. Your, and the article's, example of pipes, for one.

Personally, I'd like more languages to incorporate ADTs (algebraic data types) so I can "de"-layer programs back to mostly procedural + functional programming. And based on the current adoption rate of FP concepts, I'm not sure we're that far away from having proper ADTs and pattern matching in the most popular imperative programming languages of today.
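
For illustration, here's a rough sketch of the idea in plain JS (all names hypothetical): a tagged union plus a switch over the tag is the poor man's ADT + pattern matching, and it's what TypeScript's discriminated unions formalize.

  // a Result-style tagged union, sketched in plain JS
  const ok = (value) => ({ tag: 'ok', value });
  const err = (message) => ({ tag: 'err', message });

  // "pattern matching" by switching on the tag
  const render = (result) => {
    switch (result.tag) {
      case 'ok': return String(result.value);
      case 'err': return `failed: ${result.message}`;
    }
  };

  render(ok(42));            // '42'
  render(err('no network')); // 'failed: no network'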


Inexperienced developers that don't understand what they're doing, armed with a language known best for being worst-in-class at everything except for "running everywhere", is a recipe for disaster, no matter how you spin it.

It feels egregious to implicate "the entire surface area of functional programming" here, when there are other obvious issues at play.


> copying data needlessly and turning trivial stuff into a functional pipeline that is impossible to debug. You want your code to read top to bottom like the problem you're trying to solve, if at all possible. If your call stack looks like map, fold, reduce you've introduced a ton of complexity, and why exactly?

Hard-to-follow code is mostly a consequence of point-free style in Haskell, which is a heresy to be avoided at all costs.

ML-family languages typically use a piping operator to compose code instead, and that leads to top-to-bottom code which is extremely easy to read.

There is zero difference then between a map and a loop. Loops are not read top down anyway, nor are function calls. It seems to me you are just more used to these indirections than other ones and are therefore blind to them. This leads to an argument which I consider a straw man, personally.
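
For instance, a minimal sketch of such a pipe helper in JS (the |> operator itself is still only a proposal there):

  // data flows top to bottom, like ML's x |> f |> g
  const pipe = (x, ...fns) => fns.reduce((acc, f) => f(acc), x);

  pipe(
    '  Hello World  ',
    (s) => s.trim(),
    (s) => s.toLowerCase(),
    (s) => s.split(' ')
  ); // ['hello', 'world']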


I'm reminded of the Kernighan quote / trope: "Everyone knows that debugging is twice as hard as writing a program in the first place. So, if you're as clever as you can be when you write it, how will you ever debug it?"

This phenomenon seems to happen more frequently with FP or related tools and languages (RxJs abstraction soup comes to mind).


This sounds really cool, except it was written in 1978 and has nothing to do with modern realities of functional programming or tools.

> RxJs abstraction soup comes to mind

Merely a side effect of being implemented in a language that wasn’t really designed for it.


In my opinion it's just as true today as in 1978, if not more so, but functional programming can make code simpler to reason about and therefore debug.


> Loops are not read top down anyway nor are function calls

But their scope can be limited.


I tried googling for an example[1] and had difficulty finding a good one.

Do you have a link to a good one?

[1] https://elixirschool.com/en/lessons/basics/pipe_operator


Here is an example in ocaml, where |> is the pipeline operator:

https://cs3110.github.io/textbook/chapters/hop/pipelining.ht...


> simple linear code

If you're talking about a sequence of statements, I've rarely come across such a thing.

Firstly, the mere existence of statements in a language is an incredible complication:

- It forks the grammar into two distinct sub-languages (expressions and statements)

- Expressions compose with each other, and with statements; but statements don't compose with expressions.

- It introduces a notion of (logical) time into the semantics ('before' a statement is executed, and 'after' a statement is executed)

- The times of each statement need to be coordinated. Chaining one-after-another is easy (usually with ';'), but concurrency gives an exponential explosion of possible interleavings.

- The computer often cannot help us spot mistakes, and compilers have very little ability to optimise statements.

This is especially silly in Python, where many common tasks require statements, which often requires rewrites (e.g. we can't put statements in lambdas; hence we often need separate function defs (which themselves are statements); pulling those out may lose access to required lexically-scoped variables, requiring even more plumbing and wrapping; urgh.)

Compare this to functional code, where there's only a single language (expressions) where everything is composable; where there is no control flow (only data dependencies); there is no notion of time; mistakes are more obvious (e.g. 'no such variable' errors at compile time); compilers have lots of scope to optimise; and code is trivial to parallelise.
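
To give a tiny JS illustration of the statement/expression fork described above: `if` is a statement, so it can't appear where a value is expected, while the equivalent expression composes anywhere.

  const n = 5;
  // const label = if (n > 0) { 'pos' } else { 'neg' };  // SyntaxError
  const label = n > 0 ? 'pos' : 'neg';                   // fine: it's an expression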


Worth noting that Haskell ended up reinventing the statement/expression distinction in do-notation. So apparently the distinction does have value.

“Logical time” actually matters when you have side-effects. Of course we can agree code without side effects is simpler to reason about. Unfortunately you need side effects to do anything useful.


> “Logical time” actually matters when you have side-effects

"Time" doesn't matter; causality/dependency is what matters. That is modelled quite well by passing around values. For example:

  let hostname = lookup("hostname")
      username = lookup("username")
      password = lookup("password")
      connection = connect(hostname, username, password)
   in query(connection, "SELECT foo FROM myTable")
The 'query' call depends on the result of the 'connect' call, so it must "happen afterwards". The 'connect' call depends on the result of the three 'lookup' calls, so it must "happen afterwards". The 'lookup' calls don't depend on each other, so they're concurrent (note that concurrent does not necessarily imply parallel).

This form of data dependency is no different than, say, arithmetic: to calculate '3 × (2 + 4)', the multiplication depends on the result of the addition, so it must "happen afterwards". (The imperative equivalent would be 'push 2; push 4; +; push 3; ×;')

> Worth noting Haskell ended up reinventing the statement/expression distincion in the do-notation

Yes, do-notation "desugars" into plugging return values into arguments, like above. This gets a little tedious for things like 'IO', where we pass around a unit/null value to represent "the real world".

Still, a nice lesson from Haskell is that there's value in making things opt-in; e.g. many languages have Maybe/Option, but it's less useful in languages which allow 'null' anywhere; many languages have IO/Effect/Task/etc. but it's less useful in languages which allow I/O anywhere; etc.


Still, you're showing the simple case of code without (global) side-effects.

What if instead we said

  let connection = connect(hostname, username, password)
      _ = query(connection, "INSERT INTO t1 VALUES ('abc')")
      data = query(connection, "SELECT * FROM t1")
    in print data
The order of the two calls to `query` actually matters here, but how could the language know?

The order of execution of code is actually important every time you have such a side effect, and languages that don't strictly define it make this type of thing very error prone.


> The order of the two calls to `query` actually matters here, but how could the language know?

The language can't "know the order" of these calls, since they are not ordered. No information is passed from one call to the other, hence neither is in each other's past light cone.

If you want to impose some order, you can introduce a data-dependency between the calls; e.g. returning some sort of value from the "INSERT" call, and incorporating that into the "SELECT" call. Examples include (see the sketch after this list):

- Some sort of 'response' from the database, e.g. indicating success/failure

- GHC's implementation of IO as passing around a unit value for the "RealWorld"

- Lamport clocks

- A hash of the previous state (git, block chains, etc.)

- The 'connection' value itself (most useful in conjunction with linear types, or equivalent, to prevent "stale" connections being re-used)

- Continuation-passing style (passing around the continuation/stacktrace)
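
A sketch of the first option in JS (query, connection, and the response shape are hypothetical, assuming query returns a promise): threading the INSERT's response into the SELECT creates the dependency, and with it the order.

  query(connection, "INSERT INTO t1 VALUES ('abc')")
    .then((insertResponse) =>              // the SELECT now depends on this value...
      insertResponse.ok
        ? query(connection, 'SELECT * FROM t1')
        : Promise.reject(insertResponse)); // ...so it can't be reordered before it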

> languages that don't strictly define it make this type of thing very error prone

On the contrary, attempting to define a total order on such spatially-separated events is very error prone. Attempting to impose such Newtonian assumptions on real-world systems, from CPU cores to geographically distributed systems, leads to all sorts of inconsistencies and problems.

This is another example of opt-ins being better than defaults. It's more useful and clear to have no implicit order of calculations imposed by default, so that everything is automatically concurrent/distributed. If we want to impose some ordering, we can do so using the above mechanisms.

Attempting to go the other way (trying to run serial programs with concurrent semantics) is awkward and error-prone. See: multithreading.

See also https://en.wikipedia.org/wiki/Relativistic_programming

Note that you haven't specified the database semantics either.

Perhaps the connection points to a 'snapshot' of the contents, like in Datomic; in which case doing an "INSERT" will not affect a "SELECT". In this case, a "SELECT" will only see the results of an "INSERT" if we query against an updated connection (i.e. if we introduce a data dependency!).

Perhaps performing multiple queries against the same connection causes the database history to branch into "multiple worlds": each correct on its own, but mutually contradictory. That's how distributed systems tend to work, with various consensus algorithms to try and merge these different histories into some eventually-consistent whole.

PS: There is a well-defined answer in this example; since the "INSERT" query is dead code, it should never be evaluated ;)

PPS: Even in the "normal" case of executing these queries like statements, from top-to-bottom, against a "normal" SQL database, the semantics are under-defined. For example, if 'query' is asynchronous, the second query may race against the first (e.g. taking a faster path to a remote database and getting executed first). This can be prevented by making 'query' synchronous; however, that's just another way of saying we need a response from the database (i.e. a data dependency!)


In most programming languages, the order of the two statements would be well defined, and neither would be dead code.

Trying to make all statements implicitly concurrent unless they have explicit dependencies is a terrible way to complicate your life. The fact that in some cases you can optimize the code (or the CPU will do it for you) by executing it in other orders where that is safe is supposed to remain an invisible optimization.

It is obvious to everyone that distributed code, eventual consistency, and other similar non-totally-ordered examples are much harder to get right than procedural code. Even simple print-based debugging/logging becomes excessively complex if you get rid of the local total ordering.

Even most network-based computing is done over TCP (or, more recently, QUIC) exactly because of how enormously useful having a total order is in practice (even if it's just an illusion/abstraction).

> PS: There is a well-defined answer in this example; since the "INSERT" query is dead code, it should never be evaluated ;)

It's only dead code if you assume Haskell's bizarre lazy execution model. In virtually every other language, unless the compiler can prove that query() has no side effects, the INSERT will be executed.


> In most programming languages, the order of the two statements would be well defined, and neither would be dead code.

There are no statements in the above examples, just expressions (composed of other expressions). Evaluation order of expressions is not well defined in "most popular languages", e.g. consider this expression:

  printSecondArgument(
    query(connection, "INSERT INTO t1 VALUES ('abc')"),
    query(connection, "SELECT * FROM t1")
  )
> Trying to make all statements implicitly concurrent unless they have explicit dependencies is a terrible way to complicate your life

I agree, that's one reason why I dislike statements (see my list in a parent comment)

> It is obvious to everyone that distributed code, eventual consistency, and other similar non-totally-ordered examples are much harder to get right than procedural code.

This is a category error. All code is distributed; the world is partially-ordered. "Procedural code" (i.e. serial/sequential execution) is a strategy for dealing with that. It's particularly easy (give each step a single dependency; its "predecessor"), but also maximally inefficient. That's often acceptable when running on a single machine, and sometimes acceptable for globally-distributed systems too (e.g. that's what a blockchain is).

Forcing it by default leads to all sorts of complication (e.g. multithreading, "thread-safety", etc.). Making it opt-in gives us the option of concurrency, even if we write almost everything in some "Serial monad" (more likely, a continuation-passing transformer)


In most popular languages, the order of evaluation of both statements and expressions is specified. For your example, the insert query call is guaranteed to happen before the select query call in Java, C#, JS, Go, Python, Ruby, Rust, Common Lisp, SML. It is indeed unspecified in C, C++, Haskell, Scheme, OCaml.

While C and C++ are extremely commonly used, I would still say that the majority of popular languages fully define evaluation order. Even more so since most of these languages considered this a fix for a flaw in C. Rust is particularly interesting here, as it initially did not specify the order, but then reversed that decision after more real-world experience.

> "Procedural code" (i.e. serial/sequential execution) is a strategy for dealing with that. It's particularly easy (give each step a single dependency; its "predecessor"), but also maximally inefficient.

> Forcing it by default leads to all sorts of complication (e.g. multithreading, "thread-safety", etc.). Making it opt-in gives us the option of concurrency, even if we write almost everything in some "Serial monad" (more likey, a continuation-passing transformer)

You yourself admit that serial code is a strategy for dealing with the complexity of the world - it doesn't complicate anything, it greatly simplifies things.

Threads and other similar constructs are normally opt-in and used either to model concurrency that is relevant to your business domain, or to try to achieve parallelism as an optimization. They are almost universally seen as a kind of necessary evil - and yet you seem to advocate for introducing the sort of problems threads bring into every sequential program.

Thinking of your program as a graph of data dependencies is an extremely difficult way to program, especially in the presence of any kind of side effects. I don't think I've ever seen anyone argue that it's actually something to strive for.

Even the most complete and influential formal model of concurrent programming, Sir Tony Hoare's CSP, is aimed at making the order of operations as easy to follow as possible, with explicit ordering dependencies kept to a minimum (only when sending/receiving messages).

> That's often acceptable when running on a single machine, and sometimes acceptable for globally-distributed systems too (e.g. that's what a blockchain is).

It's not just blockchain: TCP and QUIC, ACID compliance and transactions, at-least-once message delivery in pub-subs: all of these are designed to provide an in-order abstraction on top of the underlying concurrency of the real world. And they are extremely popular because of just how much easier it is to be able to rely on this abstraction.


> You yourself admit that serial code is a strategy for dealing with the complexity of the world - it doesn't complicate anything, it greatly simplifies things.

Serial code greatly simplifies serial problems. If you want serial semantics, go for it. Parent comments mentioned Haskell, which lets us opt-in to serial semantics via things like `ST`. Also, whilst we could write a whole program this way, we're not forced to; we can make different decisions on a per-expression level. Having rich types also makes this more pleasant, since it prevents us using sequential code in a non-sequential way.

However, there is problem with serial code: it complicates concurrency. Again: if you don't want concurrency then serial semantics are fine, and you can ignore the rest of what I'm saying.

> Threads and other similar constructs are normally opt-in and used either to model concurrency that is relevant to your business domain, or to try to achieve parallelism as an optimization. They are almost universally seen as a kind of necessary evil

Such models are certainly evil, but not at all necessary. They're an artefact of trying to build concurrent semantics on top of serial semantics, rather than the other way around.

> and yet you seem to advocate for introducing the sort of problems threads bring into every sequential program.

Not at all. If you want sequential semantics, then write sequential programs. I'm advocating that concurrent programs not be written in terms of sequential semantics.

Going back to the Haskell example, if we have a serial program (e.g. using `ST`) and we want to split it into a few concurrent pieces, we can remove some of the data dependencies (either explicit, or implicit via `do`, etc.) to get independent tasks that can be run concurrently (we can also type-check that we've done it safely, if we like). That's easier than trying to run the serial code concurrently (say, by supplying an algebraic effect handler which doesn't obey some type class laws) then crossing our fingers. The latter is essentially what multithreading, and other unsafe uses of shared-mutable-state are doing.


The RealWorld type is a bit of an implementation detail in GHC. As a user, you shouldn't really have to deal with it. IO could be implemented differently under the hood, though RealWorld works well in GHC.


But then the real world rudely intervenes because connections are flaky, services are down, and you need to add logging to weird and unexpected parts of your program because a call on device A is resolved in a weird state on device B, and the backend reports that everything's a-okay.


It might be easier to explain things to oneself in terms of time though - we are not necessarily functional in thinking. e.g. I imagine trying to explain f(g(x)) versus g(f(x)) without using the word "first" or "then"


> e.g. I imagine trying to explain f(g(x)) versus g(f(x)) without using the word "first" or "then"

Use "from" and "to", which talks about data dependencies rather than temporal dependencies. Most people assume dependency order implies temporal order, and then you introduce them to lazy evaluation to decouple even that.


That's data flow, which is fine: the dependency is explicit, checked by the compiler, and maintained regardless of where the code lives.

For example, if 'foo = g(x)' is defined in one module, and another module does 'f(foo)', then the data flow is preserved. If we try to force things to be the wrong way round, we'll get compiler errors like 'No such variable foo'.

Compare that to temporal ordering of statements; if one module executes 'g(x)' and another executes 'f()', how do we ensure the latter occurs after the former? How could the compiler tell us if they're the wrong way around? Very difficult.


> pulling those out may lose access to required lexically-scoped variables

The only situation for this that I can think of is inside a list comprehension / generator expression. You are aware that you can define functions at any scope in python?


> You are aware that you can define functions at any scope in python?

  >>> lambda myVar: (def foo(): return myVar + 1, foo)[-1]
    File "<stdin>", line 1
      lambda myVar: (def foo(): return myVar + 1, foo)[-1]
                     ^
  SyntaxError: invalid syntax


I didn't mention lambdas because this is obviously transitive

  def f(myVar):
      def foo():
         return myVar + 1
      return foo


Here's some made up code:

  pruned = map(
    # Remove all elems whose foo is less than the number of elems
    lambda elems: list(filter(
      lambda elem: elem.foo < len(elems),
      elems
    )),
    lists_of_elems  # assumed input: an iterable of elem lists (map was missing its iterable)
  )
Now let's say we want a running total of all the foos. We can insert an expression to do this:

  total_foos = [0]
  pruned = list(map(  # list() forces the lazy map so the accumulator actually runs
    # Remove all elems whose foo is less than the number of elems
    lambda elems: list(filter(
      lambda elem: (
        # Add elem.foo to our total_foos accumulator
        total_foos.append(total_foos.pop() + elem.foo),
        elem.foo < len(elems)
      )[-1],
      elems
    )),
    lists_of_elems
  ))
  total_foos = total_foos.pop()
This is rather awkward and "non-Pythonic"; ideally we would use 'total_foos += elem.foo', but that can't exist in a lambda. Hence:

  total_foos = 0

  # Have to define this outside the map, since it's a statement
  def checkAndAccumulate(elems):
    """This checkAndAccumulate function is just a wrapper, for closing-over
    the elems variable (since it's not in-scope outside the map call).
    Returns the actual checking+accumulating function."""
    def checker(elem):
      """The actual function we want to filter with"""
      global total_foos  # without this, the assignment would make total_foos local
      total_foos += elem.foo
      return elem.foo < len(elems)
    return checker

  pruned = list(map(  # list() forces the lazy map so the accumulator actually runs
    # Remove all elems whose foo is less than the number of elems
    lambda elems: list(filter(
      checkAndAccumulate(elems),
      elems
    )),
    lists_of_elems
  ))


I can only quote OP on this:

> But if you feel the need to transform simple linear code into a map of higher order functions because of some abstract benefits you're probably doing stuff that's too easy for you and you should get your mental challenge from solving a more difficult problem instead.

This code is only complicated because you insist on following some abstract ideal. The actual way to solve this in python is:

  total_foos = sum(elem.foo for elem in elems)
  pruned = [e for e in elems if e.foo < len(elems)]
Which is shorter than even your first code sample. If you directly translate your last example into sensible Python, you get a nice example of some "simple linear code":

  total_foos = 0
  pruned = []
  for elem in elems:
      total_foos += elem.foo
      if elem.foo < len(elems):
          pruned.append(elem)
The existence of statements in python clearly stands in the way of achieving ideals of pure functional programming. But I think aiming for such ideals is the exact opposite of the point OP was making.


Part of the problem with "functional pipelines" as in the article is that you are in essence creating a domain-specific language to describe the problem your program is modeling and solving.

Problem is, that is not a programming language supported by anybody other than you. It is not part of the standard library. If it were, it would probably be high-quality, heavily tested-in-practice code.

But if you constructed your pipeline framework (as in the article) yourself, you now need to understand not only the solution in terms of your framework, but also exactly how the framework itself works. If this is a one-off problem and solution, what looks like simpler code in the end hides its complexity inside the framework code. And as you keep programming, the framework is a moving target: you will improve it from time to time, so understanding it becomes a moving target too. That is a real maintenance problem for you, and for anybody else who wants to use and adapt your existing code in the future.

Think about having the problem and a solution and being able to describe both in English. But then you say hey if I invent this new language Klingon and express the problem and solution in Klingon, then both the problem and solution become simpler. Cool. But now you must also program the Klingon interpreter for yourself. And be aware of which version of Klingon your code is assuming is used.

This is the tendency and cost of over-engineering. It is fun and natural to look for the "ideal solution", but it comes with a cost.


100% agree, the monstrosity the author proposes is really hard to swallow. Simple made hard.

The idea of handling validation (we get HTML instead of the expected JSON) by passing some Nothing object down the flow is horrible. If we expect JSON and get something unparsable, the best strategy is to fail fast and let the client side know that we have wrong data. Instead we show off with some fancy code structure that has zero value for the user of the software.


Agree, especially with keeping as much of the code as possible in pure functions.

I think where functional programming really shines is in combination with a good type system that can enforce those restrictions. In languages like JS I find it not so useful (even when using map, filter, etc.) because I have no guarantee that there isn't a hack somewhere in the function that I did not see.

When there are a lot of pure functions the cognitive load is reduced so much I can use the "extra capacity" to focus on the task.


In Javascript Object.freeze() and Object.seal() are wonderful to prevent accidental mutation.
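
For example (note that freeze is shallow, so nested objects stay mutable):

  'use strict';
  const config = Object.freeze({ retries: 3 });
  config.retries = 5;  // TypeError in strict mode, silently a no-op otherwise

  // Object.seal() is looser: existing properties stay writable,
  // but properties can't be added or removed.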


I did not know that. Another one is const to declare simple variables. There are some features, I agree. But how do I declare that a function returns a specific type and have it enforced? IMO, only by switching to Typescript or similar.


VSCode can enforce return types with jsdoc annotations, even when using plain javascript. VSCode uses typescript internally for all type inference, so you get that for free. Type inference works flawlessly 95% of the time, even for iterators/generators/inheritance/containers.
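
For example, a plain .js file with `// @ts-check` at the top gets flagged when an annotated return type is violated (a small sketch):

  // @ts-check

  /** @param {{ cents: number }[]} items
   *  @returns {number} total in cents */
  function total(items) {
    return items.reduce((sum, item) => sum + item.cents, 0);
  }

  /** @type {string} */
  const label = total([{ cents: 100 }]);  // error: number is not assignable to string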


Avoiding side-effects is usually lumped together with async-style interfaces under the term "functional programming". However, the two are separate properties with separate benefits and downsides:

- No side-effects makes reasoning about logic easier, code more deterministic, and tests more reliable. However, certain problems become harder to solve (e.g. simple caching or memoization; see the sketch after this list).

- Async-style interfaces allow for waiting on slow operations in a non-blocking fashion, meaning those don't occupy threads, thus a more economic use of resources. However, async-style code can be harder to read, write, and debug, though this varies heavily between languages and frameworks.
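
A sketch of the memoization point: the cache is inherently mutable state, even when the wrapped function is pure.

  const memoize = (f) => {
    const cache = new Map();  // hidden, mutable state behind a pure-looking interface
    return (x) => {
      if (!cache.has(x)) cache.set(x, f(x));
      return cache.get(x);
    };
  };

  const squared = memoize((n) => n * n);  // squared(4) computes once, then replays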


Some solutions are much easier to express in an imperative way.

You can still write pure functions and avoid state in imperative code. Use consts, immutable objects, etc. You get some of the benefits of functional programming while still writing readable code.


Map, reduce and fold are much easier to reason about than imperative loops, what’s your problem exactly?


I don't get the point of "immutable objects". Why have an object at all? Why not just hardcode the values if you want them not to change?


The point is that if you have a reference to such object, you know for sure it'll stay looking the same way at all times, no matter what happens in the program. E.g. take this pseudocode:

  let foo = MakeMeAnObject();
  let bar = MakeSomethingElse(foo);
  DoSomethingForSideEffects(foo, JoinThings(bar, baz));
  return foo;
With immutable objects, you can be certain that neither foo nor bar were modified by any of this code. So, when e.g. debugging a problem with that DoSomethingForSideEffects() call, you don't have to worry about values of foo and bar having been changed somewhere, by someone, between their introduction and the point you're debugging.

Neither of them can be modified by it in the future - e.g. someone else can't change MakeSomethingElse() to also modify its inputs, thereby accidentally breaking your code that's using this function.

Another way of looking at it: a lot of problems with reasoning about code, or parallelism, can be drastically simplified by assuming the data being passed around is only copied, and not modified in-place. "Immutable objects" as a language feature is just making this assumption the default, and enforcing it at the language/compiler level.

In terms of use, it isn't that much more inconvenient over mutable objects. You can always do something like:

  foo = DoSomeTransformation(foo);
It's just that DoSomeTransformation() is not modifying the object, but instead returning a new instance of the same object, with the relevant fields having different values. The obvious drawback here is that, for large data structures, there will be a lot of copying involved - but that's where languages with immutable objects are usually "cheating", e.g. by using sophisticated data structures masquerading as simple ones, so as to only ever copy things that have actually changed (i.e. "distinct" objects end up sharing a lot of their data under the hood).
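
In JS terms, a minimal version of such a transformation is a copy with one field swapped; spread copies shallowly, so the unchanged fields are shared rather than duplicated (names hypothetical):

  const withEmail = (user, email) => ({ ...user, email });

  const alice = Object.freeze({ name: 'Alice', email: 'a@old.example' });
  const updated = withEmail(alice, 'a@new.example');
  // alice is untouched; updated reuses everything except the email field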


> It's just that DoSomeTransformation() is not modifying the object, but instead returning a new instance of the same objects, with relevant fields having different values

I can't think of anything where I'd want that. Don't you end up needing infinite amounts of memory? Isn't it absolutely slow as balls copying all that stuff around?


Not really. You can copy the new parts and reference the old, garbage-collect dead things, etc. At least in languages where immutability is a core tenet rather than stuck on as an afterthought.

Is it as fast as mutable everything? No, some copying must happen. But are mutable-by-default languages as fast as raw hand-tuned asm? Also (probably) no, yet the trade-offs are worth it. It likely matters a lot less than you think unless you're writing genuinely performance-critical tight-loop code, as opposed to just thinking about performance.


If immutability is a part of the language, then the compiler and the runtime know about it.

This way:

- passing an object around can always be by reference, since no one can change it

- depending on structures used, for changed bits you don't need to copy the entire structure, but simply shift pointers to old data or new data

- garbage collection can become trivial, and granular, since you know exactly what's new and old, and shifting data around becomes just moving pointers around (again, depends on implementation and optimisations)

There are downsides if you are not careful, of course:

- holding on to a bunch of new data that references old data can run you out of memory, but this doesn't happen as often as you would think.

- sending data between processes/threads may result in copying that data (depends on implementation)

However, the upside is quite good: your data never changes under you. That is, a call to func(data) doesn't sneakily change data. And all data becomes trivially thread-safe, without mutexes or race conditions.


I guess I just don't get it.

Why would I want to have a thing in memory, copy it to more memory, modify the copy, then free up the original memory every time? That just seems like a waste.


> Why would I want to have a thing in memory, copy it to more memory, modify the copy, then free up the original memory every time?

You don't do that, runtime does that for you.

The main benefit is better assumptions about code you write. The prime example is how different languages handle object construction and modification:

   var date = new Date(2022, 11, 21)

   var other = date.addDays(10)

The question is: what do `date` and `other` contain?

Depending on the language the answer may surprise you. Some languages modify `date`. Others don't. And you never know which of the methods exposed on an object modify it unless you read documentation. Neither the compiler nor the type system can help you.

However, if objects are immutable, you are guaranteed that the original object, `date` is never modified. And if you need to use it again, you know it contains the same data without it suddenly changing.

This gives rise to another important property: you can send this to another thread without mutexes or without creating weird synchronization points like Java's AtomicInteger. Since the object cannot be mutated, threads don't have to fight for exclusive access to read it.
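
JavaScript's Date is a real-world instance of the mutating kind (there's no addDays, and the setters modify in place):

  const date = new Date(2022, 11, 21);
  const other = date;                   // an alias, not a copy
  other.setDate(other.getDate() + 10);
  // `date` changed too: both names point at the same mutable object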


The reason I like it is that I have spent a lot of time debugging bad code like:

  var object = object()
  if (isValid(object))
      // do something

where the isValid function modified the object in an unexpected way and caused the issue. With mutability, I lose confidence in the code and literally have to read the implementation of everything the object touches to be sure I understand what's going on. Much more relevant in bigger projects.


Suppose you want to draw a six-sided star. You could do it by drawing a triangle, then inverting that triangle and drawing it again. But suppose you want to cache the two triangles so that the next draw happens more quickly. You have some "Triangle invert(Triangle t)" function: it takes a triangle and returns the inverted version of that triangle. If that function modifies the triangle in place, then you will first have to make a copy of it to make sure that you still have the original triangle. If the code is written in a functional language, you can assume that the function will always return a copy. If it is not functional, then you may not know whether the function returns a copy or not, and it may come as a "surprise" that the function modified the triangle in place.

In theory the functional approach is more stable and predictable. But whether this really causes a lot of bugs in practice is another question.


Immutable means that you set them only once, typically at runtime - not that they have the same values always, every time you run.


"immutable" is kind of overloaded term, but what it means in this context is an object itself doesn't mutate when you want to change a value (basically value semantics[0] instead of reference semantics[1])

for example (pseudocode)

  var newUser = Person(name: "some guy", age: 0)

  // newUser is replaced with a new object that is
  // initialized with field.name and previous age (0)
  // the old object is discarded
  newUser.name = field.name

  // object is passed as copy-on-write, assuming a 1 sec delay
  // it will print the objects values at this point in time
  // (age: 0) even if altered later
  async(after:1) { print(newUser.age) } // prints 0

  // age was changed to 32 a nanosecond later,
  // but now a new object again is initialized
  // instead of mutated
  newUser.age = 32 
  print(newUser.age) // prints 32

  ----------
  output:
    32
    0
[0] https://en.wikipedia.org/wiki/Value_semantics

[1] https://stackoverflow.com/questions/27084007/what-is-value-a...


is something i wrote ↑ here incorrect or off-topic?

would appreciate a correction!


It makes code easy to understand: you can easily identify what is input and what is output, and figure out the flow of data, i.e. what is being computed based on what. Without immutable objects, a function taking objects can mutate them, and now they act as both input and output, which leads to complexity.


Those are often records from a database. There is a way to change one: make a copy of it.

I like being able to trust that something is what I think it is because it cannot be something else. Meaning: if I know that something can’t change, I don’t have to check for eventual accidental change.


But now you've got two versions of it that aren't the same. Why is that supposed to be good?


Because of aliasing. Some other functions/objects/threads may have a reference to the old version. If they were in the middle of a computation using that object, they don't have to add a whole bunch of checks to ensure that the data they're holding didn't suddenly change from underneath them. This happens a lot in concurrent programming, but even in single-threaded programs it makes reasoning about the current state of the system easier.


So you've just made the internal state of your program inconsistent? Why is this supposed to be good?


No, it's not inconsistent. If you want to call this inconsistent, then the mutable version of the program is also inconsistent.


It can't be inconsistent. You've got one copy of the data in memory, everything uses that, everything sees the same thing.

What's the use case of all this copying and shunting stuff around and consuming massive amounts of memory?

What kind of programs would you write that would benefit from this?


> It can't be inconsistent. You've got one copy of the data in memory, everything uses that, everything sees the same thing.

Wrong. For instance, state can be stored in locals. Take two threads accessing the same structure, one reading and one writing: the reader loads some state into locals, the writer then invalidates that state in the middle of the reader's computation, and the reader proceeds to compute a result mixing old and new state, leading to inconsistency. This ABA problem is well known, and it simply can't happen if the structure is immutable. And this doesn't even go into cache-coherency issues.

I frankly don't think you appreciate the number of hazards just making data structures immutable actually addresses, and given you don't seem aware of the hazards implicit to mutable data structures, I suppose that's not surprising.

> What kind of programs would you write that would benefit from this

Pretty much every single program that accesses a relational database benefits from this. There are a few of those around, in case you didn't know. Perhaps you've heard of multiversion concurrency control?


Concurrent programs where several threads read the same stream of data, for instance.

But a more common case is how I get my records from my DB. Only the ORM can create those DB records for me (because I set it up that way).

When I return a response, I compose another immutable response object.

And only my service layer can do that (it won't compile elsewhere).


> everything uses that, everything sees the same thing.

Don't you use scope? I don't think I use more than one or two global variables per system.


The idea is that a given object never changes, but it's straightforward and efficient to create a new object from the original one with some of the values changed.

This is "safer" for any code still operating on the original object, as it will not be changed unexpectedly.


It's about trusting a function call. Imagine the following C functions:

    extern result fubar(struct foo *);
    extern result snafu(struct foo const *);
Just by looking at the function prototype, I know that calling snafu() won't change the struct foo I pass it. I don't know what fubar() will do to it. Maybe it won't change, maybe it will. I'd have to check the implementation of fubar() to find out.


I'm a fan of functional programming but I'm pretty sure this post would do a terrible job of convincing anyone to try FP out. There's a very bad pattern of replicating very specific language features and control flow structures just to make them more similar to point-free Haskell, which is not going to win anybody over.

The author begins by replacing a language feature, the . operator, with a pipe() function. After that they swap out exceptions for Result, null/undefined for Maybe, and Promise for Task, and the final code ends up an obfuscated mess of wrapper functions and custom control flow for what you could write as:

    fetch(urlForData)
      .then((response) => response.json())
      .then((notifications) =>
        notifications.map((notification) => ({
          ...notification,
          readableDate: new Date(notification.date * 1000).toGMTString(),
          message: notification.message.replace(/</g, '&lt;'),
          sender: `https://example.com/users/${notification.username}`,
          source: `https://example.com/${notification.sourceType}/${notification.sourceId}`,
          icon: `https://example.com/assets/icons/${notification.sourceType}-small.svg`,
        }))
      )
      .catch((err) => { console.log(err); return fallback; });
...which is just as functional because it doesn't involve any mutation. It doesn't require several pages of wrapper functions to set up, you can tell what it does at a glance without having to look up any other pieces of code, it's gonna run faster, and it uses standard control flow, which you can easily debug using the tools you use for any other JS code.

This post has nothing to do with functional programming, this is a poor monad tutorial.


FP is the best thing since sliced bread, don't get me wrong!

But the emerging "Haskell religion" is at least as misled as the OOP religion that held the mainstream in a stranglehold for a long time.

Haskell's syntax isn't anything to imitate. It's hostile to IDE features, at least. Alone that should make it a no-go.

The next thing is that Haskell's features may make sense in the Haskell context, but they don't make any sense at all in almost any other language.

When you don't have strong types, lazy evaluation by default, or mandatory IO wrappers, it makes no sense to mimic solutions that arose out of necessities that are themselves results of the constraints of Haskell's feature design. You need to do some things in Haskell the Haskell way because Haskell is the way it is. In a different language the same solutions are at best bonkers (even if they make sense for Haskell).

As always: It's a terrible idea to start doing something because "it's cool" and the "new hot shit", without actually understanding why exactly things are (or should be) done the way they are. The "Haskell religion" is by now imho already mostly only cargo cult… The more worrisome part is that it seems it attracts more and more acolytes. This will end up badly. It will likely kill the good ideas behind FP and only leave a hull of religious customs that the cult followers will insist on; exactly like it happened to OOP in the past.

That's my personal opinion, speaking as a big FP Scala fan.


> But the seemingly arising "Haskell religion" is at least as mislead as the OOP-religion that held mainstream in stranglehold for a long time.

I wanted to say the same but you were faster than me. The article reminded me a lot of the design patterns fad in the OOP world: that compulsion to make everything abstract and reusable even if there's no use for it yet. But hey, at some point it might be needed and then it would be great!

Of course there are some cases where you really want to be ahead of what might come, e.g. public library APIs, but those are rare.


I have heard it said that both OOP and functional design are best applied only about 80% of the time. More than that and you are overfitting the method to things where it applies poorly.


Every big enough system has a few parts that would fit OOP, and a few parts that would fit FP. If your language supports those, then great, use those for them.

The overwhelming majority of problems do not fit OO, and using OOP on them makes a mess. Similarly, FP.


> Haskell's syntax isn't anything to imitate. It's hostile to IDE features, at least.

Can you elaborate on this? I haven't used Haskell in years but I recall enjoying some of its really cool IDE-friendly features, such as typed holes and the ability to press a key and have whatever is under my editor's cursor show me its type and another key to insert that inferred type into the document.

I have heard that more recent versions support nice record syntax using the '.' operator (but without whitespace, to differentiate it from composition). That's also very IDE friendly since type inference in the editor should be able to resolve the members of a data type that's been defined with record syntax.


The main problem is discovery.

When I hold some object I can easily discover the things I can do with that object—just by typing "." after the object reference.

In something like Haskell I need to know upfront what I may do with some "object". The IDE can't help me discover the methods I need. All it can do is show me all available functions in scope. Most of those functions aren't what I'm looking for, but the IDE can't know that, as it's missing the "context" in which I want to call my functions/methods.

And regarding records in Haskell: First of all, you will get beaten when you use them (that's part of the religious movement and has nothing to do with comprehensible reasons). The other part is: records are no substitute for objects. Not even close. Those are just quite primitive structs. (To learn how impractical it is to try to build objects from structs without proper language support, just ask someone who was forced to use C after coming from a more complete OO language.)


> In something like Haskell I need to know upfront what I may do with some "object". The IDE can't help me discover the methods I need. All it can do is to show me all available functions in scope.

Sorry, but this just isn't true. Hoogle <https://hoogle.haskell.org/> searches functions by type, fuzzily: ask for functions whose first parameter is the type of the object-like thing, and you'll get just what you're looking for. And it's perfectly possible to run Hoogle locally and integrate it with your editor.

Now, the tooling for a language like Java has had several centuries more of aggregate development work done on it compared to Haskell's tools, and if that polish is a difference-maker for you, that's fine! But it's not a fundamental limitation, and claiming it is is just FUD.


The main point is that the noun-first syntax of OO languages means that by just starting to write the code that you know you need, you've already given the IDE enough information to give you a list of options for which function to use. That kind of tight integration directly into the editor is hard with Haskell because you have to write out the function name first.

I could imagine an editor having some sort of "holes" capability, where I can hit a key combo to insert a hole where a function should go, provide the function arguments, then double back and fill in the hole with search results from Hoogle. Done right, such a system would be marginally harder to use than Java-style IDE completions, but not enough to be a problem. The main difficulty is that the implementation and UX complexity of such a system is far greater than with a noun-first syntax.


The approach you're describing sounds a lot like proof search in Idris and Agda (Coq too, I think), and it's nicer than you think: you're very often working with definition-stubs created by automated case-splitting, which adds holes for you. Jumping around between holes becomes how you program, not an irritating interruption.

But even aside from that, I don't think verb-first needs to be a showstopper: given that we've got type signatures, there's a lot of local context to work with. You're probably calling a function on one or more of the parameters, or else you're typing the leftmost piece of a composition chain that gives you your result type. So throw Param1Type -> a, b -> ResultType etc. at Hoogle and populate a completion list. Completion doesn't have to be perfect to be useful. The hard part would be performance: if completion isn't fast, what's the point?


I've seen this point before, but something I've never seen acknowledged is the (incorrect) presumption that it's _possible_ to have an exhaustive list of all the verbs you might want to use with a given noun.

The point of data-oriented (functional) programming is that we believe it's fundamentally unknowable what sorts of verbs a noun should support, and that it's a mistake to try to enumerate them (OOP).

I don't deny that it's very convenient to type . and get IDE autocomplete. And we should absolutely have better solutions than we do for writing functional code in IDEs. But it's a misunderstanding of your problem domain to suppose that a noun's verbs are inherently enumerable.


> But it's a misunderstanding of your problem domain to suppose that a noun's verbs are inherently enumerable.

This presumes to know what my problem domain is, and for many domains I can think of it's simply not accurate.

If my problem domain is traffic signal control, Signal has exactly one verb: transition.

If my problem domain is calendar appointment scheduling, Appointment has easily enumerable verbs: create, reschedule, delete, move. Maybe you could throw a few more in there, but there certainly aren't infinite things you can do with a calendar appointment.

If my problem domain is running a forum, Post has a few easily enumerable verbs: create, delete, edit, reply-to. As with Appointment, you could conceive of more, but quantifiably so.

I'll grant you that if my domain is something abstract like list processing, quantifying the things someone might want to do with a list becomes very difficult. But in the world of business software, it's really not.


> This presumes to know what my problem domain is, and for many domains I can think of it's simply not accurate.

> If my problem domain is traffic signal control, Signal has exactly one verb: transition.

Interesting, so you never define new "compound verbs", such as "transition and then transition again"?


There is no such presumption that it's possible to have an exhaustive list of all the "verbs" you might want to use with a given "noun".

In a C++ / Java / C# like language classes are open by default (even this creates some issues). The reasoning behind that is exactly this: it's impossible to know all "verbs" upfront.

On the other hand it's a fundamental concept in software engineering to define interfaces. Haskell calls them "type classes". (And Haskell's type-classes are coherent; which by the way creates the mirror problem to open classes).
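A minimal sketch of that correspondence (borrowing the Signal example from elsewhere in this thread; the names are mine):

  -- An "interface" with a single verb…
  class Transitionable a where
    transition :: a -> a

  data Signal = Red | Green | Yellow deriving Show

  -- …and a concrete "noun" implementing it.
  instance Transitionable Signal where
    transition Red    = Green
    transition Green  = Yellow
    transition Yellow = Red

  main :: IO ()
  main = print (transition Red)  -- Green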

> I don't deny that it's very convenient to type . and get IDE autocomplete. And we should absolutely have better solutions than we do for writing functional code in IDEs.

I see here the exact same misunderstanding as in the article we're discussing: Whether code is functional or not is not defined by mere syntax. You can use "method syntax" and write perfectly fine functional code.

More modern FP languages even embrace this. The `|>` syntax is quite popular by now. That's a syntax that's actually not so far away from the C++ token `->` for (virtual) method calls.

It's also not only the IDE issue. English, and a lot of western languages at least, use S-P-O as primary word order. Method call syntax mimics that. Whereas P-O sentences actually denote an imperative in English. "Verb noun" syntax is therefore, quite obviously, a tradition from imperative programming: `print("sentence")`, `process(data)`, `do something`… If it were written in declarative form it would look more like `"sentence".printed`, `data.processed`, `something.done()`, I guess.

Anyhow, FP and OOP are on the fundamental level even the exact same thing. ;-)

http://www.javiercasas.com/articles/codata-in-action (You should read the paper, too!)

But there's clearly a difference in practice.

The main point about FP is imho not data as such (data-oriented design is an orthogonal topic anyway); the primary ideas are immutability of data and, of course, the usage of (mostly) pure functions. This yields the greatest benefits, makes code more robust and simpler to reason about. Syntax isn't relevant to this.


> That kind of tight integration directly into the editor is hard with Haskell because you have to write out the function name first.

Sort of. But you can just write

    _ x
       ^
and get the same sort of benefit as you get from

    x.
      ^
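Concretely, a module like the following is rejected, and that's the point: the error message doubles as the completion list (recent GHCs report "valid hole fits" in the error by default; the output below is abbreviated from memory, so the exact wording may differ):

  f :: [Int] -> Int
  f xs = _ xs

  -- GHC reports, roughly:
  --   Found hole: _ :: [Int] -> Int
  --   Valid hole fits include
  --     head, last, maximum, minimum, length, product, sum, ...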


> And it's perfectly possible to run hoogle locally and integrate it with your editor.

Interesting.

Could you point to some demo of such IDE integration? Never seen it.

I'm still struggling to imagine what something like "IntelliSense" would look like in such a system. Do you have e.g. any demo video you could point to? Curious to learn more!


If I made it sound like there's something like IntelliSense today, apologies! We've got <https://github.com/haskell/haskell-mode/blob/master/haskell-...>, but it's type-a-command-and-do-a-search: it's not linked in with completion directly in the setups I've seen.

(In practice, I'm usually starting from a slightly different place: I know I want a Frob and I've got a This and a That, so I do :hoogle This -> That -> Frob and get some options. The thought-process is working backwards from the goal more than forwards from one key object in focus. A different way of working, but I'm not convinced it's less effective.)

My point though was that it's an engineering issue, not a fundamental language limitation, i.e. not a reason all future languages should shun Haskell features. The building blocks to do better at completion than Haskell currently does are there.


to quote grugbrain.dev:

>grug very like type systems make programming easier. for grug, type systems most value when grug hit dot on keyboard and list of things grug can do pop up magic. this 90% of value of type system or more to grug

>big brain type system shaman often say type correctness main point type system, but grug note some big brain type system shaman not often ship code. grug suppose code never shipped is correct, in some sense, but not really what grug mean when say correct

>grug say tool magic pop up of what can do and complete of code major most benefit of type system, correctness also good but not so nearly so much


Haha, I love this.

I wrote up something similar [0] but not quite as creative. Discoverability and Locus of Control are two of the best reasons to use OOP that rarely come up in the academic discussion of OOP as a practice.

Honestly, OOP in many cases is simply more practical.

Even imperative code has its merits; case in point: it's often much easier to debug.

[0] https://medium.com/@chrlschn/weve-been-teaching-object-orien...


> https://grugbrain.dev/

How could I miss that?

Gold! Thanks!


>but grug must to grug be true, and "no" is magic grug word. Hard say at first, especially if you nice grug and don't like disappoint people (many such grugs!) but easier over time even though shiney rock pile not as high as might otherwise be

>is ok: how many shiney rock grug really need anyway?

... I actually needed to see this today


Great, but anybody who says correctness is the point of types has completely failed to understand what types are for.

Good types do work for you, so that you don't need to write the code to do that work. Correctness is a side effect, because being wrong would take more work.


> The other part is: Records are no substitute for objects. Not even close. Those are just quite primitive structs. (To learn how impractical it is to try to build objects from structs without proper language support, just ask someone who was forced to use C after coming from a more complete OO language).

I agree with you about C and Haskell's records, but I'm curious if you'd say the same about Rust's struct + impl system? Are there problems that can only be solved with real, Smalltalk-inspired objects, or can lightweight structs do the job if supported by enough syntactic sugar?


The answer lies in why Rust added `dyn Trait`…

Indeed Rust struck some kind of sweet spot because it got the defaults right.

Syntactically it mostly mimics the OOP approach, but without the conceptual burdens behind it.

Still Rust could not do without dynamic dispatch (which is the—wrong—default in OOP).

In the (admittedly) rare cases where you need dynamic dispatch & runtime polymorphism, you just need them.

You could build this stuff by hand (and people did in the past, in languages without these features built-in) but having dedicated language-level support for it makes it much more bearable.

Structs + the impl system is static dispatch. This just can't replace dynamic dispatch. If it could, nobody would have ever invented v-tables…


Your first point is also true for procedural programming languages, but at least in Haskell, if something is, say, a functor, then you know what sort of things you can do with that value. I think it should be possible for an IDE to suggest functions if you used the reverse application operator:

  [1, 2, 3] & 
              ^-- suggestion here
Unfortunately this wouldn't be very idiomatic in Haskell, maybe in Idris with |> this would make more sense?
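For what it's worth, `&` does ship with base (in Data.Function), so the style is at least available today; a minimal sketch:

  import Data.Function ((&))

  main :: IO ()
  main =
    [1, 2, 3]
      & map (* 2)  -- [2, 4, 6]
      & sum        -- 12
      & print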


> if something is, say, a functor then you know what sort of things you can do with that value

Sure. Because you memorized all the functor methods and remembered that there is a functor instance defined for the data type at hand… ;-)

> Unfortunately this wouldn't be very idiomatic in Haskell, maybe in Idris with |> this would make more sense?

Or you could just admit that the "OOP style syntax" is superior. (I'm not saying that OOP is superior, though)


With Haskell, you learn to discover in a different way. Just think about the shape of the function you’d want, then type that into Hoogle. I learned Rust before Haskell, but prefer Hoogle to Rust’s searching via dot chaining.


> Just think about the shape of the function you’d want, then type that into Hoogle.

Do you really think this is an adequate replacement for proper IDE support?


All you work with is method calls? No free functions? No library functions?

I perform operations on stuff; if I don't know the name of the operation I intend to perform, I clearly have some reading to do. Integrated lookup for one specific class of operation, which isn't even omnipresent in class-oriented languages, isn't a killer feature for me. Nor is structuring every module as a class, even if it's completely stateless, a reasonable price to pay just to benefit from IDE-driven development.


> Nor is structuring every module as a class, even if it's completely stateless, a reasonable price to pay just to benefit from IDE-driven development.

That's a question of language.

  object SomeModule:
     def someMethod(someRandomNumber: Int) =
        s"The answer is: $someRandomNumber."
     
     def someOtherMethod = someMethod(42)


  @main def entryPoint =
     
     val someTheory = SomeModule.someMethod(23)
     
     import SomeModule.someOtherMethod
     import StringExtension.*
     
     (someOtherMethod :: "No!" :: someTheory :: Nil)
        .fold("")(_ + _ + "\n")
        .printLine()


  object StringExtension:
     extension (s: String) def printLine() = println(s)
     // Because methods compose better in a fluent style…
     // You don't need to read your code backwards!

https://scastie.scala-lang.org/Zt6xTucJRyK1IEQQgFNRLQ

That's Scala.

Two cute stateless modules. Nicely wrapped in singletons.

Not quite sure why I had to add the handy `printLine()` method to the String class myself, after the fact…

But as extending arbitrary classes with new methods without touching them is trivial, who cares.

(Frankly, the original version of this snippet did not compile: `someMethod` declared its parameter as the literal type `42`, so `someMethod(23)` failed with `Found: (23 : Int); Required: (42 : Int)`. Declaring the parameter as `Int` fixes that.)


I think a lot of this (to quote the Dude) “is just like, your opinion, man”.

Haskell is great but it’s not for everybody. It really gets you thinking about software engineering in a different way.

I wrote Haskell full time for five years, with some of the best in the industry and I can tell you that there is no real religion. I will admit the language attracts ideologists, theorists and folks who like to pioneer (which I like), but everybody is just trying to make things work - these are patterns that solve problems and guidelines that avoid problems.

Now, it’s really hard to take the good parts of Haskell and bring them to a language like JavaScript and have it feel “natural” - especially to someone who isn’t a native Haskell writer! And especially a concept as reliant on higher kinded types as effects!


JavaScript et al do have IO wrappers, they just call them promises and only force some effects through them.


The key word here was "mandatory usage".

BTW: Haskell also uses its IO wrapper only for a very narrow set of effects.

One of my favorite examples: `head []` will result in a crash at runtime. IO won't save you. (Besides that, the whole "effect wrapping" story in Haskell is shallow at best. There is no such thing as "non-observable effects" at all, as every computation needs to happen in space-time, leaving observable traces therein; you can't define away "side channels" and call the result "pure"; that's mocking reality.)
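To make the example concrete: `head` type-checks as a perfectly "pure" function yet crashes, and the usual fix has nothing to do with IO. A sketch (`safeHead` is just my name for the hand-rolled total variant):

  import Data.Maybe (fromMaybe)

  -- Partial: a "pure" value that crashes when forced.
  boom :: Int
  boom = head []

  -- Total: the failure case shows up in the type instead.
  safeHead :: [a] -> Maybe a
  safeHead (x:_) = Just x
  safeHead []    = Nothing

  main :: IO ()
  main = do
    print (fromMaybe 0 (safeHead ([] :: [Int])))  -- 0
    print boom  -- *** Exception: Prelude.head: empty list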


> One of my favorite examples: `head []`

This is realllly unidiomatic in real world Haskell. Even removed from Elm and PureScript.

Idris maybe has better effect handling for your taste. Also see: Koka, Frank (mainly a paper)


> This is realllly unidiomatic in real world Haskell.

Whether idiomatic or not does not matter. It proves my point:

IO won't save you, and even very mundane effects are not part of the game…

Idris is the "better Haskell" sure, but the effect tracking is still part of the uncanny valley (still IO monad based).

Koka is a toy, and Frank mostly "only a paper" (even though there is some code out there).

The "Frank concept" is to some degree implemented in the Unison language, though:

https://www.unison-lang.org/learn/fundamentals/abilities/

Having a notion of co-effects (or whatever you prefer to call them) is imho actually much more important than talking about effects (as effects are in fact neither values nor types—something that all the IO kludges get wrong).

I think the first practicable approach in the mainstream about this topic will be what gets researched and developed for Scala. The main takeaway is that you need to look at things from the co-effects side first and foremost!

In case anybody is interested in what happens in Scala land in this regard:

https://www.slideshare.net/slideshow/embed_code/key/aLE9M37d...

https://docs.scala-lang.org/scala3/reference/experimental/cc...

But also the development in OCaml seems interesting:

https://github.com/ocaml-multicore/eio#design-note-capabilit...

Look mom, "effects", but without the monad headache!


> Idris is the "better Haskell" sure, but the effect tracking is still part of the uncanny valley (still IO monad based).

Is it? I thought they track effects in a special part of the type system so as not to do it monadically. Possibly both are possible.


> This is realllly unidiomatic in real world Haskell.

Yet it is the actual behaviour in the stdlib Prelude.


Yes. Because it's hard to remove things from the stdlib. More languages suffer from that.

What's nice is that the Haskell community offers several alternative Preludes; mostly fixing the problem you describe.

Hence I find the example a bit disingenuous.


Yep. I think it is needed to satisfy some category theory thing but I never got a good answer on that. I like to use nri-prelude since I went from Elm to Haskell and it has the same idioms, to wit `head : List a -> Maybe a`.


From GHC 9.6 there will be a warning against using head or tail. We don't actually want to remove them (yet) since that would break a lot of code.


I'm biased but I find the pattern for FP in LISP-like languages easier to understand. I think that and other languages with FP facilities are better to emulate than Haskell, which seems more focused on mathematicians.


This opinion is also biased. We have no theoretical method for determining which design philosophy is better than the other.

We can't know whether the OOP religion is better, we also can't know if the Haskell religion is better, and we can't know whether NEITHER is better. (this is key, even the neutral point of view where both are "good" can't be proven).

We do have theories to determine algorithmic efficiency. Computational complexity allows us to quantify which algorithm is faster and better. But whether that algorithm was better implemented using FP concepts or OOP concepts, we don't know... we can't know.

A lot of people like you just pick a random religion. It may seem more reasonable and measured to pick the neutral ground. But this in itself is A Religion.

It's the "it's all apples and oranges approach" or the "FP and OOP are just different tools in a toolbox" approach.... but without any mathematical theory to quantify "better" there's no way we can really ever know. Rotten apples and rotten oranges ALSO exist in a world full of apples and oranges.

You can't see it but even on an intuitive level this "opinion" is really really biased. It seems reasonable when you have two options to choose from "OOP" and "FP", but what if you have more options? We have Declarative programming, Lisp style programming, assembly language programming, logic programming, reg-exp... Are we really to apply this philosophy to ALL possible styles of programming? Is every single thing in the universe truly apples and oranges or just a tool in a toolbox?

With this many options it's unlikely. Something must be bad, something must be good and many things are better than other things.

I am of the opinion that normal procedural and imperative programming with functions is superior to OOP for the majority of applications. I am not saying FP is better than imperative programming, I am saying OOP is overall a bad tool even compared with normal programming. But I can't prove my opinion to be right, and you can't prove it to be wrong.

Without proof, all we can do is move in circles and argue endlessly. But, psychologically, people tend to fall for your argument because it's less extreme, it seemingly takes the "reasonable" mediator approach. But like I said even this approach is one form of an extreme and it is not reasonable at all.

I mean your evidence is just a bunch of qualitative factoids. An opponent to your opinion will come at you with another list of qualitative factoids. You mix all the factoids together and you have a bigger list of factoids with no definitive conclusion.


> without any mathematical theory to quantify "better" there's no way we can really ever know. Rotten apples and rotten oranges ALSO exist in a world full of apples and oranges.

So you believe that the only way things can be compared is on quantitative measurements? Not with how they impress their users within whatever context they're in?

> I mean your evidence is just a bunch of qualitative factoids. An opponent to your opinion will come at you with another list of qualitative factoids. You mix all the factoids together and you have a bigger list of factoids with no definitive conclusion.

This is the process in which we gain knowledge in an uncertain world. I guess you could take the nihilistic stance and ignore it, but what's the use of arguing with nihilists?


>So you believe that the only way things can be compared is on quantitative measurements? Not with how they impress their users within whatever context they're in?

No but I believe that quantitative measurements are the ONLY way to definitively verify certain things.

>This is the process in which we gain knowledge in an uncertain world. I guess you could take the nihilistic stance and ignore it, but what's the use of arguing with nihilists?

I'm not ignoring anything. I'm saying especially for programming, nobody knows anything. Which is actually better OOP or FP? Nobody knows. This isn't philosophy, there is no definitive proof for which is better.


> Computational complexity allows us to quantify which algorithm is faster and better. But whether that algorithm was better implemented using FP concepts or OOP concepts, we don't know... we can't know.

The CPUs code runs on are imperative, with a lot of complexities and details hidden from programmers by magic the CPU does involving things like reordering and automatic parallelization.

However, none of the current languages are great at writing code that maps to how the CPU works. One can comment that functional programming does a better job of breaking up data dependencies, but imperative code can also do that just fine.

The problem with mapping paradigms to performance is that none of the paradigm purists care about performance, end of the day they care about theoretical purity.

CPUs don't care about paradigms, they care about keeping execution units busy and cache lines filled up.


>The problem with mapping paradigms to performance is that none of the paradigm purists care about performance, end of the day they care about theoretical purity.

It's not theoretical purity. It's more about tech debt. How do I code things in a way where there's zero tech debt, such that all code can be re-used anywhere at any time?


Separate problem!

Answer is good code review and design practices. Real CRs early on in the process, not just before signing off on feature complete.

I've seen horribly unusable code that was "good examples" of both OOP and FP. The OOP peeps have so much DI going on that tracing what actually happens is impossible, not to even get started on debugging.

The FP purists have so many layers of indirection before stuff actually happens (partial function application all over the place and then shove everything through a custom built pipe operator and abuse the hell out of /map/ to do the simplest of things).

Meanwhile some crusty old C programmer writes a for loop and gets the job done in 10, obvious, easy to read, lines.


I am from the camp that FP code produces much less tech debt than other forms of programming.

But the problem here is that no one here can prove or disprove what I just said. And that is the point of my thread.

In fact I believe tech debt is a fuzzy word that is ripe for formalization, such that we can develop a theory around what tech debt is and how to definitively eliminate it through calculation... the same way you calculate the shortest distance between two points. I believe that the FP style is a small part of that theory.

But that's besides the point. Because since this theory doesn't exist yet, you and I have no way of verifying anything. You can leave me a code review and I can disagree with every qualitative opinion you leave in it.


> We have no theoretical method for determining which design philosophy is better than the other.

We do have a theoretical method. It's the scientific method. Other than that, I'm largely of the same thinking. Also, confusing language implementation with overall design is a major source of confusion (eg Java vs OOP vs Erlang vs FP vs Haskell, etc)

How to measure "better" and how the data is interpreted, are the major stopping points to improving software language usability. There have been some attempts (re: Quorum). Classic OOP (inheritance, et al) is simpler to use than mixins for many projects. So now we have to worry about project size as another axis. Then we have to address the issue of median developer effort. What about memory? How do you weigh FP allocate-stack-as-a-for-loop vs reduced mutability? It's more complex than FP good OOP bad.


[flagged]


[flagged]


Breaking the site guidelines like this will get you banned on HN. We've already had to ask you once not to do that: https://news.ycombinator.com/item?id=23334274.

If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.


[flagged]


We've banned this account for repeatedly posting flamewar comments - not just in this thread but in many threads - and ignoring our requests to stop.

Please don't create accounts to break HN's rules with. It will eventually get your main account banned as well.

See also https://news.ycombinator.com/item?id=33662814


Nobody argued for any "better".

The point is: When things become a religion with cargo culting acolytes even the "best" approaches stop making sense.

That's completely independent of the concrete religion at hand.

I did not argue to "pick sides"!

In the end all the approaches are just tools. You need to wield them wisely for optimal results.


This is the problem. You didn't even read my argument. Go read it again, carefully, instead of skimming through it.

My point is:

Maybe one of these religions is right. Maybe something is the best. Maybe a side must be picked.

You didn't argue for better. You argued that everything is the same, that all things are good and nothing is bad and that every single thing in the programming universe is a tool in a toolbox.

I disagree. Violently.

The point is neither the culting acolytes OR people like you can prove it either way.

But calling people who don't share your opinion "culting acolytes" is manipulative. The words have negative connotations and it's wrong. Extreme opinions in science and logic are often proven to be true; they are often validated. To assume that anyone without a neutral opinion is a cultist is very biased in itself.

Here's a good analogy: I believe the world is round. I'm an extremist. You on the other hand embrace all theories as tools in a toolbox. The world could be round or it could be flat; you're the reasonable neutral arbiter taking the side of neither the flat-earther nor the round-earther.

The illusion now is clearer. All 3 sides are a form of bias, but clearly our science says only one of these sides is true, and this side is NOT the "neutral arbiter" side.


I did not argue for "everything is the same".

Quite the contrary.

I've said: Everything depends on context.

What makes sense for Haskell does not necessary make sense for other languages.

Also there is no "side" that needs to be picked. What's a good idea in one context could be a terrible idea in some other context.

But people are blindly copying Haskell lately.

The issue is that this happens blindly — again without questioning anything about the underlying premises.

Doing so is called "cargo culting". And that's something done by acolytes. (The words are loaded for a reason, btw.)

I'm coming from a language (Scala) where it took almost 15 years to recognize that Haskell isn't anything that should be imitated. Now that most people there start to get it, people elsewhere start to fall for the exact same fallacy. But this time this could become so big that it will end up like the "OOP dark ages" which we're just about to finally leave. People are seemingly starting to replace one religion with the other. This won't make anything better… It's just equally stupid. It makes no difference whether you replace one "hammer" with another but still pretend that everything is a nail.


You did argue that everything is the same. Basically by "same" I mean everything is "equally good" depending on context. The whole hammers-are-for-hammering-and-screwdrivers-are-for-screwing thing... I explicitly said your argument was that everything was a tool in a toolbox and you exactly replicated what I said.

My point is: something can be truly bad and something can be truly good EVEN when considering all possible contexts.

You can't prove definitively whether this is the case for FP or OOP or any programming style for that matter. You can't know whether someones "side" is a cargo cult or not when there's no theoretical way for measuring this.

The cultish following may even be correct in the same way I cargo cult my belief that the world is ROUND and not flat.


> My point is: something can be truly bad and something can be truly good EVEN when considering all possible contexts.

No, that's impossible. "Truly good" or "truly bad" are moral categories. Something closely related to religion, BTW…

> You can't know whether someones "side" is a cargo cult […]

Of course I can.

If it objectively makes no sense (in some context), and is only blindly copied from somewhere else without understanding why things were done there the way they were done, this is called "cargo cult". That's the definition of this term.

How can I tell whether there is no understanding behind something? If the cultists would understand what they are actually copying they wouldn't copy it at all. ;-)

Replacing methods with free-standing functions is for example one of those things: In Haskell there are no methods, so free-standing functions are all you have. But imitating this style in a language with methods makes no sense at all! It complicates things for no reason. This is obviously something where someone does not understand why Haskell is the way it is. They just copy, on the syntax level, something that they think is "functional programming". But surface syntax should not be mistaken for the actual concepts! Even though it's easy to copy the syntax instead of actually adopting the ideas behind it (only where it makes sense, of course!).


>No, that's impossible. "Truly good" or "truly bad" are moral categories. Something closely related to religion, BTW…

Wrong. Good and bad is used in a fuzzy way here, I'm OBVIOUSLY not talking about morality OR religion. What I am talking about are things that can be potentially quantified to a formal theory. For example we know the shortest distance between two points is a line. We have formalized algorithmic speed with computational complexity theory. O(N) is definitively more "good" than O(N^2).

Right now we don't have optimization theory or formal definitions on logic organization. We can't quantify it so we resort to opinionated stuff. And the whole thing goes in circles. But that is not to say this is impossible to formalize. We just haven't yet so all arguments go nowhere. But the shortest distance between two points? Nobody argues about that (I hope some pedantic person doesn't bring up non-euclidean geometry because come on).

All we can say right now is because there is no theory, nothing definitive can be said.

>Of course I can.

>If it objectively makes no sense (in some context), and is only blindly copied from somewhere else without understanding why things were done there the way they were done, this is called "cargo cult". That's the definition of this term.

You can't. The definition of bias is that the person who is biased is unaware of it. You can talk with every single religious person in the world. They all think they arrived at their beliefs logically. Almost everyone thinks the way they interpret the world is logical and consistent and it makes sense. They assume everyone else is wrong.

To be truly unbiased is to recognize the possibility of your own fallibility. To assume that your point of view is objective is bias in itself. If you ask those people who "blindly" copy things whether they did it blindly, they will tell you "No." They think their conclusions are logical; they don't think they're blind. The same way you don't think you're blind, the same way I don't think I'm blind. All blind people point at other blind people and say everyone else is blind except for them.

The truly unbiased person recognizes the possibility of their own blindness. But almost nobody thinks this way.

Nobody truly knows who is blind and who is not. So they argue endlessly and present factoids to each other like this one here you just threw at me:

"Replacing methods with free standing functions is for example on of such things: In Haskell there are no methods. So free standing functions are all you have. But imitating this style in a language with methods makes no sense at all! It complicates things for no reason. This is obviously something where someone does not understand why Haskell is like it is. They just copy on the syntax level something that they think is "functional programming". But surface syntax should not be missed for the actual concepts! Even it's easy to copy the syntax instead of actually adapting the ideas behind it (only where it makes sense of course!)."

I mean how do you want me to respond to this factoid? I'll throw out another factoid:

Forcing people to use methods complicates things for no reason. Why not just have state and logic separated? Why force everything into some horrible combination? If I want to use my method in another place I have to bring all the state along with it. I can't move my logic anywhere because it's tied to the contextual state. The style of the program itself is a weakness and that's why people imitate another style.

And boom. What are you gonna do? Obviously throw another factoid at me. We can pelt each other with factoids and the needle doesn't move forward at all.


> Forcing people to use methods complicates things for no reason.

No, it doesn't.

All functions are in fact objects and most are methods in JavaScript, and there is nothing else.

Methods (== properties assigned function object values) are the natural way to express things in JavaScript.

Trying to pretend that this is not the case, and trying really hard to emulate (free) functions (which, to stress this point once more, do not exist in JavaScript), on the other hand makes everything more complicated than strictly needed.

> Why not just have state and logic separated?

That's a good idea.

This is also completely orthogonal to the question on how JavaScript is supposed to be used.

JavaScript is a hybrid language. Part Smalltalk, part Lisp.

It's common in JavaScript since inception to separate data (in the form of objects that are serializable to and from JSON) from functionality (in the form of function objects).

JavaScript was never used like Java / C++ / C#, where you glue together data and functionality into classes, and still isn't used like that (even though it got some syntax sugar called "class" at some point).

> Why force everything into some horrible combination?

Nobody does that. At least not in JavaScript.

Still, that permits using methods.

Functions themselves are objects. Using objects is the natural way for everything in JavaScript as there is nothing else than objects. Everything in JavaScript is an object. And any functionality the language provides is through methods.

Working against the basic principles of a language is a terrible idea! (In every language, btw). It complicates everything for no reason and has horrible code abominations as a consequence.

> If I want to use my method in another place I have to bring all the state along with it.

No, you don't. You need only to bring the data that you want to operate on.

The nice thing is: You get the functionality right at the same place as the data. You don't need to carry around anything besides the data that you work on.

The alternative is needing to also have around the modules that carry the functionality you want to apply to the data… As an example: `items.map(encode)` is nicer to write and read than `List.map items encode`.

You don't need to carry around the `List` module when the method can already be found on the prototype of the data object. Also it's more clear what's the subject and what's the object of the operation.

> I can't move my logic anywhere because it's tied to the contextual state.

That's just not true in JavaScript.

Nothing is easier than passing function objects around, or changing the values of properties that reference such function objects.

JavaScript is one of the most flexible languages out there in this regard!

You can even rewrite built-in types while you process them. (Not that I'm advocating for doing so, but it's possible).

> The style of the program itself is a weakness […]

You did not present any facts that would prove that claim.

> […] that's why people imitate another style.

No, that's not the reason.

You don't need to imitate Haskell when you want to write functional programs in a Lisp derived language… ;-)

People are obviously holding some cargo cult ceremonies when trying to write Haskell in JavaScript.

Your other opinions are based on wrong assumptions. I'm not going into that in detail, but some small remarks:

> For example we know the shortest distance between two points is a line.

In Manhattan¹? ;-)

> O(N) is definitively more "good" then O(N^2).

Maybe it's more "good"…

But it's for sure not always faster, or even more efficient, in reality.

Depending on the question and your resources (e.g. hardware), a brute-force solution may be preferable to a solution with a much lower complexity on paper.

Welcome to the physical world. Where practice differs from theory.

> But the shortest distance between two points? Nobody argues about that (I hope some pedantic person doesn't bring up non-euclidean geometry because come on).

You don't need to look into non-euclidean geometry.

Actually, even there the shortest distance between two points is a "straight line". Only that the straight line may have some curvature (because of the curved space).

But you didn't even consider that "distance" is actually something² one can actually argue about…

> You can't. The definition of bias is that the person who is biased is unaware of it.

No, that's not the definition³.

> Nobody truly knows who is blind and who is not.

Nobody truly knows anything.

What's the point?

What was actually the point of your comment, btw?

---

¹ https://en.wikipedia.org/wiki/Taxicab_geometry ² https://en.wikipedia.org/wiki/Distance ³ https://en.wikipedia.org/wiki/Bias


>No, that's not the definition³.

It is, I just worded it differently. See the "cognitive biases" part on your citation. They use "reality" in place of what I mean by "unaware". If you think something incorrect is reality, then you are "unaware" of how incorrect your thinking is, because you think it's reality. These are just pedantic factoids we're throwing at each other.

>What was actually the point of your comment, btw?

The point is that FOR PROGRAMMING, nobody truly knows which camp is the cargo cult. Everyone is blind. Stick with the program.

>Welcome to the physical world. Where practice differs from theory.

This is called pedantry. Like, did you really think you're telling me something I'm not aware of? Everyone knows this. But the pedantic details of the optimizations the compiler and the CPU go through to execute code are beside the point; obviously I'm not referring to this stuff when I'm trying to convey a point.

>In Manhattan¹? ;-)

You're again just being pedantic. I'm perfectly aware of non-euclidean geometry, but you had to go there. I had a point, stick to the point; pedantry is a side track designed to muddy the conversation. Why are you muddying the conversation?

Is it perhaps that you're completely and utterly wrong and you're trying to distract me from that fact?

>Trying to pretend that this is not the case, and trying really hard to emulate (free) functions (which, to stress this point once more, do not exist in JavaScript) makes on the other hand's side everything more complicated than strictly needed.

Bro, my little paragraph arguing against you was just a random factoid. I don't care for your argument and I don't even care for mine. The whole main point is to say that we can endlessly spew this garbage at each other and the needle doesn't move forward at all. Nobody can win, because we have no way of establishing an actual winner. Thus with no way of knowing who's right there's no POINT in it.

All I am saying, and this is the heart of my argument, is that YOUR topic, your team of "don't be a cargo cult", is no different from all the other teams.

I thought I made it obvious that my factoid was sarcastic, but you fell for it quick and instantly retaliated with your own factoid. Man, not gonna go down that rabbit hole.


> Is it perhaps that you're completely and utterly wrong and you're trying to distract me from that fact?

I think you miss the whole point of this website!

I had never before had to flag something here. Now you force me to do it again. That makes me really sad right now.

But you're constantly only trolling. Continue elsewhere, please.


The point is to learn. You know I'm not trolling.

What I said was true. You were going off on tangents. It's the only logical conclusion.


"You did argue for everything is the same."

I do not see where he did that. He argued simply that context matters. (And yes a "bad" tool can be the right tool, if it is the only tool available.)

"My point is: something can be truly bad and something can be truly good EVEN when considering all possible contexts."

And diving deeper into philosophy here, can you name one example?


Not deltasevennine, but giving the CPU fewer things to do sounds good to me (in any context), even if it is currently unpopular. Some cults are popular and some aren't.


"Not deltasevennine, but giving the CPU fewer things to do sounds good to me (in any context)"

Ah, but what if you are freezing and that CPU is your only heat source ...


>I do not see where he did that. He argued simply that context matters. (And yes a "bad" tool can be the right tool, if it is the only tool available.)

Well I see it. If you don't see it, I urge you to reread what he said.

A bad tool can be the right tool, but some tools are so bad that they are never the right tool.

>And diving deeper into philosophy here, can you name one example?

Running is better than walking for short distances when optimizing for shorter time. In this case walking is definitively "bad." No argument by anyone.

Please don't take this into a pedantic segue with your counter here.


"Running is better then walking for short distances when optimizing for shorter time"

Yeah, but then the context is optimizing for shorter time. You said context does not matter. But it always does. And depending on the greater context, there would be plenty of examples where running is not adequate, even when optimizing for short time, because maybe you don't want to raise attention, you don't want to be sweaty when you reach the goal, or then your knee would hurt again, etc.

Is this pedantic? Well, yes, but if you make absolutist statements then this is what you get.

But again, no one here ever said, it is all the same. It was said that it is always about context, to which I agree.

When you only have a low-quality tool available, or your people are only trained in that low-quality tool (and there's no time to retrain), then this is still the right tool for the job.


What's wrong with absolutist statements? Nothing.

I made an absolutist statement which is definitely true. You failed to prove it wrong. Instead you had to do the pathetic move of redefining the statement in order to get anywhere. You weren't pedantic, you changed the entire subject with your redefinition.

As for context, I am saying I can make absolute statements about things, and such a statement is true for all contexts.

My point for this entire thread is that I can say OOP is horrible for all contexts and this statement cannot be proven wrong. Neither can the statement OOP is good for all contexts or OOP is good for some contexts. All of these statements are biased.

If you were to be pedantic here you would be digging into what context means. You might say that if the context was that everyone was trained on OOP and not FP, then OOP is the superior choice. To which I respond: by context I mean contexts for practical consideration. If you can't figure out what set of contexts lives in that set for "practical consideration" then you are just too pedantic of a person for me to have a reasonable conversation with.

There are shit paradigms out there, shit design patterns and shit programming languages. But without proof this is an endless argument. You can't prove your side either; you're just going to throw me one-off examples, to which I can do the same. No point. I'm sorry but let's end it here, I don't want to descend further into that rabbit hole of endless qualitative arguments.


"I made an absolutist statement which is definitely true."

Well, if you think so, then consider yourself the tautological winner.


In your universe nobody wins. Nothing is true, nothing is false. Consider yourself the ultimate loser.


If you feel the need for personal attacks over a philosophical debate, where you consistently insist on understanding the other side wrong, then you might want to check your tools of communication. They are clearly not working optimally, but granted, they might be the best you have available - but you still could improve them.

"Nothing is true, nothing is false."

No one ever claimed that in this debate, except you.


>If you feel the need to for personal attacks over a philosophical debate, where you consistently insist of understanding the other side wrong,

No personal attack was conducted here. It's just that you're sensitive. I mean, I could take this: "where you consistently insist on understanding the other side wrong" as an insult.

I could also take this: "Well, if you think so, then consider yourself the tautological winner. " as a sarcastic insult as well.

But I don't. Because I'm not sensitive. Nobody was insulted here. You need to relax. Calling you a loser was just me turning your sarcastic "tautological winner" statement around and showing you how YOU are at the other side of the extreme. I'm not saying you're a "loser" any more than you were sarcastically calling me a "winner."

Put it this way: the "loser" comment is an insult IF and ONLY if your "winner" comment was an insult too. If it wasn't, we should be good.

>No one ever claimed that in this debate, except you.

You never directly claimed this, but it's the logical consequence of your statements. You literally said my statement was flawed because it was "absolute". You're like "this is what you get when you make absolute claims." And my first thought was, "what on earth is wrong with an absolute claim?" We do not live in a universe where absolute claims are invalid because if we did then "Nothing is true and nothing is false" and everybody loses.

If this isn't the case then go ahead and clarify your points.


> I believe the world is round

This isn't an opinion or a belief, it's a verifiable fact. We know the world is round because: if you travel east you'll eventually arrive back at your starting point; if you stand on a ship's crow's nest you can see further than the crew on the deck because of the curvature of the earth; if you fly a plane in three equal legs with a 90 degree turn between each (legs on the order of a quarter of the earth's circumference), you will end up back at your starting point due to the curvature of the earth; if you go to space in a space station, you can visually verify that the earth is round.

Cargo culting acolytes will believe the earth is round with no explanation as to why. Just because you believe the right thing doesn't mean you're not cargo culting. If you can't explain why you believe what you believe, you're simply believing to follow a particular crowd, regardless of the validity of the belief.


Sure.

I simply use this "world" example because everyone here is in the same cult as me: "The world is round cult." When I use it, they get it. If I were speaking to a different cult, I would use a different example.

You will note, both flat earthers and non-flat earthers have very detailed and complex reasoning for why they "believe" what they believe. But of course most members of either group have not actually gone to space to verify the conclusions for themselves.


My point was most people do cargo cult and that's bad, no matter what. And the notion that you have to go to space to know the earth is round is flawed, as I tried to illustrate using several examples that didn't necessitate traveling to space to infer that the earth is not flat.

After all, Eratosthenes was able to calculate the circumference of the earth with approximately 0.5% margin of error[0]. Since they didn't have rockets in 250 B.C, it should be clear that there are other empirical methods to test these hypotheses.

To reiterate, cargo culting is always bad. If you don't have a reason for what you believe, then there's a chance your belief is flawed and it would behoove you to research your question and prove to yourself the validity or invalidity of your belief.

[0]: https://oceanservice.noaa.gov/education/tutorial_geodesy/geo...


Yeah I get it. But what I'm saying is that there are many times when nobody can truly prove whether they're in the cargo cult or the other people are.

So programming is one such thing. There are stylistic camps everywhere and nobody knows which one is the cargo cult, INCLUDING the neutral camp where people say everything is a tool depending on the context.


Nope. You're the one who isn't listening or understanding. still_grokking's point is that even if one approach is better than the other, a slavish cargo-cult adherence to that approach will still produce bad results.


Thanks! Couldn't say this better.

I'm not arguing about the approaches as such, I've stated that cargo culting around any of them is a bad thing by itself.

Blindly imitating something for which you didn't fully grasp all pros and cons will at best not help you and in most cases make things worse.


Nope. You aren't listening to me. I am getting exactly what he's saying. What I am saying is that the cargo culters could be right. You don't know. Nobody knows.

Additionally he DID say that the approaches were all tools in a toolbox.


Yeah, but sometimes it's useful to use a flat-earth model, for instance when the ground you're going to build something like a shed on is relatively flat. In the big-picture sense I agree, but in different contexts an alternative abstract model can suffice and actually be more efficient if the aim is to build the shed.


Software engineering is mostly a practical discipline. Religions are bad if dogma trumps practical concerns.


> ends up becoming an obfuscated mess of wrapper functions and custom control flow

This right here is the reason I don't like pure functional programming. Any engineer should be able to pick up your code and understand it pretty close to immediately. And with pure FP, even when they understand functional programming, they end up wasting valuable time tracking down what it is you're trying to do.

* I guess I need to edit to say I mean pure FP in a language not explicitly built for pure FP. The article is about implementing in JS, this is what I'm addressing.


There's a difference between "pure FP in an imperative-oriented language" and "pure FP in a pure-FP-oriented language". In Haskell, this isn't an obfuscated mess of wrapper functions and custom control flow, it's the most straightforward and sensible way to do things. Functions have very thin syntax and don't get in the way, laziness means you lose all the function wrappers that imperative languages add just for that, etc.

Personally I find trying to write Haskell in an imperative language to make exactly as much sense as trying to write imperative code in Haskell. Same error, same reasons, same outcome.


There's also a difference between a Scotsman and a "True Scotsman".

There is a common story of a programmer who has journeyed like an itinerant martial artist looking for functional programming enlightenment but never finds it... Our industry is way too susceptible to snake oil stories like "If I wrote all my tests first my programs would never have any bugs" or "If I wrote my programs in Erlang they wouldn't have any bugs..."

There are certain tasks for which functional programming idioms are highly effective, other ones where you will twist your brain into knots and end up writing completely unmaintainable code.


This is absolutely not a "true scotsman" argument. I am talking about very distinct classifications of languages, with wildly divergent feature sets, runtimes, libraries, and communities. The differences are real and objective.

This is just a special case of the more general principle, "Don't write X in Y". Don't write BASIC in C. Don't write C in Python. Don't write FP in imperative. Don't write imperative in FP. Don't think you've solved whatever problems you have with Y when you're forced to use it by writing X in Y. Writing X in Y inevitably leads to the worst of both worlds, not the best.


There is a triangle of promise, hype and results. When the promise is there and the hype is there but the results aren't there, that's different from promise without hype. Haskell has a lot of promise and a lot of hype and could use some "tough love" in terms of understanding why it hasn't been adopted in industry. (Hint: most of the time when somebody is writing a program the whole point is that they want to create a side effect. A real revolution in software might come out of that insight.)

"X in Y" is a common programming motif in advanced systems. Large systems in C, for instance, tend to contain some kind of runtime and DSL-ization if not an embedded interpreter. I think it's an absolute blast to write DSLs in Java and unlike other languages that get DSL hype (Scala) you have IDEs that work and cooperate with this use case.


I agree completely, but since the article is about writing JS in a pure FP manner, I was addressing that and not anything else.


That's fine. I just couldn't tell. I agree with you.


Hot take here: Haskell is also a better imperative language.


Bob Harper, from CMU, has a really fun section of his 2021 OPLSS lectures where he talks about Haskell being the best version of Modernized Algol, so your hot take has, at least, some measure of reasoned support as precedent. However, the syntax for imperative programming in Haskell is not quite as simple as it could be, and so makes it a tough fit for a 'better' imperative language.



> Any engineer should be able to pick up your code and understand it pretty close to immediately.

So a webdev should almost immediately pick up a mix of C++ and assembly?

No. There's a reason we specialize. That is not a good argument against functional programming.


You're hitting a strawman here. I shouldn't have to explicitly say that I meant someone with at least some experience in the language you happen to be writing in.


I'm not convinced that previous poster was hitting a strawman.

Your argument (if I understand correctly) is that, given some language (here javascript), one ought not program in such a way that other people who use that language are incapable of understanding it. This example of FP involves a bunch of junk that makes javascript incomprehensible to javascripters, therefore it's bad.

Now, previous poster replies with "Hey not everyone understands c++ and assembly ..." and this does sort of sound like a strawman, however we can extract the core argument as "there exist things which you haven't seen before which you don't understand, but that doesn't make them bad" and that does not sound like a strawman. That sounds like a legit argument.

For example, perhaps there exists a library which performs some domain specific task using FFT, linear algebra, and distributed computing load balancer algorithms. All of it in javascript. An arbitrary javascript developer is probably not going to be able to understand what's going on. However, that wouldn't be a legit argument against FFT, linear algebra, or distributed computing load balancing algorithms.

Specializations exist. Pure FP in javascript might not be a great idea, but it possibly confusing someone who otherwise knows javascript doesn't feel like a real argument against it to me.


But is the article/book actually advocating the idea that this non-idiomatic paradigm should only be used in very specific places in a code base where it’s objectively and demonstrably better? Or is the article/book advocating widespread usage? I think it’s the latter, and so I think the comment was a straw man.

As an aside I think this a problem with kitchen sink languages that allow all these different paradigms. Maybe not so much with JavaScript because its roots are more functional but with Java where they’ve continually bolted on functional capabilities. With a language like Smalltalk, there is no idiomatic Smalltalk, just Smalltalk. I imagine something like Clojure is the same way. With these kitchen sink languages you can end up with code bases that vary wildly in style, especially if there’s been a lot of turnover in the project.


With respect to your first point:

Widespread usage is a relative term. Like, if a program just happens to be 99% FFT code, then you'll have "widespread usage" of incomprehensible FFT code in the code base. This is a question of the scope of the project. So I don't think an argument becomes a strawman just because most projects will probably be bigger than their specialized component.

With respect to your second point:

Hard agree. I think languages should have much more constrained focus that largely disallows an impedance mismatch between what you're trying to do and how you go about doing it.

[Worth noting is that I largely agree with the points against this type of FP in javascript. However, I'm opposed to the line of reasoning that's being used against it. Something not being readily understood is not a good argument for rejection. Because anything worth doing that hasn't already been done is likely to be incomprehensible until you've spent some time with it.]


Okay, I think your edit made it more clear. I would also not try to turn a javascript codebase into pure functional style. On that one I agree with you. :)


> > Any engineer should be able to pick up your code and understand it pretty close to immediately.

> So a webdev should almost immediately pick up a mix of C++ and assembly?

There's a reason assembly has been relegated to very specific & specialized use cases. I'd almost want to say the same about C++.

The best production code is very simple, very understandable.


True, but I can extend my example to "javascript -> Haskell" or "Haskell -> Rust" or "Rust -> Prolog" - and vice versa. In every case, you can't just simply understand the code, because you will have to learn new concepts first.


I have to agree with you as far as pure functional programming goes. I also don't like the Haskell approach of overgeneralizing concepts just because you can; it's the FP equivalent of stuffing every design pattern you can into your OO code. I'd argue that not every pure functional programming language has to be like that, but I'm pretty sure all the ones that exist are.


Please don't mix the two things. There are a lot of abstractions in Haskell (monads, monoids, ...) but those are 100% orthogonal to pure functional programming. You can do the latter without any of those abstractions even existing in the language.

And if you want to see an example of a language that is not like that, look at Scala and ZIO (www.zio.dev). It is a library for pure functional programming, but look at the examples - those abstractions you mention are not there. The library really aims at making concurrent/async programming easier by leveraging pure functional programming, but without crazy category theory stuff.


Two other things that shouldn't be mixed are a programming language's features and what programmers are using them for. As noted in other comments, Haskell is good at defining all sorts of functions, but the card castles of excessive and inappropriate algebraic abstraction are a tradition of rather uniformly extremist Haskell users.


I did acknowledge that they are orthogonal when I said that not every pure FP language has to be like that. Thanks for pointing out ZIO, I didn't know about it.


I know, but you also said "but I'm pretty sure all the ones that exist" so I wanted to show you a counter-example. I'm also not surprised by your thought, since you are right - most of the time it gets mixed, which is actually sad.

We need more practical pure functional programming libraries that focus on tackling problems with concurrent updates, streaming, state-changes etc. without forcing you to take a course in category theory beforehand.


I took a look at ZIO (https://zio.dev/guides/quickstarts/hello-world) and just the first example explains a monad.

That first example shows that the for syntax desugars to nested flatMap. Which is analogous to Haskell’s do notation, which desugars to nested bind.

The abstractions you said are not there seem to actually be there in zio!


> the first example explains a monad

Strictly speaking that's wrong. Yes, ZIO is a monadic type. But you really don't need to understand monads to understand the example. In fact, the word monad does not even appear at all. Or would you say javascript also "explains a monad" when someone explains how to map/flatmap over an array? I doubt it.

> That first example shows that the for syntax desugars to nested flatMap. Which is analogous to Haskell’s do notation, which desugars to nested bind.

> The abstractions you said are not there, seem to actually be there in zio!

Again, those concepts are even in javascript. In fact, they are in every language. The question is whether it is decided to make them explicit and reusable or not. And ZIO chose to not require any knowledge of them to be able to use the library - and that's what we are talking about here, no?

Let's look at the code of flatMap that you are complaining about:

  /**
   * Returns an effect that models the execution of this effect, followed by the
   * passing of its value to the specified continuation function `k`, followed
   * by the effect that it returns.
   *
   * {{{
   * val parsed = readFile("foo.txt").flatMap(file => parseFile(file))
   * }}}
   */
  def flatMap[R1 <: R, E1 >: E, B](k: A => ZIO[R1, E1, B])(implicit trace: Trace): ZIO[R1, E1, B] =
    ZIO.OnSuccess(trace, self, k)

Where are monads, monoids, functors, ... here? They are not to be found. Replace "flatMap" with "then" and you pretty much have javascript's promises.
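
To make that concrete, here is a rough TypeScript sketch (readFile/parseFile are hypothetical stand-ins, not real APIs):

    // Array#flatMap and Promise#then play the same "chain a computation" role,
    // and nobody needs the word "monad" to use either:
    const readFile = (path: string): Promise<string> =>
      Promise.resolve(`contents of ${path}`) // stand-in
    const parseFile = (text: string): Promise<string[]> =>
      Promise.resolve(text.split(' ')) // stand-in

    const pairs = [1, 2, 3].flatMap((n) => [n, n * 10]) // [1, 10, 2, 20, 3, 30]
    const parsed = readFile('foo.txt').then((file) => parseFile(file))

(Promises even auto-flatten nested promises, which is exactly the flatMap behaviour in question.)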


What I meant to say is that the abstractions are definitely there in zio, which was meant to address your first comment where you said that zio did not have them.

By reading the docs I could easily recognize monads. If you replace “for” with “do”, they even have the same syntax! I could also see clear examples of things that were modeled as monoids and functors. I would claim that zio took inspiration from Haskell to model the composable parts.

Hey, I’m not complaining about any code, and specially not flatMap. I think it is a great design. It is the bind operator in the monad class in Haskell, and there is also syntax sugar to hide it and make it look like imperative code, just like Haskell. I like that!

It seems to me that it is fair to say that the abstractions are useful, as you recognize in your previous comment. Maybe you just have an aversion to the abstraction names, or the way many Haskell-related articles go on and explain them… which is fair enough.

I learned Haskell without having to understand those concepts. I could just use them by experimenting with what was possible and seeing examples from other people. It was not particularly difficult. So, as you also figured out with zio, understanding the abstractions is not a requirement for using them.


This is clearly about wording, so let me ask again:

> would you say javascript also "explains a monad" when someone explains how to map/flatmap over an array?


No, but Haskell does not need to explain monad for that either. All examples and docs for map/concatMap over a list are free from having to mention any type class at all.

On the other hand, if you have to explain how the for/yield in zio and why it works for so many different types, you have to basically explain Monad using another name.


> On the other hand, if you have to explain how the for/yield in zio and why it works for so many different types, you have to basically explain Monad using another name.

But again, that's not happening.


Or post ES2017, without all the .then()s:

  // assumes SECONDS = 1000 (converts Unix seconds to JS milliseconds), as defined earlier in the article
  const SECONDS = 1000
  const doThing = async (url, fallback) => {
    try {
      const response = await fetch(url)
      const notifications = await response.json()
      return notifications.map((notification) => ({
        ...notification,
        readableDate: new Date(notification.date * SECONDS).toGMTString(),
        message: notification.message.replace(/</g, '&lt;'),
        sender: `https://example.com/users/${notification.username}`,
        source: `https://example.com/${notification.sourceType}/${notification.sourceId}`,
        icon: `https://example.com/assets/icons/${notification.sourceType}-small.svg`,
      }))
    } catch (error) {
      console.log(error.message)
      return fallback
    }
  }


Is this code really equivalent?

Wouldn't it block on the `await` calls, whereas the original code would instantly pass control back to the caller?

(Sorry if this question is odd. My JS is a little bit rusty.)


It's an async arrow function, which means that it returns a `Promise` when called, much like the original code that constructs a `Promise` explicitly. This `Promise` resolves once the function returns.


Thanks for the prompt answer!

I've overlooked the `async` on the first line… My fault. :-|


I’m JS illiterate but I think control is returned to the main thread until the await call returns, at which time the execution continues.


JS is single threaded.

Async / await is just a code transformation that introduces Promise wrappers.

The code should be equivalent to the original. (I've just overlooked the top-level `async`…).
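
Roughly, as a sketch (ignoring details like the exact timing of error propagation):

    // the awaited version...
    const viaAwait = async (url: string) => {
      const response = await fetch(url)
      return response.json()
    }

    // ...is approximately this Promise-chaining version
    const viaThen = (url: string) =>
      fetch(url).then((response) => response.json())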


And now we’ve gone full circle to writing imperative code


That’s a pure function, aside from the HTTP call (which it would seem reasonable to mock). Is there a better way to handle it?


I’m not disparaging it; I think it’s great that languages have new constructs making certain FP patterns obsolete


Totally agreed. Coworkers will hate you if you start mixing these styles into an existing code base. If you want to do FP, go switch languages (and likely jobs/teams) to something where the ergonomics are clear, concise, and idiomatic—then the benefits become more obvious. It's a bit like teaching object-orientation through OCaml… just because you can, doesn't mean the community recommends it generally.


Javascript is a multi-paradigm language, as are most of the other languages people are mentioning in these threads. It shows in the specification for the language too: higher-order functions, anonymous functions, Array.map/filter/reduce, etc. Like it or not, the language has facilities to enable programming in a functional style.

There are advantages to this approach like enabling different styles when appropriate.

There are disadvantages as well: you rely on the discipline of the programmers or tools to catch mistakes.

But there's nothing inherently wrong about thinking of programs in terms of FP ideas. There are other ways to think about programming than in terms of program counters, control flow statements, and procedures. It's not even idiomatic Javascript to write code that way!

JS is a more functional language than most people seem to think.

The tragic thing about TFA is that it's not an article but a chapter in a long series of posts and this is just one step along the way to showing how an FP style can manage the complexities of a non-trivially large JS code base.


I won’t disagree that JavaScript is multi-paradigm and that you can write in a functional style, but you have to look at where the ergonomics are:

• dot, `.`, syntax on objects vs. composition/pipe operators

• no autocurry

• no ADTs or basic pattern matching

• has null + undefined but no Maybe/Option (there is null coalescing and optional chaining, but they’re easy to accidentally skip; see the sketch after this list)

• const still allows mutating the object it refers to, and freeze incurs a massive performance cost

• no tail-call optimization outside Safari, despite it being in the ES2015 spec

• the `class` keyword exists, now with private methods and fields, while FP is pretty anti-encapsulation

• no immutable data structures in the stdlib

• no way to prevent side effects (/* __PURE__ */ in code generation is a hack)

Can you get benefits from avoiding state/mutation and choosing composition over inheritance? Sure, but it’s nothing like being in a language where these aren’t something you can do or the default, but the only way to write code and having the tool to express it.
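
To illustrate the Maybe/Option point from the list: a rough TypeScript sketch (the User shape is made up):

    // Built-in ergonomics: concise, but nothing forces callers to handle
    // the missing case, so the check is easy to accidentally skip.
    type User = { profile?: { name?: string } }
    const displayName = (user: User): string =>
      user.profile?.name ?? 'anonymous'

    // A hand-rolled Option is enforced at the type level, but the
    // language gives it no special syntax or stdlib support.
    type Option<T> = { kind: 'some'; value: T } | { kind: 'none' }
    const getName = (user: User): Option<string> => {
      const name = user.profile?.name
      return name !== undefined ? { kind: 'some', value: name } : { kind: 'none' }
    }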


It's a mess for sure and you'll never get a reasonable FP language from it. You won't get a Smalltalk out of it either. Instead we get a mish-mash of ideas hashed together by a committee of people with differing backgrounds, experiences, and ideas.

... but some of those FP ideas are pretty neat and when used well can make code that is easier to maintain (for people who are familiar with the concepts).


It's actually fairly easy to get most of a Smalltalk out of Javascript. Prototypes and first-class functions form a really powerful basis for a higher language. Though, to your point, it wouldn't really still be Javascript.


Do you know of an example snippet / article demonstrating how to do this?


Smalltalk in Self: a nearly complete Smalltalk implementation on a similar prototype-based language. http://www.merlintec.com/download/mario.pdf

Mootools: ancient class system for Javascript. https://mootools.net/core

Also, here's a very old Crockford article on implementing classes in Javascript. https://crockford.com/javascript/inheritance.html


Thank you!


You’re implying that those things are necessary to write reasonable functional programs. They’re not.


True. Reasonable to me is a pretty high bar. You can write in a functional style in JS and feel it is reasonable. You might feel differently after using SML/OCaml/Haskell/etc.


I love FP, but I wouldn’t let someone introduce these abstractions into our JavaScript code base, as they ramp up the complexity in understanding code that can be represented in a simpler fashion with the same results.


A loop isn't simpler than recursion or higher-order functions, since it introduces state and is noisy, but most examples of JavaScript/TypeScript + FP involve heavy usage of utility libraries that aren't community standards (like a Prelude). Not to say these libraries aren't valuable or of good quality, but it's a lot to learn atop the language itself for a new hire, and it can lead to library lock-in, as said functions will end up all over the code. The folks that preach the libraries the most are usually the ones that want their JS/TS to look like Haskell, which ends up making it impossible for outsiders to follow (so now I need to know JavaScript + TypeScript + Haskell + the weird esoteric version these people write?). Those folks should be allowed and encouraged to just write in some ML-flavored compile-to-JavaScript language instead, which can often be easier for a new hire to follow since all examples will follow community standards (wouldn't be surprised if the coders were happier writing it too).
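
(To make the first claim concrete, a TypeScript sketch: the loop version carries extra state, `out` and `i`, that the map version doesn't.)

    const doubledLoop = (xs: number[]): number[] => {
      const out: number[] = []
      for (let i = 0; i < xs.length; i++) out.push(xs[i] * 2)
      return out
    }

    const doubledMap = (xs: number[]): number[] => xs.map((x) => x * 2)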


Your response describes exactly how I felt using RxJS for the first time.


I've been working with Rx for 10 years or so and I think it's a terrible model.

If you need processing of streams of asynchronous dynamic ("hot") data it's the least-bad model I know, otherwise there are much better ways, especially now that many languages have async-await keywords or at least a Future/Promise type.


Async await is way worse for serious async processing, mainly because handling cancellation sucks.


It took the author of RxJava months to understand the concept. Screenshot from the book: https://twitter.com/dmitriid/status/811561007504093184

(Jafar is the author of Rx .Net, and even he couldn't explain it to the future author of RxJava :) )


I just wanted to say, among all the negativity in the comments, that I love RxJS, and fell in love with it after Jafar's workshop.

As Ben Lesh often says, there are sadly a lot of misconceptions around RxJS and the Observable type. The Observable type is a primitive so useful that it keeps getting reinvented over and over (React's useEffect is a weird React-only observable; a redux store is an observable), whereas Rx is a set of convenience functions to help with the use of the Observable type. If people are happy to use lodash, I can't understand what makes them so unhappy about Rx, which is like lodash, but for async collections.
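
For anyone who hasn't seen it spelled out, the core shape being reinvented is tiny. A sketch in TypeScript, not the actual RxJS implementation:

    // an Observable is just "a function you hand an observer to,
    // which returns a way to cancel"
    type Observer<T> = { next: (value: T) => void; complete: () => void }
    type Teardown = () => void
    type Observable<T> = (observer: Observer<T>) => Teardown

    // a synchronous source; real ones wrap events, timers, sockets, ...
    const fromArray = <T>(values: T[]): Observable<T> => (observer) => {
      values.forEach((v) => observer.next(v))
      observer.complete()
      return () => {} // nothing to cancel here
    }

    const unsubscribe = fromArray([1, 2, 3])({
      next: (v) => console.log(v),
      complete: () => console.log('done'),
    })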


I found using rxjs with redux to be a way of decoupling control flow. Instead of having a function with a bunch of thunks, you did something, and then other bits of code could effectively subscribe to the side effects of that, and you built your control flow up that way. It had pros and cons; it was quite powerful, but you ended up with a lot of indirection and not exactly knowing what was going to happen.


I've only been using RxJS for a short while now (we're moving to React soon), and I don't really get it. I mean, I get it, just not why we're using it just to consume some REST APIs. 80-90% of that is boilerplate code to please RxJS, and only all the way down through two layers of stores / states and a library another team maintains is there a single call to `fetch()` that does the actual work.

I'm pushing to use react-query with the new app, I think people will get confused when it turns out hooking up the whole API will take hours.


I read halfway through the article before my eyes glazed over and I could no longer tell whether the post was serious or a joke.


Thanks for this. I got the same growing sense of discomfort reading the OP, that decomposing everything into operations makes the code way harder to grok vs. just doing the entire transform in one step.

I think the point being made was “each transform could be reusable across your codebase”, but I think for this example you really pay a high comprehensibility cost. And duplicating a few lines of code is often actually the right call, instead of coupling two unrelated bits of code that happen to be saying the same thing right now (but which will plausibly diverge in the future).

As a new Rustacean, I do really like Result and Option, but writing idiomatically in your language is really important too.


Exactly this. I realized that a lot of how I use objects fits in nicely with FP patterns, but when I started looking into FP "best practices" the number of times the best practice is "Replicate this exact behavior from a 'more functional' language, even though your language has idioms for that" is astounding. I decided I'll just keep programming without side effects, if that's what FP is.


That example is what's called fluent code in OO in combination with the builder pattern. To experience peak builder pattern hell one should look at the Android API.


I read the whole thing waiting for the aha-erlebnis which never came. I'm a full stack JS/TS engineer with a decade of experience. I expected this article to be written for someone like me. It didn't click, even though I already love and use functional aspects like immutability and pure functions. I feel like it's the whole new set of terminology that puts me off (and I'm talking about `scan` and `Task`, not even `Functor` or `Monad`). I have confidence I can learn and apply this in a few weeks, but I can't realistically expect junior/medior devs to quickly onboard into a codebase like that.

Maybe I'm biased against "true" functional programming because I've been on Clojure and Scala projects in the past (as a backend dev) and both experiences have been good for my personal/professional development, but a shitshow in terms of long-term project maintenance caused by the enormous learning curve for onboarding devs. The article talks about how great it is for confidently refactoring code (which I bet is true) but doesn't talk about getting people to understand the code in the first place (which is still a requirement before you can do any kind of refactoring).

My only hope is for ECMAScript (or maybe TypeScript) to introduce these OK/Err/Maybe/Task concepts as a language feature, in a way which befits the language rather than trying to be "complete" about it. We don't need the full spectrum of tools, just a handful.


> I read the whole thing waiting for the aha-erlebnis which never came. I'm a full stack JS/TS engineer with a decade of experience. I expected this article to be written for someone like me. It didn't click

Don't worry, it's not about you. The article is genuinely underwhelming.

It walks you up the abstraction tree to the building of higher-kinded types, but then just handwaves it with 'and now you can do a lot of things!' but doesn't show them.

It needs a final part where the flexibility is displayed. Something like 'if (debugMode) runpipeline(synchronousDebugWriter) else runpipeline(promise)'.


I couldn't agree more. I feel that some thought leaders debating intellectual concepts in computer programming have no idea how real world software development takes place these days.

Developers are under enormous time pressure to deliver. They face an exponential skill curve as their scope is massively broad (i.e. devops). Things need to be shipped fast, time for "proper" engineering is compromised. Team members working on the codebase are of various skill levels and ever changing. Finally, a lot of products being worked on have a limited shelf life.

For 80% of developers worldwide, the concepts discussed in the article are too steep, and therefore unusable.


I've occasionally watched colleagues give presentations on functional programming over the years, and while I can see why certain people are drawn to it the stated benefits of functional programming have never seemed that significant to me. The advantages that FP provides aren't likely to be needed by developers that are capable of learning it.


Always makes me sad that Scala got sucked into the pure-functional priesthood type culture rather than the "better Java, by being mostly functional and immutable and then practical as hell when appropriate" pathway. I really like coding using Scala but the way I like to do it feels totally non-idiomatic.


So true. I was involved in two very different Scala projects. One was the sensible "better Java" way, which was mostly great. The other was a big enterprise project with a core group of "hardcore" FP enthusiasts which was very stressful because of imposter syndrome and troubles to onboard new folks. I have been against Scala ever since, exactly because of this FP cult.


> The other was a big enterprise project with a core group of "hardcore" Spanish enthusiasts which was very stressful because of imposter syndrome and troubles to onboard new folks. I have been against Spanish ever since, exactly because of this Spanish cult.


> "better Java, by being mostly functional and immutable and then practical as hell when appropriate"

That's very much what Kotlin is aiming for.


Scala tried to be too much. Too many paradigms. Too much flexibility and power. Many people might think they want that, but a subset are probably going to have an easier, happier life choosing a less powerful language...


Maybe I'm wrong, but I think all the talk around weird ideas like Functors, Monads, etc., are mostly red herrings and aren't that applicable to most everyday software engineering tasks.

Just use functions, avoid state, use map/reduce if it's more readable, avoid OOP most of the time (if not all of it), avoid abstracting every single damn thing, and what you're writing seems functional enough even if it doesn't totally satisfy the academic view of what functional programming is.


> Maybe I'm wrong, but I think all the talk around weird ideas like Functors, Monads, etc., are mostly red herrings and aren't that applicable to most everyday software engineering tasks.

They are a red herring. In most cases, all you need to know about a monad is that it defines some kind of transformation of the enclosed data by applying a function onto it that returns a Monad of the same kind.

e.g. the list monad [a] says that if you bind it with a function f: a -> [b] (ie a function that takes a value and returns a list of b), the monad will transform to [b] by concatenating the lists.

the maybe monad Maybe[a] says that if you bind it with a function f: a -> Maybe[b], then if the Maybe has the form Some(a), the data of the monad is replaced by the result of the function. If the monad is Nothing, it stays Nothing. It's no different to

a = f(a) if a is not None else a

So a monad is just an object that defines the transformation of the underlying data when applying a function that returns a monad of the same type, nothing more.
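
In TypeScript terms, the two binds described above are just (a loose sketch, using null as Nothing):

    // list bind is flatMap: apply f to each element and concatenate
    const listBind = <A, B>(xs: A[], f: (a: A) => B[]): B[] => xs.flatMap(f)

    // maybe bind: apply f only if there's a value, otherwise stay "Nothing"
    const maybeBind = <A, B>(a: A | null, f: (a: A) => B | null): B | null =>
      a === null ? null : f(a)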


OK/Err/Maybe can be trivially implemented with TypeScript, if the project development team wants them. We have it in the current project I work on and it works well with GraphQL.

For OK/Err, in my experience it kind of depends on "how happy is your dev team with using exceptions for general purpose errors"? The orthodox school of thought says "exceptions only for exceptional errors", in which case things like OK/Err give you a nice way to structure your control flow and its typings.

`Maybe` is used by `graphql-code-generator` to explicitly mark optional typings in generated TypeScript types for a GraphQL schema. I don't think it's necessary (TypeScript has `?` after all) but some people prefer it.
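
For the curious, one common shape such a hand-rolled type takes in TypeScript (a sketch, not our project's actual code):

    type Result<T, E> =
      | { ok: true; value: T }
      | { ok: false; error: E }

    // hypothetical example use
    const parsePort = (raw: string): Result<number, string> => {
      const n = Number(raw)
      return Number.isInteger(n) && n > 0 && n < 65536
        ? { ok: true, value: n }
        : { ok: false, error: `invalid port: ${raw}` }
    }

    // the discriminant forces callers to branch before touching .value
    const r = parsePort('8080')
    if (r.ok) console.log(r.value)
    else console.error(r.error)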


I've used patterns like that in Scala; I see their value in building a correct system etc etc etc, but only if it's consistently used throughout the codebase.

As it stands, most JS/TS projects aren't very consistent to begin with; error handling is either not done at all (let it fail), or a mix of exceptions, failing promises, error responses / types / states, etc.

But that's not really down to the language, more the underlying culture.


> My only hope is for ECMAScript (or maybe TypeScript) to introduce these OK/Err/Maybe/Task concepts as a language feature

When using these concepts the need for do-notation comes up pretty quickly. It would be like using JS promises without the async keyword!

Of course, follow this to its conclusion and you will have a functional language.


I mean they could ADD it, just like nowadays individuals can choose to implement it themselves, but it wouldn't supersede any existing error / result implementations (success/error callbacks, throw/catch, promises which use both, etc).

To improve or change a language, I think you should get rid of another feature if it solves the same problem, instead of adding another option.


> just like nowadays individuals can choose to implement it themselves

I don't think this is possible with JS right now?


"I read the whole thing waiting for the aha-erlebnis which never came."

This is increasingly my go-to metaphor for this: This article and many of its kind are talking about bricks. They really like bricks, because they're square, and they come in several nice colors, you can really bash a clam with them, they're cheap, they're quite uniform, they make great doorstops and hold down stacks of paper really well, and they have a personal preference for the texture of bricks over other possible building materials. These people think bricks are great, and you should incorporate them into all your projects, be it steel bridges, mud huts, a shed out back, a house, everything. Bricks should be everywhere.

Then they build you a tutorial where they show how it looks to build a mud hut, and how nice it is to put some random bricks in to it. Isn't that nice. Now your mud hut has bricks in it! It's better now.

But that's not what bricks are about. Bricks are not about textures or being good at bashing open clams. Bricks are about building walls. Walls that may not be the solution to every problem, but certainly have their place in the field of wall building because of their flexibility, ease of construction, strength, cheapness, etc. Trying to understand bricks out of the context of using them with mortar to build walls is missing the point.

Contra the endless stream of tutorials that make it look like functional programming is essentially mapping over arrays and using Result/Option instead of error returns, that is not what functional programming is about. That is a particular brick functional programming is built out of. It isn't the only brick, and if you scan a real Haskell program, it isn't even necessarily one of the major ones in practice. They turn out to be a specific example of a very simple "recursion scheme". These simple "bricks" show up a lot precisely because they are so simple, but generally the architecture layer of the program is built out of something more interesting, because "map" turns out to be a very small and incapable primitive to build a real program out of.

In my considered opinion and experience, if you spend time with "functional programming" and come away thinking "oh, it's about 'map' and 'Result'", the point of functional programming was completely missed.

And stop telling people that's what it's about! You're putting a bad taste in everyone's mouth, because when all the imperative programmers look at your so-called "functional" code in imperative languages and say, "That's a nightmare. There's all this extra stuff and noise and it's not doing anything very useful for all that extra stuff."... they're completely right. Completely. It is a net negative to force this style into places where it doesn't belong, quite a large one in my opinion. And especially stop being sanctimonious about how they "don't get it" when people object to this style. It is the one advocating this style where it does not belong that does not "get it".

The worst thing that can happen in one's education is to think you've been exposed to some concept when you in fact haven't, and come away with a wrong impression without realizing there's a right one to be had. I still encourage curious programmers to clock some serious time with real functional programming to learn what it is about. This style of programming isn't it, and your negative impressions of this style don't necessarily apply to real functional programming. (It does some, perhaps, but probably not the way you think.)


Part 1: The request

Do you have a writeup that you can point me towards that goes into detail about why functional programming isn't about map/reduce/filter and is instead about reconceptualizing your entire program as consisting of recursion schemes[1]?

I'm asking because I've been working with FP languages for 15 years now and the first time I've seen this point of view is from your comments. [Although, I suppose you sort of see a half formed version of this in the little schemer and seasoned schemer books. But just not enough so that I would consider it the point they were trying to make sans your comments.]

Part 2: Furthering the discussion

Of course personally, FP isn't a single well formed idea or philosophy any more than a hurricane is a well formed entity. Just a bunch of dust and wind going in the same direction. As with all other programming paradigms. I'm perfectly happy with the true soul of FP being some reconceptualization of a program into recursion schemes because my plan, as with all paradigms, is to pick and choose the individual conceptual motes and mix them together in a way that allows me to best solve my problems.

I actually dislike what I think you're saying recursion schemes are for a similar reason as to why I dislike excess shared mutable references and loops with excess mutation. It places the programmer into a sea of dynamic context that must be mentally managed in order to understand the meaning of the program. Meanwhile, map/reduce/Result, places the programmer into a static reality where all meanings have computer verifiable proofs associated with them.

My version of FP doesn't have recursion or loops. Just map/reduce/ADT and functionality that allows you to convert recursive data into lists and lists into recursive data. Maybe that doesn't make it 'true' FP. Which doesn't bother me.

[1] - https://news.ycombinator.com/item?id=33438320 > Reconceptualizing your entire program as consisting of recursion schemes and operations that use those recursion schemes, what I think the deep, true essence of functional programming as a paradigm is


I’ve had similar experiences with scala and clojure professionally. I now actively oppose people attempting to add functional code to projects I work on.

…because when they say “more functional” most people mean:

I want less code.

I want the code to be shorter, because I’m lazy and I want it to be all on one screen.

…but that’s actively harmful to almost any code base.

You want simple code, not dense complicated code. Dense complicated code is for people who wrote the code, and a few smart talented people. Other people have to work on the code too. They cannot.

Actual functional code doesn’t strive for code density, it strives for code purity and algebraic structures.

That’s fine. Do that.

Dense map reduce reduce flow reduce functions can die in a fire.


I think you're conflating "readable" and "uncomplicated" with "familiar". I'm equally infuriated by OO code with dependency-injected-everything from some hidden framework configured by fourteen XML files somewhere in a different file tree, interfaces for every single class even if only instantiated once, factories to create builders that make adapters.

Maybe if I stared at it for twelve years it would become familiar and I would begin to think it was simple, readable and maintainable.


Yeah. Sure, "simple" and "complex" sorta have an objective definition. But colloquially "readable", "simple", "complicated", etc have a tendency to track with "things with which I'm familiar/comfortable (or not)".

Over the decades I've come to the conclusion that there's no such thing as a one size fits all sweet spot on this stuff. Different people are going to have different experiences with what they find straightforward or not. They will have different backgrounds, mental models, ways of perceiving the world. It all adds up. As a profession we need to understand this reality and find ways around it instead of getting into dogmatic arguments as if there's One Right Answer.

Common example I give - the GP complained about FP advocates wanting code to take up less screen space. I have come across many devs who struggle with concise code, and many others who struggle when code is not concise. Similarly, I have come across plenty of devs who start having trouble when code is spread out (sometimes that means within a file, across files, both, etc). I have also come across plenty of devs who have trouble when it's all pulled together.


I think this is about the level of abstraction. As React component extraction is to tag soup, so named functions composed are to fp primitives. In code reviews, if I see a big swamp of pipe/fold/cond etc at the top level, I'd kick it back and ask for it to be wrapped in a named function that explains what it does, rather than exposing its guts.

Writing concise, clear code is a skill that straddles any paradigm.


Pretty poor attitude to adopt so generally. I've seen 'actively harmful' qualities from all paradigms. Once people start adopting attitudes like yours, they've just become the mirror of the condescending FP type, and they kill outright any of the really cool features that are useful, as well as any discussion of them.


Do whatever you want with your own code bases; my responsibility is to make the ones I work on maintainable by other people.

/shrug


Some of us don't find OOP code maintainable.


It doesn’t matter if it’s FP or OOP.

Here’s the real question: do you think dense code is more maintainable?

Generally yes? More than a verbose multilayered abstraction, probably?

…but where do you draw the line? Map statements instead of for loops? Collapse all the white space onto a single 200 character line? Make everything one big regex?

Dense code means that every change has more impact, because there’s less code; it’s unavoidable: less code to do the same work means more impact from changing any part of that code.

That is why it’s difficult to maintain; because you can’t touch it without making side effects; you can’t reason about a small part of the code, because the logic is dense and difficult to unpack into small parts.

Certainly OOP can often be verbose and annoying, but that’s a different thing.

Code density and being functional are orthogonal; some OOP is too dense too. …but generally I’ve found that inexperienced people see density and strive for it, believing this makes it functional; but the truth is the opposite.

Good functional programming is often naturally concise, but most people don’t actually seem to understand FP.

They just seem to think it means to put more “reduce” statements in the code, remove for loops and generally make the code harder to debug and denser.

…in my, limited experience, working with lots and lot of different folk at many different organisations.


For me it's not density - it's the OOP class abstractions and all that. I'm not smart enough to keep up with it vs the FP approach of just doing data transformations.


I think of OOP done well at a high level as "structural logic". Whereas in FP, one might use `map()` and `bind()` to replace complex imperative logic flows, in OOP, this is done with object hierarchies and structures.

When you have an abstract base class or an interface that defines some contract, it's creating a "shape" that defines how an instance of the class can be used.

I think that this might be why some folks have an affinity for OOP and some have an affinity for FP. OOP affinity might be tied to more visual thinkers. For me, I see the "shapes" of the code defined by the contracts.


Perhaps the snark caused the down-votes, but your point is legitimate. 'Pure FP' languages encourage code that is nearly unreadable and unparseable without additional context (and sometimes, unreadable even with said context). There is some strange desperation for extreme terseness in pure FP languages like Haskell, OCaml, Idris, etc.

Single-character variables and functions, point-free style... Coming from an imperative background, this just seems like flexing for the sake of flexing.

Why not make life a little easier by clearly naming variables? This isn't maths or physics where variables (for some equally-inane reason) must be one character. We have 4K screens today; I can accept lines that are significantly longer than 80 chars.


When your code is sufficiently abstract, there often really aren't better variable names than a or x. My experience is that it's about the scope of that variable. If it's in a one-line lambda, then it'll be one letter. If it's going to be used in the next 10 lines or so, use an abbreviation. And if it's longer, or particularly unclear, spell it all out. Adding extra words doesn't make BusinessAbstractFactoryIBuilder more readable.
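
The classic cases, as a TypeScript sketch: when a function is fully generic, there is genuinely nothing more meaningful to call the value than x.

    const identity = <T>(x: T): T => x
    const twice = <T>(f: (x: T) => T) => (x: T): T => f(f(x))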


> BusinessAbstractFactoryIBuilder

While I understand and agree with this meme[1], I think that's the other extreme, where everything is a Factory Builder thing.

Even so, I would rather have too much information than too little, and too little is the direction FP programs tend to err. Over-abstraction is also a problem, in my view. Even in a LINQ lambda, for instance, I might write

  someEnumerable.Select(what_is_actually_inside => doSomething(what_is_actually_inside))
rather than

  someEnumerable.Select(x => doSomething(x))
[1]: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...


Funny example since currying requires no intermediate variable to speak of

    someEnumerable |> select doSomething


even funnier when you realize that doesn't even need to be curried so it also works in c#:

    someEnumerable.Select(doSomething)
to be fair, i guess doSomething was supposed to be an actual function body instead of a helper function from somewhere else


Fwiw, OCaml doesn't chase extreme terseness or point-free programming. It's not really equipped for that.

OCaml is designed for straightforward code centered on functions, modules, and a mix of imperative and immutable data structures; all with a _really_ simple execution model and a pretty nice type system. The core language has very few quirks and it's easy to predict what it does, and how efficiently.

There's not really any comparison with Haskell, which emphasizes heavy optimizations to make its terseness, based on lazy evaluation, work well in practice.


There's nothing wrong with single character variables if you're not using them like a complete idiot. A line like reports.map(r => whatever) makes it blatantly obvious that r is a report.


there is zero desperation for extreme terseness in ocaml.

some very obvious examples:

- many, if not most, functions have sensible names instead of abnormally terse ones

- it's possible to make named parameters mandatory, and many do that - e.g. the base library


It's a bit of a canard.

The benefits of FP are mostly things like statelessness and no side effects.

And it's almost better to talk about it in those terms, and introduce concepts into imperative programming.

When people say 'functional core (aka libs) and imperative for the rest' it's ultimately what they mean.

Applying FP universally creates weird code and it becomes pedantic and ideological.

I wouldn't even use the example in the article; almost nothing is gained.

Try to reduce state, use immutable objects as much as possible / where reasonable, make reusable functions which naturally have fewer side effects, etc.

I think that the material benefit of FP is in partial application of some elements of it, and unfortunately that doesn't create nice academic threads for people to talk about. It's more like applied know-how, than theory.


What do you mean by “canard”?


false report


Before you come at me with pitchforks: I built commercial software with Erlang. I use functional(-style) programming in all languages that support it.

I skimmed through this, and came across this gem:

> If there happens to be an error, we log it and move on. If we needed more complex error handling, we might use other structures.

Yes. Yes, we always need more complex error handling. So, show us the "other structures". It may just turn out to be that the regular chain of `.map` calls with a try/catch around it is easier to understand and significantly more performant than any convoluted pile of abstractions you've built.

And then you'll need to handle different errors differently. And then you'll need to retry some stuff, but not other stuff etc.

> this is a sample chapter from my upcoming book: “A skeptic’s guide to functional programming with JavaScript.”

The problem is: it does a poor job of convincing skeptics why this is great, or even useful. I use, or used to use, functional programming, and I am not convinced by this chapter.


show us the "other structures"

“Left as an exercise to the reader”. Reminds me of those well-known tutorials with todo lists and other trivial nonsense, because a fully-fledged example would instantly sow doubt about the selling points.

Sometimes I think that our area is one big IKEA store. You look at nice pictures, buy into the crap, and still feel okay because you’ve built most of it yourself. Not realizing that this built-it-yourself part makes you relate to it much more than the initial advertisement or the practical value does.


> “Left as an exercise to the reader”.

Or "draw the rest of the owl" :)


I really dislike when programming books are overly terse with their code examples. This is a problem I've struggled with since I was a child learning to code.

When I'm learning a new concept, I need to be able to clearly see the steps. Even if I do understand how to read and write in the abbreviated coding style, the extra step of mentally decoding it into its more verbose form takes mental energy away from the absorption of new knowledge.

Clearly, this book is written for an advanced audience, but at the same time it's trying to teach an unfamiliar concept to that advanced audience.

Does anyone else here share my sentiment?

It makes me think of this software I saw once that would take a math-problem, and solve it for you, showing you the steps along the way to the solution. I want something like that for this kind of code.


The code examples shown here are highly confusing, and would benefit greatly from code comments. One of the most nonsensical trends I've seen in programming is the notion that 'code should explain itself' and if it's well-written, extensive comments are unnecessary.

The best way to improve this post would be to append comments to each discrete code section, explaining clearly what other code sections it interacts with, what it's intended to do, and any other notes needed to clarify the functionality.


I completely agree with you.

This was the main thing slowing me down when I started to learn Clojure last year.

I'm now learning Emacs and Elisp and I find the documentation even more difficult to comprehend somehow.

Two examples: Literate programming with org-babel: almost none of the documentation that I can find shows "real" examples with multiple header properties or variables being set for an org-babel file. I don't know how to mix and match them. The docs tend to show the most minimal example and I've been spinning my wheels for days trying to figure out how to integrate into my workflow.

Another elisp example: using auth-sources.el for handling credentials. I'm trying to enhance a few major modes to add support for storing server credentials, and none of the examples show how to actually prompt a user for a username and password to save it for later in the secure key store. I've checked other projects, and people do it in different ways, and they all seem needlessly complicated, with core auth logic embedded directly in unrelated function calls.

Compare this: https://www.gnu.org/software/emacs/manual/html_mono/auth.htm...

To this: https://pypi.org/project/keyring/

Edit: I'd like to add, I am loving Emacs and Clojure. I think it's worth the slog, but there is room for improvement for making a lot of these systems more approachable to the complete newcomers.


`sanitizeMessage` is literally just `message.replace(/</g, '&lt;')` but it has to go through a ton of abstractive scaffolding just so that it can fit into a single call to map(). This is like jumping through a ton of generic classes in Java only to find a single one-liner implementation at the end of the call.


I believe that his sanitizeMessage() was deliberately simplified for the example. An actual sanitizer would be much more complex, and thus having a separate function defined to do that one thing would be more obviously reasonable.

But you DO want these single responsibility methods like sanitizeMessage()! Having single responsibility functions like this means you have more Legos you can use where you need. You get improved code reuse. Perhaps more importantly, you also get functions which are easy to test and test more thoroughly than if you have a big procedure which does a lot of things.
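
The Lego point in miniature (a sketch; a real sanitizer would escape much more than '<'):

    const sanitizeMessage = (message: string): string =>
      message.replace(/</g, '&lt;')

    // slots into map() anywhere, and is trivial to test in isolation
    const safe = ['<b>hi</b>', 'plain'].map(sanitizeMessage)
    console.assert(sanitizeMessage('<script>') === '&lt;script>')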


Every FP advocate I know comes from an imperative/OOP background… None of them are juniors. None of the “FP skeptics” (or people parroting an obvious variant of uninformative “right tool for the job”) I know come from an FP background.

People only seem to convert one way and not the other. And is it really a surprise that the vast majority of people resist change / doing things differently?


Hear, hear.

When I program, I like to use:

  { functions, types }
When FP gets bolted onto a mainstream language after the fact, you get to program with:

  { classes, mutability, side-effects, nulls, functions, types }
Then the FP skeptics think it's too complicated, and just want to go back to:

  { classes, mutability, side-effects, nulls }


Why not just use COBOL and Gotos while we’re at it? It was “the right tool for the job” for decades until the OOP religion took over! /in_jest


Sure /s.

But seriously I dislike the notion that FP is somehow a successor to OOP, like you have to keep piling abstractions onto OOP until it becomes FP!

If you do it that way, you end up with map() implementations like:

    public final <R> Stream<R> map(final Function<? super P_OUT, ? extends R> mapper) {
        Objects.requireNonNull(mapper);
        return new StatelessOp<P_OUT, R>(this, StreamShape.REFERENCE, StreamOpFlag.NOT_SORTED | StreamOpFlag.NOT_DISTINCT) {
            Sink<P_OUT> opWrapSink(int flags, Sink<R> sink) {
                return new Sink.ChainedReference<P_OUT, R>(sink) {
                    public void accept(P_OUT u) {
                        this.downstream.accept(mapper.apply(u));
                    }
                };
            }
        };
    }
Ain't nobody got time to read that. Step back from OOP before putting in the FP and you can get map() looking like:

    map _ []     = []
    map f (x:xs) = f x : map f xs


Well, your second example relies on Haskell being lazy for speed and efficiency. If you did e.g. a map -> filter -> sum chain, the performance is not that great without the lazy evaluation guaranteed by Haskell's implementation, which can complicate code in other scenarios and adds complexity to the compiler. OCaml is also an FP language, and the second example would be almost the same text, but with much worse performance.

Also, Python is an OOP language, but a simple map implementation there is not much more verbose:

    def map(function, items):
        if len(items) != 0:
            return [function(items[0]), *map(function, items[1:])]
        return []


I think the more pythonic way would probably be to use a list comprehension:

    def map(f, items):
        return [f(i) for i in items]
These differences are all more about specific languages than about FP per se, however.


True, but I tried to make it as close to the Haskell example as possible. I just think the OOP -> verbose/complex argument is really annoying, especially if it applies mostly to Java code.


Either way, in both our examples, what we wrote was very much in a functional style, and Haskell also uses list comprehensions.

The only thing unique about the Haskell example, I think, was pattern-matching... And while that's nice, you don't really need that for FP.


The problem with the Java code above is that it tries to serve many masters, one of them being the JIT, i.e. the code is written that way so the JIT can fully inline the whole thing.

The first "final", the "StatelessOp", the "StreamOpFlag.NOT_SORTED | StreamOpFlag.NOT_DISTINCT", the "int flags" are all here because of performance.


Personally I do see FP as a “successor” to OOP, but not in a strictly constructive or chronological sense.

Building FP functionality with classes sounds just as fun as building java classes with assembly.


Yes, this rings a bell :)

But usually the issue is that people talk about OOP vs FP without specifying the scale. For example, the Java stream API hides the mutations / side effects (not unlike Haskell monads do), so you can use

    { record, stream, functions, types }
But internally it's still

    { mutability, side-effects }


Without going into the core of whether FP is better or not, if OOP/imperative is what 99.9% of people start with, you'll expect to have both FP advocates and skeptics coming from that background. It's just that FP skeptics have tried it and gone back.

In other words, people only convert one way because it's the only way most people can convert.


I understand what you're saying, but my point is that the near totality of FP criticism comes from people who don't know what they're talking about.

ie.: you need to be very familiar with something before you can reasonably criticize it.


When I go back to my old map/zip/method-chaining Rust code from a year or two ago, I regret doing so and wish I had written code as imperative sequential operations (looping over numeric ranges and container items, branching, and matching) like I write and understand today.


I was 3 years hardcore Scala, but recently went back to Java for a role.

I didn't really find returning to Java to be an encumbrance at all, and really enjoyed how productive Spring Boot made me. I still love Scala and if you don't fight it and use it long enough, the crazy sounding FP/category theory stuff starts to come up naturally and seem very comprehensible.

My takeaway is always avoid known bad practices, but otherwise to cut with the grain. Write Java like Java, JavaScript like JavaScript and Scala like Scala.


Its greatest strength, as I see it, is its simplicity. OOP gives a lot of latitude to overcomplicate solutions, and I've found it tends to obscure more obvious solutions to certain problems, e.g. reaching for inheritance to modify behaviours instead of doing it by passing functions as arguments, especially among less experienced team members.

The biggest problem in software development is the slow down of value output over time caused by accumulating complexity. Anything that can help keep complexity down is a big win for long term value output.
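
A toy TypeScript contrast of the inheritance-vs-passing-functions point (the names are made up):

    // inheritance, just to vary one step...
    class Report {
      render(items: string[]): string { return items.join(', ') }
    }
    class UppercaseReport extends Report {
      render(items: string[]): string { return super.render(items).toUpperCase() }
    }

    // ...versus simply passing the varying behaviour in
    const render = (items: string[], format: (s: string) => string = (s) => s) =>
      format(items.join(', '))

    render(['a', 'b'], (s) => s.toUpperCase()) // "A, B"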


If OOP is over-complicating your code, you're using it wrong. OOP is a tool to manage complexity.

Functional programming, in my experience, created much more complicated code because it seems like the authors like to be as clever as possible.


Once you zoom out to the bigger picture of dozens of devs writing the code rather than just yourself, it's more a matter of "statistically, if I recommend functional over OOP, how simple will the code be a year from now".

But granted it needs to be part of a broader pressure to keep things simple, otherwise devs will like you say, find a way to overcomplicate.


Indeed. Although I have mostly stopped using OOP in most of my code, I sometimes encounter certain classes of problems that are exceedingly complicated to solve in a purely functional way, but that are a breeze to solve using OOP, believe it or not.

One can mix functional and OOP code well by compartmentalizing the parts of the code that are in OOP style, and situating it in an overall functional context. For example, the methods of a class can certainly be written in a very functional style, and classes themselves can be instantiated within functions.

The mark of a skilled developer is to know which paradigm is the best to solve a particular problem, just like the choice of a data structure can greatly affect simplicity of code to address a problem.


> I sometimes encounter certain classes of problems that are exceedingly complicated to solve in a purely functional way, but that are a breeze to solve using OOP, believe it or not.

What kinds of problems, if I may ask?


Are you confusing "complicated" with "unfamiliar"?


I agree. As an example, having to create a class for bags of methods that don't fit naturally anywhere is bad. I also despise all the manager, factory, service, command things that emerge more often in OOP projects than in functional ones. I know that they denote well-known patterns (if used in that way) but it's clutter, boilerplate, noise around the code that does the job. However, functional-only is not necessarily better, in my experience.

Some context. I'm sure I'm biased by a number of factors. One is that I started programming a long time ago in C, and I was passing around structs with data and function pointers - so, objects. Another is that I use a number of different languages, currently Elixir, Python, Ruby, some JavaScript, often in the same week for different customers: I like easy-to-understand code because in this kind of project, with sparse teams, it is easier to maintain than layer over layer of heavily engineered code (factories, metaprogramming, etc.) Third, I end up using a lowest common denominator of all those languages, plus inescapable and/or great features: I wish all languages had Elixir's pattern matching. This approach works and it reduces the context switch between projects.

So, when I'm coding in Elixir sometimes I have to create large data structures and pass them around, piping them to Module1.fn1, Module2.fn2, etc. It's more or less the same as composing methods in OOP, just different syntax. Sometimes those modules become bags of functions too. What's more difficult to do in OOP is to engineer a class to accept arbitrary meaningful methods. In Ruby I often code methods that return a hash (Python dict, JS object) and create a pipeline of .map.map.map in their caller, like a |> pipeline in Elixir.

I prefer mixed-paradigm approaches, where one can create the few classes needed to represent the few data types that are objects (Elixir's and Erlang's GenServer) and code everything else as functions. I have a couple of Django projects where the only classes are the mandatory database models and management commands. Everything else is functions in modules. That would be much more pleasant to code in Elixir (if it had classes instead of those noisy and uninformative handle_call, handle_cast, start_link). Ruby has modules too, but for some reason customers really like to create classes for everything, even two-method ones like

  class Command
    def initialize(args)
      ...
    end
    def run
     ...
    end
  end

  Command.new(args).run
which are quite funny to look at. That should be

  command(args)


> Ruby has modules too but for some reason customers really like to create classes for everything, even two method ones like [...]

Encapsulation and extraction of complex functionality (yes, you should not do it for trivial one-liners).

In Ruby, classes get used a lot for this because, uh, why not, but you might as well just have a module method - it's just that the way modules are used in Ruby makes this slightly awkward because Ruby is after all biased towards "everything is an object".

But in, say, Haskell I might write instead:

  module Command where

  run :: [String] -> IO ()
  run = ...

  -- some other file

  import qualified Command

  ...
  Command.run args
(not that "Command.run" looks like super idiomatic code in Haskell or anything like that, probably you'd structure your code differently, but it's the general principle)


When it comes to programming, pretty much everything boils down to two key elements:

  - Some kind of data/information, in some (digital) form of one structure or another.
  - Methods, processes and operations that transform, transport and do other things to that data.
For example, in OOP, those two elements are generally combined in organisational units ("objects"); on the other hand, in functional programming they are strictly separated.

Each of those paradigms - and many others - are just tools in our arsenal to achieve our goals, and neither is intrinsically better than the other. Depending on the context and environment that we are operating within, each has its advantages and disadvantages; some are better suited than others in certain cases.


You can marry FP with object oriented notation, see UFCS or Scala, or lenses.


Of course you can, and that just strengthens my point. :)


(Off topic comment) With the font you're using to write code it's sometimes difficult to read some of the characters. For instance, I had some problems to read the `getKey` method, since the `K` char is pretty blurred.


It's supposed to look like an old typewriter with partly broken types, but it looks like they overdid it on the "K"... plus no typewriter I know of can write with light color on a dark background, so that kind of breaks the illusion.


I came to write the same thing. Fine for reading text, not ideal for code


The K character is even completely invisible on my machine.


After reading this article, I changed my mind about functional programming. I no longer think I need to learn it asap. I now think it may actually make my code more difficult to understand and maintain, and will keep avoiding such patterns.


Learn FP with a language designed to be FP instead of one that has the tools for it but isn't primarily meant to be used that way. See: OCaml, F#, Haskell if you want to go all the way off the deep end, etc.


Unfortunately, it didn't help to convince me either.

For me the question still remains: how is any of this better than good ol' reliable `for` loop?


John Carmack on Twitter: "My younger son said 'coding is basically just ifs and for loops.'"

Which inspired this great cartoon: https://twitter.com/nice_byte/status/1466940940229046273


How is the for-loop any better than good ol' GOTO?


In numbered-lines code you can’t see the start of a goto loop. In labeled-lines code it’s not that hard to infer that it is a jump target, but having a condition/iterator in one place is an obvious benefit.

That said, I believe most people who "boo GOTO" never actually used it enough to understand what could be wrong with it and how it really feels.

Anyway, I think that this analogy(?) is confusing and makes no sense.


Why is anything better than GOTO?

You use GOTO all the time, even if it's wrapped in an abstraction that makes its intentions less clear.


Conceptually, the `for` loop is just syntactic sugar - it simplifies a certain action rather than making it more difficult.


Conceptually, the `map` function is just syntactic sugar for `for` loops, and it's also meant to simplify actions. There are functional equivalents to many small patterns of code that occur often - for instance

    lst = []
    for element in range(10):
        lst.append(element * 2)
Is a very common pattern that can be expressed with less typing (and less mental overhead, once you become used to it) by

    lst = list(map(lambda x: x * 2, range(10)))
Similarly, another very common mini-pattern is

    total = 0
    for el in range(10):
        total += 3 * el
Which can be more swiftly expressed by

    from functools import reduce
    total = reduce(lambda acc, el: acc + 3 * el, range(10), 0)
These examples are trivial, but once you start using these higher level syntactic sugar-like constructs, you often find that code tends to fit together more nicely and in ways that make more sense, even if only because you used a filter operation before the map instead of writing another level of nested `if` statements inside a loop body. Code gets easier to follow, not unlike how it's easier to reason about the behavior of a `for` loop than it is to keep a bunch of `goto`s in your mental model of what the code does while at the same time thinking about whether the business logic part of it makes sense.


Tbh, I get your imperative examples instantly but my mind struggles to autograsp these multilevel functional parts like list(map(lambda. Too much noise, I have to think through it to get it.

I’d prefer a loop with ifs and continues/breaks.

It may be a lack of experience or habit, but then I can write code like that (and worse), and it doesn’t make any sense to me. Don’t get me wrong, not that I refuse to use maps or filters or folds, but not that I want to make a whole program of them either. They have their place where factoring-out a block of code feels stupid, but if I had something like:

  [1, 2, for (x of xs) {
    if (cond) {emit x}
  }]
I’d never bother with these function-al things. It’s not FP that is rich, it’s a traditional imperative syntax that sucks. Paradigm shift is not equal to usage out of [in]convenience.


> Tbh, I get your imperative examples instantly but my mind struggles to autograsp these multilevel functional parts like list(map(lambda. Too much noise, I have to think through it to get it.

That's exactly why most FP languages have a pipe operator, which makes this incredibly easy to read.

  lst = 0..9
        |> Enum.map(fn x -> x * 2 end)
        |> Enum.map(etc...)
or

  total = 0..9
          |> Enum.reduce(0, fn elem, acc -> acc + (elem * 3) end)


That doesn't change much for my readability. It even reads more imperatively, like: I take a range of 0 to 10, map it over x * 2, map it over… What do I get? A mapping? Maybe?

Meanwhile, a for loop is straightforward: you go from 0 to 10 and append a double or add a triple of x to an accumulator. I appended/added them; that's what I get. It's like my brain somehow follows {} scopes and understands their local envs with no effort.

If this syntactic style works for you without this friction, nice. But it doesn't work for everyone, and I suspect that the FP/PP debate is biased by this effect on both sides.


> What do I get? A mapping? Maybe?

The mapped list/enumerable it was originally given. You don't think about what you get when you add two numbers, do you? The language just works a certain way. Not understanding a simple building block of a language isn't a valid argument against it. All you essentially say is that you got so used to OOP concepts that anything else is hard to read. And that's OK; it's the same way for everyone... But it's not a valid argument to say that "fUnCtIoNAL bAd". The whole thing here boils down to what you already said - lack of experience.

My honest advice is - try to learn one functional language, like honestly learn and understand it, try writing something with it. It really does expand horizons.


> I can write code like that (and worse), and it doesn’t make any sense to me

I'm learning FP and I see value in writing code with map, reduce, etc., as those are expressions instead of statements. Expressions guarantee you have a result of a particular type (with possible side effects if you prefer), but statements only DO side effects. A loop may or may not insert some value into the resulting list because some branching happened or you forgot an else condition; with map you are guaranteed to get the same number of objects, of a particular type, back.

Plus, that enables composition (like in C# LINQ) - instead of extending some loop with different kinds of responsibilities, you just pass the result to the next function that modifies/filters it.
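
For illustration, a small TypeScript sketch of that composition point (the `Order` type and data are made up): adding a new requirement means appending a stage to the chain instead of adding another branch inside a loop body.

  type Order = { total: number; paid: boolean };

  const orders: Order[] = [
    { total: 120, paid: true },
    { total: 80, paid: false },
  ];

  // Each responsibility is its own stage; the result of one stage
  // simply feeds the next.
  const paidRevenue = orders
    .filter(o => o.paid)
    .map(o => o.total)
    .reduce((sum, t) => sum + t, 0); // 120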

https://fsharpforfunandprofit.com/posts/expressions-vs-state...


    [1, 2, for (x of xs) {
      if (cond) {emit x}
    }]
What are 1 and 2 doing? What is emit? Where does x go?

Is the above code just:

    filter cond xs

?


It results in [1, 2, …(the xs satisfying cond)]. Emit does just that - emits another value into the context where the for-expression was used. It emits some of the x-es into that array literal.

  filter cond xs

More like 1:2:filter cond xs.


For what it's worth, the Pythonic solution is

  lst = [x * 2 for x in range(10)]


Indeed! I started programming in GW-BASIC when I was 13. IF and GOTO were all I needed! FOR, FUNCTION, ... why were they there?


If you start with a program, see it as a block of logic that can be separated into parts with cuts in many different places. Choose the cuts so as to leave it a composition of functional programming abstractions. Then it is easy to reason about, and it is possible to teach others to code in the same way. This is a benefit of functional programming - a promise of consistency in thought and style that is easy to debug.

The argument I read in this article seems to be going in that direction, but I'm not sure it is being presented clearly. The semi-monoid functors are a fair way down in the weeds and not really what people should focus on when deciding why functional programming is great. Unless they really like abstractions, in which case more power to them.


Why so much indirection and abstraction when it can be done more simply and directly?

    const newData = notificationData.map(ev => {
        return {
            username   : ev.username,
            message    : ev.message.replace(/</g, '&lt;'),
            date       : new Date(ev.date * 1000).toGMTString(),
            displayName: ev.displayName,
            sender     : `https://example.com/users/${ev.username}`,
            source     : `https://example.com/${ev.sourceType}/${ev.sourceId}`,
            icon       : `${iconPrefix}${ev.sourceType}${iconSuffix}`
        }
    });


I will say one thing about functional programming: it does feel amazing to write all those stateless functions and use all kinds of operators.

Here is where one gets into trouble: reading functional code. While it may look better when you write it, try reading your own code after a few weeks. You will appreciate it much more when the control flow in a significantly large enterprise application goes through a bunch of ifs and imperative loops than when it is scattered across various functions and operators.

Another problem: debugging. When your inputs cross various function boundaries, no matter how smart your IDE is, you end up messing around adding impurities to your pure functions and adding watches with conditions all over the place.

Lastly, all that luxe code does hide a significant problem: performance. I don't know about JavaScript, but all those temporary objects one keeps creating in the pursuit of statelessness are maddening. I don't know if there is anything characteristically different about a language like Haskell which handles this problem. Does it end up creating new objects every time, e.g., when a map is called like in this example to update all the attributes?


> Does it end up creating new objects every time for e.g. a map is called like in this example to update all the attributes?

No. This is optimised away.

It's yet another reason why it's frustrating to see FP-like things bolted onto mainstream imperative stuff. You can't optimise away 'mostly functional' code. Wrapping a Set with an UnmodifiableSet isn't the same thing as (f x) + (f x) == 2 * (f x)


That’s interesting, because Haskell is the first language I have found in which I can write code, come back to it six months later, and still understand it. I never managed to do that in C, C++, Java, Perl or Python.


I didn't understand the point of such a wordy introduction.

Author describes functional programming zealots and seems to imply it's all annoying because of all the abstractions, then proceeds to do exactly that while calling it "the good parts".


The first code iteration in this article could have been deployed to Production before I finished reading the rest of the article.

Functional programming definitely has its use cases, as side effects can introduce bugs. Unfortunately, real-world business logic often does require side effects. Often you only find out about side effects later in a project, and then you suddenly need to start hacking your clean functional approach.


> real-world business logic often does require side effects

Yeah, all business logic requires side effects, else it is doing nothing :).

But following some FP practices, you can push the side effects very far out toward the edges. Instead of typical OOP, where objects get passed around and mutated anywhere and everywhere (which makes it hard to identify where things started to go wrong), with pure, single-responsibility functions you can write very simple but thorough tests which give you vastly more confidence in the code. And as a huge added benefit, the lines of test code actually go down because there's so much less setup and teardown (mocking, stubbing, etc.).
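
A minimal TypeScript sketch of that shape (the endpoint and all names here are hypothetical): the core is pure and trivially testable, and the side effects sit in a thin shell at the edge.

  type Msg = { read: boolean; text: string };

  // Pure core: easy to test exhaustively, no mocking or stubbing needed.
  const unreadCount = (msgs: readonly Msg[]): number =>
    msgs.filter(m => !m.read).length;

  // Impure shell: fetch, delegate to the pure core, render.
  async function showUnreadBadge(): Promise<void> {
    const msgs: Msg[] = await fetch("/api/notifications") // hypothetical endpoint
      .then(r => r.json());
    console.log(`You have ${unreadCount(msgs)} unread notifications`);
  }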


> side effects can introduce bugs

> real-world business logic often does require side effects

> you suddenly need to start hacking your clean functional approach

These are great arguments as to why you need:

a) A type system which tracks effects

b) A language which makes refactoring safe and easy


I am a fan of mixing functional and object oriented (and also some other paradigms).

Functional is great at cutting the amount of repeated code. It is great for writing tools to process things regardless of their exact type and for writing infrastructure code.

Object orientation is great when you want to model the domain of the problem. Things that actually make sense when you are talking to non-developers at your company. Clients, contracts, appointments, employees, teams, invoices, things like that. But when you start making objects that do not represent anything meaningful you probably went a bit too far.

I think knowing various paradigms, knowing when to use which and how to marry them together is critical for maintainable implementations.


Yeah. I'm one of those who are not convinced yet about functional programming being so great. In most cases, what people call functional programming amounts to inefficient shortcuts for lengthier functions, so I never got all that hype about it :\


The nice thing about functional programming is that it allows you to shift some of the "mundane" domain thinking onto the type system. A classic example might be mixing units in physics code, so that nonsense like adding up lengths and times is no longer possible.
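
For what it's worth, this isn't exclusive to FP languages; even TypeScript can approximate it with the common 'branded type' pattern. A sketch, with made-up names:

  // Brand numbers with a phantom unit so the compiler rejects mixing them.
  type Metres = number & { readonly __unit: "m" };
  type Seconds = number & { readonly __unit: "s" };

  const metres = (n: number) => n as Metres;
  const seconds = (n: number) => n as Seconds;

  // Only like units can be added.
  const addLengths = (a: Metres, b: Metres): Metres => metres(a + b);

  addLengths(metres(3), metres(4));      // fine
  // addLengths(metres(3), seconds(4)); // compile error: Seconds is not Metres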

Things also generally compose together more easily than in OO. There's other benefits too, but these are my main ones.

So, once you set up this invariant groundwork, you free your mind from thinking about certain problems, as the compiler catches them for you.

This isn't to say these things can't be done in OO (they have been done), but they're usually not enforced to the same level, as it's normally a library, not native functionality.


The benefits of type systems are either entirely orthogonal or at best are themselves a prerequisite for the benefits of FP. Your example of measurement units in particular is applicable to the most imperative goto laden code we could think of.

In fact, some of the more popular functional languages are dynamically typed - Clojure, Erlang and Scheme.


Exactly my point. I'm sure there are benefits, but this example is more suited to typed languages, whereas many functional languages, like you say, are dynamically typed. And man, I've worked with MIT Scheme on a major commercial software product for 5+ years. What a nightmare it was. :( I mostly understood the functional aspect of the language, but the sea of misplaced brackets and a complete lack of any debugger whatsoever (at least for our version of it) made it one of the worst languages to work with. The biggest irony was, since Scheme is apparently so extensible, they had created a home-grown "Object Oriented Scheme", which implemented only some aspects of OOP and omitted some other very important ones, and it was up to the developer to find out the hard way which ones were omitted! :D


The other day I watched a self-described functional programmer implement a while loop using a custom-written, conditionally recursive function.

There was nothing more elegant about it and it wasn't obvious from just looking at the function call what the function was doing. It wasn't any simpler than while(condition){}.

All because he didn't like while loops (and statements in general). It seemed tortuous and unnatural.

FP has some good parts but so does imperative programming, and I think throwing away the good parts of imperative programming and OOP for the sake of "pure" FP is a serious mistake.


> One of the people I showed this to had an interesting reaction. Their response was something like: “Hey! I like functional programming because I’m lazy and incompetent. It’s about all the things I don’t have to think about.”

I can perfectly imagine the most intelligent and smart people I've ever met saying this same thing.


There is a Danish scientist called Morten Münster who's done research on behavioural science. He's written a book for management that's gotten very popular in my country, and this is how I became acquainted with it back when I did a stint in management (which around here includes getting an MBA-type education in management).

Aaaaaaaanyway, in it he talks about two ways of how we function as human beings, and I'm likely presenting this wrong, but one is basically the best-practice, theoretical mode of being and the other is how we operate at 17:00 on a Thursday after a week where both our children have been sick, and the overarching message is that everything that isn't designed for the second mode of being is likely going to fail. Now the way it's presented in the research and the educational material, this has nothing to do with programming. It's more along the lines of companies needing to formulate missions and goals that aren't corporate bullshit, because nobody understands corporate bullshit when they are faced with an angry customer some late Thursday afternoon.

After a few decades in SWE, however, I've become sort of a fan of designing software for that 17:00 Thursday EnKopVand mindset, and functional programming helps a lot in that regard because it kills soooo many of the complexity pitfalls that you really don't want to deal with when you're tired, lazy and incompetent. Of course the other side of this is that I'm not religious about functional programming either, I rarely write classes these days, but if there is a good reason to write one, I will.


> After a few decades in SWE, however, I've become sort of a fan of designing software for that 17:00 Thursday EnKopVand mindset, and functional programming helps a lot in that regard because it kills soooo many of the complexity pitfalls that you really don't want to deal with when you're tired, lazy and incompetent.

It's so funny, because I thought your comment would lead to: when it's 17:00 on a bad day, I'd rather debug some Go code that is perhaps mundane but easy to follow than a chunk of Haskell code from a colleague who drank too much category theory kool-aid.

Which goes to show that what one wants to debug at 17:00 on a bad day is very personal?


I mostly write Typescript these days, and being lazy, I'll just quote wikipedia, but I'd much rather debug:

  const result = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    .filter(n => n % 2 === 0)
    .map(a => a * 10)
    .reduce((a, b) => a + b);
than:

  const numList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
  let result = 0;
  for (let i = 0; i < numList.length; i++) {
    if (numList[i] % 2 === 0) {
      result += numList[i] * 10;
    }
  }
Maybe it doesn't make so much sense in this simple example, probably even less so if you're not familiar with JavaScript, but it mostly comes down to the state of what you're working on. In FP you know what you get, how it looks while you're working with it and exactly what to expect as the outcome; in OOP, well, you sort of don't. Reading Wikipedia, though, maybe what I like is called functional programming with higher-order functions and not just functional programming?

Like I said, I'm not extremely religious about it, and I do think a lot of OOP design principles and code practices are slowly heading toward a more FP way of thinking. In that way I think it's sort of interesting that you mention Go, because with Go you seem to mostly work with immutable state and functions, rather than mutable objects, which is more functional than imperative programming, but maybe I just haven't worked enough with Go to know better. If you ask me, everything should frankly be immutable by default, but retain the ability to become mutable, like they do it in Rust with the "mut" keyword. I really, really enjoyed working with that for the brief period I did. Anyway, I'm not sure I'm ever going to get into religious FP. I may very rarely use classes, but it's not like an abstract class can't be healthy for your 17:00 afternoon self once in a while.

But basically every best practice in regards to OOP that I was taught at university back 20+ years ago, the stuff they still teach today (I'm an external examiner a few times a year), has proven to be sort of useless in the real world for me. Maybe it works in more professional or competent organisations but it sure hasn't worked well in any place that I've ever worked, and yes, it does feel sort of dirty to examine people in theories I disagree with, but it's good money and a good way to keep up with both the CS world and possible hires.


> Which goes to show that what one wants to debug at 17:00 on a bad day is very personal?

It really depends, it's possible to write mundane, simple functional code (though I think more common in OCaml and Erlang than Haskell) but much of the community is sort of very excited about all this higher-order stuff that might be great but is not quite as useful and obvious as the core primitives of algebraic data types and pattern matching. I imagine a lot of people probably felt similarly about the Design Patterns craze with OOP: it's not that OOP isn't useful, just that inheritance is maybe not what you want most of the time and not everything needs to involve design patterns.

I'd rather be debugging an OCaml program than a Go program for sure.


Right, but I think (a combination of) certain abstractions invites abstractionitis. OCaml and Erlang avoid that to some extent by being more restrained about what they add to the type system. On the other hand, these languages allow side effects, removing the need to rely on monads, monad transformers, monad transformer stacks, etc.

I agree that algebraic data types and pattern matching lead to better code. But even though they were introduced (?) by ML, there is nothing holding imperative languages back from adopting them (see e.g. Rust).


Some systems require eternal vigilance to not blow up in your face, others shepherd people towards the pit of success.


So true. Same with languages IMO. Not naming any names :-)


Wow, I was really expecting this to be an argument against FP, until it wasn't. I love the concept of designing for Thursday 5pm though. My approach to achieve that is simply different (it doesn't include FP).


Can confirm. I took Advanced Functional Programming for my master's. Of the ~15 people who took the class, the majority scored <=65% or gave up, a few people got pretty good grades because they were reasonably intelligent and willing to put in the work (me), and then there were a few people who scored >=95% because the high levels of abstraction genuinely made things easier for them.

That was when I learned that for some people, a monad really just is a monoid in the category of endofunctors.


When people call themselves "lazy and incompetent" they're not sincere. It's the equivalent of an obviously fit person saying "oh I'm so fat".

It's just fake humility. The same people would get offended if you actually suggested they were lazy and incompetent.


In a programming context lazy does not need to be a bad thing.

https://thethreevirtues.com/

Incompetence, I agree, sounds either self-deprecating or fake.


Incompetence is most likely just impostor syndrome speaking.


I think it’s often Socratic laziness. Not fake, but “humans are dumb and weak, let’s make this easy for ourselves.”

For most of my coding work I take it a step further than this with the simple premise that the code you wrote professionally is not for you. It’s for whoever next has to work on it, no matter their level of expertise. Sure, maybe you ‘just know’ that the equality operator comes between bitwise-OR and logical-OR in precedence, but the code isn’t for you so maybe just use the brackets anyway.


I don't think this is true. How fat you are can be pretty objectively measured and it won't change in the short term.

Being lazy and incompetent, however, is always fluctuating. There are days I'm lazy, there are days when I can work 12 hours and feel energized. There are also days when everything in my mind aligns and all the problems melt away. There are also days when I ram my head into the wall on some relatively simple problem.

The point is that programming is something that no human can do without errors. So you want a language that provides as many low-overhead guard rails as possible to stop you from shooting yourself in the foot. Even on the days you're lazy and incompetent.

That's how I see the statement they make.


"The same people would get offended if you actually suggested they were lazy and incompetent. "

Maybe, because it is all relative.

I know my flaws and compared to my (often unrealistically) high standards, I feel incompetent quite a lot. So I can say the above and mean it.

But if I get criticized as incompetent by someone way below my level, then yes, I might take offense and depending on the mood, also hit back (or ignore it).


Maybe a key part of intelligence is merely knowing your own limitations.


Or offloading so you have more room for the useful stuff.


Just a quick heads-up: if you want to truly write functional code in the browser, there is js_of_ocaml available in the OCaml realm. With Bonsai [0], a somewhat usable framework for webapps is available now, too. There are drawbacks like filesize, but if you don't have to serve the webapp on the first pageload it shouldn't be a problem.

[0]: https://bonsai.red


Or https://fable.io in the F# world, which is production ready and excellent


The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."

Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.

On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened.

- Anton van Straaten


> What we care about is “can we deliver better code, faster“

Strawman! Most coders, even staunchly "imperative" ones, care a lot about tech debt, tidiness and code quality. That is why things like SOLID and design patterns are popular. For better or worse, good programmers are proud of their code, and I don't meet too many who feel they need to "ship faster". Product people might want that though.


Alice: I don't need functional programming, I am very productive in my imperative / OOP language.

Bob: What about feature x?

Alice: Oh yeah, well we can add that! It has proven useful.

Bob: Agreed! But now do you see the need for feature Y?

Alice: Hmm good point. Feature X would be better with Y. Let's add that too.

(Repeat n times)

Alice: Isn't my language pretty nice now?

Bob: I have to agree, it's very productive

Alice: See I told you, we don't need an FP language after all!

Bob: ...


The main problem with functional programming is that most programmers are too incompetent to use it. It's complicated, the way calculus and music are. While it's sort of a safe assumption that anybody outside of the actually learning disabled, with enough concentration, could buckle down and learn it, most people - so most programmers - won't. Most programmers won't advance past procedural code - that's why Java programmers love the singleton pattern and the Spring "framework" so much: it lets them write and think in simplistic procedural form but pretend they're using an object-oriented programming language. (And if you can't even comprehend OO, you definitely can't comprehend FP).


I'm still waiting for an FP equivalent of the original GoF book to learn it from.

* https://leanpub.com/sofp is impressive but is in a different category

* https://www.manning.com/books/functional-and-reactive-domain... is the only one I know which was trying to discuss building real life software


Using JS to try to prove or disprove anything is pointless. I think a much better comparison is C# / F# for the exact same problem. I think Scott Wlaschin's Railway Oriented Programming is a prime example of great OOP-vs-functional-programming content.


> I think a much better comparison is C# / F# for the exact same problem

As a current reader of fsharpforfunandprofit.com, I'd like to see a modern C# comparison with F#. C# has introduced decent pattern matching, records, nullables. LINQ was introduced a long time ago - many things that make F# possible. Here is some comparison, but it's 10 years old, without the goodies C# offers today: https://fsharpforfunandprofit.com/posts/designing-for-correc...


The improved type system is, to me, the most valuable part. Compared to an OO inheritance fest, the designing-with-types approach popular in functional languages can be much more natural and easier to maintain.
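
A sketch of what designing with types can look like, in TypeScript (shape names made up): a discriminated union plus an exhaustive switch, where OO would often reach for an inheritance hierarchy.

  // A closed set of alternatives instead of a subclass per case.
  type Shape =
    | { kind: "circle"; radius: number }
    | { kind: "rect"; width: number; height: number };

  // The compiler checks that every case is handled.
  function area(s: Shape): number {
    switch (s.kind) {
      case "circle": return Math.PI * s.radius * s.radius;
      case "rect": return s.width * s.height;
    }
  }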

There are other valuable things, but IMHO the key is that, like OO programming, they are valuable sometimes, not always. Being explicit about what is mutable and what can produce side effects is really good, but forcing everything to be immutable and without side effects is not.

So, while I like some things about functional programming, I'm not a fan of pure functional programming. I'd rather have a language that supports mixing multiple paradigms, even though that means giving up some things that are available only in pure functional languages.


> I'm not a fan of pure functional programming

> Being explicit what is mutable and what can produces side effects is really good

!!! That's what Haskell is !!!

What language are you using which is explicit about side effects?


> !!! That's what Haskell is !!!

I haven't tried Haskell yet. Only F# (of the functional bunch). From what I heard about Haskell, I'm not sure it's what I'm looking for. It sure has the functional side covered, but not much else.

To me functional programming is based on logic. However, every program runs on a machine, and sometimes it's important to reason about what the machine is doing. Focusing just on the logic is not enough every so often.

> What language are you using which is explicit about side effects?

Sadly, none that I use.


For me, the second variant with map chaining

  const dataForTemplate = notificationData
    .map(addReadableDate)
    .map(sanitizeMessage)
    .map(buildLinkToSender)
    .map(buildLinkToSource)
    .map(addIcon);

is the best one; it has way less mental overhead compared with the end result.

Also a catch with no error handling

  try {
    return Just(JSON.parse(data));
  } catch {
    return Nothing();
  }
in most cases is not the best idea; at the least, I'd log something about it

Nevertheless, waiting for the book from the author

UPD: edited code formatting


It's not performant though; generally speaking, you keep boxing and unboxing the type (imagine having a list of 100,000 notifications and then mapping over it 5 times in a row). The composition law gives you a just-as-readable option:

    const dataForTemplate = notificationData.map(pipe(
      addReadableDate,
      sanitizeMessage,
      buildLinkToSender,
      buildLinkToSource,
      addIcon
    ))
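
(Here `pipe` is presumably a left-to-right composition helper; a minimal version, loosely typed for brevity, could be:)

  // Compose functions left to right: pipe(f, g, h)(x) === h(g(f(x))).
  const pipe = (...fns: Array<(x: any) => any>) => (x: any) =>
    fns.reduce((acc, fn) => fn(acc), x);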

> in most cases is not the best idea, at least I'd put a log about it

Well, in FP you don't want to do side effects like writing to the console. Instead you want to hold onto the errors (with a `data Either e a = Left e | Right a` instead of a `data Maybe a = Just a | Nothing`) until you get to the part of your application where you do do the side effects.

  try {
    return Right(JSON.parse(data))
  } catch (err) {
    return Left(err)
  }


Following TFA's example, I think there's one more step that could be done (or could have been done earlier in the process) - shift from piped, hardcoded function calls to a data structure which describes the transformations you would like to have applied to your data.

With just a few lines of mini-framework definition code, you can define a Processor system which behaves like a pipe+reducer but takes a simple data structure of an initial value and functions to call with that data (plus optional additional params), with the result of each step fed to the next. In the framework you take the output of the previous step and update the results key within a process hash; or, if the function fails, you accumulate errors in an errors key-value pair. Steps can have flags, such as :hard-fail, etc., which the Processor can use to trigger special behaviors (like a rollback or a termination) based on errors in steps.
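
A minimal TypeScript sketch of the kind of Processor described (all names and flags here are hypothetical):

  type Step<T> = {
    fn: (value: T) => T;  // transformation applied to the running result
    hardFail?: boolean;   // flag: terminate the whole run on error
  };

  // Feed each step's output into the next; accumulate errors
  // instead of throwing.
  function runProcessor<T>(
    initial: T,
    steps: Step<T>[]
  ): { result: T; errors: Error[] } {
    let result = initial;
    const errors: Error[] = [];
    for (const step of steps) {
      try {
        result = step.fn(result);
      } catch (err) {
        errors.push(err as Error);
        if (step.hardFail) break; // special behavior driven by step metadata
      }
    }
    return { result, errors };
  }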

This enables really dynamic programming options as the transformations to be done on the data can be determined at runtime based on formulas or process generators.

It might seem that such an approach would make debugging and troubleshooting more difficult, but because you can add additional metadata and debug behaviors to the Processor steps data, tracing exactly what has occurred is very easy.

The only downside I've found from this approach is that IDEs may not "see" your calls to functions, as they are just references or strings/symbols (depending on the language).


JavaScript is a bad example. We rely on so many libraries which are not functional. React, one of the biggest, is not functional at all.

Also, JavaScript's biggest advantage is that you can mix paradigms. And its biggest disadvantage is that inexperienced developers will mix them up too.

Also, you need to clone multi-level objects.

To work 100% FP in JavaScript you need the libraries, and you need to have a flat data structure everywhere.

I don't know how many JSON APIs you see that work like that.

JavaScript also has a rich set of language features you can use to save time and still have quality code.


I'm a huge fan of writing code in an FP style in a language where that's the convention.

Erlang, Elixir, OCaml, Clojure, (Hell, even Ruby.)

The problem with the current state of JS and its popular libraries is that the convention is to write OOP and imperative code. Unless your entire team is on board, it causes a lot of issues.

I feel like js-transpiled languages like Purescript, Rescript, ClojureScript, ReasonML, etc do a really good job of allowing Devs to write functional code on the frontend, while leveraging the JS-lib ecosystem.

I personally have a hard time reasoning about while and for loops, etc., and mutable variables; my monkey brain can't keep track of that many spinning plates.


Interesting article with an interesting writing style; I may pick up the book even if I suck at reading books!

I didn't start as a skeptic and really think functional programming looks beautiful and is very useful in preventing side effects. However, in most practical situations over the years, I've found functional programming to be harder to understand and debug when stuff goes wrong.

In JavaScript in particular, functional programming is very nice but at the same time usually way more inefficient, so for many tasks I tend to go back to normal for loops just because it's that much faster.

I had a colleague that wrote a lot of functional code and his code was extremely hard to follow and understand. Even if it was beautiful, figuring out where the system went wrong was the hard part, and I had to go to the function declaration, understand it, move on to the next until I found whatever the issue was. When I found the issue, I couldn't easily fix it because the next function called was expecting the data structure to behave in a specific way, so for one issue I once had to change basically every function called before and after, which was way more tedious than if it had been more sequential.

I don't know. There is something to say about functional programming but lately I have kind of tilted back into the more standard approach. Functional programming is a lot more beautiful in practically every implementation I've seen but the standard approach is, in my experience, usually more useful in practical terms.


> I had a colleague that wrote a lot of functional code and his code was extremely hard to follow and understand.

That's not a problem with functional code, it's about badly designed abstractions. You get the same issues in garden-variety OOP codebases, only to a far greater extent.


I am not a zealot as the book might describe. My personal goals are portability and predictability of code. Use of functions with defined return types in TypeScript, as opposed to other containers by reference, allow me to achieve those goals.

Most of my functions, though, are void functions (returning null or undefined). It's not essential to me that functions must return a value, in defiance of functional programming, so long as like instructions are grouped in a single container that may be called by reference.


While there are areas where my functional convictions have greatly diminished, my mid-career zeal had the tremendous benefit of illuminating new architecture and data design principles.

Storing data as discrete changes and relying on pure function selectors to calculate values is wonderful.

It's not always a viable approach at scale (at least not for my ability in certain circumstances) but, when it is, testing is a breeze and I love being able to debug forward/backward in time with consistent results guaranteed.


I write JavaScript all day long for my personal projects and my clients. But functional programming, at least as I see in this article, looks like a lot of extra work to do something simple.

I am sure I'm missing something here. But I don't understand why I would ever need to use code like this. My clients and I value shipping code as fast as possible. This just looks like it would turn my 1 hour job into 2 hours with not much benefit.

Anyone who doesn't have a lot of experience with JavaScript or functional programming will have a really hard time reading and understanding my code. I've been coding for a long time now in JavaScript but this looks like a lot of complex code to me for simple tasks.

I want to love functional programming because everyone is talking about it. But what am I missing? What would be the benefit of recreating a language inside a language?


> Functional programming is all about having confidence in our code.

Perhaps, but if I understand this article correctly, I must first know and understand all these laws and tools before I can have that confidence. Meanwhile, the arcane abstractions don't exactly inspire much of that confidence.

Look, I love what someone else here has already dubbed "softcore FP". I love chaining functions that pass immutable data structures to each other. And I fully admit I've written some highly abstract code that looks quite a bit like some of this. But does it inspire confidence? Quite often the effect is that you've created your own language that others don't know, which means your code will be harder to understand for others, and they may lack confidence in the code, and not dare to maintain it properly.

I want to believe that this direction is the right one, but I lack the confidence that it really is.


I basically write code in any paradigm but (loosely defined) FP feels very natural to me. Immutable data and thinking about how functions get some input and provide some output is the most natural way of thinking about a domain for me.

I have no good way of describing what I mean, but I tend to think about functionality first, not things/items, I guess.


There's no agreed way to organise a functional codebase. You end up with thousands of functions; can you imagine the mess a big team is capable of creating? With some discipline I suppose it could work, but you would need some organising principle and some serious discipline. Object-oriented is organised by virtue of the object paradigm.


> Object oriented is organised by virtue of the object paradigm.

Or in other words, OO code ties business logic to presentation logic. By sticking to the dominant 'one class per file' principle, the same code behaves differently based only on whether it appears in one file or two, so it's not trivial to move code around to make it more organized.

When I write functional code, I first write all the functions and types I need in a single file, without worrying about presentation or naming until it compiles. Then, before committing, I reorganize them into folders, files, and modules so it's easier to read and navigate, and I can do it whatever way is more appropriate (sometimes layer-first is more readable, sometimes domain-first).

I can also split pure functions into chains of smaller, pure, private functions if they're too long to follow (90% of the time some functions end up being way longer than I expected), which is _way_ simpler than splitting a large class into smaller ones.


The languages in the ML family have modules. They are the agreed-upon way to organize code and control visibility in those languages.


In my opinion, FP is one tool in the toolkit, and is best suited for expressing math-oriented things such as set operations, analytics pipelines, and other kinds of data transformations.

For any kind of I/O, though, FP is not even applicable in the pure sense of a function, as disk/network access is inherently a side effect. JS can of course mask much of this with its Promises.

Any kind of business-oriented logic, especially handling of different kinds of data entities, is still good to encapsulate in classes in OOP fashion. Classes are a good impedance match for entity types, as instances are to entities.

Sometimes, though, data transformations are still more clearly expressed in imperative style. At the very least they’re more accessible to juniors, unlike the innards of declarative transformation engines.

Imperative, declarative, object-oriented, functional — these are all tools for different jobs.


Pure FP is most useful for IO. Without IO, the concept of pure FP doesn't make any sense.


That doesn’t make sense. Any function doing I/O is not pure by definition, and while using an IO monad can shift the impureness a bit to make a function behave as if it was pure, it is not pure and cannot ever be pure. Can you explain?


Yeah - I think from your explanation I can only deduce that you don't know the actual definition and concept of pure functional programming. Note that the term "functional programming" has been watered down over time and now pretty much means "use .map and .filter instead of a loop" etc. Historically the meaning was different though, see: https://en.wikipedia.org/wiki/Purely_functional_programming

With pure functional programming you essentially treat IO-effects as a first class concept and make them explicit in your code. You never "execute" IO directly but compose small "blueprints" for doing IO into bigger blueprints and so on until you end up with one big blueprint which is your application. You then pass it to the runtime ("main() { ...; return bigBlueprint; }") and then it gets executed.

In other words, pure functional programming treats IO as more important compared to "regular programming" and without any IO there would be no need to even do so. But without any IO, a program wouldn't have any meaning, because you want at least expose the result of a computation to the outside world.
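
A toy TypeScript illustration of the 'blueprint' idea (a sketch, not any particular library's API):

  // An IO<A> is only a description of an effect: a thunk that, when run,
  // produces an A. Constructing and composing IOs executes nothing.
  class IO<A> {
    constructor(private readonly thunk: () => A) {}
    static of<B>(b: B): IO<B> { return new IO(() => b); }
    map<B>(f: (a: A) => B): IO<B> { return new IO(() => f(this.thunk())); }
    flatMap<B>(f: (a: A) => IO<B>): IO<B> {
      return new IO(() => f(this.thunk()).unsafeRun());
    }
    // The one place effects actually happen, e.g. at the end of main().
    unsafeRun(): A { return this.thunk(); }
  }

  const putLine = (s: string): IO<void> => new IO(() => { console.log(s); });

  // Pure composition of blueprints; nothing is printed yet.
  const program = IO.of("hello").map(s => s.toUpperCase()).flatMap(putLine);

  program.unsafeRun(); // only here does the side effect run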


I do understand the concept of pureness in the appropriate context. However, it appears that you do not.

This is a good example, since it ticks all the boxes: there is imperative code run as a side effect of executing the code output by the compiler based on declarations in functional style, including those impure IO-using functions.

Slapping IO on your function is making it explicit that it is impure.


The concept of pure functional programming is totally independent of what the compiler generates. Of course, in the end, there will always be something running that executes effects and that will be impure instructions. But for the context of pfp only the programming language matters since that is what we are working on. If you work with assembly directly then yeah, that is impure by all means.

> Slapping IO on your function is making it explicit that it is impure.

Not sure what "slapping IO on your function" even means here.

To maybe sum it up and get the discussion to an end: if you have an expression and can freely duplicate it and have it evaluated anywhere within your code (e.g. by assigning it to a variable), and it does not change the semantics of your program (so e.g. performance can be worse, that's okay), then the expression is referentially transparent. If all expressions in your program are referentially transparent, you are doing pure functional programming. That is the simplified definition.


“The IO monad does not make a function pure. It just makes it obvious that it’s impure.”

— Martin Odersky

https://alvinalexander.com/scala/fp-book/pure-functions-and-...


Turns out he was wrong. Already discussed/explained:

> https://news.ycombinator.com/item?id=20744274


Seems that thread is moving the goalposts. The way I see it, universal pure functional programming is an academic exercise and cannot practically exist on Von Neumann architectures.


I assume you are genuinely interested in a discussion, so let's get back to the original question and tackle it from a different angle:

> Any function doing I/O is not pure by definition, and while using an IO monad can shift the impureness a bit to make a function behave as if it was pure, it is not pure and cannot ever be pure

Pure functional programming should have a definition that is useful. I already gave my definition. If you think this is not a good definition, then what would be yours? Or more concretely to your example: what does it mean for a function to 1) be pure and 2) to behave pure? And what would it mean if a function is/does neither?

> The way I see it, universal pure functional programming is an academic exercise and cannot practically exist on Von Neumann architectures.

Unlimited memory can also not practically exist on Von Neumann architectures. But in programming languages we still use concepts like linked lists that have no size limitation whatsoever. In the context of the language and reading and understanding the code, this is important and it does not matter what the hardware actually does, except for rare edgecases. The same is true for (erased) generics. They simply don't exist in the generated machinecode. But it still matters for us humans. Programming languages exist for humans to serialize our thoughts in an easy way while still being able to have the machine interprete it. So in the context of a style of programming or a programming language feature, I don't see how what you said makes any sense or is related in any way.


> Or more concretely to your example: what does it mean for a function to 1) be pure and 2) to behave pure? And what would it mean if a function is/does neither?

1) function is intrinsically pure in the mathematical sense: it produces the same output for the same input each time. In practical terms, code execution must not have side effects on the system.

2) function has been made pure in the sense that a compiler can reason about its inputs and outputs as if it was actually pure in the mathematical sense. Code execution can have side effects on the system, but this has been neatly abstracted away.

Memory limitations are not a good analogue: I/O requires interrupts.


> 1) function is intrinsically pure in the mathematical sense: it produces the same output for the same input each time. In practical terms, code execution must not have side effects on the system.

Now if we go back to the example and Odersky's citation. If you have a function `foo` that returns `IO[String]` using e.g. cats.effect.IO or zio.IO. And the String is e.g. the content of a file or something else. Then, is this function pure or not? Answer: it is. You can call it multiple times; it produces the same output for the same input.

    val x = foo
    val x2 = foo
    val x3 = foo
No matter which of those x you use, the result will always be the same. You can call the function as often as you want. There is no side effect being executed. Hence the function is pure by the definition you just gave, and hence Odersky's claim is incorrect (I think he probably would not say this again nowadays).

> 2) function has been made pure in the sense that a compiler can reason about its inputs and outputs as if it was actually pure in the mathematical sense. Code execution can have side effects on the system, but this has been neatly abstracted away.

What does "neatly abstracted away" mean? How is this function different from one in 1), and how is it different from a function that is just impure? Can you give an example?

> Memory limitations are not a good analogue: I/O requires interrupts.

They are a very good analogue, because conceptually they too cannot exist on Von Neumann architectures. Why does the reason matter? I also gave another example: generics. How about those? They don't even require any specific physical limitations; they simply vanish. I can come up with more examples, but I don't really see the point. Obviously people use pure functional programming and they call it like that. If you say that isn't possible, I think we are now discussing (again) terminology and not practical implications.


I believe looking at the implementation of "IO" is a sufficient example of "neatly abstracted away".

There are practical implications to all IO: interrupts are asynchronous and have failure modes. What if a network is down, or a hard drive has to try a few times to read a sector? Abstractions only go so far. At least your program crashes when it runs out of memory.


What makes it a "sufficient example"? Why do you care about interrupts but not finite memory? How about generics?


> For any kind of I/O, though, FP is not even applicable in the pure sense of a function, as disk/network access is inherently a side effect.

If you meant to do it, it's an effect, not a side-effect.


That's not what is meant by side effect in this context.


In the end, it boils down to the insight that functions compose better than classes, interfaces and objects. Also, OO and relational databases don't like each other: on the one hand you have trees as the basic weapon for composition, while on the other hand you have something where a tree is taboo.


For example, Haskell is optimized for developer efficiency. You can get a lot done and have a high degree of confidence that it runs reasonably, without having to do too much thinking or ass-covering. We move fast & need things to be reliable. Rather than hand-optimizing Doom 2 for ___.


An aside that I've been wondering about for a while with FP. I tend to use a lot of tools that do things like cyclomatic complexity analysis, test coverage metrics, etc.

It seems like one goal of this style of FP is to abstract away control flow itself - why bother with if-statements when you can compose automatically short-circuiting monadic APIs like Option and Result? An admirable goal that I am 100% in favor of. However, I feel like it might also obfuscate untested branches in source code and artificially minimize computed cyclomatic complexity if the tools are not aware of these control-flow abstractions.
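
For instance, with a hypothetical TypeScript Option type, the branch doesn't disappear; it just moves inside the combinator, where the call site shows no visible `if`:

  type Option<A> = { kind: "some"; value: A } | { kind: "none" };

  // The if-statement now lives inside `map`, not at the call site.
  function mapOption<A, B>(opt: Option<A>, f: (a: A) => B): Option<B> {
    if (opt.kind === "some") {
      return { kind: "some", value: f(opt.value) };
    }
    return { kind: "none" };
  }

  // One straight-line expression, two runtime paths - a naive coverage
  // tool may never notice that the "none" path is untested.
  const doubled = mapOption<number, number>({ kind: "none" }, n => n * 2);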

I guess it's not super important in the grand scheme of things (I'm well aware of cyclomatic complexity's dubious usefulness as a metric).
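For instance (a contrived sketch, using plain arrays as a stand-in Option type): both versions below contain the same logical branches, but a line-oriented complexity tool only counts the ifs in the first.

    // Explicit branching: a complexity tool counts these ifs.
    function parsePortExplicit(s: string | undefined): number | null {
      if (s === undefined) return null;
      const n = Number(s);
      if (Number.isNaN(n)) return null;
      return n;
    }

    // Same logical branches, hidden inside short-circuiting combinators.
    const parsePortChained = (s: string | undefined): number | null =>
      [s]
        .filter((v): v is string => v !== undefined)
        .map(Number)
        .filter(n => !Number.isNaN(n))[0] ?? null;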


Worth noting: the author of this article does not believe React is functional: https://twitter.com/jrsinclair/status/1398780972506619907


To be fair - I mostly agree with that take. React with hooks is not functional. I think it's fair to say that React itself acknowledges this by making a distinction between a PureComponent and a Component.

React itself certainly can be functional, but... (hot take) functional programming without the part where you actually make state changes is - drumroll - useless.

If it ain't changing state and there are no side effects - no one gives a fuck. Because it literally isn't doing anything.

Turns out computers are fundamentally state machines. Functional programming certainly has some nifty ideas, and it can be a helpful way to reason about code that will lead to state changes - but the part people want is the state change.


When talking about writing lodash V5, jdalton (the creator) said "No FP wrappers. That fad is over. RIP your co-workers if you introduced that headache into your codebase. Definitely not team or human friendly." [1]

To me, this sort of FP seems to have been tried and to have failed. I don't know how much of this is due to the language and how much to the pattern not working well in developers' minds. I wonder whether, if JS had something like C#'s LINQ, it would be easier to write code this way.

[1] https://twitter.com/jdalton/status/1571863497969119238


What does FP wrappers mean in this context? Someone asked that in the Twitter thread and got a snarky response, but I have the same question.


I think they mean what in lodash was _() and .value(): you couldn't call map or filter directly on an array. Instead you wrapped it in a lodash object with _(arr), which you could then call .filter() on; to get the filtered values back out, you called .value() on the result. (You could also do _.filter(arr, …).) To me it became confusing knowing when .value() was needed, and it got worse when nesting.

LINQ instead has methods on its types that partially return builders. I find that easier to use, and C#'s type system works out the result types for me.
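Roughly, from memory of the old lodash API (treat this as a sketch):

    import _ from 'lodash';

    // lodash's wrapper style: wrap, chain, then unwrap with .value()
    const result = _([1, 2, 3, 4])
      .filter(n => n % 2 === 0)
      .map(n => n * 10)
      .value();            // => [20, 40]

    // versus the native methods, which need no wrapping or unwrapping:
    const result2 = [1, 2, 3, 4]
      .filter(n => n % 2 === 0)
      .map(n => n * 10);   // => [20, 40]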


How are the types not totally screwed up in this kind of pattern? I get that this is JS, but since most of us at the level of reading this post actually write TS: isn't it true that most of those transformations actually modify the type of the object, and therefore each step would need a slightly different intermediary type?

When I actually have this problem, I usually define the input as an interface and the output as another interface(maybe extending the input interface).

Then I write a SINGLE function that calls those other functions, collects the data into variables, and then builds a new object of the new data type, and then returns that.

Maybe I'm missing the point here.
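(For what it's worth, typed pipe helpers usually answer this with overloads that thread each stage's output type into the next stage's input type. A hand-rolled sketch, not any particular library:)

    // Each overload threads the intermediate type through to the next stage.
    function pipe<A, B>(a: A, f1: (a: A) => B): B;
    function pipe<A, B, C>(a: A, f1: (a: A) => B, f2: (b: B) => C): C;
    function pipe(a: any, ...fns: Array<(x: any) => any>): any {
      return fns.reduce((acc, f) => f(acc), a);
    }

    interface RawUser { name: string }
    interface GreetedUser extends RawUser { greeting: string }

    // Each stage returns a slightly different intermediary type, and the
    // compiler checks the chain end to end.
    const greeted: GreetedUser = pipe(
      { name: 'Ada' },
      (u: RawUser) => ({ ...u, greeting: `Hello, ${u.name}` }),
    );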


One thing FP teaches, when learned with a suitable language at hand, is how often we unnecessarily reach for classes when simple pure functions would do just fine, and would be more testable, easier to understand, more reusable, and more composable.
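A contrived illustration of the point (names are my own):

    // Class version: state plus a method, with extra ceremony.
    class PriceFormatter {
      constructor(private currency: string) {}
      format(amount: number): string { return `${amount} ${this.currency}`; }
    }

    // Pure-function version: same behaviour, trivially testable/composable.
    const formatPrice = (currency: string) => (amount: number): string =>
      `${amount} ${currency}`;

    new PriceFormatter('EUR').format(42);  // "42 EUR"
    formatPrice('EUR')(42);                // "42 EUR"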


I'm having a good time doing Data-Oriented Programming (which is applying Clojure-style programming to other languages, in my case Python).

IMO, chaining functions together rather than mutating objects and variables seems to be the most Pareto-efficient takeaway from FP (at least in the Scheme/Clojure style; I haven't tried F#/OCaml or Haskell). Also higher-order programming, though Python has map and filter built in (and reduce in functools), so that's covered well enough.

And to be honest when I try to do something complicated in Clojure I end up blowing the stack anyway. Recursion is a bit too hard to grasp for non-programmers like me.


I think functional programming is great for scaling systems but too restrictive since everything is immutable...

So the model of the future, in my opinion, should look more like https://www.val-lang.dev/

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p26...

This is less restrictive and also very scalable IMHO. Value semantics, basically.


> I think functional programming is great for scaling systems but too restrictive since everything is immutable...

It's how things work at scale too.

Source control? Git

Money? Ledgers

Distributed computing? Map-reduce, Raft


Lol. True. Concurrency without stopping...


What's great about it for me is helping me stay focused on one thing at a time. Composing and writing pure functions is easier for me to reason about as someone with an easily distracted mind.


interesting examples, but not compelling, i.e. procedural code to do the same thing would also work (and might be easier to understand for some).

the thing i loved about FP was the elimination of exception handling

the thing i disliked about FP was how complex classes were implemented as bizarre templates of structured list subtypes - too much focus on the data types and structures of the implementation, obscuring the domain-model (aka business) types and structures required for human comprehension.

disclosure: scala, a few years ago

there is a happy medium...theoretically


As far as I can tell, with this async approach you lose what I consider one of the main benefits of native async functions: if an error is thrown, it shows up in the "reject" path of your promise. This is very useful and is great at preventing "random code" in some Task from causing havoc.

I'm not that great at functional programming, and perhaps this is a non-issue, but I feel there could be some way to achieve the same results without losing that ability.
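For comparison, the native behaviour being referred to (any throw inside an async function lands on the rejection path):

    // With native async/await, a stray throw is funnelled into the
    // returned promise's rejection path rather than escaping.
    async function work(): Promise<string> {
      throw new Error('boom'); // "random code" misbehaving
    }

    work().catch(err => console.error('caught:', err.message));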


One downside is that you now have a much higher bar for hiring talent to do trivial things. If you don't understand FP at a fundamental level, you will have a hard time finding people to work on your systems. Junior/senior engineers are meant to be fungible to an extent. This raises the bar to a whole new language: this isn't JavaScript, it's an entire conceptual framework on top of JavaScript.

EDIT: That being said, I really want to read this book!


Obviously the best part of Functional Programming is arguing about the pedantry of whether a language or construct is actually "functional"


Just watched a video (1) about functional programming, and this is the gist of it (for those like me who don't know).

Three main ideas of functional programming:

1. Keep your data and your functions independent

2. Avoid changing the state of variables; declare new ones instead

3. Accept functions/references as parameters (see the sketch below)

(1) https://m.youtube.com/watch?v=dAPL7MQGjyM&t=3s
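A tiny TypeScript sketch of all three (my own illustration, not from the video):

    // 1. Data and functions kept separate: plain data in, plain data out.
    interface Item { price: number }

    // 2. No mutation: derive new values instead of changing existing state.
    const applyDiscount = (items: Item[], pct: number): Item[] =>
      items.map(i => ({ price: i.price * (1 - pct) }));

    // 3. Behaviour passed in as a function parameter, not hard-coded.
    const total = (items: Item[], value: (i: Item) => number): number =>
      items.reduce((sum, i) => sum + value(i), 0);

    total(applyDiscount([{ price: 100 }], 0.1), i => i.price); // 90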


To me it just seems like different syntax for different types of abstraction, not based on the OP, but from my own trials. Depending on how the calculation needs to be handled, one or the other may be more useful. As with all conceptual constructs, there's likely more than one conceptual projection in which it can be viewed with maximum clarity.


Excuse me sir, do you have a moment to talk about Functional Programming?

I love FP and I think it makes writing robust software easier. However, I don't think the benefits can be explained well. The only way to see it is to give it an honest try - in a true functional language. This is too much of an upfront commitment for the unconvinced.


I’m a fan of pure programming because of its predictability and testability. And I’m a fan of chaining array functions to make a data processing pipeline obvious. I like how these can be opt-in and don’t require a totally different paradigm/language/architecture.

I’m guessing this may be seen as a part of FP but not the core?


The dirty secret is that TypeScript is going to become the most popular language due to its dirty pragmatism combined with some very elegant features. It is definitely a get-shit-done language. TypeScript without the JS toolchains (for example, compiled to WASM) would be very good.


I do FP in JS, but I don't use Result or Option types or pipes because they are not part of JS's lingua franca.

But I expect everyone in JS to be comfortable with `Array.prototype.map`, `Array.prototype.reduce`, and closures, as these are core tenets of the language.
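For example, nothing beyond the language itself:

    // Plain-JS FP lingua franca: map, reduce, and a closure.
    const words = ['map', 'reduce', 'closures'];
    const lengths = words.map(w => w.length);             // [3, 6, 8]
    const totalLen = lengths.reduce((a, b) => a + b, 0);  // 17
    const startsWith = (p: string) => (w: string) => w.startsWith(p);
    words.filter(startsWith('m'));                        // ['map']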


There really needs to be a clearer separation between functional programming (a very unclear and ambiguous term) and pure functional programming (formerly just "functional programming": short and well defined, but with huge implications)


Yes, but pure means different things in different contexts. For language designers, purity is about the language itself, not what you're doing in it; e.g. Haskell is still a pure language even when you're reading and writing files. I don't think that's as useful a definition as the lay definition of purity.


I don't think that's really true. It's called "programming". Even Haskell is not a 100% pure functional language, since you have escape hatches. I think we should focus on what you do with the language, so "functional programming" is pretty clear as a term, no?


omg, so much negativity here...

you should read the previous articles from the author, eg. https://jrsinclair.com/articles/2022/javascript-function-com...

then

https://jrsinclair.com/articles/2022/what-if-the-team-hates-...

they would address some of the concerns and critiques people raised on this topic here.


I wonder if the return of value semantics in languages lessens the appeal of FP. Localized mutation lets you manage side-effects but you don’t need to bring in an esoteric discipline of math to harness it.


How do you know if you're doing localised mutation?

Rust and Haskell are the only languages I know which help with this.


I was thinking of Rust, yes, but also Swift, which has fewer restrictions on ownership but still requires exclusive access when modifying variables
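The discipline, approximated in TypeScript (nothing enforces it here, unlike Rust or Swift):

    // Localized mutation: mutate a fresh local copy, expose only the result.
    function sorted(xs: readonly number[]): number[] {
      const copy = [...xs];        // `copy` is the only reference to this array
      copy.sort((a, b) => a - b);  // mutation, but unobservable from outside
      return copy;
    }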


Functional programming is to a programmer what category theory is to a mathematician.

Category theory is very good at describing the kinds of objects we tend to care about, so it is sensible to flip that around and ask what kinds of things FP is good at describing.

How naturally we can express our code is a very strong measure of how well we understand it, both in terms of what it can and can't do.

Imagine having blocks that stick together like Legos, but you can't see the dot lattice because your vision is too poor. FP is about seeing the dots.


Instead of

    const x = await promise1(someArgs);
    const y = await promise2(someFunction(x));

we do

    function processor(monad, args) {
      return monad.unit(promise1(args)).map(someFunction).then(promise2);
    }

That's all a monad is about.


Please fix the font used for code examples. Unreadable.


>"What’s so great about functional programming anyway"

Well, nothing. It is one of many paradigms. It has its place just as the others.


We don’t need functional programming!

- OOP developers, as their languages gradually port all functional programming language features


Never go full FP in JavaScript. I had a tech lead try to reinvent Haskell in JavaScript, and everyone else in the team had such a difficult time dealing with their code that it was ignored as often as possible. When you start seeing shit like a “blackbird” function to compose some other functions, and you have absolutely no clue wtf the point of it is even after reading the source and genuinely trying to understand, you start to feel stupid and a bit resentful lmao.
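(For the curious: the "blackbird" is the combinatory-logic bird B1, which composes a unary function after a binary one. A sketch of what such code tends to look like; the example names are made up:)

    // blackbird (B1): compose a unary function after a binary one.
    const blackbird =
      <C, D>(f: (c: C) => D) =>
      <A, B>(g: (a: A, b: B) => C) =>
      (a: A, b: B): D =>
        f(g(a, b));

    // e.g. "sum, then stringify":
    const sumThenShow = blackbird((n: number) => `total: ${n}`)(
      (a: number, b: number) => a + b,
    );
    sumThenShow(2, 3); // "total: 5"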


I have grown impatient with writing which purports to educate or inform people who do not understand a topic yet is completely incomprehensible unless one understands the topic.

Practice some "Feynman technique" on your subject, or resign yourself to producing a suggested reading list of works that actually explain the topic, or please for the love of cognition stop masking signal with your noise.


write more robust, maintainable code


I always thought the main benefit of FP is thread safety, at the cost of a lot of data copying (though persistent data structures with structural sharing keep that copying in check).


> (great? (programming(functional)))

  (allows (ideas (elegantly(expressed))))


(-> except appropriate use of macros makes it easier to both write and read)


Beautiful website. The transitions between the code blocks are a very nice touch


The science says the math approach to programming is wrong. That is why the langs with very little need of understanding of math are the popular ones. It's the langs where people can be more productive and more easily be able to do things like code reviews. The topic is just silly.


Just want to point out that (as of 16:44 UTC 17th Nov) there are 52,268 words on this page. So far no-one has called anyone a Nazi. It's fascinating and good that people care about this stuff. Software engineering would be very boring if there wasn't room for opinion.

My opinion is that if you were to summarize the original article in only one of its words, it would be the lovely word "boffin". The right amount of boffinry can be a rewarding and beautiful thing. Sometimes, though, "C'est magnifique mais ce n'est pas la guerre" ("it's magnificent, but it isn't war").


This post actually confirmed my opinion about functional programming :D. Probably not in the way the author intended.


My view of FP is that it's not compatible with the concept of modularity or separation of concerns.

If your priority is to ensure that similar concepts are grouped together inside flexible and composable modules, then co-locating related state and logic is necessary.

Unless a module can fully encapsulate (have full ownership over) the state it operates on, it cannot fully separate its concerns from those of other modules. If you have a global state store and any module can potentially read from and write to any part of that state, then you are bound to end up with modules that have overlapping responsibilities. These 'modules' won't be very modular or composable because, whenever multiple modules share the same state, they invariably end up needing to be aware of each other's behaviors (to avoid writing conflicting state changes, or to know when they should update themselves).

Underlying the FP philosophy is the notion that there is no value in encapsulation; that there is no value in assigning exclusive ownership over specific state to a specific module; that such ownership is dangerous because it makes the module unpredictable from the outside. What FP proponents fail to realize is that once they've rejected the philosophy of state encapsulation, they've implicitly accepted the philosophy of state multi-ownership. Whenever you have shared ownership of state, it becomes unclear who is responsible for that state, and this leads to potential conflicts. Avoiding conflicts forces modules to become aware of each other's existence and interactions, and this reduces their modularity.

The functional programmer's need for modules to be fully predictable with each interaction was born out of a need to micromanage them. On the other hand, OOP proponents understand the value of module autonomy and see them as black-boxes which need to be trusted to do their work. OOP proponents understand the value of simple interfaces when it comes to modularity and they understand that such simple interfaces are only possible if the module has sufficient autonomy and this requires it to have full control over its state.

An analogy for FP in the real world would be like a computer manufacturer relying on chips from a chip manufacturer, but the computer manufacturer insists that the chip manufacturer must produce chips using only the materials provided by the computer manufacturer... Let's say that, eventually, a new superior semiconductor material is discovered; the chip manufacturer will not be able to use it to improve their chips. Although it has autonomy over its manufacturing processes, it is constrained to the materials provided to it by the manufacturing company; therefore it cannot take advantage of the innovation behind the scenes without the computer manufacturer also updating its processes and materials. The chip manufacturer is predictable but it is not modular; it cannot improve itself without involvement from the computer manufacturer.


> To hear some people talk about functional programming, you’d think they’d joined some kind of cult.

I know that the chapter and book is coming at the topic pro-functional, and there's nothing wrong with that. But boy does that sentence ever ring true as a web application developer.

Functional brings with it a set of core principles and values that provide a lot of benefit. So does OOP. So does procedural. I'm a fan of always picking the right tool for the job at hand.

And JavaScript certainly brings with it a set of Functional-style features. Use them liberally, by all means. Let them solve problems for you.

As a professional, learn as much as you can about Functional programming. Also learn as much as you can about OOP, imperative approaches, procedural and any topic that will be relevant to your field. Be the best professional that you can possibly be.

But don't, for the love of God, try to write a modern frontend web application in JavaScript (or TypeScript) from a "purely" Functional-first point of view. Not only will you fail and hate your life, but you will throw out lots of babies with tons of bathwater.

JavaScript is not even close to a Functional language. It doesn't matter that Brendan Eich wanted to bring Scheme to the browser; that's not what he ended up creating. Not only is it far from being a functional language, I would argue it is downright antithetical to FP.

- Everything is mutable, even functions themselves.

- Not only does JS have a mutable global namespace, but modern frontend apps often have to deal with no fewer than FOUR shared global states (the global namespace, a state-management store a la redux, local storage, and server-side persistence). If cookies still count, that makes five.

- Functional programming favours primitive types. Simplicity is the name of the game. Most client/server applications have to deal with rich domain models. Functional is great for "processes", while OOP is great for "things" (i.e: rich domain modelling and grouping data structures with the logic that mutates them).

- The asynchronous event loop brings side effects as a core feature of the language. Yes we have strategies for putting our side effects in a corner, but the entire application is built from the ground up around the concept of side effects. Instead of taking simple input and producing simple outputs, we send things off to the server and/or update a global state store so that multiple things can update independently. Side effects are integral to a complex web application, no matter how many pure functions you write (and you should write pure functions whenever you can, they are simple, easy to test and debug).

None of the above should matter except that I have come across many FP "purists" who come at the topic from a borderline religious angle.

I think this has to do with how trendy our industry is. OOP was very hot in the 90s and 2000s. FP started to gain traction, at least in web development, with the advent of Scala and Douglas Crockford's JavaScript: The Good Parts. There is a tendency in our industry to think in binary terms: we saw lots of problems with OOP applications and FP promised a solution, therefore OOP = evil and FP = a gift sent down from the heavens to save our souls. Nothing could be further from the truth, and trendiness is the root of all evil IMO.


Nice writing style!


Also the markup font is a little wonky and `Ok` ends up looking like `0`.



