Why I never finish my Haskell programs (plover.com)
296 points by AndrewDucker on Sept 3, 2018 | 217 comments



I think many beginning Haskellers have this problem. To overcome it, my advice is to write Haskell code with the knowledge that you can re-write it more readily than you can re-write code in many other languages. Write the code that fits the immediate application, and rely on the type-checker to make it straightforward to refactor when the need arises.

I think that’s what many experienced Haskellers would say is the language’s best attribute for getting things done, that the type system makes it possible to refactor even a large program with the confidence that all the parts you replace will slot perfectly back into the original structure. Or that changing the core structure itself will result in a new structure that has all the right slots for all the various bits and pieces that need to slot into it. Having the confidence that you will be able to refactor painlessly, you should be less concerned with finding the perfect abstraction up front. Write code, make it work, then make it better.

And as others have mentioned, yes, the right abstraction and appropriate level of generality will become easier to recognize as you write more Haskell. Go with the first decent implementation you can come up with, and as you gain experience that first implementation will more and more often turn out to be a good one. In the meantime, run HLint and read more Haskell code, and you’ll quickly pick up most of the generalizations that really make sense to use in typical applications. The more experienced you get, the more confident you should become that significant time spent generalizing code to no real purpose is pointless and tends to result in code that both reads worse and runs worse than what you started with.
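
As a rough illustration of that last point (the Invoice type and both functions are invented for the example), compare a concrete helper with a prematurely generalised one:

  data Invoice = Invoice { invoiceAmount :: Int }

  -- Concrete version, written for the one caller that actually exists:
  totalOwed :: [Invoice] -> Int
  totalOwed = sum . map invoiceAmount

  -- Premature generalisation; nothing uses the extra flexibility,
  -- and the call sites read worse:
  totalOf :: (Foldable t, Num n) => (a -> n) -> t a -> n
  totalOf f = foldr (\x acc -> f x + acc) 0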


In my case, I think it's because large programs always end up containing subproblems that can be better expressed in other paradigms than functional. And it becomes frustrating when I can't shoehorn them to the Haskell way of doing things.

My favorite languages are, for this reason, multi-paradigm: Common Lisp, Mozart/Oz, Scala and C++. It's a bit like building La Sagrada Familia (and that's why it's depicted on the cover of CTM). If you want a superb solution, you end up using many styles, like Gaudí did.

But I reckon the future will move towards more provably correct solutions, and we will be using things closer to e.g. Idris. Hopefully that's orthogonal with homoiconicity.


I agree with your sentiment, but I find the analogy to La Sagrada Familia funny, in that it has still not been finished, it's a black hole for money, and you can see, clearly, that it's a hodge-podge of styles... All characteristics that might not be good for your software project.


It's not my analogy, it's CTM's [1], a book that has cult status in some CS circles.

[1] https://www.info.ucl.ac.be/~pvr/book.html


Which large software project has ever been 'completed'?


A lot of single-player (not online-serviced) computer games could be regarded as 'completed': once they're out on the store shelves, it's pretty much done (although with the rise of Steam and continuous updates this is less the case).

That’s probably why many game programmers are more pragmatic in their programming practices: they have a concrete deadline to pursue, with a predictable subset of hardware for their program to run...


Point taken, a lot of software in the pre-internet era could be considered completed - on the other hand there are always still going to be issues, so you could also claim that these projects are just abandoned.

In general I think software is more like a house than an art piece - it keeps adapting as long as people use it.


Strike the "large" in that sentence. I somewhat think that writing software is like painting. At some point you decide to stop, but you're never truly done. Also cf. Leonardo da Vinci: "Art is never finished, only abandoned."


interesting, I didn't even read your comment before writing my last reply, which says very much the same, just with less da Vinci in it ;)


Mozart/Oz??? I never heard of it. Now there is another rabbit cave to spelunk in my search for the perfect programming language.


It's an academic-only language, mostly used for teaching students about programming. Anyone sane would not use it in production, as the output would likely be "ill-typed".


There's also AliceML, an evolution of SML with Oz-like semantics. Sadly not in very active development.


So you're saying: don't use an unfinished programming language to write unfinished programs ;)


Completely agree with you, and I'd like to add that this is true for any programming language really. Often I find people, especially juniors, more obsessed with how they solve a problem and the design patterns they use than with actually solving the problem. To be honest, I was once such a person.

The best advice, as you stated it, is to go with the first instinct, and then refine it, IF needed. It might turn out that the code you wrote was redundant anyway.


I know Elm isn't the same as Haskell, but these points are also used as a selling point for Elm. And more often than not, as with Haskell, the type system makes it possible to refactor a large program with the confidence that all the parts you replace will slot perfectly back into the original structure OR the compiler will yell at you until it does :).


The drawback with Elm is that it has a trigger-happy BDFL who doesn't care about breaking everything all the time. Haskell has a long history and has broken very few things over the years (n+k patterns come to mind).


It’s an unstable pre-1.0 language that hasn’t even finished its core API.


Started looking at Elm this weekend as a possibility for compiling into other languages (JavaScript/Elixir). It's too bad every release requires most packages to be rewritten. The good news is that releases are getting slower and slower, so you might go a year without any releases (even minor version patches). I guess that counts as LTS.

Elm would be so awesome if they just forked it at 0.19 and said "no more breaking releases". But I doubt that will ever happen.


Well, Elm is clearly still figuring out its core APIs. 0.19 refactored a lot of things into https://package.elm-lang.org/packages/elm/browser/1.0.0/, for example. And Elm's package.json equivalent was completely changed. And major questions like server-side rendering aren't even answered yet.

Seems like the opposite of a good time to release 1.0.

Of course, this means most people won't have an appetite for unstable Elm, but that's nothing new. Complaining that something is still 0.x unstable seems odd to me beyond the selfish "I wish I had it sooner."

A lot of HN's comments around Elm remind me of when I was a kid absolutely livid at Bungie for not releasing Halo 2 sooner. Doesn't Bungie know my summer just started? I don't want a fucking beta, Bungie! Me want game now! Bungie must be incompetent because everyone on my gaming forum wants it now as well!


I'm reminded of one of my favourite HN comments on Haskell:

'There's something very seductive about languages like Rust or Scala or Haskell or even C++. These languages whisper in our ears "you are brilliant and here's a blank canvas where you can design the most perfect abstraction the world has ever seen."'

https://news.ycombinator.com/item?id=7962612

Although it's not 100% applicable in this case (unless you argue that the cost is to your own time) - I think the sentiment is perfect.


The blank canvas analogy is a good one but for the wrong reason. Think of the aspiring novelist with a blank stack of paper in front of them. The problem is not the freedom they have, it's their lack of discipline. It's far easier to write PR for an agency or injury reports for a sports blog than it is to write a novel.

The reason so many programmers struggle with this problem is because nobody is paying them to write CRUD applications in Haskell. If they were, they'd find it more than adequate to the task and the job would be easy and painless.

People only go down rabbit holes when they don't have a manager breathing down their neck all day. It takes real discipline to create really good hobby projects on your own, regardless of language.


> People only go down rabbit holes when they don't have a manager breathing down their neck all day. It takes real discipline to create really good hobby projects on your own, regardless of language.

I agree, but perhaps discipline is not necessarily complete restraint... I think some rabbit holes are worth exploring, they can separate outstanding projects from mediocrity. Perhaps it's about choosing the right rabbit holes in the right projects, another way of achieving that is minimalism where you limit the scope and not the depth.


Or you just fail due to your own mismanagement more than once, and learn that structure is not for the boss but for you, yourself. Happens more often when the style of abstraction is restricted for performance reasons (i.e., no Java-like OO and so on) and you run into something like a buffer mismatch, like I once did (to be fair, it was a bug that reproducibly depended on the presence of explicit synchronization calls, and there was no documentation about that possibly changing anything).


It takes experience to get a sense of where to spend that time. There are parts in any project that are write-only fluff that will not be changed or extended much once written. Then there are the others where you know that things will need to grow and change a lot. There, it is incredibly important to find a good starting point from which the code can grow with as little pain as possible. A day or two to find the right approaches there can pay off nicely.


how to build the foundations of an ever-changing building in an ever-changing environment.

not specifically directed at the parent comment, but it led me to write this: I think the discussion here is too focused on business/productivity. I feel like different people have different approaches to coding. Some of us really need to write beautiful code, and we are ok with "wasting" a lot of time thinking about fluff until we recognise that it really is fluff, or come up with interesting ideas simply because we spent time thinking about something. Maybe you care about the end result, maybe you care about the code, maybe you care about both. And even when you care about one or another, you do it in different ways. We start optimizing algorithms, then we build frameworks and "optimize" workflows, and then we want to optimize the approach to coding. But I don't think that's universally doable as with algorithms. We end up with great insights and confused discussions.


That clicked a lot, the first sentence: if code architecture is like a building with many extra dimensions, then programming has this quantic challenge of getting beyond the fourth dimension of time to an intuition of how to build the foundations and subsequent DSL design levels of the code building's own future state, as if the future state exists in the present along with all other possible building states (that fit project[ed] constraints), since time is not a restraining cognitive concept in those higher dimensions of architectural mathematical representation.


I like playing with ideas and code as much as the next guy. But I have this distinction between things I do to learn and things I do to create results useful for others. For the latter I need to find efficient ways to create what I need to create, or I will take too long and the need is gone and my time is wasted.

Ultimately I want to be able to tell the world that my work had a positive impact in some form. I do not want to be the guy who did useless things. This is my main motivation these days.


> The reason so many programmers struggle with this problem is because nobody is paying them to write CRUD applications in Haskell. If they were, they'd find it more than adequate to the task and the job would be easy and painless.

I get paid to write applications in Scala that are often mostly CRUD. The temptation to overengineer is still there and still constantly needs to be fought.

I've seen the same thing happen in plenty of Java codebases too, mind. There it tends to be less "make this work for any possible Monoid" and more "make it work for any imaginable form that the user might want to build", but I think it's another facet of the same issue.


> People only go down rabbit holes when they don't have a manager breathing down their neck all day.

I've experienced this first hand. If I can't find a basket of Easter eggs within the first 30 minutes of my journey into a rabbit hole, I have to backtrack and move on to something else. I do not feel this limitation when I'm at home, doing things of my own volition.


Better than a manager is a customer. Though any stakeholder is probably better than none, which I believe is your main point. (And one I agree with.)


I've always found interacting with clients to be more satisfying than interacting with a manager. Managers can end up feeling like unnecessary middlemen, which presents a conundrum because they are your manager.


I think Rich Hickey described it pretty well when he said (paraphrasing from [0]) that static languages present the programmer with neat little puzzles to solve, which feels like writing applications when really we're just creating intricate types and abstractions. I think he has a point, but I certainly don't want to give up the benefits of static languages, like being able to catch all of my silly errors.

[0] https://youtu.be/2V1FtfBDsLU?t=39m44s


Former Clojure programmer here. I eventually learned I'd rather write neat little puzzles about static types than solve frustrating little puzzles on the root causes of runtime exceptions.


His argument begs the question.

If you pre-suppose that describing types and finding appropriate abstractions aren't "writing applications", that is, they offer no value in the process, then spending time doing them is of course solving neat little puzzles to no benefit.

On the other hand, if you pre-suppose that types and abstractions offer some value to the process of writing an application, then solving those puzzles is adding value.

Calling types and abstractions a "neat little puzzle" is a diminution we could apply to other aspects of writing an application: implementing an algorithm to correctly process some data is a neat little puzzle presented by our test cases.

Rich Hickey would need to demonstrate the lack of value of types and abstractions for producing applications to make a solid argument here, and that I believe is a steep uphill battle, because Clojure would be a pretty terrible language if you took out the ISeq abstraction, and there's an awful lot of effort spent shifting the puzzle of "is this function being used right" from the type system game engine to the test harness game engine.


Time spent with a type system buys you compiler-checked proofs of some properties. The important question is whether the time is worth the benefit.

One of many things that I love about the TypeScript type system is that it gives so much power to the author. It is the most expressive industrial type system I know of, but it also makes it easy for the programmer to say "I can't prove that this is true, just trust me that it is".

A nice side benefit is that this is also practice, so you can also gain skill. This is how I justify sometimes spending much more time with the type system than it would otherwise be worth when learning a new language or hacking on a personal project.


> One of many things that I love about the TypeScript type system is that it gives so much power to the author. It is the most expressive industrial type system I know of, but it also makes it easy for the programmer to say "I can't prove that this is true, just trust me that it is".

I really wanted to like TypeScript, but once you're thinking in HKT it's incredibly frustrating to have to manually translate a function into its flattened expansion (just like it's incredibly frustrating to use a type system without generics once you've used one that has generics). Every serious industrial language allows a programmer to say "I can't prove that this is true, just trust me that it is"; I suspect many people who struggle to start out in Haskell would do well to make a little more use of unsafeCoerce and unsafePerformIO (they would no doubt give themselves runtime errors, but sometimes the easiest way to understand why you had a type error is to run the code and see what the values are at runtime).
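
For what it's worth, a rough sketch of the kind of quick-and-dirty inspection I mean (the debugShow helper is made up, strictly for debugging):

  import System.IO.Unsafe (unsafePerformIO)

  -- Print a value from "pure" code while debugging. Evaluation order and
  -- the number of times this fires are not guaranteed, which is exactly
  -- why it should not survive into production code.
  debugShow :: Show a => String -> a -> a
  debugShow label x = unsafePerformIO $ do
    putStrLn (label ++ ": " ++ show x)
    pure x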

(I made a small hobby tool with ScalaJS and was amazed how easy it was, so I'll be advocating for that over Typescript).


And most often the most beneficial thing is to spend more time on the problem and its domain, as brilliantly illustrated in this comment https://news.ycombinator.com/item?id=17906700


Go is static but doesn't encourage this. The more experienced I've become the less I find myself doing this in languages like C++ either.

It's seductive until you start getting excited more about higher order reasoning about algorithms and about what programs really accomplish and less about just showing off. A breakthrough program written in some uncool language like VB.NET is more impressive to me than yet another ________ written in Haskell or Rust with perfect abstractions.


I think a key part is never losing track of the real goal so you can remind yourself what percentage of your time is going to things which anyone who isn’t a developer on your project would care about.


I agree, but I feel like the fact that it's a puzzle is a sign the language could be improved. Describing generic types shouldn't have to look like the STL's or Boost's implementation details. Those are nightmares of "neat puzzle solving".


There's something very seductive about a chance to comment on posts like this one, for a particular group of people. These posts whisper in your ears: "the fact that you've had a hard time and failed to understand, learn and harness these exotic languages does not mean that you are not brilliant. Here's a blank canvas for you to convince others of the same, perpetuating the perspective convenient to your ego, which got bruised by the hard task in the past." (All in good humor.)

Jokes aside, I code professionally in Haskell every day, all day long. I'm very productive, I get more productive, and I love the language more every day. It's all about realistic expectations and commitment (just like anything that's hard). You cannot expect to learn and be productive in Haskell within a week, a month or even a year. You need a lot of patience and dedication, but you will be rewarded, that I can promise.


Oh yeah, as I mentioned in another comment, this is me. The result is that I have a bunch of 20% finished Haskell projects, and a bunch of finished Rails projects that accomplish the exact same task.


> 'There's something very seductive about languages like Rust or Scala or Haskell or even C++. These languages whisper in our ears "you are brilliant and here's a blank canvas where you can design the most perfect abstraction the world has ever seen."'

Do they? I think the Haskell community online is pretty crappy for this and other reasons, but GHC/Haskell are at their best when you actually use them to write programs instead of doing things like this.


Coming from Prolog I'm loving Haskell, but I'm seeing a special kind of "worse is better" at work here: projects using innovative and sophisticated languages have a high risk of running into an obsessive "getting it right" and holier-than-thou mentality, resulting in them often never getting finished, and even if finished, having a high barrier to attracting contributors. It's unfair and embarrassing, but shitty languages like JavaScript and PHP often allow you to be more utilitarian and churn out good-enough code because you're not emotionally attached to them, and aren't under peer-group pressure to express e.g. algebraic properties in their purest form or some such.


I'll put a slightly different spin on this: At my current job the system I'm writing is in PHP (for reasons...). In its core it's all about domain modelling with some workflow sprinkled on top. Haskell would be a near perfect fit, but it can be done in PHP. However the code itself looks very Haskelly. I have a growing library of domain specific types that are composed into larger and larger tree shaped ADTs all the way to the top level entities. Validation is mapping/folding these ADT trees where nodes are (type, data) pairs that are mapped to their instantiation or its failure. The workflow bit is essentially a couple of FSMs with conditional transitions where the condition is usually the existence of some type that fulfils its constraint. Etc...

Reading it back, it sounds analogous to the classic saying that one can write Fortran in any language. In my opinion, having experience with Haskell gives you a mindset first and foremost. When you bump into a problem where this mindset is a good fit, you can use that knowledge with whatever tools are at hand.
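
A rough sketch of the kind of domain types and edge validation I'm describing, in Haskell (the type names are invented):

  newtype Email = Email String
  newtype Age   = Age Int

  data Customer = Customer
    { customerEmail :: Email
    , customerAge   :: Age
    }

  -- Validation maps raw input to a well-typed value or a failure,
  -- so the rest of the code only ever sees already-valid data:
  mkAge :: Int -> Either String Age
  mkAge n
    | n >= 0 && n < 150 = Right (Age n)
    | otherwise         = Left "age out of range"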


I definitely find there's a strong relationship between language complexity and bikeshedding. When you have a big language like Haskell or Scala, it's easy to get distracted from solving the actual problem by trying to do it the most "proper" way possible. This is also how you end up with design astronautics in enterprise Java as well where people obsess over using every design pattern in the book instead of writing direct and concise code that's going to be maintainable.

Nowadays I have a strong preference for simple and focused languages that use a small number of patterns that can be applied to a wide range of problems. That goes a long way in avoiding the analysis paralysis problem.


Reminds me of one of Rich Hickey's talks where he states that people just love to solve puzzles, and that complex languages with, for example, demanding type systems trick people into thinking they are adding safety or value when in fact they're just outsmarting themselves.

Often when I write something in Haskell I have this feeling. It feels satisfying to build up nice types and constructs but I don't know if it at all pays off in any objective or empiric sense. I can't really tell if I've invented the problem that I just solved.


> I can't really tell if I've invented the problem that I just solved.

Priceless observation, nicely done!

I was quite inspired by Rich Hickey's Simple Made Easy talk when I listened to it last year. I think that's the one you're referring to. Excellent food for thought in that talk.


I think the very fact that this happens in Java - a deliberately simplistic language - is proof that it's not a problem with the language itself. If the language doesn't support particular constructs, all that means is that people will bikeshed over which pattern to use instead of which language construct.


I don't disagree in general but is Haskell a big language?


Haskell 2010 is pretty small. Comparable to say Clojure, but more complex than Scheme. GHC Haskell with the kitchen sink of extensions turned on is big. Very big.

Sticking to Haskell 2010 with a few extensions that make known behavior more consistent (GADTs, NoMonomorphismRestriction, and a few others in that vein) is the best bang for buck in my experience.
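
For reference, extensions are opted into per module with pragmas at the top of the file, so the rest of a codebase can stay close to Haskell 2010. A minimal module header might look like:

  {-# LANGUAGE GADTs #-}
  {-# LANGUAGE NoMonomorphismRestriction #-}

  module Example where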


The complexity of a language has nothing to do with its size.

Brainfuck or Whitespace are two minuscule languages that produce the most impenetrable sources.


That's true. I meant size in the hand-wavy sense of "this feels complicated" (although the syntax part is also true in this case). For example there's a GHC extension to overload the meaning of a type declaration (DataKinds). This technically doesn't introduce new syntax but it's definitely a "big" extension in my book.


> I'm from Microsoft and when you see Microsoft documentation it often says, you know, x y or z is a rich something, right. In this case Haskell - or ghc's version of haskell is a rich language. What does this mean? Sounds good, doesn't it? But it always means this, right. That it's a large, complex, poorly understood, ill documented thing that nobody understands completely.

From a great talk by Simon Peyton Jones https://youtu.be/re96UgMk6GQ



Haskell has a ridiculous number of obscure operators. Here's a list of "common surprising" operators in Haskell:

https://haskell-lang.org/tutorial/operators


I don't think it makes sense to characterize Haskell as "big" on this basis, because 1) it is trivial to define an operator in Haskell, so there's bound to be a lot of them and 2) even the "standard" operators typically have a simple definition (e.g. https://www.stackage.org/haddock/lts-12.9/base-4.11.1.0/src/...).
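
To illustrate both points: defining an operator is no different from defining any other function, and the classic definitions of the standard ones are tiny (the pipe operator below is just made up):

  -- A made-up "pipe" operator; any two-argument function can be given
  -- an operator name.
  (|>) :: a -> (a -> b) -> b
  x |> f = f x

  -- The classic definition of ($) from base is similarly small:
  -- ($) :: (a -> b) -> a -> b
  -- f $ x = f x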


On the whole, I consider user-defined infix operators to be a huge mistake. While the few common ones are great, the ability for every single library creator to add their own infix operator turns into a mess in the long run.


They're fine inside a limited domain-specific scope, just don't go importing operators from many libs willy-nilly.


It’s very hard to convince people to keep them in that limited scope.


Even if the operators themselves don't count as "part of the language", the complex precedence rules around them certainly should.


In what way? Those are defined as part of the operator. I usually parenthesize them anyway just like I would for an equation.


Those are mostly library functions, not part of the language.

It doesn't seem any more reasonable to use them to declare Haskell a complex language than it would to have the existence of, say, a linear algebra library providing mathematical operators for Forth mean that Forth is a complex language.


Depends on what library they're from.

Is the STL part of C++? Sure, it's a library, but it's a standard library - it should be there in every conforming implementation. Personally, I think of that as part of the language.

Is some vendor's RS232-port-handling library part of C++? I would say no.

In the same way, I think that Java's standard library is part of the language, and is in fact the strongest selling point of Java.


I agree that this is one of the things that makes Haskell harder/scarier to learn, but I don't think that it makes it bigger.


Those are part of the standard library rather than the language itself though. You can (and a lot of people do) define and use your own standard library instead (hmmm... maybe a rabbit hole of its own? Although it seems companies with legitimate business needs do this as well so who knows).


They are just functions.


None of these are remotely obscure unless you don't know functional programming.


Large in the sense that it admits many approaches to solving problems.

https://www.willamette.edu/~fruehr/haskell/evolution.html


It's meant as a joke but as the author of that page points out, I think there's real pedagogical value in understanding all of the examples. You'll learn quite a bit of CS fundamentals independent of Haskell itself.

Then please do as the professor does when it comes to production code.


I think the complexity in Haskell largely comes from its advanced type system and laziness. For example, pervasive use of monads in Haskell is a direct result of encoding side effects using the type system. You can't just put a log statement in a function, you have to do a whole design exercise of how to push it to the edge of the application.
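
A minimal sketch of what I mean (both functions invented for the example): adding even one log line to a pure function changes its type, which is what pushes the design exercise on you.

  -- Pure version: no way to log from inside.
  double :: Int -> Int
  double x = x * 2

  -- The moment you want a log statement, IO shows up in the signature,
  -- and every caller has to accommodate that.
  doubleLogged :: Int -> IO Int
  doubleLogged x = do
    putStrLn ("doubling " ++ show x)
    pure (x * 2)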


Eh... Monads aren't really just about side effects. They're really just a generalization of the threading macros in Clojure (sort of; at least I view them with the same motivation as the threading macros and their cousins in Clojure). As for logging, if it's really just a log statement I'll sometimes just do the equivalent of `unsafePerformIO`ing it and not worry about it. All depends on what you consider semantically meaningful behavior from the program.

On the other hand, with the rise of stuff like OpenTracing and structured logging I think the Haskell community was pretty prescient about treating logging as an explicit side effect.


I agree that monads aren't just about side effects, and lots of languages use monadic patterns. My point was that using monadic patterns is prevalent in Haskell specifically due to using the type system to track side effects such as IO. Clojure has monadic libraries like cats, that let you write Haskell style code, but they're not popular because in most cases you can solve the problem in a more direct way.


Well the monadic structure is always there, it's just a matter of whether you use it or not :).

For the most part I admire the Clojure community's focus on data and wariness of higher order abstractions. Whether it be classes, typeclasses, or higher order functions, if you can express it with just data it's almost always better and most communities would do well to remember that.

On the flip side when there is a need for higher order abstractions, I sometimes find Clojure's standard tools to be lacking. Speaking of monadic structure being there whether you use them or not, transducers are a great example. I find them overcomplicated in Clojure, which I think is due to focusing too heavily on using standard function composition to compose them. I regard this as a bit of a trick or a coincidence. When's the last time you composed something with a transducer that wasn't just another transducer as opposed to an arbitrary function? In fact if you instead use monadic composition (i.e. compose `a -> m b` with `b -> m c` to get `a -> m c`, in this case `m` is just `List`) you'll find that transducers are just functions of the form `a -> List b` rather than higher order functions. And yes that `List` remains there even though transducers work on things that aren't just concrete collections.
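
A rough Haskell sketch of the idea (the example functions are made up): a step is just `a -> [b]`, and Kleisli composition glues steps together without any higher-order wrapping.

  import Control.Monad ((>=>))

  splitWords :: String -> [String]
  splitWords = words

  duplicate :: String -> [String]
  duplicate w = [w, w]

  -- (>=>) composes a -> [b] with b -> [c] into a -> [c];
  -- here the monad is just the list monad.
  pipeline :: String -> [String]
  pipeline = splitWords >=> duplicate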

I've been meaning to try to push out a Clojure library that shows this but haven't gotten around to it. Maybe this thread will be the kick I need.


Yup, there are always trade offs with every approach. A monadic version of transducers would be neat, and it would be fun to contrast them with the HOF approach in terms of pros and cons. :)


Well hopefully the monadic structure would be hidden. The only difference would be a custom composition operator instead of function composition. You could interop between the two representations with conversion functions.


http://hackage.haskell.org/package/base-4.11.1.0/docs/Debug-...

For future reference, this isn’t a good argument for trolling Haskellers.


>These can be useful for investigating bugs or performance problems. They should not be used in production code.


Well yeah, because a logging statement in production can fail (e.g. the network connection drops). The type system forces you to deal with that fact instead of letting you write code that e.g. brings down your server unexpectedly because of some random log call.

Many would consider that a feature, but if you want to #yolo it anyway, like in most other languages, just use trace in prod and call it a day.
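
For the record, a minimal trace looks like this (addOne is just an example); the String -> a -> a shape is what lets it sit inside pure code:

  import Debug.Trace (trace)

  -- Prints the message to stderr when the result is forced,
  -- without changing the function's type.
  addOne :: Int -> Int
  addOne x = trace ("addOne called with " ++ show x) (x + 1)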


The type system forces you to solve this problem in a very specific way by structuring your entire app around pushing IO to the edges. There are plenty of other ways to address the problem that work perfectly fine in practice.

For example, you can specify what should happen if IO fails in your logging configuration. This handles the exceptional case consistently and in a single place without forcing you to structure your whole app around it.

What's more is that there really isn't a sane way to recover from such a catastrophic failure. If your database goes down, or you lose a disk, the only thing you can do is shut down the app. It's not like it's gonna keep humming along with the logging failing silently.

This kind of hyperbole, that you either solve all problems via the type system or #yolo, is precisely what makes the Haskell community so toxic in my opinion.


> For example, you can specify what should happen if IO fails in your logging configuration. This handles the exceptional case consistently and in a single place without forcing you to structure your whole app around it.

In Haskell you can write a function that does "unsafe" logging and handles exceptions if that's the behaviour you want. You can even make that function change its behaviour based on a config file if you really want to.
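
A minimal sketch of such a wrapper (the function name and log file are mine, not from any particular library):

  import Control.Exception (SomeException, try)

  -- Attempt to log; if logging itself fails, carry on without it.
  logSafely :: String -> IO ()
  logSafely msg = do
    result <- try (appendFile "app.log" (msg ++ "\n")) :: IO (Either SomeException ())
    case result of
      Left _  -> pure ()   -- logging failed; keep going
      Right _ -> pure ()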

> What's more is that there really isn't a sane way to recover from such a catastrophic failure. If your database goes down, or you lose a disk, the only thing you can do is shut down the app. It's not like it's gonna keep humming along with the logging failing silently.

That sounds like an argument for including the IO effect in your whole program's structure like people usually do in Haskell, no? If you change a function that doesn't access the disk (e.g. something that just grinds out a mathematical computation) into one that does access the disk by adding logging, you have a new set of possible failure scenarios to be aware of and want that to be visible.

I appreciate that this all sounds very theoretical but it can easily lead to real-world failures. I've seen "impossible" control flow, due to an unanticipated exception in a function that didn't look like it could throw one, cause production issues. I can easily imagine e.g. leaving data in a remote datastore like redis in a supposedly impossible state because your redis-error-handling code tried to first log that an error had occurred and that logging then failed because local disk was full.

> This kind of hyperbole, that you either solve all problems via the type system or #yolo, is precisely what makes the Haskell community so toxic in my opinion.

Every time I've bypassed the type system I've come to regret it, usually when it caused a production issue. It's not hyperbole, it's bitter experience.


All I can say is that we clearly have divergent experiences here. I've worked with statically typed languages for about a decade, I used Haskell specifically for about a year. I've since moved to Clojure, and I've been using it for the past 8 years professionally. My experience is that the team I work on is much more productive with Clojure than any static language we've used, and we have not seen any increase of defects in nearly a decade of using the language compared to similar sized projects we've developed previously in statically typed languages like Java and Scala.

If dynamic typing was problematic we would've switched back a long time ago.


What makes you think the Haskell community is toxic? I usually hear the exact opposite.


I find that the Haskell community is very friendly as long as you buy into their approach to solving problems. However, my experience is that if you question the effectiveness of static typing, or ask for evidence in support of the claimed benefits you'll get a very hostile reaction.

The comment above where pka snidely claims that using any alternative to types amounts to yolo is quite representative. He outright dismisses that any valid alternatives are possible, and he indirectly claims that people using other methods are being unprofessional and are cutting corners. That amounts to toxic behavior in my opinion.


> However, my experience is that if you question the effectiveness of static typing, or ask for evidence in support of the claimed benefits you'll get a very hostile reaction.

That's an interesting thing to say, seeing how representative figures of (specifically) the Clojure community get really defensive really quickly once somebody questions the effectiveness of dynamic typing - which I specifically didn't do.

> He outright dismisses that any valid alternatives are possible

I was merely dismissing your incorrect claim about logging in Haskell. And yes, not taking advantage of the type system when you are already programming in a language with a type system amounts to #yoloing it, in my opinion. Doing the same in other languages is fine, since you don't have another option really.


I'm not really sure what you're referring to by people getting defensive to be honest. People are just telling you that their experiences don't match yours.

You've already conceded that your solution is not appropriate for production, yet you keep saying the claim is incorrect. You can't have it both ways I'm afraid, and it's the definition of being defensive. You can't even acknowledge that your preferred approach to dealing with side effects has any drawbacks to it.

Meanwhile, the opposite of what you're claiming is the case in practice. Other languages allow you to use monads to encode side effects if you wanted to, but Haskell is the language that doesn't give you other options.


> You've already conceded that your solution is not appropriate for production

I’ve done no such thing.

> You can't even acknowledge that your preferred approach to dealing with side effects has any drawbacks to it.

It does, probably not the drawbacks you think of though (“conceptual overhead of types”?).

Generally, you seem to be awfully incompetent in a language you claim to have used for a year. That’s not a bad thing, but you don’t present your arguments with a big fat disclaimer stating that. If you did, I think there would be much less tension in these kind of discussions.


The link you gave literally says:

>These can be useful for investigating bugs or performance problems. They should not be used in production code.

This is perfectly inline with my original claim.

The drawback is that you're forced to structure your app around types, and this can lead to code that's harder for humans to understand the intent of. The fact that code is self consistent isn't all that interesting in practice. What you actually want to know is that the code is doing what was intended. Types only help with that tangentially as they're not a good tool for providing a semantic specification. If you don't understand that then perhaps you're the one who is awfully incompetent in this language.

>Generally, you seem to be awfully incompetent in a language you claim to have used for a year.

I disagree with the philosophy of the language because I have not found it to provide the benefits that the adherents ascribe, and I gave you concrete examples of the problems it introduces. I'm also not sure what you're judging my competence in it on exactly as you've likely never seen a single line of code that I've written in it. Perhaps if you stopped assuming things about other people's competence based on your preconceptions there would also be much less tension in these kind of discussions.


> I'm also not sure what you're judging my competence in it on exactly as you've likely never seen a single line of code that I've written in it.

And I don’t need to. I (and anyone proficient in Haskell, really) can infer your competence with somewhat reasonable confidence based on your comments, like this one.

People who’ve just read LYAH are able to contribute to production codebases already, no problem, but they still may not have even understood basic concepts, such as functors or monads (this is firsthand experience from work), let alone monad transformers, arrows or lenses - which I consider to be a good thing. Based on your comments (and not only this thread here), this is where I’d place you in terms of competency, but of course you are more than welcome to correct me.


>And I don’t need to. I (and anyone proficient in Haskell, really) can infer your competence with somewhat reasonable confidence based on your comments, like this one.

And this is the most hilarious aspect of Haskell community. You assume that the only reason people might not like the approach is due to their sheer ignorance of the wonders of the type system.

>People who’ve just read LYAH are able to contribute to production codebases already, no problem, but they still may not have even understood basic concepts, such as functors or monads (this is firsthand experience from work), let alone monad transformers, arrows or lenses - which I consider to be a good thing. Based on your comments (and not only this thread here), this is where I’d place you in terms of competency, but of course you are more than welcome to correct me.

Nowhere did I state that I have trouble doing any of those things. What I said is that I have not found any tangible advantage from doing it. I found that this approach results in code that's less direct and thus harder to understand. This is a similar problem to the one lots of Java enterprise projects have where they overuse design patterns.

My experience tells me that the code should be primarily written for human readability. Haskell forces you to write code for the benefit of the type checker first and foremost.


> And this is the most hilarious aspect of Haskell community. You assume that the only reason people might not like the approach is due to their sheer ignorance of the wonders of the type system.

No, I don’t, and when I happen upon somebody who is competent in Haskell and still prefers Clojure/Erlang/whatever then genuinely interesting conversations tend to happen.

This is not the case here though.


At this point your whole argument is just ad hominem. You're not addressing any of my points, and you're just making unsubstantiated personal attacks on my competence. I don't see any point in having further interaction.

Have a good day.


Doubting your competence is not a personal attack/ad hominem, so don’t try to twist my words.


[flagged]


This comment breaks the site guidelines in multiple ways. No personal attacks, and no programming language flamewars, please, on Hacker News.

https://news.ycombinator.com/newsguidelines.html


You know he can be wrong about some things but still make a valid point about others. How do you think this reads in relation to his primary claim about the Haskell community?

> People who claim dynamic type systems are superior to static ones can be dismissed in much the same way flat earthers can because both choose to dismiss evidence they find inconvenient.

Seriously. This only feeds into his narrative.


I was going to suggest that yogthos may just be thinking of that one guy who converted from clojure to haskell and had a convert's typical evangelical zeal in #clojure, but then I read your reply. Ironic.


I hope you haven't gotten the impression that it's types or nothing when it comes to Haskell (although given the bleeding edge of the community's tendency to asymptotically approximate dependent types with GHC extensions I can see where the sentiment comes from). Haskell is certainly not expressive enough to push all invariants to the type system and sometimes you just record the possibility of an error in the type and then just do a runtime check. It's a similar phenomenon to Lisp beginners who discover macros and decide to macro everything and anything.

That being said, I don't think the "well, just let it blow up" approach is a good one either. It works in a certain subset of cases (when you know that your program can only do certain things and you have a useful supervisor, such as in Erlang).

I think that philosophy is part of why Clojure has historically had problems with error handling, especially in its own toolchain (e.g. its rather cryptic error messages), and still doesn't really have good tooling or patterns for it (e.g. I think nil punning is a dangerous pattern, throwing useful and specific exceptions feels unidiomatic with the need for gen-class, and it doesn't seem like other solutions have garnered much mind share).

Spec is gaining momentum, but seems to still be spewing mysterious messages that require some familiarity to decipher in some cases (although last I checked things seemed to be on the upswing and I'm quite out of date with Clojure at this point).

Even when your app fails you still want to leave a human-readable message and then e.g. log metrics on what kind of error it was and metadata about the error to some supervisor service, rather than just let whatever the deepest exception was filter through. "Our authorization request to the database failed with an authorization error and this is the metadata" saves a lot of time compared to an NPE, especially when you're the maintainer and not the writer of the code in question.

The Elm community I think is the gold standard of taking the error path as seriously as the success path when it comes to their tooling and it really pays in my experience when I use it.


In the specific case letting it blow up is really the only thing you can do unless the whole architecture is designed around it. However, my point was that there are many ways to handle that type of problem, and I've seen no evidence to suggest that using types is the most effective one in practice.

When it comes to error messages, I would argue that Haskell ones are no better than Clojure. If anything they're often even less useful because of how generic they tend to be. All you'll know is that A didn't match B somewhere, and figuring out why is an exercise left to the reader.

Personally, I like nil punning and I find it works perfectly fine in practice. My view is that data validation should happen at the edges of the application, instead of being sprinkled all over business logic. If you know the shape of the data, then you can safely work with it.

The idea of doing validation at the edges also applies to functions, if nil has semantic meaning then the function should handle it before doing any nil punning. If it doesn't then it should be safe to let it bubble to a place where it does.

The idiomatic solution for throwing errors in Clojure is to use ex-info and ex-data as seen here https://clojuredocs.org/clojure.core/ex-info

Spec errors are not really meant for human consumption, but there are libraries such as expound https://github.com/bhb/expound that produce human readable output. My team is using it currently, and we're very happy with it. I do think that more could be done in the core however, and it does appear that 1.10 will make some improvements in that department.

In general, the impression I have is that Clojure errors aren't poor due to technical reasons, but simply because the core team hasn't considered them to be a priority until recently.


Oh yeah GHC error messages are bad. Notice how I said Elm instead of Haskell at the end :). One of the things I dislike about GHC is the wasted potential there that's evident in the compilers for newer languages like Rust, Elm, and Purescript when it comes to errors.

I haven't found a good way of switching on ex-info generated exceptions. I usually end up having to do string matching or key searching both of which feel brittle. There's ways of following patterns within the boundaries of my own codebase (e.g. a custom key I know will always be there), but that doesn't play well with the ecosystem at large because there aren't well established conventions around what's what. I don't want to come off as saying there's a fundamental reason why Clojure couldn't have better error handling. It's not a language level thing but rather a community thing. I think Python is a good example of where the community has coalesced around using specific error types even though it's a dynamic language.

I totally agree with doing validation at the edges of the program and try to enforce that in whatever language I'm writing in. I think this is a common misconception about statically typed FP that shows up in e.g. Rich Hickey's talk about types. It's quite rare for things like `Maybe` or `Either` to actually show up in your data structure (e.g. Rich Hickey's SSN example). You usually end up bubbling the error to the top of your module and deal with it at a module level. The type is just there to make sure you don't forget to deal with the error (which is the biggest thing I miss when I'm in languages which emphasize open world assumptions and don't give good tooling to create closed world assumptions; I want to know if I've handled all my errors and all states of my application!).

Yeah the problem is that I've found in the legacy Clojure codebases I've maintained it's rare that nil ever has a unique meaning and often ends up getting reused for a lot of different meanings. For example "A key doesn't exist in this map" and "This data is of a completely different shape than I expected" are different error conditions with different errors that usually both turn nil in the ecosystem.

This lispcast article really hit home for me and sums up some of the pain points I hit using Clojure in production: https://lispcast.com/clojure-error-messages-accidental/. You eventually internalize the compiler errors so they're not a huge deal but the ecosystem at large doesn't have a great story for runtime errors.


Yeah that's a fair point regarding lack of standard error handling with ex-info. It does feel like one of the less thought out areas of the language to me. I definitely agree with the lispcast article in calling the errors accidental.

I'd really like to see something along the lines of Dialyzer for Clojure. This talk proposes a good approach for that, I think: https://www.youtube.com/watch?v=RvHYr79RxrQ

It would be great to have a linting tool that finds obvious errors, and informs you about them at compile time. For me that would be an acceptable compromise.

Overall, I would say that it does take more discipline to write clean code in a dynamic language. The problems you describe with legacy codebases are quite familiar. I've made my share of messes in the past, but I also find that was a useful learning experience for me. I'm now much better at recognizing patterns that will get me into trouble and avoiding them.


I'm somewhat wary of approaches like that. I feel like core.typed tried something similar (maybe McVeigh's approach does better inference of unannotated code?) and it's withering on the vine as far as I can tell. IIRC there's some theoretical hints that gradual typing from the direction of untyped to typed rather than the direction of typed to untyped is fundamentally less ergonomic (some type inference stuff becomes undecidable in the former case and remains decidable in the latter and the same occurs for some varieties of type checking). Of course theory doesn't always mean you won't have practically good solutions (since when has the halting problem stopped people from making static analyzers?), but they provide some hints you'll be swimming upstream.

My limited experience with static analyzers is that you also end up with unpredictable breakages of the form "hmmm... so if I leave the variable here my static checker tells me I'm wrong, but as soon as I move the variable down one level of scope it just silently fails to see the error."

Regardless I haven't used Dialyzer myself and I have a good idea of the very very finite number of minutes I've spent with Erlang proper (as opposed to just reading about it), so who knows. It'll be fun to see where this goes. Thanks for the link!

The really big innovation I'm personally waiting for is combining static types with image-based programming (e.g. Clojure's REPL) which seems like an open problem right now because it's pretty difficult to think about what static invariants can and can't be maintained when you can hot reload arbitrary code. That and better support for type-driven programming a la Idris. Working with the compiler in a pull-and-push method (which you can get a crude approximation of in Haskell with type holes and Hoogle and some program synthesis tools, but oh man if even the toolchain for that was mature that would be huge!) was as big a revelation for me as REPL-based programming in Clojure was.
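
For anyone unfamiliar, the crude Haskell approximation looks roughly like this (pairUp is just an example): leave a hole where you don't yet know what to write and GHC reports the type it needs there (recent GHCs will even suggest valid fits such as zip).

  pairUp :: [a] -> [b] -> [(a, b)]
  pairUp xs ys = _ xs ys
  -- GHC: Found hole: _ :: [a] -> [b] -> [(a, b)]
  --      Valid hole fits include zip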


Key difference here is that it's not aiming to be a comprehensive type system, just to catch obvious problems. So if it runs into something it doesn't understand it'll just move on and leave it as is. If it sees something it understands and it's incorrect it will give an error.

Personally, I would find this very valuable because it would help catch many common errors early while staying completely out of the way.

And yeah, I can't really do development without the REPL anymore. I find the REPL makes the whole experience a lot more enjoyable and engaging than the compile and test cycle. It's really a shame that most languages still don't provide this workflow.


Didn't core.typed try to do the same thing? Not provide comprehensive types but just as needed? I never really used it (a coworker did but ended up throwing it out I think). Even if it's exactly the same technically maybe it'll work out with a different set of social circumstances. Maybe if Circle didn't drop core.typed it'd be even more popular now. Never know about these things.

Haha, well that's where you and I differ. The REPL is amazing, but I still want my ability to create closed world assumptions first!


core.typed requires you to annotate everything in the namespace, or add exclusions explicitly. This introduces quite a bit of additional work, and I suspect that's why it never really caught on.

I've read that the author is looking at improving inference in it, and at generating types from Spec, so it might still find a niche after all.

And I understand completely, it's all about perceived pain points at the end of the day, and we all optimize for different things based on our experience and the domain we're working in. That's why it's nice to have lots of different languages that fit the way different people think. :)


You don't need to do either. Just use unsafePerformIO if you really want to spit out log statements randomly.

I like having to do logging the Haskell way, actually. It allows for more coherent and easier reuse. What if you want to reuse some code, but do logging in a specific way? By allowing the caller to decide where logging goes you can readily accomplish this.
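
A rough sketch of that (the process function is made up): take the logging action as an argument, and the caller decides where the lines go.

  -- The caller supplies the logging action, so the same code can log
  -- to stdout, to a file, or nowhere at all.
  process :: Monad m => (String -> m ()) -> [Int] -> m Int
  process logLine xs = do
    logLine ("processing " ++ show (length xs) ++ " items")
    pure (sum xs)

  -- process putStrLn [1,2,3]          -- log to stdout in IO
  -- process (\_ -> pure ()) [1,2,3]   -- silent, in any monad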


In other languages, if the logging facility fails, you can simply continue running the program without logging. This works reliably enough across the world for many years that no one worries about log statements in production being unsafe.


Some people in the Haskell community think so too. See e.g. http://hackage.haskell.org/package/simple-logger-0.0.4/docs/... where you log pure code without introducing a change at the type level.


Right. If you want to troll Haskellers just ask about the runtime.


This is one of the things I rather like about Scala. It still leaves suitable room for the programmer to decide "that doesn't count", and defer deciding what does / does not count until after they have their application sketched out.

Your "pure" function can become "pure except for logging", "pure except for analytics calls", "pure except for notifications", or whatever you decide.


Haskell 98 is a simple language. Modern Haskell is not. Or, at least, the superset of modern Haskell that GHC implements is not simple.


I don't think it is small or minimal, but I believe it to be smaller than Scala.


> Nowadays I have a strong preference for simple and focused languages that use a small number of patterns that can be applied to a wide range of problems.

Haskell is a simple and focused language, provided you use GHC with minimal extensions.


There's a weird interpretation here that this post is the author expressing frustration with this process. I often have a similar experience and I wouldn't want it any other way! This process of repeatedly asking "what is this?" just doesn't seem to come up in the same way in other languages. This gives me the ability to do some practice I wouldn't otherwise be able to do, one that often has tremendous transfer over to "real work", because I can start to see patterns and get a feel for what is really going on once I get rid of all the dull IO tedium.

If you want an analogy, consider this like studying jazz or something. Sure, you could just notice a II V I progression and call it done, but if you pick away at each individual note, you can find a whole lot more going on behinds the scenes.

Basically, I don't really consider what's happening in the blog post a bad thing. It just has a time and a place, and you need to be aware when it's the wrong time.


I've seen this, and I've never had a Haskell gig. One of the best pieces of advice I ever got re: programming was, "don't write the abstraction until you've written three cases first". This is good advice in the intended way (you will write the abstraction better when you get to it), but even better because you probably often won't ever write three of the thing in question, in which case you shouldn't write the abstraction anyway.


> One of the best pieces of advice I ever got re: programming was, "don't write the abstraction until you've written three cases first". This is good advice in the intended way (you will write the abstraction better when you get to it),

I don't think this is good advice in the context of Haskell. Haskell allows some abstractions that aren't just "black boxes" or glorified templates. It's kind of like elementary logic: the more models there are, the fewer proofs there are, and vice versa. Analogously, when you write a more abstract function, there are fewer ways you can manipulate it and thus it is in some sense simpler.


I half agree with you. I think the “abstraction” your parent is talking about would correspond to typeclasses rather than polymorphic functions. That is, I think that this would be a reasonable type for a function even if it is only called once:

  Eq a => a -> MyObj a -> Maybe Foo
Whereas I think the following would not be ok (probably even if you had a lot of instances)

  class HasFoo o where
    getFoo :: Eq a => a -> o -> Maybe Foo
  
  instance HasFoo MyObj where ...
I think the rule should be that one should write the most general code that minimises the entropy/size of the source code. That way one can prefer polymorphic functions (as they need less type-signature entropy and bytes, unless they have loads of constraints, in which case one should consider wrapping those constraints together), while still preferring not making crazy single-instance typeclasses.


I think the main reason is that there is no _actual_ problem that OP needs to solve. If there were one, he would get pragmatic, settle on one of the reasonable solutions, and move on with his life.

Though it's true that Haskell easily puts you into a mindset where you want to simplify and generalize the code as much as possible, leading to wasted time on overly general solutions. That shouldn't be necessary, because if you later want to extend the solution from, say, lists to traversables, Haskell gives you the confidence to refactor safely at that point.


There are some languages that tempt more abstract navel gazing than others. I’m not sure about Haskell, but Scala tends to do that. Anyway, there is something about human behavior and language design that can lead to “more is less” situations.


If you don't have a use case creating a need that your program solves, what's the difference between figuring out how to make a zygohistomorphicanedoreticular for polynomials and writing a working library that no one uses? Either way, you do whatever is fun for you. If you have an actual customer requirement, then that guides your prioritization and attention.


More often, you do have a use case, you just don't quite know what it is yet.


I lost confidence in Haskell's ability to let me write something one way and safely refactor it later, when I found out that you can't use a ton of the algorithmic functions in the standard library because they do things all wrong.


What?

Other than strings desperately needing to be purged from the library, what are you talking about?

Haskell has some really solid standard libraries, and its extended library set has some of the most sophisticated algorithms packages in the world.


A big one is lazy I/O[1]. It is really easy to mess up the order that operations happen in when using IO operations from Haskell's standard library. The non-standard library alternatives like conduits or pipes are more complex, but much harder to mess up.

[1] https://stackoverflow.com/questions/5892653/whats-so-bad-abo...
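For readers who haven't hit it, a sketch of the classic failure mode (file name made up):

  import System.IO

  main :: IO ()
  main = do
    h        <- openFile "input.txt" ReadMode
    contents <- hGetContents h    -- lazily read
    hClose h                      -- handle closed before contents is forced...
    putStrLn (take 100 contents)  -- ...so this may print nothing at all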


I agree lazy file I/O can be dangerous and needs to vanish, but I'm not sure this counts as an "algorithmic" complaint, which is what I was really curious about.

I also don't think you need Conduit or Pipes or any other performance destroying free Monad libraries to deal with it. My hot take: Conduit is in fact awful and radically overused and multiple superior options exist.


By saying "algorithmic," I used the wrong word. Lazy IO is a great example of what I meant.


I totally agree the prelude is full of cruft and needs to be scrapped and redone fully. But the data structure libraries are all really good, and if we could just pull in the succinct and unordered stuff, it would have some of the best data structures a standard library could ask for.


I'm not the commenter you're replying to, but I've often found the Haskell numeric classes (Num, Fractional, Integral, etc.) prickly. They almost, but don't quite, map to (mathematical) algebraic structures.


Having used Haskell for math programming, I agree with this sentiment. Haskell's standard classes are in an uncanny valley of almost matching the mathematical structures.

If you want to do that sort of thing with Haskell, I would suggest switching to the numeric prelude [0]

[0] https://wiki.haskell.org/Numeric_Prelude


I'm curious what languages do it better. I'm looking for something new to learn.


That sounds like the classic category of "I hate Haskell because in any other language* I know it's impossible to write a correct program so I don't care about being wrong, but Haskell tempts me into thinking it's possible to get it right."

Not counting obscure special-purpose langs like Coq.


Could you give examples of algorithms done wrong?


I think the problem in this case is that the author's attempt at generalization went off in the wrong direction.

The fact that fixed-length lists aren't working well as a representation for polynomials is a hint. Polynomials with real coefficients form a vector space [0], so you should really think of them as infinite-dimensional lists of numbers (in which most of the numbers are zero).

Once you know you want to represent an infinite dimensional vector with only a few nonzero entries, you can use a sparse vector. The first library that comes up when you google "Haskell sparse vector" is `Math.LinearAlgebra.Sparse.Vector`, which lets you write something like this (I haven't run this code but it should get the job done):

  import Math.LinearAlgebra.Sparse.Vector as V

  -- x^3 - 3x + 1 and 3x + 3, coefficients listed from the constant term up
  poly1 = V.sparseList [1, -3, 0, 1]
  poly2 = V.sparseList [3, 3]

  -- add coefficients at matching powers
  sumPolys = V.unionVecsWith (+)

So, I read this more as an article about trying to reinvent the wheel in a domain which isn't necessarily simple, which isn't a good idea in any language.

[0]: https://en.wikipedia.org/wiki/Examples_of_vector_spaces#Poly...


I see this a lot with intermediate lisp programmers; they spend so much time building ivory tower abstractions that the original problem is forgotten. I sometimes call this "bottom down" programming.

Predicting the future is very hard; remembering the past is much easier. If you find yourself typing the exact same pattern for the Nth time, then it's time to refactor it into a macro or a function as appropriate.

Figuring out what parts of the next 1000 lines of code you are going to write will benefit from an abstraction (and which abstraction that is) is a rare skill that comes only (if at all) with experience.


“Bottom down” really resonated with me in my dalliances with both Common Lisp and Haskell.


I first heard the term "bottom down" from my dad a long time ago, but he used it to mean any sort of programming without a plan. It was many years later that I applied it specifically to people doing "bottom up" who get so obsessed with building the perfect base that they never solve the original problem.


The turning point for me was when I realized that these problems exist in other languages and are practically invisible. Without a good type system and inference you cannot hope to catch all of your type errors. You'll just write some unit tests and run your program many times until you're certain you've sussed them all out... until that pesky bug report comes in. Then you get to play detective!

I honestly don't have time left in my life for such meaningless drudgery.

With a type system I have the computer aid me in designing the program. It keeps me honest and ensures that I don't have type errors, which are a huge class of things I'd rather not have to think too hard about.

When I program in Haskell I spend more time solving problems than fixing programming errors.


You could also just write the less general version and stop listening to folks who flip out and scoff at every piece of code that isn't maximally general.

Crazy, I know, but especially when we're doing labor in industry, even without maximal generality your code is probably going to outlive its patron corporation and then die in obscurity.


The only person who flips out and scoffs at every piece of code that isn't maximally general is... the author of the code. That's the problem.


That's not true. There are folks in the community who absolutely DO put pressure on open source libraries to be maximally general (or to use THEIR abstractions over others).

This fosters second-guessing in an environment where people are already prone to it, because the ecosystem is so big and new.


I always felt very productive in PHP, because the only rewarding part of PHP is having made something. It never rewarded sophistication... but making a web site that did something WAS rewarding, so all my attention went to that part.

Calling Haskell an anti-PHP seems fair.


The same author of the original explores this theme with Java: https://blog.plover.com/prog/Java.html


That perfectly describes Java (and C#).

It's the land of mediocrity.


>I ought to be able to generalize this

I've never understood this. Unless you write a library that you plan to publish, or already have actual cases where you need a more general solution, why spend time trying to generalise code instead of switching to the next task?


My tentative answer is this: someone who uses Haskell appreciates elegant solutions (a.k.a mathematical/functional) and is inclined to write things 'properly' once and they might also idealize that the functions they write will not only solve this current issue, but be useful to others and themselves in other programs ... thus going down the generalization and elegance rabbit hole.

Of course, all of this is purely speculation on my part.


In short: the typical Haskeller is a perfectionist.

If allowed, my perfectionist self would ditch every language but Haskell without blinking. No other mainstream language gives you more control and purity. For a perfectionist this is opium.


It is a particular form of perfectionism. Other types of perfectionists might want to be perfect at writing programs as fast as possible.


>elegant solutions (a.k.a mathematical/functional)

Meanwhile, if you look at pseudo-code written by actual mathematicians or logicians, it's almost always imperative, full of side-effects and global variables. Sometimes they even use GOTO!


I was likewise surprised to learn few Haskellers have any interest in computing the cohomology of their monads.


One reason is that more general types mean you can write fewer functions, and so the function that you do write is more likely to be correct.

The function `intMap :: (Int -> Int) -> [Int] -> [Int]` can do all sorts of crazy things that are not map. The function `map :: (a -> b) -> [a] -> [b]` can do far fewer crazy things, and just from looking at the type you can say that any `b` in the result list _must_ have come from applying the function to some `a` in the input list.
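A quick illustration of the difference (both definitions are hypothetical):

  -- the monomorphic version is free to do things that aren't "map" at all:
  -- it can inspect the Ints, drop elements, or invent new values
  intMap :: (Int -> Int) -> [Int] -> [Int]
  intMap f xs = [f x + 1 | x <- xs, even x]

  -- the polymorphic version can't inspect or manufacture elements,
  -- so every output must come from applying f to some input
  polyMap :: (a -> b) -> [a] -> [b]
  polyMap _ []     = []
  polyMap f (x:xs) = f x : polyMap f xs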


> you can say that any `b` in the result list _must_ have come from applying the function to some `a` in the input list.

Morally correct... but consider the function `\f xs -> [undefined]`, which can be typed as `(a -> b) -> [a] -> [b]`. (Obviously it could be given other types as well.)


When discussing Haskell and theorems about its types it's common to simply ignore non-termination; if we don't ignore non-termination there's basically nothing we can say about Haskell programs at all.

Interested readers should check out Agda and Theorems for free!


Fair enough. On reflection, `\f xs -> []` is a better example of the point I was trying to make.


Yeah that's a better one (which I think is discussed (or a similar one) in Theorems for free!), but the wording of OP weasels itself out of that being a problem.

> any `b` in the result list _must_ have come from applying the function to some `a` in the input list.

Since there exists no a in [], the quoted statement holds! I find that really beautiful :)


That is indeed a weaselly but accurate statement. :)

I'll have to give Theorems for Free a read, thanks for the suggestion!


My take is it's because there are some really great benefits to implementing things more precisely (which usually means "more general" in this sense), and Haskell is more amenable to it than most.

There are many cases where it's worth it, so much so that it's worth at least considering whether a more generic solution is better.

I think the problem is that it's hard to predict how deep a rabbit hole like this gets - so you think it's just a few minutes extra work, but it ends up completely derailing the project.


If you spend some fraction of each task reflecting on how you could've written it 'better', then over time you'll learn to write more of your code 'better' from the start. (You could say making it more general is not always better, and that's true. But it is a win often enough to make it a skill worth cultivating.)


What (non-abstract) client would want you to do this on their own project? Sure, they'll happily hire people who spent (wasted?) their own time doing this, but they'd categorize you as an unprofessional time-waster if you do it on their dime, and would probably fire you after too many strolls in the 'abstract' realm.


Good thing my employers were more enlightened. This is a matter for negotiation like anything else, and depending on timescales may be in the employer's narrow interest.


Some people overvalue the power of generality, and overdiscount the obscurity and complexity it tends to involve.

Also, it's not a bad way to learn the ins and outs of the language, really.


You can eventually "come out the other side" and get to the point where you write the general version correctly the first time. But it is some degree of work. I think it's a good exercise for a pro, but you can certainly live without it.

The general principle does come in handy elsewhere, though. Doing the most useful work with the minimum power is a generally useful skill. I get a lot of mileage out of it in other languages, because across a couple hundred modules, the difference between modules that have minimum dependencies and modules that carelessly overuse power becomes quite substantially different in character.


> You can [...] write the general version correctly the first time. But it is some degree of work. [...] Doing the most useful work with the minimum power is a generally useful skill.

There's something wrong with that logic, but I'm too lazy to work out the proof in the general case.


When people wonder why my typical comments run on to multiple screenfuls, it's because I'm armoring them against this sort of dismissive snark. I was in between tasks today and lacked time to make it longer.


[flagged]


"spending hours giting gud at Haskell doesn't translate to hours saved solving real problems"

As long as we're playing "make baseless assertions at each other", my experience says otherwise. That's going to be hard for you to talk me out of.

Programmers still seem to have this bizarre belief that programming, uniquely among all the skills in the world, is not possible to improve via deliberate practice, and an equally bizarre belief that improved skills can't possibly translate into better programs or, even worse, must inevitably translate into worse programs. I can't wrap my mind around it.

I don't deny that there certainly is a trap where you learn these skills in a sort of greenhouse environment, then fail to translate them out of that environment properly. But that's the fault of the practitioner, not the skills, and I prove by demonstration that the skills can come out of the greenhouse and improve real code. I mostly write my professional code in Go, so you can be quite assured my production code is anything but a mess of functional paradigms inappropriately applied, since that's basically impossible in Go. I find what I learned in Haskell to be incredibly useful in Go.


I genuinely don't think you read the linked article. The author clearly is quite good at Haskell and hasn't fallen into this "trap" you're positing. It's not a failure of the "practitioner" at all. It's a fairly insightful point about the psychology of programming and the diminishing returns of the kind of skills we're talking about here.

> you can be quite assured my production code is anything but a mess of functional paradigms inappropriately applied

The point of the article was precisely that functional paradigms appropriately (which is to say, "appropriately") applied are themselves a design smell.


I've enjoyed this amusing comment a lot, but I'm too lazy to explain why. Have an upvote.


Can’t say I agree. There will always be unexplored ways to generalize.


I think there's an error in the first example.

  Poly [1, -3, 0, 1]
Should be:

  Poly [1, 0, -3, 1]
EDIT: My mistake.


It's not - it starts with the constant coefficient (1), then x (-3), then x^2 (0), then x^3 (1). Some things become easier this way - including addition of polynomials of differing degree - and as an added bonus you can fantasize about representing power series as (lazy) infinite lists.
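A sketch of that fantasy, with made-up names: with coefficients stored constant term first, an infinite lazy list works without any changes.

  -- 1 + x + x^2 + ... as an infinite coefficient list
  geometric :: [Integer]
  geometric = repeat 1

  -- addition of power series is just element-wise addition
  addSeries :: [Integer] -> [Integer] -> [Integer]
  addSeries = zipWith (+)

  -- take 5 (addSeries geometric geometric) == [2,2,2,2,2]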


I thought so too at first but given the way addition is defined later on, it makes sense to keep the coefficients sorted by increasing power (the leftmost element in the list is its head, and the easiest to access when doing anything recursive)


Evaluation also becomes easy this way, using Horner's method: https://en.wikipedia.org/wiki/Horner%27s_method#Python_imple...


Something along the lines of this right?

  -- coefficients ordered from the constant term up
  eval v []     = 0
  eval v (x:xs) = x + v * eval v xs
You do have to love how clean definition by cases makes these sorts of things.


But this doesn't argue for the low-to-high order, because of reversed(). This code would be simpler and faster with the coefficients in the opposite order.


You're right, the process does start from the higher coefficients, and so does not really support the ordering presented.

You either need to use foldr to defer the multiply-and-add until the end of the list is processed, or reverse the list before processing it.
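A sketch of those two options (function names made up, not from the article):

  -- low-to-high coefficient order: foldr defers the multiply-and-add
  evalLowFirst :: Num a => a -> [a] -> a
  evalLowFirst v = foldr (\c acc -> c + v * acc) 0

  -- high-to-low order: a plain left fold is the natural fit
  evalHighFirst :: Num a => a -> [a] -> a
  evalHighFirst v = foldl (\acc c -> acc * v + c) 0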


I am not a Haskell programmer, but is this correct?

    (Poly a) + (Poly b) = Poly $ addup a b   where
       addup [] b  = b
       addup a  [] = a
       addup (a:as) (b:bs) = (a+b):(addup as bs)
Imagine a simple example, adding `x+2` and `10`. In OP's representation, these would be represented as the lists [1, 2] and [10]. That is, the first element is the coefficient of the term of highest degree.

But doesn't this implementation add list elements left-to-right, so we'd end up with the result [11, 2] instead of [1, 12]?


You're misreading OP's representation:

> The polynomial x^3 −3x +1 is represented as Poly [1, -3, 0, 1]

It starts with the 0th coefficient and goes up. So adding `x+2` and `10` would be zip-adding lists [2,1] and [10,0].


>> The polynomial x^3 −3x +1 is represented as Poly [1, -3, 0, 1]

> It starts with the 0th coefficient and goes up.

Is a list in Haskell canonically written in the opposite order of what I expect? I expect that the 0'th element of the list [a, b, c] is `a`. In Haskell, is it `c`? Assuming `a` is the 0th element, then the coefficient for the highest-degree term is the 0th element of the Poly. And since the degree of the two polynomials doesn't necessarily match, matching up the two highest-degree coefficients and adding them is obviously wrong.

Or am I going crazy here?


  module Main where

  newtype Poly a = Poly [a] deriving (Eq, Show)

  instance Num a => Num (Poly a) where
    Poly a + Poly b = Poly $ addup a b
      where
        addup [] b  = b
        addup a  [] = a
        addup (a:as) (b:bs) = (a+b):(addup as bs)

  main :: IO ()
  main = print (Poly [1, 2] + Poly [10])


  *Main> main
  Poly [11,2]
Yep, you are correct.


Yeah it’s the cons operator so it matches the first element (head) of the list and next elements (tail). It would need a reverse and reverse back to work correctly.

reverse (addup (reverse a) (reverse b))


"Doc, it hurts when I do this." "Don't do that."


Similar things happen in other languages. Most recently, I started writing a program in Elm to try it out, realized I wanted to use some CSS, and then got distracted looking at the various ways to do that, with their different tradeoffs. (What is this stylish-elephants package?)

Sometimes it's more productive when you join a team that has already decided on its standards. You don't learn as much, though.


This is me. I have a backlog of personal projects that I've slowly burned through and every freaking time I start with Haskell and end up in Rails. I love working in Haskell ... in theory. In practice I spend way too much time figuring out how to wrangle data into the correct shape when it would've taken me 30 minutes to accomplish in any other language I know, static or dynamic.


It's about Scala, not Haskell, but the gist is the same:

at my company we're giving new candidates a live coding interview where they get one hour to write a very simple application using either Java or Scala. Candidates are free to choose between those languages.

The funny thing is that candidates who choose Scala are never able to fully finish the assignment. Even though the application is really simple, many don't finish half of it, and some get completely stuck in complex for-comprehensions and whatnot. Candidates who choose Java, however, are mostly able to finish the assignment. The code might not always be the most elegant, but it does what it is supposed to do.

Even though I like Scala a lot, I feel it has the downside that it gives you too many options to do the same thing. This can get in the way when you are simply trying to implement some basic business feature.


So Scala is harder to write, but the question is: is it easier to read?

(Since in general a particular piece of code is read far more often than it is written).


That's an interesting question although it's not directly related to the article.

I think it is definitely possible to write Scala code that is easier to read than the same Java code. However it seems that a large part of the Scala community does not see this as their main objective when writing code.

Sometimes the focus seems to be on writing code as terse as possible, which is not the same as readable. Or the focus is on making code more generic and abstract, which can be a useful goal depending on the use case, but it's definitely not the same as readable.


Writing elegant code that never works can easily take me twice as long as writing okay code that actually works.


You're phrasing that in a seductive way. "Okay code". Maybe that's good enough, right? Maybe not? Someone's "okay code" may be someone else's "bad code". I'm fixing someone else's "okay code" on a daily basis since it's riddled with bugs. Somehow I'd expect that if they had had the mastery to write elegant code, they might also have been able to make it less buggy, or at least easier for me to fix.

Also, elegance, good tests, and careful specification are, aside from their explicit qualities, additional touch points for the author to discover and fix the bugs before the code gets handed off and becomes my problem on some future date.

(Not saying you should gold-plate it into elegant code. Just saying that there is value that should not so easily be dismissed.)


I think for me, "okay code" means that it's functional, debuggable, and up to professional sniff, but nothing flashy. As opposed to sexy-cool-trendy-meta-wow code.


I heard this gem a couple of weeks ago:

"Don't engineer things to failure."

When you find yourself saying "I ought to be able to generalize this", that's when you need to stop and just write the code you need to write NOW. Just because you can, doesn't mean you should.


I think the main problem with the specific example is that a list is the wrong data structure to represent a polynomial. Even a simple `IntMap` would be better, where the key is the power.
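A rough sketch of what that might look like (names are hypothetical):

  import qualified Data.IntMap.Strict as IM

  -- key = power, value = coefficient; absent keys are zero coefficients
  newtype Poly = Poly (IM.IntMap Integer) deriving Show

  -- x^3 - 3x + 1
  p :: Poly
  p = Poly (IM.fromList [(0, 1), (1, -3), (3, 1)])

  -- addition is a union that sums coefficients at matching powers
  addPoly :: Poly -> Poly -> Poly
  addPoly (Poly a) (Poly b) = Poly (IM.unionWith (+) a b)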


This is clearly overengineering. There is no need to write a generalized function because it is only useful for adding polynomials.

And a function that works only with polynomials will be easier to read.


I have to ask: does the author not have the same problem in other languages? If not, why not?

Any major language has dozens of libraries that do almost-but-not-quite what you want. Any language with support for macros, templates, or generics offers opportunities to write unnecessarily generic code. (The authors of Spring managed to get into that tarpit even before Java introduced generics!)


I write in many different languages, but I would say this kind of thing depends a lot on the type of language.

With Go, e.g. I would not spend much time on this, because it is such a plain language. You just accept there is no fancy way of doing this and move on.

With C++ I found it a bit different. Either I mess around thinking there MUST be some way of solving this annoying problem only to realize there just isn't. Other times I find a solution but it tends to turn into a horrible ugly syntax mess, so you abandon it. I've pretty much given up writing anything elegant or fancy in C++. It just turns to shit so quickly.

Swift I found fairly straightforward to write. All the typical stuff that gets me stuck in C++ never seems to pose a problem for me in Swift.

Julia is the language which ought to have thrown me into the Haskell problem described, as you can do a lot of crazy stuff with types and macros. I do to some degree but mostly Julia just does what I want it to do. I guess it is partly down to the core functions being well designed and that despite the flexibility it is still not as magical as Haskell, Clojure etc.



In any language, premature abstraction is bad. That applies whether you use type classes in Haskell or classes in Java. It takes experience to judge at a moment's notice whether something is worth abstracting. Beginners frequently get this wrong.


There are lots of relevant programming aphorisms: "Write the simplest thing that could possibly work." "YAGNI." "KISS." You either have a problem you need to solve or not. If not, why do you expect the process to ever finish?

This is one area where TDD really shines. You write a simple failing test and implement something that makes that test pass. If that isn't sufficiently generic you write another test, make both of them pass, and refactor until the solution is as simple as possible while the tests pass. Repeat to get to where you need to go.
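A minimal sketch of one red-green cycle in that style, using hspec and the thread's polynomial-addition example (all names hypothetical; the second test is the "write another test" step):

  import Test.Hspec

  -- simplest implementation that makes the current tests pass
  addPoly :: [Integer] -> [Integer] -> [Integer]
  addPoly []     ys     = ys
  addPoly xs     []     = xs
  addPoly (x:xs) (y:ys) = (x + y) : addPoly xs ys

  main :: IO ()
  main = hspec $
    describe "addPoly" $ do
      it "adds same-length polynomials" $
        addPoly [1, 2] [3, 4] `shouldBe` [4, 6]
      it "pads the shorter polynomial" $   -- added once the first test passed
        addPoly [1, 2] [10] `shouldBe` [11, 2]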


There needs to be a balance. You should choose an architecture which will accommodate predictable future needs without too much refactoring. A suite of unit tests can't help you much if you need to unmangle a bunch of severe abstraction violations between what you now need to be well-encapsulated components.


Actual red-green-refactor TDD results in the simplest possible code to solve a problem, IME. Over- or under-abstraction tends to be pretty well instantly recognizable when working with a well written test suite, and can then be easily and safely refactored away.


The simplest possible code for the current test suite is not always strategic.


1) To me, this is the difference between Haskell and Clojure. 2) In the future, normal people will be able to code, so work backwards from that.


>in the future, normal people will be able to code, so work backwards from that

This.

I'll never be able to understand certain programmers' insistence that imperative code is somehow unnatural or that "we're only used to it because of momentum" or whatever. For thousands of years people have been issuing imperative instructions to each other.

"Wash. Rinse. Repeat."


> For thousands of years people have been issuing imperative instructions to each other.

This ancient culture is pretty strong, you're right. I've lost count of times I begged people to skip all this unreliable "turn right at the second intersection, rinse, repeat" and just tell me the street address long before I learned the word "imperative".


> in the future, normal people will be able to code, so work backwards from that

Can you expand on that? Sounds intriguing, but I'm not sure what you mean.


Not that representing polynomials as linked lists is by any means ideal, but this "first-guess" program is quite elegant in my opinion.

I know there's a tendency to over-abstract things among the Haskell community, but if you focus on pure, immutable data structures and algorithms on them you'll discover a truly wonderful and indeed pleasant and productive language.


> “I ought to be able to generalize this,” I say.

lol, it's no wonder he never finishes a program. But this isn't a problem limited to Haskell - one can start generalizing unnecessarily in any language.

To be fair though, the Haskell ecosystem as described in this article (and from my experience) is quite frustrating, and it makes me appreciate my current language Rust that much more.


As others have said, go ahead and implement it; you'll have a better idea of what you might want to change or generalize after you have that version to experiment with.

(One case you might want to consider: with that way to create polynomials, won't x^1000000 take a lot of typing?)
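A hypothetical helper sketch for that case under the list representation (it sidesteps the typing, though not the million stored zeros):

  newtype Poly a = Poly [a] deriving Show

  -- x^n is n zero coefficients followed by a single 1,
  -- so x^1000000 needs no typing at all
  xPow :: Num a => Int -> Poly a
  xPow n = Poly (replicate n 0 ++ [1])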


It’s not the abstraction, it’s the level of abstraction - the higher you go, the more general and the less concrete it becomes. Striking the balance is the key.


This seems like "DFS" programming. I'd advise "BFS". Do it the easy way and put a "TODO: generalize like so."


Premature generalization. There.


In Haskell (and similar), the language offers the ability to really nail genericization, to do it the right way. But 90% of the time you shouldn't, and it's very hard not to. It's not just a matter of restraint: the language makes it genuinely hard to get simple things done unless you plug into the abstraction vortex.

Conversely, languages like Java, C++ and Python (if you use classes), make it very easy to write simple things without abstraction, but virtually all use of abstraction goes off the rails immediately and everything shoots you in the foot, so that good abstraction is not even really a thing at all.

Pick your poison!


I think I agree with you, but I want to make one additional point.

I think Haskell is actually fine for writing code without abstraction; it's just that people don't choose Haskell to go down that path, and so aren't satisfied with those "simplistic" solutions.


True but I still manage to go down some rabbit holes in C++.

I could use the older STL iterators, but nooo I use a range, with a lambda. Returning a tuple with tie and pair and some more stuff from the latest C++1x standard.

And in Linux userland, I just want to check a pid but I invent some smart locking system to ensure I never get a false positive or negative.


In C++ the main actual rabbit hole of useless abstraction begetting useless abstraction is premature generalization from what you actually need to templates that can be used with types you don't need, which then proceed to kick you with implicit assumptions, type traits, unforeseen subtle type distinctions like those that come up in Haskell.

Other kinds of "stuff from the latest C++1x standard" tend to consist of relatively harmless novel ways to express something, usually not intrinsically complex and, when inappropriate, causing only localized damage (typically puzzling syntax or slightly wrong declarations with no impact on unedited code parts, even in the same class or function).


I have this with all languages I get into. I get the urge to radically adopt whatever patterns seem to be idiomatic for the language, whether that's sensible or not.

I tend to judge the beginner-friendliness of a language by how strong that urge tends to be, and I tend towards Common Lisp since it is very multi-paradigm and not very opinionated, yet has very few semantic warts that bite the user in the end. There are easier languages to pick up if you have no concept of programming (things like HyperTalk or Scratch) but they have obvious semantic deficiencies.

Haskell on the other hand is an excellent first language for structured teaching because it's easy to express concepts in, but it stubbornly resists attempts by the inexperienced or impatient to shotgun or cargo-cult a solution.


I've always felt of Haskell that it's a way for very smart people to never get anything done.

I want to like Haskell, I really do, but it's like learning Latin - you'll feel very smart except no one can talk with you except other people that correct your grammar.

Bleh.


I write Haskell professionally and I get plenty done :)


> no one can talk with you except other people that correct your grammar

Also, a compiler -- which is a pretty huge difference.


Part of the reason I stopped using Haskell is how generalized everything is. I want to get something basic done, and you're expected to read a whole book on some maths topic to understand how to do it. Another language would have an example of how to do the thing 99% of users are trying to do, but in Haskell land they want you to understand the 1000 different ways the library can be used and piece together what you want from that.



