
While I appreciate your experience, you are basically saying that it would be hellish because of two features. Haskell and OCaml have many incredible benefits, and most of them do not rely on laziness to work. Sure, laziness lets you write some incredibly elegant, generic solutions - but I don't think it needs to be the default.

I think things like the record system could be fixed with simply more eyes on the problem. Let's face it, the Haskell community and contributor base is relatively small. If it had the resources of, say, C++ or Java behind it, a lot of these small problems could be tackled.

But when it comes down to it, these are minor details, not fundamental flaws of the platform like the ones discussed with Java.

Also, I program solely in Haskell in my free time (ok, maybe some MATLAB on the side, haha). While the learning curve has been incredibly intense, it is so rewarding that I keep coming back. The solutions are incredibly generic, elegant, testable, and even quite fast with a little performance optimization work (with experience, the effort required for this goes way down, as with anything). I would rather write correct code first and put a little time into optimization than write highly optimized incorrect code and spend a lot of time fixing bugs.

But far and away the largest problem that Haskell goes a long way toward solving is composability. Many may argue this point, but I really believe it is the largest challenge we face in software today. So many issues boil down to it: the vast majority of bugs, rewriting, etc. comes from software that was not designed in a composable manner. Because let's face it - with current tools like Java and C++, composable design is incredibly hard!



Oh, I think that nearly all of the problems with Haskell could be fixed if it had the resources of C++ or Java. The problem is that in getting those resources, Haskell would acquire new problems that would end up looking a lot like the whining about C++ or Java.

Java, as a language and a platform, is not fundamentally that bad. It was certainly done by very smart people with lots of experience.

It sucks because it has several design decisions embedded into it that were necessary for it to gain broad adoption and yet are ultimately pretty annoying in the long run. Take closures, for example. The reason Java doesn't have closures is that it had a design constraint that all heap allocation must be indicated by the "new" keyword (the same reason it didn't have autoboxing until Java 5, and why classes aren't factory functions the way they are in Python). The reason for that was that at the time, the majority of programmers believed that heap allocation was slow and unpredictable, and in order for a new language to gain adoption, programmers needed to be able to control heap allocation and ensure it was used predictably.

This is actually still true - in domains like avionics and other "hard real time" areas, heap allocation is generally banned because it introduces nondeterminism into a program's run-time characteristics, and even with tuned incremental GCs, Java programs still spend a non-trivial amount of time in the GC. It just doesn't matter all that much, because Java started being used for applications where a few-hundred-millisecond GC pause just gets lost in network latency.

Haskell faces a similar uphill battle. In order to get the critical mass of people needed to build out the ecosystem that C++ or Java has, it needs to be useful for the programs that people write today. But in order to do that, it needs to accommodate all these wonky developers with their weird ideas of what's necessary and what's not. Chances are, if it did that, there would be some design choices made that make Haskell nearly as lame as Java. And even if there aren't, and Haskell remains exactly the same as it is today, it'd soon be pushed into domains it wasn't appropriate for, just like Java, and people would be cursing it out for its flaws.

As Simon Peyton Jones says, "Avoid success at all costs."

Every design decision you make is a two-edged sword. Yes, Haskell gives you a lot of composability through laziness, type inference, and nearly everything being first-class. Laziness grants this through "pay for only what you use" - if a caller never uses the result of a library, the parts of the library's code that generate that result will never be invoked. The flip side of this is that if you suddenly use one more field in a tuple, way down the line in some unrelated portion of the code, you might trigger a very expensive computation. This is unacceptable in many large apps written by multiple people; you shouldn't have to worry that a tiny field reference can bloat the program's memory consumption by orders of magnitude.
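A minimal sketch of that flip side (the names here are illustrative, not from any real library): the second component of the pair is an expensive thunk that stays unevaluated until something forces it.

```haskell
-- Hypothetical example: a pair whose second component is expensive.
-- Under lazy evaluation, the factorial below is a thunk - it costs
-- nothing until some code actually demands it.
expensivePair :: (Int, Integer)
expensivePair = (42, product [1 .. 100000])

cheap :: Int
cheap = fst expensivePair  -- never forces the factorial

main :: IO ()
main = print cheap  -- prints 42
-- Changing this to `print (snd expensivePair)` - one tiny field
-- reference - would force the full product, wildly changing the
-- program's time and memory behavior.
```

That single-token change is exactly the kind of "unrelated portion of the code" hazard described above.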

(I have example trade-offs for type-inference and first-class everything, as well, but this post is already getting too long, and I wanna go do other things with my Saturday. ;-))


> [To be popular, a new language] needs to accommodate all these wonky developers with their weird ideas of what's necessary and what's not.

So, popularity requires bad design decisions. To put it bluntly, most programmers are ill-educated. Fine, let's fix that problem, then.


> The flip side of this is that if you suddenly use one more field in a tuple, way down the line in some unrelated portion of the code, you might trigger a very expensive computation. This is unacceptable in many large apps written by multiple people; you shouldn't have to worry that a tiny field reference can bloat the program's memory consumption by orders of magnitude.

No matter what language you use, if you want the results you're going to pay the calculation/resource cost. All Haskell (via lazy evaluation) is doing is giving you a speed increase when it can tell you aren't using certain results.
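A small sketch of that upside: demand only part of a result and the rest is never computed at all. The list below is conceptually infinite, but only the ten demanded elements are ever evaluated.

```haskell
-- Illustrative example: an infinite list of squares. Laziness means
-- only the elements actually demanded by `take 10` get computed;
-- the rest of the list never exists at run time.
firstTenSquares :: [Int]
firstTenSquares = take 10 [n * n | n <- [1 ..]]

main :: IO ()
main = print firstTenSquares  -- prints [1,4,9,16,25,36,49,64,81,100]
```

In a strict language, the same "build everything, then take ten" structure would either loop forever or need to be rewritten around an explicit bound.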


But other languages give you a syntactic cue that what you're doing might be expensive: it's a function call. That's why most language style guides have a naming convention for accessors, so you can distinguish method calls that are basically free from method calls that might be expensive.

In Haskell, every single token might trigger a long computation, so it's very easy to introduce something that wildly changes the performance characteristics of the program.


Hmm, I have come across many a codebase, in countless languages, where there was no way to discern that kind of information from the naming convention. Even accessors in, say, C# or Java are perfectly free to perform any side effect they want. In fact I think it is worse in those situations, because the side effects are not modeled. And no language has found a general way to model non-termination in the type system - the halting problem rules it out.

Honestly, I just haven't experienced this very often in Haskell... When I do, it is usually a d'oh moment and a simple fix.



