Yeah, I understand what you mean, sorry if I created confusion. My point is that the claim that laziness makes complexity difficult to reason about is misguided in most cases, because there is an easy way to estimate an upper bound for any function if you know the upper bounds of the same algorithms in a strict language.

Generally speaking, the complexity of any function has an upper bound equal to that of the most complex algorithm it uses. Laziness can bring that upper bound down, though. The fact that the upper bound may be lower is almost certainly not something you care about in the overwhelming majority of programs, and I have yet to meet anyone upset to find out that their algorithm performed much better than estimated.

If you have an algorithm with O(N) complexity, it will be the same in both a strict and a non-strict language if the whole result is consumed.

When the result is not entirely consumed, the upper bound in a lazy language may be lower.

The classic example, as you linked to, is this:

    minimum = head . sort
If we consume the entire result of the sort function, then the upper bound is O(N log N). Because of laziness, taking only the head makes the minimum function O(N).
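
To see why, here is a minimal sketch of the mechanism (my own illustration; GHC's actual Data.List.sort is a bottom-up merge sort, but the same argument applies to it). With a lazy quicksort, forcing only the head recurses into the "smaller" partition alone:

    -- Forcing just the head of the result never evaluates the
    -- "larger" partition, so with a reasonable pivot roughly
    -- N + N/2 + N/4 + ... = O(N) comparisons are performed
    -- instead of the O(N log N) needed for the full sort.
    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (p:xs) = qsort smaller ++ (p : qsort larger)
      where
        smaller = [x | x <- xs, x <  p]
        larger  = [x | x <- xs, x >= p]

    minimum' :: Ord a => [a] -> a
    minimum' = head . qsort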

The implementor chose to compose those two functions because laziness makes this possible. If you are in the business of reviewing core code to evaluate its complexity, you are right: it is difficult to nail down the more accurate upper bound, but that does not mean you cannot guess what a less accurate estimate would be. I would have guessed O(N log N) the first time for a minimum, and would have been pleasantly surprised that laziness makes it faster.

When has reasoning about time complexity been difficult for you in a lazy language?




I misspoke in my last comment when I referred to space and time complexity. The usual problem in the field isn't figuring out the big O characteristics; it's unpredictable performance caused by an expression building up a huge chain of unevaluated thunks. Because of the inherently non-local logic of lazy evaluation (or whatever evaluation strategy compatible with non-strict semantics GHC actually chooses to use), it is not always easy to find which piece of code is introducing unwanted laziness.
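
The standard illustration (my example, not one from this thread) is summing a big list with the lazy foldl from the Prelude, which builds a chain of suspended additions, versus the strict foldl' from Data.List, which evaluates the accumulator at every step:

    import Data.List (foldl')

    -- foldl is lazy in its accumulator: this builds the thunk
    -- ((((0 + 1) + 2) + 3) + ...) and only collapses it when the
    -- result is finally demanded, which can exhaust memory.
    sumLazy :: Integer
    sumLazy = foldl (+) 0 [1 .. 10000000]

    -- foldl' forces the accumulator at each step and runs in
    -- constant space.
    sumStrict :: Integer
    sumStrict = foldl' (+) 0 [1 .. 10000000]

    main :: IO ()
    main = print sumStrict

And finding the expression that builds the chain is the non-local part: the thunks are constructed in one place and forced somewhere else entirely, so the usual remedies (heap profiling, bang patterns, seq) involve hunting well away from the code that actually blows up.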

That said, it's the same fundamental non-locality that also makes it more difficult to figure out the big O complexity of an algorithm implemented in a non-strict language.



