
> this describes compilers for all modern languages

Yes, this is the current trend. And the results don't really justify the cost:

http://www.complang.tuwien.ac.at/kps2015/proceedings/KPS_201...

http://proebsting.cs.arizona.edu/law.html

Which is why I (roughly) agree with Daniel Bernstein's polemic "The Death of Optimizing Compilers" https://cr.yp.to/talks/2015.04.16/slides-djb-20150416-a4.pdf

(I frequently make code 10x to 1000x or more faster, and compilers' contributions to that total tend to be fairly minimal, though larger than zero and usually worthwhile. But not worth the current shenanigans and not worth having no idea how the code is going to turn out)




This also explains the shift in functional programming from languages with good reasoning about correctness (Haskell, but also SQL) to languages which additionally allow good reasoning about performance (OCaml, Rust, etc.).

Moreover, this explains the NASA coding rule that all loops shall have an explicit upper bound.
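Roughly, a sketch of what that rule buys you (the cap and function names here are made up, not taken from the NASA standard):

    #include <cstdio>

    // Hypothetical cap; the point is that the bound is a compile-time
    // constant, so the worst-case iteration count is obvious from the code.
    constexpr int MAX_RETRIES = 16;

    // Placeholder for some condition polled in a loop.
    bool device_ready() { return false; }

    bool poll_device() {
        // Instead of `while (!device_ready()) {}` (no upper bound), the loop
        // terminates after at most MAX_RETRIES iterations.
        for (int i = 0; i < MAX_RETRIES; ++i) {
            if (device_ready()) return true;
        }
        return false;  // bounded failure instead of a potentially unbounded wait
    }

    int main() {
        std::printf("ready: %d\n", poll_device());
    }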

The deeper issue seems to be: If you write code in a fashion that lets you reason easily about its performance, then you might not get optimal performance in all cases, but you establish a minimum level of performance.

So performance stability is more important than reaching optimal performance, because the latter may be easily destroyed accidentally in future versions of the software.

The only exceptions are well-defined tasks with stable input/output definitions (e.g. numeric primitives like matrix multiplication, Fourier transforms, etc.), where the whole point of newer versions is performance improvement and nothing else.


I don't doubt that you make code 10x to 1000x faster, but could this be selection bias? I.e., are you focusing on the code that is slow rather than the code the compiler has already had a substantial impact on?


You make code 10x to 1000x faster by changing the algorithm or data layout you use, not by emitting different instructions for the same higher level code.
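For example (a sketch with invented names), switching from an array-of-structs to a struct-of-arrays layout so the one hot field is contiguous; the compiler will not make that transformation for you:

    #include <vector>

    // Array-of-structs: each Particle drags three cold fields through the
    // cache even when only `x` is needed.
    struct Particle {
        float x, y, z;
        float mass;
    };

    float sum_x_aos(const std::vector<Particle>& ps) {
        float sum = 0.0f;
        for (const Particle& p : ps) sum += p.x;
        return sum;
    }

    // Struct-of-arrays: the hot field is contiguous, so the same loop touches
    // a quarter of the memory and vectorizes more easily.
    struct Particles {
        std::vector<float> x, y, z, mass;
    };

    float sum_x_soa(const Particles& ps) {
        float sum = 0.0f;
        for (float v : ps.x) sum += v;
        return sum;
    }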


I don't think anyone who mattered ever argued that an optimizing compiler could or should attempt to turn bad algorithms into good ones.

It seems clear to me that, at the present state of the tech, most developers are better at picking algorithms while compilers are better at picking low-level instructions. There are some exceptions, but things like GotoBLAS are clearly exceptions and not the rule.


No, but there can be this myth that certain things will be "optimised away" by the compiler, which is why you can get single digit or fractional speedups by just tuning the optimisation level.


I would much rather deal with those "myths" than with the class of myths surrounding i++ being faster/slower than ++i, or any of the nonsense around divisions.

At least claims that something will be "optimized away" can readily be tested empirically, in a way most people will listen to. I have no problem testing the / operator, but I have never had holders of such beliefs accept my results; whereas people claiming something like "virtual function calls can be optimized away" can handle it when I demonstrate that inside a single binary they can be, but at places like library boundaries they cannot.
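Roughly what I mean (a sketch, class and factory names invented):

    #include <memory>

    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    struct Square final : Shape {
        double side = 2.0;
        double area() const override { return side * side; }
    };

    double area_of(const Shape& s) { return s.area(); }

    // Hypothetical factory, defined in a separately compiled shared library.
    std::unique_ptr<Shape> make_shape_from_plugin();

    double local_case() {
        Square sq;
        // The dynamic type is visible right here, so after inlining area_of()
        // the compiler can devirtualize (and inline) the call to Square::area().
        return area_of(sq);
    }

    double library_case() {
        std::unique_ptr<Shape> s = make_shape_from_plugin();
        // Across the library boundary the concrete type is opaque, so this
        // stays an indirect call through the vtable.
        return s->area();
    }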


> I would much rather deal with those "myths" than with the class of myths surrounding i++ being faster/slower than ++i, or any of the nonsense around divisions.

This is one area I dislike in C and C++ culture: there is this tendency to micro-optimize code like that without even profiling it, or without it having any actual impact on the application being built.

So one ends up with endless bike shedding discussions about how to write code, instead of writing it.


That is a facet of programming culture. So many programmers in so many languages have their silly myths.

A few years ago some Ruby dev said that declaring methods without parentheses would let the Ruby parser parse method declarations faster. The dev never even measured it; he just thought the parens were ugly, and a ton of gem devs removed parens claiming a speedup. It wasn't until a few years later that someone benchmarked it and found that there was either no difference, or that removing them actually cost the tiniest amount.


Those are the things that the compiler can do to optimise your program by a few x. The sorts of optimisations that it can't do are things like picking a different algorithm, or reorganising the layout of a class so that it or multiples of it can fit within a cache line.
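A small sketch of the layout point (field names are arbitrary): reordering members from largest to smallest removes padding, so more objects fit per 64-byte cache line, and that is not a change the compiler will make for you.

    #include <cstdint>
    #include <cstdio>

    // Poor ordering: padding after `flag` and after `id` inflates the struct.
    struct RecordPadded {
        std::uint8_t  flag;   // 1 byte + 7 bytes padding
        std::uint64_t key;    // 8 bytes
        std::uint32_t id;     // 4 bytes + 4 bytes tail padding
    };                        // typically 24 bytes -> 2 whole records per cache line

    // Same fields, ordered largest-first: no internal padding.
    struct RecordPacked {
        std::uint64_t key;    // 8 bytes
        std::uint32_t id;     // 4 bytes
        std::uint8_t  flag;   // 1 byte + 3 bytes tail padding
    };                        // typically 16 bytes -> 4 records per cache line

    int main() {
        std::printf("padded: %zu bytes, packed: %zu bytes\n",
                    sizeof(RecordPadded), sizeof(RecordPacked));
    }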



