One particularly impressive improvement:
"Memory usage building Firefox with debug enabled was reduced from 15GB to 3.5GB; link time from 1700 seconds to 350 seconds" - http://gcc.gnu.org/gcc-4.9/changes.html
This is a really interesting aspect. gccgo was intended to be the high-performance compiler for Go (compiling through GCC's intermediate representation, it could benefit from GCC's many years of optimization work), versus gc, which was relatively new and unoptimized. It is great to see them catch up.
Though it would be interesting to see hard benchmarks. As it is, I have found performance regressions when compiling with prior versions of gccgo, compared to gc.
Still no D compiler, huh? I've been out of the scene for a while, but I think they were promising D support would be included a release or two after Go support was added. What happened?
It's about LTO (link-time optimization), which previously hadn't been optimized much for compilation speed or memory use. Most projects probably don't need LTO, so this won't matter for them, but for projects like Firefox that do use it, faster and leaner LTO is a great thing.
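For reference, enabling it in GCC is just a matter of passing -flto at both the compile and the link step, so the optimizer gets to see the whole program at link time. A minimal sketch, with made-up file names:

    /* Each unit is compiled with -flto, which makes GCC embed its
       GIMPLE intermediate representation in the object files; the
       heavy optimization then happens when they are linked:

         gcc -O2 -flto -c foo.c
         gcc -O2 -flto -c bar.c
         gcc -O2 -flto foo.o bar.o -o app
    */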
Not being an expert in such things, could you explain why you think most projects don't need LTO?
(I'm interpreting that as "won't gain much", rather than "will find optimizations unnecessary")
In my experience things like PGO and LTO matter most in very large projects. Of course it depends on the codebase, but in general that's what I've seen.
In very large projects, fully optimizing all the code is often unneeded, and bad because fully optimized code is larger (inlining, unrolled loops, etc.). PGO lets you find what actually needs to be optimized, and you can keep the rest compact to improve load times. Vice versa, PGO can tell you what code is run immediately on load so you can order it so that happens faster, etc.
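Concretely, GCC's PGO workflow is a two-pass build; here's a rough sketch (the file and input names are made up):

    /* 1. Build an instrumented binary that records execution counts:
         gcc -O2 -fprofile-generate app.c -o app

       2. Run it on a representative workload, which writes out
          .gcda profile files next to the objects:
         ./app typical-input

       3. Rebuild using the collected profile; GCC then spends its
          inlining and unrolling budget on the hot paths and can
          keep rarely-run code compact:
         gcc -O2 -fprofile-use app.c -o app
    */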
Similarly, LTO is most useful in large projects where it is not obvious to the compiler how to optimize across compilation unit boundaries - as in a small enough project, either there are few such boundaries or it is easy to manually optimize for them.
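To make that concrete, here's a toy example of the kind of boundary LTO removes (file names are made up):

    /* util.c */
    int twice(int x) { return 2 * x; }

    /* main.c */
    int twice(int x);
    int main(void) { return twice(21); }

    /* Compiled separately with plain -O2, GCC has to emit an actual
       call to twice(), because its body isn't visible while main.c
       is being compiled. Build both files with -flto and the
       link-time pass sees both units, so it can inline twice()
       into main() and fold the whole thing down to a constant. */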
"Most projects don't need LTO" is just opinion (to be fair, no-one needs LTO but it may still help many people)
Larger projects could stand to gain more from LTO because the code will likely be spread across many more files. As a result, the compiler is only seeing a very small percentage of the code when it compiles each file. Potentially, then, the compiler is missing out on more optimisations.
It seems more like they were doing something really dumb before to waste 11.5GB unnecessarily, but potato poh-tat-oh. I'm just happy it keeps getting better. :)
Yeah, I remember reading a review of CMake that asked why, with so many good scripting languages out there these days, they would hand-roll their own subpar language. But I mostly think it's a wash between CMake and vanilla make, with its significant-tabs nonsense.