GCC 4.9.0 released (gcc.gnu.org)
156 points by arunc on April 22, 2014 | 28 comments



One particularly impressive improvement: "Memory usage building Firefox with debug enabled was reduced from 15GB to 3.5GB; link time from 1700 seconds to 350 seconds" - http://gcc.gnu.org/gcc-4.9/changes.html


"When LTO is used". For more info about LinkTimeOptimization and Firefox see the recent blog post from one of its main implementors: http://hubicka.blogspot.com/2014/04/linktime-optimization-in...


> Memory usage building Firefox with debug enabled was reduced from 15GB to 3.5GB; link time from 1700 seconds to 350 seconds

Why does using LTO reduce link time? I would expect the result to be smaller, but transient memory usage to be significantly higher.


It doesn't reduce link time; the comparison is between LTO link times for old and new gcc.


Previous discussion of the GCC 4.9 changelog: https://news.ycombinator.com/item?id=7578896


This is excellent:

  GCC 4.9 provides a complete implementation of the Go 1.2.1 release.


This is a really interesting aspect. gccgo was intended to be the high-performance compiler for Go (its intermediate representation could benefit from GCC's many years of optimization work), versus gc, which was relatively new and unoptimized. It is great to see them catch up.

Though it would be interesting to see hard benchmarks. As it stands, I have found performance regressions when compiling with prior versions of gccgo, as compared to gc.


Still no D compiler, huh? I've been out of the scene for a while, but I think they were promising D support would be included a release or two after Go support was added. What happened?


It seems to be in active development (http://gdcproject.org/downloads/), and "Merging GDC into upstream GCC" is on the list of project ideas (http://wiki.dlang.org/GDC/ProjectIdeas).


How is D coming along? I played with it 6 years ago, then totally forgot about it.


From the changelog:

> Memory usage building Firefox with debug enabled was reduced from 15GB to 3.5GB; link time from 1700 seconds to 350 seconds.

That's pretty impressive.


Wow, it is. Any idea if there are any downsides or bugs resulting from this optimization? Or is it just pure win?


It's about LTO (link-time optimization), which previously wasn't much optimized for compilation speed. Most projects probably don't need LTO, so this won't matter for them, but for projects like Firefox that do, faster LTO compilation is a great thing.


Not being an expert in such things, could you explain why you think most projects don't need LTO? (I'm interpreting that as "won't gain much" rather than "will find optimizations unnecessary".)


In my experience things like PGO and LTO matter most in very large projects. Of course it depends on the codebase, but in general that's what I've seen.

In very large projects, fully optimizing all the code is often unnecessary, and even harmful, because fully optimized code is larger (inlining, unrolled loops, etc.). PGO lets you find what actually needs to be optimized, and you can keep the rest compact to improve load times. Conversely, PGO can tell you what code runs immediately on load, so you can lay it out so that startup happens faster, etc.
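A minimal sketch of the PGO workflow with GCC (the file name, loop, and workload below are invented purely for illustration): an instrumented run tells the compiler which branch is hot, so the recompile can optimize layout and inlining around the common path.

  /* pgo_demo.c -- hypothetical example: the first branch is essentially
     never taken, which a profiling run can demonstrate to the compiler. */
  #include <stdio.h>

  int main(int argc, char **argv) {
      long long sum = 0;
      long long i;
      for (i = 0; i < 100000000LL; i++) {
          if (argc > 10)       /* cold path in practice */
              sum -= i;
          else                 /* hot path */
              sum += i;
      }
      printf("%lld\n", sum);
      return 0;
  }

  /* Typical PGO workflow (standard GCC flags):
       gcc -O2 -fprofile-generate pgo_demo.c -o pgo_demo
       ./pgo_demo                # run a representative workload
       gcc -O2 -fprofile-use pgo_demo.c -o pgo_demo */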

Similarly, LTO is most useful in large projects, where it is not obvious to the compiler how to optimize across compilation-unit boundaries; in a small enough project, either there are few such boundaries or it is easy to optimize across them manually.
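To make the cross-boundary point concrete, here's a minimal two-file sketch (both files are shown in one snippet, and all names are invented for illustration). Compiled normally, GCC only sees a declaration of bump() while compiling main.c, so it has to emit a real call; with -flto, optimization is deferred to link time, where the call can be inlined across the file boundary.

  /* counter.c -- one translation unit */
  static long long hits;
  long long bump(void) { return ++hits; }   /* tiny, ideal inline candidate */

  /* main.c -- another translation unit */
  #include <stdio.h>
  extern long long bump(void);

  int main(void) {
      long long total = 0;
      long long i;
      for (i = 0; i < 10000000LL; i++)
          total = bump();
      printf("%lld\n", total);
      return 0;
  }

  /* Without LTO:
       gcc -O2 -c counter.c main.c
       gcc -O2 counter.o main.o -o demo
     With LTO (same commands plus -flto throughout):
       gcc -O2 -flto -c counter.c main.c
       gcc -O2 -flto counter.o main.o -o demo */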


"Most projects don't need LTO" is just opinion (to be fair, no-one needs LTO but it may still help many people)

Larger projects could stand to gain more from LTO because the code will likely be spread across many more files. As a result, the compiler is only seeing a very small percentage of the code when it compiles each file. Potentially, then, the compiler is missing out on more optimisations.


You could already have profiled & inlined performance-critical code. With LTO, maybe you wouldn't have had to do that manually.


Sounds like it's a round of optimizations; see the changelog. It's both software and a .0 release, so of course there will be bugs :)


It seems more like they were doing something really dumb before to waste 11.5GB unnecessarily, but potato, po-tah-to. I'm just happy it keeps getting better. :)


I wouldn't call LTO "really dumb"


Me neither. But to scrape off 80% of memory use, you had to be wasting 400% of the lower limit, which seems dumb in retrospect.


Somewhat OT, but tangentially related: what do you guys use to build/make via GCC?

With Makefiles, it seems like you have to code every new addition into your build manually and take care of the dependency tree yourself.

Eclipse CDT seems difficult to set up.

Ant doesn't seem like a good C/C++ build option; it requires lots of XML and extra plugins to get it working.

SCons seems like a good option. Haven't tried Maven. What do you guys use for C/C++ builds, for production or for fun?


CMake is worth a try. Notably, it's the first build system that IntelliJ's new C/C++ IDE will support once it's released.

Although it's discouraged, CMake does have a way around the "code every new addition in your build manually" problem: http://www.cmake.org/cmake/help/v2.8.8/cmake.html#command:au...


I despise CMake's syntax. But it is by far the best build system to use if you like IDEs and don't want to manage separate project files.


Yeah, I remember reading a review of CMake that asked why, with so many good scripting languages out there these days, they would hand-roll their own subpar implementation. But I mostly think it's a wash between CMake and vanilla make, with its significant-tabs nonsense.


Autotools and makefiles for me.


CMake


Autotools all the way.



