GNU Make 4.0 Released (lists.gnu.org)
169 points by jgrahamc on Oct 9, 2013 | 88 comments



Congrats to the GNU Make team.

I still use Make occasionally, but redo (https://github.com/apenwarr/redo) just fits my mind better - it's simple, consistent, doesn't require keeping another language and its dark corners in your head (it's just shell scripts), and works well.

It also solves the bootstrap problem: there's a short script called "do", which always rebuilds everything, and is all of 177 lines of non-minified bash script. If you need to distribute anything, distribute it with "do" inside, and you guarantee that your users can build without relying on an installed package (or a specific version of GNU Make).

P.S.: OS X, Linux and BSD are supported. Supposedly also Windows, through Cygwin or Busybox, but I've never tried.


An attitude "It's just a shell script" can be dangerous when you have software that is supposed to be built on multiple platforms. You can never be 100 % sure what your /bin/sh is, be it bash, ash, dash, ksh, zsh or even an ancient Bourne shell.


Thinking like that leads down the same rabbit hole where autoconf/automake disappeared, using only Bourne shell and m4 (!). Just in case you want to compile your code on a decade-old copy of HP-UX. Or, I suppose, some shiny new distro that uses zsh as /bin/sh.


According to the redo readme, it takes care of that problem. Alternatively, if a specific shell is required, it can be specified with #!...


To explain why you are being downvoted:

It doesn't take care of the problem, it mitigates it. It tries to find the most POSIX-like shell. You still don't know which shell that will be. And then, even if you did, you are still relying on external programs, which may all work differently than the ones on your box. Writing portable shell scripts is hard.


> "do", which always rebuilds everything,

To rebuild everything with Make, you can use the -B switch:

       -B, --always-make

            Unconditionally make all targets.


It's not doing it as a feature but as a fallback. If you don't install the whole redo program but just include a "do" script it doesn't do dependency tracking so it just rebuilds everything. If you install redo itself it's much smarter and only rebuilds what is needed.


I see. Nice feature.


Inspired by redo, I'm working on a similar project that takes advantage of some of the nicer features of the Inferno OS: sh features like apply, <{}, and quoting rules; /env for shell-variable dependencies; and bind to overlay directories. Its user interface replaces build description files (eg makefiles) with a set of command-line tools to manage build rules and dependencies, an approach I haven't yet seen in other build tools.

https://github.com/catenate/credo


Interesting, I've started a similar kind of program, but wouldn't mind abandoning it for something better. Gonna try it out.


Meanwhile, many users are still stuck on 3.81, because 3.82 introduced an intentionally backward-incompatible change to the handling of pattern rules, which breaks real makefiles such as those in the Linux kernel: https://savannah.gnu.org/bugs/index.php?33034

This bug is the reason why make 3.82, released in 2011, still sits in Debian experimental and hasn't yet graduated to unstable where it might form part of a release. Make 4.0 seems likely to suffer the same fate.


I agree that breaking backwards compatibility from 3.81 to 3.82 was not a good idea.

However, breaking compatibility from 3.x to 4.x is perfectly fine and won't cause any surprises. After all, that's what major version numbers are good for.

(... at least in their original meaning, before various free software projects started some strange kind of race with their major version numbers)


The kernel actually patched around that, so kernel developers aren't stuck. It's true that older kernels won't build with current make, though. I think I remember noticing that Fedora carries a patch that fixes this issue, but am too lazy to check. It's possible other distros do too.

FWIW: Other issues with 3.82 broke things in Android and openembedded too. It wasn't their finest release.


When will developers start to understand that backwards compatibility is one of the most important goals of any software system? Randomly introducing something that is backwards incompatible in a minor point release is simply unacceptable.

A huge amount of complexity in using and deploying software comes about because of very narrow version requirements, where due to combinations of backwards incompatibility and necessary new features, there are very narrow windows of software which is actually usable. Add in dozens of different packages, and you are frequently left with a situation in which there is no single set that will all interoperate.

The Linux kernel has a very reasonable attitude: you don't break userspace. You just don't do it. I wish more tools would adopt this attitude. You just don't break your API, ever. No backwards-incompatible revisions, ever. If it's backwards incompatible, it's a new piece of software, not an update.

I realize that this is fairly idealistic, but I have experienced so much breakage with non-backwards-compatible software, and had so much better experiences whenever people make an honest effort to preserve backwards compatibility, that I really hope more developers start to consider backwards compatibility one of their primary goals, to be broken only if absolutely necessary, and only with a major version change or an outright renaming of the package.


Actually, GNU make is not a good target for this complaint. Historically make has been extremely careful about backwards compatibility. The "bug" here is that the Makefiles were relying on an undocumented feature.

In general yes it'd be good if developers paid more attention to backwards compatibility.


Sure, they were relying on an undocumented feature, but it's not like the GNU Make developers didn't know about this. They explicitly called out the backwards incompatible change in the release note; they knew they were breaking compatibility.

My point is that even undocumented features are an important part of backwards compatibility. If there is software out there relying on it, breaking it and saying "well, they were relying on an undocumented feature" doesn't actually help your users at all. Is your goal to ensure that developers do everything the way you tell them to all the time, and punish them if they don't? Or is your goal to provide software that your users can use to do their jobs?


I disagree: when using undocumented features, the burden is fully on the developer using those features.

That's how the world works: when you rely on undocumented features in any aspect of life (i.e. you don't follow the rules), it's great when you get away with it, but don't expect the world to bend its rules to accommodate you. That only happens when a majority goes down that path, and often it doesn't even happen then.


No, how the world works is that you satisfy your users or your users go someplace else.

Microsoft gets this. Linus gets this. FSF and GNU, while I love their ideals, are apparently a bit too ivory tower to actually get it on things like this.


In theory that sounds great, but as we've seen with make, the end result is that the users of make just didn't upgrade. If that's not a desirable outcome for you, then it's on you to prevent it. No one else is going to.


Which change are you referring to? Excluding outright bugs, there were no undocumented features that changed in 3.82.

There were several outright bugs; without a patch (that most distros apply), it would SIGABRT during the Android build. Perhaps this is what a few people are referring to (I don't know whether this was the only problem with building Android on 3.82).

The most significant backwards-incompatible change was changing how pattern rules are selected if there are multiple matches, but this is documented.

Perhaps you are thinking of wildcard expansion, which has the undocumented feature of returning a sorted list? That wasn't actually changed, it was marked as a "Future backward incompatibility" that would be changed in the next release.


I can't help but think that this approach is similar to Microsoft's, and appears to have led to stagnation. It must have some upsides for developers, but I'm not sure it is a winning strategy in the long run.


> I think I remember noticing that Fedora carries a patch that fixes this issue, but am too lazy to check. It's possible other distros do too.

Doesn't look like it; I checked the Fedora sources for their make package, at http://pkgs.fedoraproject.org/cgit/make.git/tree/ , and none of the patches appears to affect that backward-incompatibility.

If anyone knows of a patch, in a Linux distribution or otherwise, which fixes this backward incompatibility, I'd love to hear about it.


Intentionally backward-incompatible changes can be gotten away with (e.g., Apple moving from PPC to Intel).

What could be done with GNU Make to help encourage a transition forward?


> Intentionally backward-incompatible changes can be gotten away with (e.g., Apple moving from PPC to Intel).

With massive, carefully coordinated transitions, and features provided to help with backward compatibility (e.g. OS X provided PowerPC emulation for ~5 years).

> What could be done with GNU Make to help encourage a transition forward?

First, documenting exactly why the previous behavior (mixing implicit/pattern rules and normal rules) is problematic to support going forward and is worth the pain of a backward-incompatible transition.

Second, adding a backward-compatibility option to enable the old behavior, or ideally allowing it by default, perhaps with a warning if transitioning away from it is that important.

And third, keeping that option available for many years, until existing software projects making use of that behavior have aged out of use (for example, until all of the enterprise Linux distributions have moved to versions of the Linux kernel that don't depend on the old behavior).


The key is that Apple changes the output format that they accept, not the input format that developers work with.


Back at the dawn of time, Borland C wrote dependency info into its object files, and Borland Make could read the object files to get the info. That's so much simpler than makedepend or gcc -M that it's sad.


> New feature: GNU Guile integration

Because if there's one thing the GNU software-building toolchain needs, it's more languages! How many are we up to now?

Grouping output by target for parallel builds sounds very useful though \o/


Guile is the official GNU extension language, so it makes sense. Guile's goal is to provide practical software freedom by allowing more GNU programs to be extensible in the way that Emacs is.


I think it's great. In the future all GNU tools will probably be extensible with Scheme.


Can't wait for EMACS with scheme support.



How's that going anyway? It seems GnuCash, LilyPond and TeXmacs are already using Guile (and gEDA too, but that doesn't seem to be part of GNU). So make is the fourth Guile-using GNU project?


There are more https://en.wikipedia.org/wiki/GNU_Guile#Programs_using_Guile

But GNU Make seems to be the first major one.


Finally! An answer to all the clamour for a Guile-based extension language to an increasingly nonstandard implementation of an increasingly fragmented 'POSIX' 'standard'.


I don't see that the sarcasm is warranted. GNU make, really, is the relevant standard. It's what virtually everyone doing serious work (as opposed to e.g. emitting "makefiles" from some other tool) with make uses. GNU Make is simply more featureful and more performant than the other choices.

One of the features it doesn't have, though, is a first class language for doing non-trivial extensions. Some systems (the kernel Makefiles and Android's build system are really good examples) have stretched make's built-in environment past the limit already and might have benefitted from something like this.


I just can't wait to rewrite the next generation of GNU Makefiles so I can build them from /usr/ports.


I'm being incredibly unimaginative today, but what's a good use-case for extending Make via scheme?


Some years ago I designed a Makefile-based build system [1] that uses many GNU Make features. If you write Makefiles on that level, you quickly notice that GNU Make's internal structure resembles Lisp quite a lot [2], e.g. you write $(filter xx,yy) which is very close to (filter xx yy). Also, you have other Lisp-like stuff like quoting, eval, and so on. Sometimes GNU Make really felt almost like Lisp, but with a slightly more cumbersome syntax and evaluation model.

So I often asked myself why they don't use Lisp directly. Now they almost do, which I find very consistent.

[1] http://mxe.cc/

[2] https://github.com/mxe/mxe/blob/master/Makefile
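
To make the analogy concrete, a rough sketch of what I mean (the file names are made up):

    SOURCES := foo.c bar.c baz.cpp
    C_ONLY  := $(filter %.c,$(SOURCES))        # roughly (filter "%.c" sources)
    OBJECTS := $(patsubst %.c,%.o,$(C_ONLY))   # roughly (map c->o c-only)

    show:
        @echo $(OBJECTS)   # prints: foo.o bar.o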


That's funny, I see what you are talking about. I had a moment like that with make too, but instead likened it to Prolog in my head one evening!


I think Prolog is a much better analogy than Lisp. Like Prolog, the Makefile declares relationships, which go into a rule database, and then when you need to make something, the runtime scans the rules looking for what matches. This will imply other dependencies, which are satisfied recursively.


There are two separate stages with GNU make. The first is very lisp like with the functions like $(filter) and you use it to create all the rules for the second, more prolog like, stage. A simple Makefile would do nothing for the first stage because it'd all be static text instead of function calls.


Not only that. In the above-mentioned MXE project, the $(...) functions also play a vital role in the "second stage": they create a huge, automatic set of rules and actions, based on a higher-level description that is given for each package via a bunch of Make variables (some of which are multiline and contain actual commands for that package).


Being unreasonably optimistic, I hope for the replacement of a bunch of other cruft around the toolchain with Scheme.

Being realistic, I look forward to adding Scheme to the cruft.


I often think how much better Make would have been with elisp or scheme syntax rather than extending it with shell.


GNU make already has a number of functions for things like filtering a list for strings matching a pattern. There were many requests for additional functions. There's some sense to integrating guile rather than going further down the path of extending the macro/function language that GNU make essentially has.


One starting place for some of these functions is:

http://www.gnu.org/software/make/manual/html_node/Functions....

(note "foreach", "call", "eval") and:

http://www.gnu.org/software/make/manual/html_node/Multi_002d...

For instance, from the "call" documentation --

"The call function is unique in that it can be used to create new parameterized functions."

Sometimes these types of functions are really important, and you have to be creative to use the builtins to set up the dependencies you want. For instance, by combining "foreach" with "call" or "eval", you can dynamically create sets of rules. This can be quite powerful.
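
For instance, a rough sketch of that foreach/call/eval pattern (the module names and targets are invented):

    MODULES := alpha beta gamma

    # $(1) is the module name passed in by call.
    define MODULE_RULE
    build-$(1):
        @echo building $(1)
    endef

    $(foreach m,$(MODULES),$(eval $(call MODULE_RULE,$(m))))

    all: $(foreach m,$(MODULES),build-$(m))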

And you don't have to use make only for software compilation. You can use it to create more general-purpose data transformations. When you get a batch of new images, for example, you type "make", and it just recomputes all the derived data (but only what depends on the new stuff). I have done this, and it was very slick, but it tends to stress the existing function set.
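
As a concrete (hypothetical) sketch of that image case, assuming ImageMagick's convert is installed:

    IMAGES := $(wildcard photos/*.jpg)
    THUMBS := $(patsubst photos/%.jpg,thumbs/%.jpg,$(IMAGES))

    all: $(THUMBS)

    thumbs/%.jpg: photos/%.jpg
        mkdir -p thumbs
        convert $< -resize 200x200 $@

Touch one photo and only its thumbnail gets regenerated.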


The obvious example I can think of is autotools. If make had had a built-in Scheme 20 years ago, would we be suffering with the existing m4/shell monstrosity?


It would be nice if gmake had some predefined, portable macros for working with Java, so XML clutter like Ant and Maven could be shown the door.

I so prefer reading makefiles to a sea of XML gibberish.

    mytarget:  source1 source2

        do-something -o mytarget source1 source2

so much easier to read than Ant tasks. (and, yes, there are ways to get the file lists into symbols without explicitly enumerating them)

Ant, and by extension, Maven, was a huge step backward. Sure, have a way to make portable commands, BUT, was XML really needed??? And, does every task have to be built such that the task checks dependency timestamps, rather than the framework that calls the task???
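
(For example, one rough way to get those file lists without enumerating them, reusing the made-up do-something tool and layout from above:

    SOURCES := $(wildcard src/*)

    mytarget: $(SOURCES)
        do-something -o $@ $(SOURCES)

)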


Isn't what you're saying that if Make were better, then it would be better than Ant?

Ant and Maven both have plenty of issues, but being dissimilar from make is not one of them. At the end of the day 'file based' tools like Make are not really suitable for Java which is more concerned with source directories - rightly so in my opinion. There is no program that cannot be written because of a consistent source layout.

There are ways to enumerate files, but nothing good (in my experience) and by the time you are using them the make files are no longer easy to read (never mind debug).

Make also has this Faustian pact thing going on. Access to the shell and all of its power and familiarity, but at the (significant) expense of portability.


Does make have any way to see that a .h file changed, so that all C/C++ source files that directly or indirectly include it get recompiled, but others don't?


Make does not do this directly. Typically you generate a set of dependencies using gcc. gcc can be told to output a rule suitable for use in make that describes the dependencies of a source file. See the -M option and friends.


Not make by itself, but gcc comes with related preprocessor options[1], in particular -MMD.

Add that to your CFLAGS and include the generated dependency files by adding something like

    -include $(OBJECTS:%.o=%.d)
to the Makefile.

[1] http://gcc.gnu.org/onlinedocs/gcc/Preprocessor-Options.html


Another commenter already pointed out the `-M` flag on GCC (and Clang), but I thought I'd share the rule I use to provide automatic dependencies. It's a bit simpler than the one suggested in the GNU Make Manual:

https://github.com/mcinglis/c-style#use-gccs-and-clangs--m-t...


I suggest just building with "-MMD -MP", using "-MT" if for some reason you want .d files to reside elsewhere. The "-MP" option prevents build errors when a file is removed. No need for crazy redirects or an actual rule to create the .d files.
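
Roughly, a minimal sketch of what I mean (SRCS/OBJS and the target name are placeholders):

    CFLAGS += -MMD -MP          # write foo.d alongside foo.o as a side effect
    SRCS   := $(wildcard *.c)
    OBJS   := $(SRCS:.c=.o)

    program: $(OBJS)
        $(CC) -o $@ $(OBJS)

    # Pull in whatever .d files exist; the leading '-' keeps the first
    # build from failing when they don't exist yet.
    -include $(OBJS:.o=.d)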


`-MP` makes sense, but I don't see how Make could work out when to regenerate the dependency files themselves without the `-MT` value as in my rule, or the `sed` expression given in the GNU Make Manual. Maybe I'm missing something about `-MMD` - I don't understand everything said about it in the manpage of GCC.

Generally it's my style to prefer I/O redirection in the shell to programs taking output parameters and managing their own files. Thus, using `> $@` rather than `-MF` or `-MD`.


You seem to be invoking the compiler separately for the sole purpose of building .d files (with a %.d: %.c rule). Don't do it that way. With the -MMD option, you create the .d file as a side-effect of the real compilation. Any change that would require the .d file to be rebuilt has to involve a change to the old dependencies as listed in the old .d file. So the source file will be recompiled. In order for a new dependency on a new header file to be added, an existing and remaining dependency needs to change to add '#include ...'. So the .c file will get recompiled because that file has changed and then you have a new .d file.

For C/C++, I have to subvert GNU make's attempts to rebuild .d files by using $(wildcard) on them and inserting a /./ in the middle of the path.

Note that there are other cases where I do have a rule for building .d files and have them rebuilt. In particular for Fortran 90 modules. This occurs wherever you need to build the .d files first for the initial first build to be in the correct order. This is only an issue for C/C++ if you are invoking a program to generate C sources (e.g. rpcgen). In practice, that is often best handled with a few dependencies listed explicitly in your Makefile.
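
The $(wildcard) part of that looks roughly like this (leaving out the /./ detail):

    DEPS := $(OBJS:.o=.d)
    # wildcard expands only to .d files that already exist, so make never
    # hunts for a rule to rebuild ones that are missing or were deleted.
    -include $(wildcard $(DEPS))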


Yes, but you have to work a little bit for it.

If your rule that builds .o files from .c/.cpp files uses one of the dependency-generating flags to GCC (options beginning with -M), GCC will create an alternate or additional (depending on the flag) make-compatible file that describes what files were included during the build. If you then include these files in your Makefile with the 'include' directive, then you'll get the behavior you're looking for.

Implementing it correctly can be a little tricky, but once you understand how to do it it's not too bad. As I recall there used to be some subtle issues that required some post-processing of the GCC-generated dependency files to avoid a problem when header files were renamed or removed (foo.c included foo.h, so a dependency was generated; later foo.c was modified to not require foo.h, and foo.h was removed, but the build still thinks it's required to build) -- it looks like GCC now has a -MP option to add phony empty rules for all dependencies, allowing make to power through these.


Make only understands rules such as "X depends on Y", but gcc can output these dependency rules (-M* switches). So yes, it is possible to do this but make itself is source code agnostic.


Depends on which version of "make" you're using. GNU make does not, but automatic dependency detection is just one of many enhancements in Electric Make, a drop-in replacement for GNU make.


For this reason and others I just wrote my own build script using /bin/sh:

https://github.com/chkoreff/Fexl/blob/master/build

Configurable with a small src/config file:

https://github.com/chkoreff/Fexl/blob/master/src/config

It automatically greps header dependencies, caching them in obj/*.i files.


GCC already does this for you (with the -M... options that others have mentioned). Don't re-invent the wheel.

.i is the extension for pre-processed C source. If you re-use it for something else build-related, I guarantee you'll regret it.


Thanks. I'm not using make, so I don't really need a make rule, but I can see that gcc -M does something useful:

  $ gcc -M src/value.c
  value.o: src/value.c src/memory.h src/value.h
The result of my grep/sed gives me just the header dependencies alone, which is really all I need:

  $ cat obj/value.i
  src/memory.h src/value.h
So I might consider replacing the grep/sed with gcc -M, and strip off the first two entries, which would give me the equivalent result, but it doesn't strike me as a "must do" right now. Again, I don't use make, just /bin/sh, so I don't need a make rule per se.

Thanks for the advice about the ".i" suffix. I don't think there's a conflict though, because all my intermediate files go into a completely separate "obj" directory, and no pre-processed source will be going there. However, I will consider choosing a different suffix just for the heck of it. I'm sure anything I choose (such as ".dep" or ".inc") will mean something to somebody somewhere, but since I'm in the entirely orthogonal context of the "obj" directory I don't think it matters.


Everybody seems to use .dep, FWIW.



I know you already have quite a few answers to this, but you could also use makedepend.


Considering that one of the most popular operating systems among developers has an 8 year old version of make, I wonder how many people will start using new features.


Considering that this OS is hostile to free software, it's not the target audience of GNU make. Plenty of people on free operating systems will start using these new features.


> Considering that one of the most popular operating systems among developers

That is a pretty presumptuous thing to say.


Because GNU Make 3.81 was the last version released under GPLv2.


There are very few releases of GNU make: 3.81 (in Debian too) is from 2006, 3.82 from 2010 and 4.0 from 2013.


What's annoying about GNU make is that the language has a lot of dark corners and the maintainers are pretty willing to break things. Half of the 3.82 release announcement was discussion of backward-incompatible changes (and those are of course only the incompatibilities known at release time). Now you could say that 99% of projects won't be affected by this, but in a large distribution that is still a lot of breakage.


What OS are you referring to?


That would be OS X. The make that comes with Xcode is rather old, although it's easy to update it using Homebrew.


Both Ubuntu 12.04 and Mac OS X 10.9 ship with make 3.81, which was released in 2006.


OS X


gmake 4.0 is GPLv3, so it will never be added to OS X.


3.82 was also GPLv3.


OSX ships with 3.81:

    $ make --version
    GNU Make 3.81
    Copyright (C) 2006  Free Software Foundation, Inc.
    This is free software; see the source for copying conditions.
    There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
    PARTICULAR PURPOSE.

    This program built for i386-apple-darwin11.3.0
(which predates GPLv3)


Yes. I was pointing out that GPLv3 is not unique to make 4.0. In fact, 3.82 will never be the default on OS X because it too is GPLv3.


Ah, I was wondering why the major version number had changed.


And I, for one, am glad.


Why? You like running old software?


What's wrong with old software? If it works, why change? Just because something is "new" and "cool"? C'mon.


Is there something broken with make 3.81?


There are a few of the features mentioned that I already thought make had. At least now I might start actually using them.


When would you use make over a bash script?


Well, for starters: make does dependency resolution. If you change a single file and recompile, it will figure out what needs to be rebuilt instead of rebuilding everything. It will also figure out what can be built in parallel if you give it the -j option. It also has implicit build rules so you don't have to write a line in a bash script for every single file.

EDIT: To answer your question more directly: when you have a large project, or a project that needs to build on many platforms. Probably a whole load of other situations as well, but those are the two that stick out.
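
As a tiny made-up example: with a Makefile like the one below, touching foo.c rebuilds only foo.o before relinking, and "make -j4" compiles independent objects in parallel.

    CC     := cc
    CFLAGS := -O2 -Wall

    app: foo.o bar.o baz.o
        $(CC) -o $@ foo.o bar.o baz.o

    # No per-file compile lines needed: make's built-in implicit rule
    # turns each .c into a .o using $(CC) $(CFLAGS).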


Make makes it possible to recompile only the files you have changed in your code base to produce the library or output you want. Bash scripts usually end up compiling the entire codebase even if only one file has changed.

TL;DR faster turn around times using make.



