Everything that target `R` depends on will also have SHELL and .SHELLFLAGS overridden. If `R` depends on some data generated by another program, that dependency probably wants to be built and executed with the default SHELL (or another shell, perhaps).
Now, `R`'s dependencies will be generated with the makefile's defaults.
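A minimal sketch of the propagation (target and file names are hypothetical, in the spirit of the article's R example):

```make
R: SHELL := /usr/bin/Rscript
R: .SHELLFLAGS := -e
R: data.csv
	plot(read.csv("data.csv"))

# Because target-specific variables are inherited by prerequisites,
# data.csv's recipe would ALSO run under Rscript here, not /bin/sh:
data.csv:
	python generate.py > data.csv
```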
Usually I prefer to build up richer stages like this using the system shell anyway, though. Build a target which in turn is executed by the shell normally to traverse the next edge in the graph. But I can see how this mechanism has its uses.
Make was designed for building dependencies. I think it is always problematic to use it as a command runner (for example there is no standard way to list out the available commands).
[just](https://github.com/casey/just) is a tool that feels similar to make but is designed explicitly for the purpose of running commands.
I think of this as a simple CLI for your project workflow. You still want to avoid putting code into a Justfile; keep it in scripts. But the Justfile provides a slightly nicer UX and automatically invokes dependencies.
Yes, I agree with this. I use shell instead of make, because make wraps the shell and their syntaxes collide badly. For example, in a recipe the shell's PID is written $$$$, not $$.
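For instance, because make expands `$` itself, every shell `$` in a recipe must be doubled, so the shell's `$$` (its own PID) has to be written `$$$$`:

```make
show-pid:
	@echo "this recipe's shell PID is $$$$"
```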
Most people forget to mark their targets .PHONY, so they have a subtle bug in their build (try `touch build; touch test` and watch those targets stop running).
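A sketch of the fix (target and script names assumed):

```make
# Without this line, creating files named `build` or `test` (e.g. via
# `touch build test`) makes these targets report "up to date" and do nothing.
.PHONY: build test

build:
	./scripts/build.sh

test:
	./scripts/run-tests.sh
```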
----
But shell also suffers from the problem where it doesn't list the commands. I filed a bug for Oil shell here:
I mentioned a couple other "frameworks" there like just, go, Taskfile, etc.
But it should really just be built into the shell, since it's so common. And there should be command completion too, which I think bash-completion has for Makefiles on many distros.
Apparently there is no standard name for this kind of "task runner". But I think shell makes a lot more sense than a custom format, because there are many instances where you need a simple loop or conditional. It scales better. (And Oil also fixes bad shell syntax while remaining compatible: http://www.oilshell.org/blog/2020/01/simplest-explanation.ht...)
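One common shape for a shell-based task runner (task names here are hypothetical) is a file of plain functions plus a one-line dispatcher at the bottom:

```sh
#!/bin/sh
# Each task is an ordinary function; the dispatcher runs whichever
# one is named on the command line, e.g. `./run.sh build`.
build() { echo "building..."; }
check() { echo "running tests..."; }

# A poor man's task listing, since there is no standard way to ask for one.
tasks() { echo "build check"; }

if [ "$#" -eq 0 ]; then tasks; else "$@"; fi
```

This also scales to loops and conditionals inside a task without any escaping, which is exactly where Makefile-as-task-runner starts to hurt.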
If anyone wants to help let me know :) The code is plain Python and pretty hackable. However it generates fast C++, so you get the best of both worlds (in progress)
This is GNU Make + a few patches. So it's 100% compatible. And you get an interactive debugger, and lots more stuff. For instance, to list out the commands:
    remake --targets
No idea why this hasn't been merged upstream.
Your larger point really stands, though: if you're just running commands, you shouldn't be using Make. But it is abused in that way often, so...
There's --targets and --tasks to handle such things, but it really depends on the Makefile in question. If you really want to know how it behaves, apt install remake.
Yes, I remember using zsh and in my experience this was barely
useful since most Makefiles are auto-generated with hundreds or
thousands of targets.
> one could always have a 'help' target that prints a short documentation
Sure, that's fine. But the point is that if you have an unknown Makefile
you can't (or shouldn't) just execute it without knowing what it will
do. Makefiles should be treated as individual programs just like any
other executable and there's no guaranteed standard way to get help
from it.
"make help" is definitely not a standard. For all I know "make help"
builds help.exe. But there is a standard way to get available commands:
Just look at the README or open the Makefile with a text-editor!
The lack of a standard argument for getting help doesn't make it
problematic for use as a command runner. You can't automatically get all
available commands from a Makefile, just as you can't get all
command-line flags from an executable. The program/Makefile has to
provide it by itself.
I didn't generate any files in the examples for simplicity. But you could imagine a workflow where Python generates some data and then you use R to plot it + run some statistical tool.
This makes me feel like I've sold my soul to the devil, and that I'm just living on borrowed time until it all fails and falls apart. It hasn't yet, however...
-f is guaranteed by POSIX, and #! is de facto portable.[1] My criterion for shame is, "will this silently break in the future?". I think you're good. It's not my style, but if it were something I came across at work, so long as it worked well it wouldn't even cross my mind to try to "fix" it.
FWIW, using make -f in the shebang is also done for debian/rules in Debian package builds. I don't know if it serves any real purpose. I suppose it permits one to write a bespoke script for building targets without using make.[2] I guess I wouldn't be surprised if someone, somewhere depended on that capability, given how old and widespread Debian packages are.
[1] /usr/bin/env make -f would be better, but then you run afoul of the problem that you can't portably pass more than a single explicit shebang command argument.
[2] Which I see now is a bonus to your process-data.mk script. It could be replaced with a non-make version without affecting callers.
That shebang is appealing but unfortunately more than one argument (past the initial command name) in a shebang is unportable: some OSes will coalesce the extra arguments into one, others make them separate arguments.
There's also a special bonus papercut you might hit when /usr/bin/env is in the shebang with extra arguments: an infinite loop!
GNU coreutils env supports a flag for it since 8.30 which translates to Debian 10 & Ubuntu 19.04. Not a perfect solution, but it appears to allow portability across the vast majority of modern OSes?
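The flag in question is `-S` (split-string); FreeBSD-derived env (including macOS's) supports it too. A sketch of the shebang it enables:

```make
#!/usr/bin/env -S make -f
# Without -S, Linux would pass "make -f" to env as one argument,
# and the lookup would fail (or, as noted above, loop).

all:
	@echo hello
```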
Note that this article (like many, many others) assumes GNU Make. POSIX
Make has neither .ONESHELL nor local macros. Neither do most built-in
Make implementations in other OSes, like OpenBSD's bmake.
POSIX shell is also notoriously obtuse and difficult to use. As a big advocate of POSIX as a target, I don't blame anyone for using GNU make - or perhaps BSD make is a better lowest common denominator.
Personally, I try to use POSIX Makefiles, but I often find that they're most useful as a target for Makefile generators (in my case, these are usually a shell script called configure).
One person's obtuseness is another person's simplicity :-) . In all of my
personal and some of my work projects I used nothing but portable
features in Shell, Make, Sed, etc. Checking with multiple
implementations where possible. As long as you use the right tool for
the right job, there shouldn't be any problems.
The most common mistake of that sort that I've seen is people trying to
do complex conditionals inside their makefiles when they clearly would
be better off in a Shell script. (I'm looking at you, fans of ifeq.)
Once POSIX standardizes "!=" then POSIX-portable conditionals will be possible using the same technique as above, replacing, e.g. OS = $(shell $(OS.exec))$(OS.exec:sh) with just OS != $(OS.exec). Though, you'd need to wait for Solaris, AIX, and macOS gmake[1] to add support for !=.
Alternatively, if you add an extra level of indirection using .DEFAULT to capture and forward make invocations, you can simply pass OS, etc, as invocation arguments. Indirection solves everything, though, so that's cheating.
[1] Apple's ancient GNU Make 3.81 predates != support. :(
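Spelled out, the two forms (with `uname -s` standing in for the command):

```make
OS.exec = uname -s

# Today's portable hack: GNU make expands $(shell ...), SysV/Solaris
# make expands the :sh suffix, and each ignores the other's half.
OS = $(shell $(OS.exec))$(OS.exec:sh)

# Once != is standard (already in GNU make >= 4.0 and BSD make):
# OS != $(OS.exec)
```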
bmake is the implementation of make in NetBSD and FreeBSD. OpenBSD dropped bmake a long time ago and wrote their own implementation. OpenBSD make doesn't support ONESHELL, either, though.
I wish someone would write a modern alternative to GNU Make. I've looked and there don't seem to be any. The closest is Ninja but it doesn't seem to be intended to be hand written.
There are a lot of options, but make is just everywhere.
Sometimes it's just simpler to bite the ancient bullet and go with a Makefile, with all its included pains and gotchas rather than try to figure out how to get the fancy new makefile replacement installed in all the relevant environments.
A lot of people in bioinformatics use SnakeMake. In this field you often want to restart analysis after something changes somewhere along a pipeline (for example the pipeline is under active development and changing frequently), and individual steps can take hours or more, so automatically rerunning just the right stuff is a great feature.
However, SnakeMake, Nextflow, etc. feel excessively verbose compared to standard make, and the workflow managers of prior decades were far worse. With standard make, you type pretty much exactly what you would type as shell commands, and not much more.
All other alternatives are going to be more verbose than make, and to me that's a negative.
We did start with C++ thinking (correctly, IMO) that if we can build C++, we can build pretty much anything. But build2 is a general-purpose build system, for example:
I think these examples support my position, they do not refute it.
If you have to write plugins to describe the rules that walk the edges of the DAG, then you haven't captured the essence of Make. It isn't just the DAG, its also the ability to walk graph edges with a generic shell alone. Here's some examples of things that we're using it for:
- Compile C, C++, and FORTRAN on a common DAG for 5 unique ABIs.
- Parallelizing and sequencing atmospheric analysis with orbital mechanics programs.
- Post-processing our regression test suite.
- Executing and verifying SystemVerilog tests.
- Generating documentation with Doxygen and LaTeX.
- Generating linear flash images and a compressed initial filesystem.
- Transforming said initial filesystem into a linkable object.
Make does all of these things without any prior knowledge of any of them, because it just uses the shell to express how the edges are walked. In some cases, we build the program that traverses an edge and express that build as just another dependency in the chain.
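That last pattern is just an ordinary dependency chain; a sketch (file names hypothetical):

```make
# First build the edge-traversal program, then use it as the recipe
# for the next edge in the graph.
gen: gen.c
	$(CC) -o $@ $<

output.dat: input.dat gen
	./gen input.dat > $@
```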
If you have to write and compile plugins into build2 to do things like that, then you haven't re-implemented Make. You've just created another purpose-dedicated build tool. That's fine if it's what you set out to do. But that also means statements like "We believe, paraphrasing a famous quote, that those who do not understand make are condemned to reinvent it, poorly." do not belong in your documentation. Because I don't think you understand Make.
Author probably wants to use `private` for those target-local variables, though.
For example,
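a sketch of `private` (GNU make >= 3.82), reusing the article's `R` target:

```make
# Without `private`, these settings would also apply when building
# R's prerequisites; with it, they stay local to R's own recipe.
R: private SHELL := /usr/bin/Rscript
R: private .SHELLFLAGS := -e
```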
See also https://www.gnu.org/software/make/manual/html_node/Target_00...