I don't think most software needs to care about these weird exotic systems anymore (MINIX? seriously?). Maybe if that's really a goal of your software, it's reasonable to use autotools.
For what it's worth, I agree that Autoconf tests for a lot of things that aren't necessarily relevant today. I don't think I need to worry about the existence of <stdint.h> on just about any platform.
But there ARE a lot of very relevant tests mixed in there too. These include things like (see the sketch after this list for how their results typically get consumed):
- Width of an int
- Endianness
- Availability of arbitrary functions for linking
- Search through a list of libraries until one is found that provides a requested function, then add that library to LDFLAGS
- Does the C compiler work?
- Do compiled binaries run on the build machine?
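For a concrete (if simplified) picture: Autoconf records the result of each check in config.h as a preprocessor macro, and the C sources branch on those macros. This is only a sketch; the macro names follow Autoconf's documented conventions (SIZEOF_INT from AC_CHECK_SIZEOF, WORDS_BIGENDIAN from AC_C_BIGENDIAN, HAVE_STRNDUP from AC_CHECK_FUNCS), but the specific checks shown are illustrative and not taken from any particular package.

    /* Sketch only: assumes an Autoconf-generated config.h is available. */
    #ifdef HAVE_CONFIG_H
    # include "config.h"
    #endif
    #include <stddef.h>

    #ifdef HAVE_STDINT_H
    # include <stdint.h>              /* AC_CHECK_HEADERS([stdint.h]) */
    #endif

    #if defined(SIZEOF_INT) && SIZEOF_INT < 4
    # error "this package assumes int is at least 32 bits wide"
    #endif

    #ifndef HAVE_STRNDUP
    /* libc lacks strndup(); fall back to a bundled replacement */
    char *strndup(const char *s, size_t n);
    #endif

    #ifdef WORDS_BIGENDIAN            /* AC_C_BIGENDIAN */
    # define HOST_IS_BIG_ENDIAN 1
    #else
    # define HOST_IS_BIG_ENDIAN 0
    #endif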
And it gives you control over things like:
- Mixing custom CFLAGS with conditional CFLAGS and package-specific CFLAGS.
- Enable/disable specific features at configure time.
- Add/change directories to search for existing headers/libraries needed for compilation
- Add/change directories for installation
Automake gives you:
- Automatic handling of platform-specific library creation differences.
Dynamic libraries in particular can have very different semantics across platforms, even among platforms that are very much alive today.
- Automatic handling of making parallel-safe Makefiles
- Standardized clean/test/install/dist/distcheck targets
- A reasonable unit-test system that's well integrated with the rest of Automake (see the sketch below)
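To give a rough idea of that last point (my own sketch, not taken from any real package): Automake's test harness runs each program listed in TESTS and judges it by exit status -- 0 is a pass, 77 is a skip, and other non-zero values are failures -- so a test can be an ordinary little C program.

    /* check_add.c -- illustrative test in the form Automake's TESTS harness
     * expects: exit status 0 = PASS, 77 = SKIP, other non-zero = FAIL.
     * The add() function is a made-up stand-in for real project code. */
    #include <stdlib.h>

    static int add(int a, int b) { return a + b; }

    int main(void)
    {
        if (add(2, 2) != 4)
            return EXIT_FAILURE;   /* harness reports FAIL */
        return EXIT_SUCCESS;       /* harness reports PASS */
    }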
> - Width of an int
If your software depends on this, you're doing it wrong.
As you said, depending on the existence of <stdint.h> is just fine, and you can then specify exactly the widths you need. Even in the incredibly rare case of needing the width of int (serializing inputs and outputs of existing libraries interoperably across multiple machine ABIs), <limits.h> has you covered, albeit in an awkward way, unless you can assume POSIX and thus have WORD_BIT.
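As a rough sketch of what "awkward but covered" looks like (my own example; the WORD_BIT shortcut only exists where POSIX's <limits.h> provides it):

    #include <limits.h>
    #include <stdio.h>

    /* Derive the width of int without any configure-time test.
     * With POSIX, WORD_BIT answers directly; otherwise count the value
     * bits of unsigned int, which matches int's width on real platforms. */
    static int int_width(void)
    {
    #ifdef WORD_BIT
        return WORD_BIT;             /* POSIX: bits in an int */
    #else
        unsigned int max = UINT_MAX;
        int bits = 0;
        while (max) {                /* the awkward portable fallback */
            bits++;
            max >>= 1;
        }
        return bits;
    #endif
    }

    int main(void)
    {
        printf("int is %d bits wide\n", int_width());
        return 0;
    }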
> - Endianness
If your software depends on this, you're doing it wrong.
If the wire/file format is little-endian: input[0] | input[1] << 8; if stupid-endian (big-endian): input[0] << 8 | input[1].
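Spelled out as a sketch (the function names are mine, not from any library), that approach reads the bytes in wire order and never asks what the host's byte order is:

    #include <stdint.h>

    /* Decode a 16-bit value from a byte buffer, depending only on the
     * wire/file format's byte order, never on the host's. */
    static uint16_t read_u16_le(const uint8_t *input)  /* little-endian wire format */
    {
        return (uint16_t)(input[0] | (input[1] << 8));
    }

    static uint16_t read_u16_be(const uint8_t *input)  /* big-endian wire format */
    {
        return (uint16_t)((input[0] << 8) | input[1]);
    }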
> - Search through a list of libraries until one is found that provides a requested function, then add that library to LDFLAGS
This is the exact opposite of what I want! If I'm depending on other libraries, I want that dependency explicitly listed and settable, not automagically found.
> - Does the C compiler work?
How is this reasonable to test? If the compiler doesn't work, configure can't actually do anything about it.
> - Do compiled binaries run on the build machine?
That totally ignores cross compilation, or deployment to other places -- e.g. containers.
> And it gives you control over things like:
These are generally useful, but the complexity required for autoconf is a huge cost to pay.
> Automake gives you:
All useful, yes. But these have never seemed to be particularly hard to do manually.
(Semi-)automatic handling of libraries, especially dynamic libraries, alone makes autotools worth the price. Supporting one architecture manually is easy. Doing it for six isn't. Doing it several dozen times, for all six, is spectacularly painful.
Such exotic systems as OS X in a few versions, BSD, and four flavors of Windows (Cygwin, MinGW, MSVC, MSYS)?
Add a few flavors of Linux and perhaps even Android (all three current targets) on top.
And all of that with cross compilation.
Even CMake lacks some useful portability tools to handle this...
Though Autotools have major problems too.
I think the most annoying issue is that for larger software with lots of options and dependencies, running configure just takes a long time (especially before SSDs were a thing), and it stops at the first problem it finds, reporting exactly one error. Getting past ./configure could literally take all day.
This made me think back then that someone could fix autotools by caching test results for a specific system. Why not me? So I opened the sources... and closed them again. No positive outcome could have repaid the time required just to understand what's going on in those scripts.
Agreed that this is an annoyance. Autoconf-generated configure scripts are definitely slow, mostly because they test a ton of things that probably no longer need to be tested.
The overall system still provides a ton of value and a lot of relevant tests in addition to the not-so-relevant ones. After years of writing increasingly complex Makefiles to test for this-and-that, Autoconf was a breath of fresh air for me when I made the switch a couple of years ago (even with all of its crustiness).
I think that very little software is hurt by supporting multiple platforms -- usually the software benefits from exposure to a different set of assumptions and can be made more robust for when the preferred set of systems changes in the future. Does your software compile for WebAssembly today? If it also compiles for MINIX, it probably supported WebAssembly as soon as a compiler backend existed, without you having to make any changes.
MINIX happens to be one of the most widely used operating systems, because it turns out Intel's Management Engine, the secret computer inside your processor, runs MINIX.