Small Project Build Systems (2021) (neilmitchell.blogspot.com)
35 points by akkartik on April 3, 2023 | hide | past | favorite | 16 comments



I'm a big fan of the "backwards" build system Make. It's very concise, portable enough, and not language-specific. I use it on literally 100% of my projects to hold small scripts. Example: "setup Python virtualenv" or "do a fast feedback loop with Lint" or "do a slower high quality feedback loop with Pylint and the full test suite".
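A minimal sketch of that style of task-runner Makefile (target and tool names are hypothetical; recipe lines must be indented with tabs):

```make
.PHONY: lint check

venv:                # real file target: skipped if ./venv already exists
	python3 -m venv venv
	./venv/bin/pip install -r requirements.txt

lint: venv           # fast feedback loop
	./venv/bin/pyflakes src/

check: venv          # slower, high-quality feedback loop
	./venv/bin/pylint src/
	./venv/bin/pytest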

I'm thrilled people are developing other options, building a project "forwards".

FYI, "strace" on Linux makes it possible to trace build dependencies. Alas, it doesn't work on macOS, and I haven't found an easy equivalent.
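A typical invocation for this (a sketch; Linux-only, assumes strace is installed and the build is driven by make):

```shell
# Log every file the build (and all its child processes) opens:
strace -f -e trace=open,openat -o build.trace make

# Then mine the trace for source/header reads:
grep -o '"[^"]*\.\(c\|h\)"' build.trace | sort -u
```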


I agree. While I like the idea of tup (https://gittup.org/tup/ -- the first "forward" build system I remember hearing of), writing a makefile is easy enough that thinking about the problem upside-down doesn't offer a compelling reason to switch.

Ptrace is one option for tracing dependencies, but it comes with a performance hit. A low-level alternative would be ftrace (https://lwn.net/Articles/608497/) or dtrace (https://en.wikipedia.org/wiki/DTrace).

Tup uses LD_PRELOAD to intercept calls to C file I/O functions. On OSX it looks like DYLD_INSERT_LIBRARIES would be the equivalent.
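A minimal sketch of the technique (not tup's actual code): build a shared library that shadows open(), forward to the real implementation via dlsym, and load it with LD_PRELOAD.

```c
/* trace_open.c -- LD_PRELOAD interposer sketch.
 * Build: cc -shared -fPIC trace_open.c -o trace_open.so -ldl
 * Use:   LD_PRELOAD=./trace_open.so make
 * A real tracer (like tup's) must also cover open64, openat,
 * fopen, etc.; this intercepts only open() for illustration. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>

int open(const char *path, int flags, ...) {
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    fprintf(stderr, "open: %s\n", path);  /* record the dependency */

    mode_t mode = 0;
    if (flags & O_CREAT) {                /* mode only present with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    return real_open(path, flags, mode);
}
```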


I use Make the exact same way. Basically, it's a very concise way to express "here are some tasks with dependencies on each other. Abort if any of them fail" and sometimes "some of these tasks can be skipped if the inputs haven't changed". You also get free tab-completion on 99% of desktop Linux systems.

If I'm working on a complex software project, I'll probably use an appropriate build system. Maybe Bazel, CMake, sometimes even Autotools.

But I'll probably also keep a Makefile somewhere, and use it to automate the process of running that shiny build system.


Note that the rattle system Neil talks about works cross-platform. It uses LD_PRELOAD to make it work.


My findings as a spartan C programmer:

I've been using a simple unity-build ad-hoc build.bat/build.sh "system" for years now, works wonders.

YAGNI will serve you well, 99.99999(repeating)% of everyone's code will only be built and run on 1, maybe 2 platforms, why bother with these insane monstrosities that we call "build systems"?

The few times I've needed to build for a new platform I just wrote that build script then and there, took a few minutes and that was it.

Modern machines can churn through tens if not hundreds of thousands of lines of C code in less than a second, so incremental builds aren't needed either (and if anything, with too many translation units you end up with linking being the bottleneck).

Single TU benefits:

- Global optimizations "for free".

- Make all functions static (except main) and you get --gc-sections "for free".

- Linking is blazingly fast.

- Don't have to bother with header files.

- No one has to download anything to build my code, I make it work on a default msvc/gcc/clang install (i.e. if you have gcc, cl or clang in your path when running build.bat/build.sh, it will build).


I'm a fan of using two translation units:

* one TU with code that is fast to compile and modified often

* another TU with code that is slow to compile but modified rarely

I still use header files, but the fast-to-compile and modified-often code goes directly into headers, so I can still organize my code into separate files.


I got sick of juggling code that migrated from one category to the other, so I wrote a little script that deals with chopping up a large source file into multiple TUs before feeding them to the compiler.

https://github.com/akkartik/mu1/blob/master/build2

More details: https://news.ycombinator.com/item?id=33574154#33575045


Good post. I am still hopeful that a large scale "backwards" build-system can be made ergonomic enough to be used on even small projects.

If Bazel didn't have so many gotchas, it could be the one:

    cc_library(
      name = "app",
      hdrs = [ "app.h" ],
      srcs = [ "app.cc" ],
    )


The problem with Bazel here is that every project, no matter how small, ends up needing some beefy WORKSPACE and bazelrc, especially when you start bringing in third-party dependencies.

How do we streamline this? bzlmod and bcr are steps in this direction, but it still seems far away to me.
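Under bzlmod the boilerplate does shrink to something like this MODULE.bazel sketch (the version numbers are assumptions; check the Bazel Central Registry for current ones):

```
module(name = "myapp", version = "0.1.0")
bazel_dep(name = "rules_cc", version = "0.0.9")
```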


That and some portion of the population hates heavy Java programs.


Sure but that's not the interesting part. If bazel hit some critical mass of usability and/or package availability it could be rewritten.


The JVM is not visible to the user. Bazel ships with one so you don't need to install it.

People avoid Bazel due to Java like vegans avoid a meal containing meat; it's silly honestly.


Yes, that declarative style makes more sense to me than a forward statement of build steps.

I developed my own tool for building C/C++ projects which follows this style. It automatically handles things like Qt moc and scans the source files for headers, so in your example even the hdrs line can be omitted.
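A crude version of such a scan (the widget.cc here is generated only to make the example self-contained): list the project-local headers a source file includes, ignoring system ones.

```shell
cat > widget.cc <<'EOF'
#include <vector>
#include "widget.h"
#include "util/log.h"
EOF
# Quoted includes are project-local; angle-bracket ones are skipped.
grep -o '#include "[^"]*"' widget.cc | sed 's/#include "\(.*\)"/\1/'
```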


The trouble with scanning for headers automatically is that it allows you to accidentally include one that is from an unrelated part of the project.


After all these years in the industry, ChatGPT has enabled me to be productive in GNU Make. Both are such wonderful tools.

I really like GNU Make now. I know it can solve the problems I have, and ChatGPT helped me get there. When dependencies are set up properly, it's quite amazing to see it work.


I've honestly become so used to CMake at this point I just use it for everything C/C++ related.

I do still use Makefiles directly when there isn't an environment to search for - namely AVR projects where there are no libs to find.



