In the stated examples there are no benefits to the additional complexity. No one would argue that complexity for the sake of it is a good idea. That'd be insane. If Alice's restaurant could handle 5,000,000 covers a night with only 1 member of staff while Zola's restaurant could only handle 10,000, then you'd have a more realistic scenario to compare with the SaaS industry. The benefit of "complexity" is that you are able to do more things with less work.
The ideal is to build powerful systems from small, simple processes - if any single stage is trivial then anyone can understand it, fix it, modify it, and so on. With many little processes working together you can do amazing things. A good example in software is a build process - a good system can lint code, test it, uglify it, minify it, push it to version control, watch for changes, reload an output mechanism, clean up a distribution, and push it to a live server if it's working - all from a single command. That's very 'complex', but really it's just a set of very simple linked processes.
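To make that concrete, here's a minimal sketch of such a single command as a plain shell script. The tool names (eslint, uglifyjs, rsync), the paths, and the deploy host are all assumptions - stand-ins for whatever linter, minifier, and server a real project uses:

    #!/bin/sh
    # each stage is trivial on its own; the value is in chaining them
    set -e                                    # stop at the first failing stage
    eslint src/                               # lint
    npm test                                  # run the test suite
    mkdir -p dist
    cat src/*.js > dist/app.js                # concatenate
    uglifyjs dist/app.js -o dist/app.min.js   # uglify/minify
    gzip -kf dist/app.min.js                  # pre-compress for the web server
    git push origin master                    # push to version control
    rsync -az dist/ deploy@example.com:/var/www/app/   # push to the live server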
I think the restaurants were a poor analogy. Much better for me was the rule right at the end:
> Innovate on your core product, not on your plumbing (this rule is extremely tempting for developers to break - see next rule)
I was first exposed to this during my final year university project in 2004. We had a sponsor who wanted to build a marketplace for tourism operators and B&Bs. I was the only real developer on the team of 4. I worked really hard with little sleep, producing a really great caching layer and a very impressive WYSIWYG editor. But when the project deadline arrived you couldn't even book a room on the site.
I still struggle with staying focused; the best technique I've discovered to deal with this is to log ALL work around a project, prioritise the tasks and stick to only working on logged tasks with the highest priorities assigned rather than do whatever I feel like.
(It took me even longer to realise what a business opportunity I had lost.)
He should have used the analogy of Alice making an automated wait staff (e.g. waiter drones) for the restaurant, not an amazing electrical system. I wonder how that kind of story would fare in his scenario.
> The ideal is to build powerful systems from small, simple processes
That's only half of the story, because now the complexity lies in orchestrating those processes.
Let's stay with the example of build systems. These are usually a serial execution of simple processes, so they make up a "best case" scenario. Yet build systems quickly reach an uncomfortable complexity. See Autotools, CMake, and so on. It is an ongoing area of research: every few years we find better ways to orchestrate a build system. As of today, there is no build system which is 1) simple and easily comprehensible, 2) reliable and rock stable, and 3) still able to build complex applications with all their (seemingly) quirky needs.
It's not only about having simple processes, but also about splitting the complex problem up in an intelligent way, so that the simple processes have simple interfaces and simple (and few!) interactions. Otherwise, the orchestration itself becomes the main application, and may easily become even more complex than a monolithic approach would have been.
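The classic Unix pipeline is the happy case here: each process exposes one simple interface (lines of text) and one interaction (a pipe to its neighbour). A throwaway sketch, with a hypothetical app.log and field positions that are pure assumptions:

    # six trivial processes; all of the "orchestration" is the pipe operator
    grep ' ERROR ' app.log | awk '{print $3}' | sort | uniq -c | sort -rn | head

The moment the stages need to share richer state than a text stream, you are back to writing the orchestrator, which is exactly the build-system problem.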
> As of today, there is no build system which is 1) simple and easily comprehensible, 2) reliable and rock stable, and 3) still able to build complex applications with all their (seemingly) quirky needs.
Redo solves some problems with reliability, which is good [1], but regarding complexity reduction and simplicity it seems to be no better than a plain Makefile. [2]
Also, I don't think it is a good idea to implement this in Go, because you are limiting your userbase to those willing to install Go and to compile and build your stuff. From another perspective: redo is not a tough task, so why not use a ubiquitous language such as Perl or Python? That way, it would run out of the box on almost every machine. Heck, you could even implement it in portable shell script with acceptable effort. If you ever want to establish a new build system, the entry barrier should be as low as possible.
[1] But honestly, redo doesn't address any issue that "make clean; make" wouldn't solve. So the practical advantage is rather limited.
[2] Nothing wrong with plain Makefiles, though, I use that approach successfully for many small projects.
> regarding complexity reduction and simplicity it seems to be no better than a plain Makefile.
Makefiles work great most of the time, but become difficult when you need to do things that don't fit well with the make model. I do a lot of multi-level code generation, for instance, and make requires a lot of incantations to get right, whereas redo works exactly the same way regardless of the complexity. I used make for many, many years and got very good at using it before I decided to implement something new.
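For anyone who hasn't seen it: a redo rule is just a shell script named after the target. The $1/$2/$3 argument convention below is the one from DJB's sketch and apenwarr's redo ($1 the target, $2 the target minus its extension, $3 a temporary output file), and a minimal rule looks something like this:

    # default.o.do - one rule builds any foo.o from the matching foo.c
    redo-ifchange "$2.c"     # declare the dependency; rebuild only when it changes
    gcc -c -o "$3" "$2.c"    # compile into the temporary file redo supplies as $3

The rule keeps exactly this shape even when foo.c is itself generated by another target, which is where make's pattern rules and phony targets start to need incantations.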
> Also, I don't think it is a good idea to implement this in Go...
I chose Go because I would not have enjoyed it as much in C. I did not use Perl or Python because redo is used recursively, and Perl and Python startup times were too slow. I actually wrote a shell implementation that served me well for a while, but it was too slow.
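For a crude, unscientific feel for why startup time matters when a tool re-executes itself for every target, compare the interpreters on your own machine:

    time sh -c ':'          # typically a millisecond or two
    time perl -e '1'
    time python -c 'pass'   # typically tens of milliseconds - paid per target, per build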
Likely, there are those who won't use it because it's Go and that's fine. I've solved my problem and made the solution available to anyone else to whom it might be useful.
> But honestly, redo doesn't address any issue that "make clean; make" wouldn't solve.
Fixing make's reliability issue with "make clean; make" is like rebooting Windows when it hangs.
Yeah, you can do that, but it doesn't actually solve the underlying problem. With redo, you don't need to do that.
The redo-inspired build tool I wrote abstracts the tasks of composing a build system, by replacing the idea of writing a build description file with command-line primitives which customize production rules from a library. So cleanly compiling a C file into an executable looks something like this:
Find and delete the standard list of files which credo generates, and derived objects which are targets of *.do scripts:
> cre/rm std
Customize a library template shell script to become the file hello.do, which defines what to do to make hello from hello.c:
> cre/libdo (c cc c '') hello
Run the current build graph to create hello:
> cre/do hello
Obviously this particular translation is already baked into make, so isn't anything new, but the approach of pulling templated transitions from a library by name scales well to very custom transitions created by one person or team and consumed at build-construction-time by another.
I think this approach reduces the complexity of the build system by separating the definition of the file translations from the construction of a custom build system. These primitives abstract constructing the dependency graph and production rules, so I think it's also simpler to use. Driving the build system construction from the shell also enables all the variability in that build system that you want without generating build-description files, which I think is new, and also simpler to use than current build-tool approaches. Whether all-DSL (eg make), document-driven (eg ant), or embedded DSL (eg scons), build tools usually force you to write or generate complicated build description files which do not scale well.
Credo is also inspired by redo, but runs in Inferno, which is even more infrequently used than Go (and developed by some of the same people). I used Inferno because I work in it daily, and wanted to take advantage of some of the features of the OS that Linux and bash don't have. Just today I ran into a potential user who was turned off by the Inferno requirement, so I'll probably have to port it to Linux/bash, and lose some of those features (eg, /env), to validate its usability in a context other than my own.
EDIT: Replaced the old way (calling a script to find and delete standard derived objects) with the newer command.
There's not much in the way of documentation from DJB other than a conceptual sketch, so there's much room for interpretation.
There are many differences between the two implementations, some quite fundamental. redux uses sha1 checksums instead of timestamps. Timestamps cause all sorts of problems as we know from make.
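The difference is easy to sketch in shell (this is just the idea, not redux's actual bookkeeping; the file names are placeholders):

    # timestamp check (make-style): fires whenever hello.c merely *looks* newer,
    # which a touch, a fresh checkout, or clock skew can cause with unchanged content
    [ hello.c -nt hello.o ] && echo rebuild

    # content check (sha1-style): record a hash after a successful build...
    sha1sum hello.c > .hello.c.sha1
    # ...and later rebuild only if the content no longer matches it
    sha1sum -c --status .hello.c.sha1 || echo rebuild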
apenwarr redo has all sorts of extra utilities to ameliorate the problems.
redux has the minimum functionality needed for the task (init, redo, redo-ifchange, redo-ifcreate) and I don't think it needs any others.
Redux passes most apenwarr tests. The few it does not pass are apenwarr specific.
I've not tried the converse. Might be interesting.
I actually have a task list to add an apenwarr compatibility mode to redux so users can switch easily. At that point, it should pass all the apenwarr tests.
> No one would argue that complexity for the sake of it is a good idea.
Yet somehow we've convinced ourselves that frontend development now requires package managers, compilers, CSS test cases, CSS frameworks, etc.
Sure, there are web-based apps with rich functionality that require some of these things, but the vast majority of pages that I see out there are just as simple as they were five years ago.
In the case of some front-end tools, they are an absolute must.
When your CSS compiler takes care of all the stupid -webkit-/-moz-/... prefixes, allows you to define variables and mixins, and lets you use loops, you gain a great amount of power and your code is a lot cleaner.
When your build system integrates with livereload and makes your changes appear instantly as you save, you gain speed and comfort.
Those are major benefits for little to no inconvenience.
I don't know about the necessity of CSS frameworks, styleguides or package managers, but some tools are just too good.
It's not about the end result, it's about using better tools to get to it.
After gaining a fair amount of experience with various technologies over the years (by which I mean, getting burned (especially by things that seem wonderful and then cease to exist for a variety of reasons)), that has become the single most important question I ask about a new technology. If it's been around for ten years, it'll probably still be around in ten years. If it hasn't then it may not, especially in a form recognizable as related to today's.
C and Java are, well, not horrible, but bad. However, C has been C for my entire career. And I'm fairly sure that if I write something in simple, plain Java, in five or ten years when it needs some significant work then that work won't start with a complete rewrite.
I don't understand this: with open-source, it doesn't matter that your CSS compiler has been abandoned: as long as you are satisfied with the current feature set, the code isn't going to disappear.
Stylus is only about 3.5 years old. But if all development were to stop now, I'll still be able to use it 5 years down the road. It probably helps that npm dependencies are pinned to a specific version.
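For what it's worth, pinning is a one-flag affair with npm (the version number below is only an example):

    npm install --save --save-exact stylus@0.42.0   # writes an exact version, no ^/~ range
    npm shrinkwrap                                   # optionally freeze the whole installed tree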
C has been around for dozens of years, but you can consider its feature-set pretty much frozen. That's about the same thing as being satisfied with (CSS compiler X)'s feature set and sticking with it.
DSLs like the ones for CSS compilers are feature-complete fast enough. You don't need to wait 10 years for it.
I was merely using the CSS compiler as an example. For something like that, just put a copy in your source repository and forget about it---it'll be fine for a couple of generations of browsers, until the "best current practices" have gone beyond deprecation.
I'm sitting here looking at a JRuby on Rails application using JRuby 1.1.4 and a suitably ancient version of Rails (2.2, maybe?). It's been in production for roughly five years and has received essentially no love during that period; it Just Worked[TM]. The poor sod who was responsible for it and our one other Rails application, our one and only Rails developer, finally managed to move on to other things.
At this point, security issues and (most likely) the inability to slather on new features have percolated up the chain of command and it has been agreed that Something Must Be Done. Since the upgrade path seems to recapitulate the phylogeny of Rails, our options seem to be to rewrite the applications in a modern Rails and kick the can down the street, hoping it'll work better this time, or rewrite it using our common tools around here (and because I've got something to do with it, in the simplest manner possible). (And against the usual "never rewrite anything, ever" meme---which isn't an option---it's going reasonably well.)
As for C, yes, its feature set has been more-or-less frozen, which is good. Its environment is not. I was reasonably happy with the first version of gcc I used; call it 1.37 or so. Do you think you could use gcc 1.37 today?
You don't need a build system during development. I work on a 500,000-line web app, and it has no dev-time build step. You check out the source into your web root, and run the app. Edit + F5 is fast enough; no need for smart build systems. Build steps are of course inevitable for the jump into production.
If you're satisfied with pure CSS and Javascript, the only thing you really need is a build tool for concatenation, minification and gzip. And livereload. And probably normalize.css.
-> I use a Makefile, because it's the most simple to modify and adapt. When I have several targets of the same type (ex: build.min.js and debug.min.js), I use a yaml file with a custom python script for concatenation. See gulp.js and a thousand others for alternatives, YMMV (my Makefile nowadays: https://gist.github.com/idlewan/11012492). Magical bit for running make every time a file changes:
watch:
@inotifywait -m --exclude ".*\.swp|.*~" -e moved_to,close_write -r . | \
while read; do make -s; done
To get more from just CSS, you need a CSS precompiler.
-> I use Stylus with the nib library (https://github.com/visionmedia/nib). Alternatives: LESS, SASS, and some others. You might want to add some grid and 'responsive shortcut' mixins on top.
For webapp needs, you need a template library (some people prefer using a fat framework that does everything for them, e.g. Angular).
-> I use Jade (https://github.com/visionmedia/jade), because it runs on the server for Python, Node.js and soon Nimrod, and on the client with precompiled templates. Alternatives: Mustache and a thousand others, YMMV.