It's not a rehash. Dare uses "Worse is better" as a jumping-off point to examine why Ted Nelson's Xanadu failed while the much less capable WWW succeeded. He says Xanadu was much too complex to even get out the door.
I remember visiting the Xanadu project sometime in the '80s. It was a bunch of hackers in an old house in Menlo Park, CA. I came away with the feeling that it wasn't going to succeed because it was too complex. I remember in particular the requirement for bidirectional links and thinking that it was going to require too much cooperation to maintain them. Think: every time someone linked to your site, you now had a back-link to maintain. Did that mean that if you moved a page, you would have to update a thousand referring links or a billion browser caches?
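To make the cooperation cost concrete, here is a toy, single-process sketch in Python (the xanadu:// addresses, the registry, and the function names are invented for illustration; this is not Xanadu's actual design). With mandatory bidirectional links, creating a link touches both ends, and moving a page means rewriting every referrer:

    # Toy model of mandatory bidirectional links (hypothetical, not Xanadu's design).
    class Doc:
        def __init__(self, address):
            self.address = address
            self.links = set()      # addresses this document points to
            self.backlinks = set()  # addresses of documents pointing here

    registry = {}  # address -> Doc; a stand-in for "the whole network"

    def publish(address):
        doc = Doc(address)
        registry[address] = doc
        return doc

    def link(src_addr, dst_addr):
        # Creating a link is already a two-party operation: both ends change.
        registry[src_addr].links.add(dst_addr)
        registry[dst_addr].backlinks.add(src_addr)

    def move(old_addr, new_addr):
        # Moving a page is O(number of referrers): every document that linked
        # to you has to cooperate and rewrite its end of the link.
        doc = registry.pop(old_addr)
        doc.address = new_addr
        registry[new_addr] = doc
        for ref_addr in doc.backlinks:
            refs = registry[ref_addr].links
            refs.discard(old_addr)
            refs.add(new_addr)

    home = publish("xanadu://alice/home")
    for i in range(1000):
        publish(f"xanadu://fan{i}/page")
        link(f"xanadu://fan{i}/page", "xanadu://alice/home")

    move("xanadu://alice/home", "xanadu://alice/new-home")  # rewrites 1000 referrer links

In one process this is trivial; spread those thousand referrers across sites run by strangers and you get exactly the "too much cooperation" problem described above.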
The thing is that the seventeen rules of Xanadu, in themselves, aren't that much more complex than the WWW - the HTTP protocol is not an especially simple thing.
Further, the WWW haphazardly succeeds at being everything that Xanadu wanted to be and more, except for those characteristics which had to do with intellectual property (what is now "DRM"). Many people have described the impossibility and undesirability of DRM. One easy way to see the impossibility is to realize that a "transclusion" DRM system would require every protected piece of information to exert control over the entire system, a problem which gets harder as the system gets bigger.
It does relate to duct-tape programmers in the sense that, since the WWW doesn't have the fragile global requirements of Xanadu, it can be "duct-taped" to work. But it should also push us a bit beyond Spolsky: it points to designs which avoid particular inherently difficult problems, rather than applying the simplistic formula of 'avoid any complexity'.
"The thing is that seventeen rules of Xanadu, in themselves, aren't that much more complex than the WWW - The http protocol is not an uber simple thing."
I have to disagree. Requiring secure identification of a web server means that every web site must use an SSL certificate. Most likely, the way it was intended, it must also be an identity-verified SSL cert, i.e., "Verisign" et al, not just "some SSL cert I generated last night". Requiring secure identification of the user at all times is very onerous when propagated throughout the entire stack, as it would have had to be. Now you have to be logged in to all sites, all the time, with some sort of universally-agreed-upon protocol, which would run smack into the problem that not everybody's identification needs are the same. "Every user can store documents" means you are not allowed to browse Xanadu without paying for hosting privileges. Backward links complicate every CMS, ever, horrifically, and also have to manifest in the protocol. And I'm not even all the way through the list - the problems keep going - but I fear boring the reader.
If you look at what they really mean (remember, these are summaries), it is night and day. An elementary HTTP server can be bashed out in an hour, and it'll work with modern browsers to at least some degree. In college, writing an HTTP proxy server was a two-hour lab assignment in networking class, to give an example of another piece of the stack. AFAIK, nobody has ever produced a full Xanadu server, despite massive amounts of effort. This is not a coincidence; it is a reflection of the almost-impossible-to-overstate difference in complexity between the two ideas.
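For a sense of scale, here is roughly what that one-sitting server looks like - a minimal sketch in Python (the language and the local address are my choices for illustration, not something from the thread), speaking just enough HTTP/1.0 that a modern browser will render the page:

    # Minimal toy HTTP server: answers every request with a fixed HTML page.
    # Not remotely production-quality, but enough for a browser to talk to.
    import socket

    HOST, PORT = "127.0.0.1", 8080  # assumed local address for the demo

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.recv(4096)  # read (and ignore) the request
                body = "<html><body>Hello from a one-hour web server</body></html>"
                response = (
                    "HTTP/1.0 200 OK\r\n"
                    "Content-Type: text/html\r\n"
                    f"Content-Length: {len(body)}\r\n"
                    "\r\n" + body
                )
                conn.sendall(response.encode("ascii"))

The contrast being drawn is that nothing comparably small has ever existed for Xanadu's requirements - transclusion, royalty tracking, guaranteed bidirectional links.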
Actually, I think you're saying the same thing I at least meant to begin with: Xanadu's requirements are fairly simple to state but extremely complex to implement.
My further point is that this shows a world where things are a bit muddier than Spolsky's simple/complex division.
"I remember in particular the requirement for bidirectional links and thinking that it was going to require too much cooperation to maintain them."
Although we are now starting to see this, partially, with "trackbacks" and "pages that link to this page". Of course, things would be much more complicated if this were a requirement instead of an option.
IMHO, for something to be successful in the "real world" it should have as few hard requirements as feasible, and as many applications as possible.
I like the way this article treats Nelson as not that important. The constant hero worship is very annoying sometimes. We know [1] that there were people before and during Nelson's time who were just as creative on the "ideas" side. Berners-Lee got it working on the "rough consensus and working code" KISS-principle side. I am sorry, but the (albeit small) group of people who consider Nelson the "real" father of the web are wrong. To say nothing of the fact that journalism has by far overemphasized the web compared to the Internet. Even when a small number of journalists correct for that, it's Kahn and Cerf with TCP/IP, not Licklider, Baran, and Davies.
If anybody it ought to be Vannevar Bush or Doug Engelbart, not Ted Nelson.
But Ted Nelson had a bunch of stuff figured out that the web is still struggling with today (such as proper attribution and an automated royalty system based on fractional contribution and derivation).
Berners-Lee's design for URLs, etc. is spectacularly elegant and carefully thought through. It's not a good example to illustrate the alleged superiority of "duct-tape programming".
There are different kinds of design, useful in different situations, harmful in other situations. Whipping out small web sites, patching bugs in messy business apps, etc., don't call for great elegance. Seeking elegance would only get in the way. Duct tape wins. On the other hand, remember Gopher? Gopher was a simple, functional predecessor of the web, designed without much insight. The WWW needed the kind of simplicity that you can't get from duct tape.
I think Joel Spolsky meant "duct tape programmers" to be those who are pragmatic and get things done, those who do not show off unnecessary techniques but focus on solving problems, those who do not tinker with toy problems using every wonderful construct but solve real problems with simple, elegant lines of code.
it feels like the WWW is the kind of product that is worth designing better. why not take a little longer to make it simpler and more extensible? that doesn't mean it has to be built by a huge committee fighting over standards and trying to include every kitchen sink anyone wants, but it does seem more like a public good than some company's product.
as for "duct tape," maybe it conflates two concepts? i personally think "fast and simple" and "fast and crap" are two different ballgames. my first projects were over my head, so i built crap. then i built crap faster. these are good for throwaway demos, maybe prototypes.
at some point building crap became slow. good design simplifies problems. good abstractions and good frameworks make it easier to add more with less code. it's faster to add features, and much less buggy. the upfront cost of building the first feature takes a bit longer to get right, but upon a strong foundation one can really roll. it's like being bound by linear vs. constant time. when one considers handling bugs in crap code, the worst case becomes exponential time.
i guess what i'm saying is that the complexity of the solution should be appropriate for the entire situation: the developer skillset and prior experience, the scope of the project, the time available, the importance of iteration, extensibility and maintainability, etc.
duct tape is good, but you still have to figure out where to draw the line between overzealous copy-pasting and meta-programming.
I've had the displeasure of maintaining code written by duct tape programmers in the past. We called them combat programmers, and some of them worked at places like Netscape with people like jwz (who was mentioned in this article).
A thousand lines of un-commented Perl script (with few, scattered functions) that looks like it was written in cat(1), for example, is the kind of duct tape that fails quickly and makes life miserable for whoever ends up needing to fix a bug or introduce a new feature.
In my experience, this is the sort of thing duct tape programming (almost always) produces, and I think it's worth at least a little bit of actual design and documentation time to create something far better and more maintainable.