> I really enjoyed reading someone far smarter than I am presenting these arguments clearly and concisely. I only wonder, how is it that more than 25 years later we still need to be making roughly the same points—how is it that they still feel fresh, mainly uncharted, and in need of advocacy?
Anti-intellectualism is a symptom of our age and prosperity, and it infects development as well. I routinely see modeling and design slammed on this site. Time-to-market trumps everything. Experience is systematically devalued. Discipline is something that 'interrupts flow' (JS semicolon debacle).
The sky is not falling, but for a site that presumably has a high contingent of skilled devs, it's a bit concerning. It's not the newness of the tech, it's the cavalier attitude of not learning from the past.
So Naur is saying that programming helps programmers understand how software can solve the problem at hand, and that this is more important than the resulting code.
I might be reading this the wrong way, but this helps explain the Not-Invented-Here syndrome [1].
Why not just re-use some other code? Because it's not just about the code, but about the programming team's understanding of how the software tackles the problem.
It looks similar but it is not exactly the same thing. In some sense, not being the 'writer' of the code makes it difficult for you to understand it and hence to use it as the expression of your theory, which may lead you to rewrite everything (thus 'invent it here').
However, the rational way to do it would be to try to read the outside code and reuse it as much as possible, just as you can read a book on Calculus and, once you have learnt it, do not need to write a new one.
You're probably not going to learn calculus just by reading a book, though. To understand it and to be able to deploy the knowledge to solve actual problems, you need to do practical exercises.
In a sense, doing those exercises is "rewriting calculus" in a form that is internalized for yourself. It's the same as learning to play a piece of music: if you're a good piano player, you can read the sheet music for a Beethoven sonata and have an idea of what it's like [0], but you need to practice the piece to really understand what it means.
Maybe the design of software should also contain some kind of built-in learning process. When one encounters a new codebase, it can take quite a while to figure out where to even start deciphering the architecture... What if there were a design document with a textbook-like approach that extended all the way into the code itself: there would be "exercise hooks" expressly for the purpose of allowing a programmer to experiment with the software's internals in a controlled fashion.
- -
[0] I guess -- I suck at music, so I wouldn't really know.
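As a hypothetical sketch of what such an "exercise hook" might look like (nothing here comes from the thread; the module, function names, and the toy tokenizer are all invented), imagine a fictional text-indexing codebase shipping an `exercises.py` alongside its internals:

```python
# exercises.py -- hypothetical "exercise hooks" for a fictional
# text-indexing codebase. Each hook exposes one internal mechanism in
# isolation, with a known-good expected result, so a newcomer can
# experiment with the internals in a controlled fashion.

def tokenize(text):
    """Internal routine under study: a naive whitespace tokenizer."""
    return [w.lower().strip(".,") for w in text.split()]

def exercise_tokenizer(run=tokenize):
    """Exercise hook: pass your own `run` implementation and check it
    against the behavior the rest of the system depends on."""
    sample = "The cat sat. The cat ran."
    expected = ["the", "cat", "sat", "the", "cat", "ran"]
    result = run(sample)
    assert result == expected, f"got {result!r}, expected {expected!r}"
    return result
```

A newcomer could swap in an alternative `run` and see immediately whether it preserves the invariants the rest of the system relies on, which is exactly the kind of controlled experimentation the comment proposes.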
Doing the exercises is not rewriting calculus, but it does lead to the discovery of useful pseudo-theorems about mathematical expressions and calculus-based techniques for manipulating them.
An actual rewriting of calculus would involve discovering proofs for things like the Mean-Value Theorem or the Fundamental Theorem of Calculus. Short of that, you could wrestle with trying to discover proofs for theorems so that you can at least appreciate the canonical ones. Incidentally, this is what every mathematics major has to do in order to earn their degree.
I think one of the reasons for NIH is that, unlike mathematical theorems, programs are not customarily explicit about the domain over which they hold and therefore are just pseudo-theorems. The few programs that do have that explicitness, such as compilers and small UNIX utilities, tend not to get reinvented. Also, the domain can become obsolete (e.g. ASCII vs. UTF-8, or unstructured text vs. XML).
Thus, NIH may really just be a want of a theorem (sometimes merely a stronger one).
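One way to make "explicit about the domain" concrete: a routine that states its preconditions reads more like a theorem than one that leaves them implicit. A hypothetical sketch (the function names are invented for illustration):

```python
import math

def isqrt_implicit(n):
    # Domain left implicit: the caller must guess which values of n are
    # safe (float rounding quietly degrades accuracy for very large ints).
    return int(math.sqrt(n))

def isqrt_explicit(n):
    """Integer square root.

    Domain (stated, theorem-style): n is an int with n >= 0.
    Guarantee: returns the largest r such that r * r <= n.
    """
    if not isinstance(n, int) or n < 0:
        raise ValueError("defined only for non-negative integers")
    return math.isqrt(n)
```

The second version is the one you can reuse with confidence, because, like a theorem, it tells you exactly where it holds and what it promises.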
I think you are right it helps explain NIH, but I don't think it justifies it. In the article, Naur says:
I shall use the word programming to denote the whole
activity of design and implementation of programmed
solutions.
So 'programming' is not just 'producing code', but also everything leading up to it. It is entirely possible to design a solution for your specific problem and then conclude somebody has already implemented (parts of) it for you. You get NIH if you need to implement (part of) the code to discover how software can solve the problem and subsequently cannot let go of your code. So there are two opportunities to get rid of NIH: 1) become sufficiently proficient that you can design software without needing to write lots of it first, and 2) be able, and allowed, to throw your code away.
To me, the idea that the essence of a program is the model of reality it embodies suggests principles for the use of comments, literate programming tools, and other methods of documentation. A prime goal of documentation ought to be to avoid premature or unnecessary "program death."
Or perhaps, to enable program resurrection. Naur explicitly contends that theory is not something that can be expressed. I disagree. Communication from one mind to another is never perfect, but humanity has thousands of years of experience communicating mental models about the world.
There is definitely something to this. Literate programming, or something like it, really does come close to hitting the nail on the head. It should be more prominent.
Why isn't it? Well, I think that certain humans are better than others at inferring and "glomming onto" the abstract theories held in other people's heads. Those who are good at it become great software developers. And we take pride in our ability to do this "grokking" without any hand-holding or "crutches." From that viewpoint, literate programming and other things that would address the problem look like "crutches" for "sissies."
And maybe it is more efficient to find the people who can "grok" than it is to try to articulate the theory and keep that articulation up-to-date.
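To give the contrast some flavor: in a literate style, the narrative carries the theory and the code merely transcribes it. A hypothetical fragment (this is only the spirit of the idea, not Knuth's actual WEB tooling; the policy and names are invented):

```python
# A literate-style fragment: the narrative states the theory first.
#
# THEORY: Overdue fees accrue per whole day late, but we forgive the
# first day because the drop box is emptied the following morning.
# That forgiveness is a policy decision, not an implementation detail,
# and it is exactly the kind of fact that evaporates from bare code.

GRACE_DAYS = 1       # drop-box policy; see narrative above
FEE_PER_DAY = 0.25   # dollars per billable day

def overdue_fee(days_late):
    """Fee owed for an item returned `days_late` whole days late."""
    billable = max(0, days_late - GRACE_DAYS)
    return billable * FEE_PER_DAY
```

Stripped of the narrative, `days_late - GRACE_DAYS` is just arithmetic; with it, a later maintainer can tell which changes extend the theory naturally and which contradict it.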
On one hand, a 'theory' is described as something completely internal and irreducible:
"the theory is not, and cannot be, expressed" (quote from PATB)
But on the other hand, a 'theory' is applied to external objects:
"if viewed in relation to the theory of the program these ways [of changing it] may look very different, some of them perhaps conforming to that theory or extending it in a natural way, while others may be wholly inconsistent with that theory" (quote from PATB)
These cannot both be true. A 'theory' cannot be wholly internal if it is applied to things. If something external conforms to it to some degree, then that thing is to that degree an expression of the 'theory'. What else is an expression? And, a 'theory' cannot be irreducible if it applies to things that are reducible. If it matches something -- like software -- that is complex and determinate, it must itself be analysable into determinate properties or patterns or structure.
Since 'theory' is used to make actual software -- something that can fit or diverge from it -- 'theory' must have a substantial, and complex, objective part.
So instead of this confused term 'theory' we should think of something like a material: programming is the engineering-design of structures in a particular abstract 'material'.
That does not change the article's conclusions about programmers not being 'replaceable components of production'. But it gives a better view of the activity: not some obscure, inaccessible aspect of human thought, but a lead on the part of it that is objective, that we can get hold of and hopefully build some understanding of.
You seem to have confused expression with application. A person unable to speak but merely point is unable to express (that is, define precisely) what it is that causes them to point to, say: a red brick, a red door, a red pencil. However, in their activity of pointing, that is, in their application of an unexpressed principle, we can infer a "theory".
Naur's point is that a "theory" in his sense is a purely mental, immediate, intuitive ground for the understanding of a problem. It is the "red" in the above example, and as we cannot explicate red (only point to it) we cannot express the theory.
>If it matches something -- like software -- that is complex and determinate, it must itself be analysable into determinate properties or patterns or structure.
No. The mind is not a computer program. Mental models are not immediately accessible, complete, and transparent to conscious thought; nor are their relations, nor are they "comprised" of anything simpler than more thoughts.
The theory doesn't "match" the computer program; the computer program is a symptom of the theory. The theory is how a problem is understood.
>Since 'theory' is used to make actual software -- something that can fit or diverge from it -- 'theory' must have a substantial, and complex, objective part.
I don't know what this means (nor much of your comment, to be honest), but I suspect you're making the same error as above: assuming that the (physical) products of mental activity reveal or constrain the nature and structure of that mental activity.
What is this 'theory' Naur talks of? It is not a nebulous feeling or sensation, it is something complex, articulate -- something 'built'. But how can something with logical structure be at the same time inexpressible? There is a contradiction lurking there.
Imagine you are 'building' one of these 'theories', to make some software. How do you know it is correct? The only way is by testing it against the world, and to do that it must be expressed. And any part that is not expressible cannot, for that reason, be a usable part of the 'theory'. (It is more basic really: a 'thing' in your mind that cannot be expressed is not really a 'thing' at all.)
The reason programmers are not simple replaceable resources is not because some kind of 'theory' thing is not expressible, but because making software requires certain significant practicalities of effort, knowledge, and skill.
The actual paper is at http://alistair.cockburn.us/ASD+book+extract%3A+%22Naur,+Ehn... (Edit: that link is broken now, but http://pages.cs.wisc.edu/~remzi/Naur.pdf still works.)
What Naur means by "theory" is some combination of what we'd now call "model" and "design".
Naur put the N in BNF and played a role in creating Algol.