Since this is being quoted several times in the thread: what about the experience of finding that something feels "clever", but then, after using it for a while, finding that it no longer feels clever but instead feels normal?
> Every weightlifter is fully aware of the strictly limited size of his own muscles; therefore he approaches the weight lifting task in full humility and among other things he avoids heavy weights like the plague.
As more people become familiar with a clever trick, it becomes an idiom, which others master. Now it is in the realm of the "clear", having graduated from the purgatory of clever tricks.
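A concrete illustration of that graduation (my example, in Python, not anything from the thread): tuple-assignment swap once read as a trick, and is now just the expected idiom.

```python
a, b = 2, 3

# Before the idiom settled in: an explicit temporary, "obvious" to everyone.
tmp = a
a = b
b = tmp

# The one-time "trick" that is now simply how you swap in Python.
a, b = b, a
```

Nobody pauses over the second form anymore; it has crossed over into "clear".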
Your analogy assumes that the goal of coding is to make your code more and more clever over time, just as a weightlifter seeks to lift heavier and heavier weights. The goal of code, however, is simply to communicate a process to a computer. Or rather, when that process is subject to change over time, to communicate a process to a computer, simply.
Then why aren't they written on paper, in English? Because that's how people used to read things back when the A&S quote was written, in 1979. And natural language is still how people read things today, even if on screens. People don't code as if code were primarily for people to read.
> Then why aren't they written on paper, in English?
On the other hand, why aren't programs all written in machine language, in hex or octal? Why invent assembly language? Why invent macro assemblers? Why invent high-level languages?
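Each step in that progression trades machine convenience for human readability over the very same computation. As a small illustration (mine, in Python), you can ask the interpreter to show you the lower layer that the high-level notation spares you from writing:

```python
import dis

def add(x, y):
    return x + y

# Print the bytecode the CPython VM actually executes. The exact opcode
# names vary by Python version, but it reads roughly like:
#   LOAD_FAST  x
#   LOAD_FAST  y
#   BINARY_OP  + (BINARY_ADD on older versions)
#   RETURN_VALUE
dis.dis(add)
```

The machine is equally happy with either layer; only the human benefits from the upper one.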
Programmers are not unique in this regard. Mathematicians and logicians do not write all their work out in English; they've developed a highly specialized notation for writing compact and precise descriptions of their ideas.
Furthermore, many layfolk might even say that the language of jurisprudence isn't quite English, despite how it looks. The jargons of many fields, like “legalese”, serve the same purpose as mathematical notation, which is itself the same purpose as programming languages: to enable ease, brevity, exactness, and precision in their respective domain-specific communications.
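One stock example of what the notation buys: the epsilon-delta definition of continuity. In English it takes a paragraph of careful qualifications; in notation (standard textbook material, nothing from this thread) it is one line:

```latex
% "f is continuous at the point a":
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :\;
  |x - a| < \delta \;\Rightarrow\; |f(x) - f(a)| < \varepsilon
```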
You can see a little of all that in the same preface by Abelson & Sussman, which goes on to say:
These skills are by no means unique to computer programming. … We control complexity by establishing new languages for describing a design, each of which emphasizes particular aspects of the design and deemphasizes others. ¶ Underlying our approach to this subject is our conviction that “computer science” is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. … Mathematics provides a framework for dealing precisely with notions of “what is.” Computation provides a framework for dealing precisely with notions of “how to.”
> Because that's how people used to read things when the A&S quote was from, in 1979.
Clearly it's not how people always read things back then, as it's not how people always read things now. People read programs, sometimes on screens, sometimes on paper, just like they read mathematical formulas. In some cases, programs have been written on paper in some formal language that hadn't actually been implemented, simply because that language was seen as an effective means to communicate them. We usually identify it as pseudocode, ranging from “pidgin algol” to “plausibly python” to the M-expressions of the early LISP manuals.
M-expressions were still used in the LISP 1.5 manual of late 1962, even though, two and a half years after the LISP 1 manual, the LISP system was still incapable of reading M-expressions: the programmer had to translate them into S-expressions by hand before entering them. Appendix B of the 1.5 manual gives the code for the interpreter, along with some rationale:
This appendix is written in mixed M-expressions and English. Its purpose is to describe as closely as possible the actual working of the interpreter and PROG feature.
(It turns out to be possible to get an even closer description with a formal notation for the semantics, as was done with the definition of Standard ML, but such formalism has yet to catch on.)
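To make that hand-translation step concrete, here is a toy sketch of what the programmer had to do mechanically. This is my own illustration in Python, handling only the bracket-and-semicolon core of the M-expression notation, not the manual's full grammar:

```python
def m_to_s(expr: str) -> str:
    """Translate a simplified M-expression, e.g. 'car[cons[x; y]]',
    into the S-expression the programmer would punch: '(car (cons x y))'."""
    expr = expr.strip()
    if "[" not in expr:
        return expr                          # an atom: a variable or constant
    head, rest = expr.split("[", 1)          # function name, then arguments
    args, depth, current = [], 0, []
    for ch in rest[:-1]:                     # drop the closing ']'
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
        if ch == ";" and depth == 0:         # top-level argument separator
            args.append("".join(current))
            current = []
        else:
            current.append(ch)
    args.append("".join(current))
    return "(" + " ".join([head.strip()] + [m_to_s(a) for a in args]) + ")"

print(m_to_s("car[cons[x; y]]"))   # -> (car (cons x y))
```

The point is how purely mechanical the translation is: the M-notation existed for the reader, not for the machine.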
This emphasis on the importance of notation for the exact expression of thoughts and precise description of “ideal objects” is not particularly new, and it certainly predates the invention of the computer:
… I found the inadequacy of language to be an obstacle; no matter how unwieldy the expressions I was ready to accept, I was less and less able, as the relations became more and more complex, to attain the precision that my purpose required. This deficiency led me to the idea of the present ideography. …
I believe that I can best make the relation of my ideography to ordinary language clear if I compare it to that which the microscope has to the eye. Because of the range of its possible uses and the versatility with which it can adapt to the most diverse circumstances, the eye is far superior to the microscope. Considered as an optical instrument, to be sure, it exhibits many imperfections, which ordinarily remain unnoticed only on account of its intimate connection with our mental life. But, as soon as scientific goals demand great sharpness of resolution, the eye proves to be insufficient. The microscope, on the other hand, is perfectly suited to precisely such goals, but that is just why it is useless for all others. ¶ This ideography, likewise, is a device invented for certain scientific purposes, and one must not condemn it because it is not suited to others.
(from the preface of «Begriffsschrift» by Gottlob Frege, 1879, translated by Stefan Bauer-Mengelberg).
In 1882, Frege further explained: “My intention was not to represent an abstract logic in formulas, but to express a content through written signs in a more precise and clear way than it is possible to do through words.”
> People don't code as if code was primarily for people to read.
I agree. I am often guilty of this too, although I usually forget about it until I try to read a program I'd written some time ago and discover that it requires some careful study to figure it out.
It's a shame, really, because we should be writing readable code. But after I read this statement, I got to thinking: how do people code, then? And I was reminded of this little bit from Paul Graham's essay “Being Popular”:
One thing hackers like is brevity. Hackers are lazy, in the same way that mathematicians and modernist architects are lazy: they hate anything extraneous. It would not be far from the truth to say that a hacker about to write a program decides what language to use, at least subconsciously, based on the total number of characters he'll have to type. If this isn't precisely how hackers think, a language designer would do well to act as if it were.
It is a mistake to try to baby the user with long-winded expressions that are meant to resemble English. Cobol is notorious for this flaw. A hacker would consider being asked to write `add x to y giving z` instead of `z = x+y` as something between an insult to his intelligence and a sin against God.
While I generally agree with your sentiments here, and this is a somewhat pedantic response, with regard to the following:
> The goal of code, however, is simply to communicate a process to a computer. Or rather, when that process is subject to change over time, to communicate a process to a computer, simply.
I would tend to disagree. Code is not about communicating with a computer; it is about communicating with humans. The computer does not care how the code is written; it is the humans who have difficulty with it. In a way, programming is the translation of a language the computer understands into a form that humans can comprehend, not so much the other way around. In this regard, clever is fine for a computer, but it is not always understandable to a human.
Still, I would not deploy anything from code golf to any production environment. Fun, but it could definitely undermine understanding, so there is a need for balance, in my opinion.
For code that you only use yourself? Maybe, but I would guess that after one or two years you would not immediately understand what your former self fabricated.
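For a flavor of the trade-off, here is a made-up example of mine in Python: both versions print FizzBuzz, but only one communicates the process to the next reader.

```python
# Golfed: short, "clever", and a chore to modify.
for i in range(1, 101):
    print("Fizz" * (i % 3 < 1) + "Buzz" * (i % 5 < 1) or i)

# Readable: the same process, spelled out for a human.
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```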
> The goal of code, however, is simply to communicate a process to a computer
But once that simple process was first communicated to a computer in 1965, what next? More complex processes, surely? And more and more complex processes as computing power increases?
I worked for a company that asked me not to use hash tables, because no one else at the company knew how to use them. Finding your way around an arbitrary-length array of arbitrary-length arrays was apparently "easier" for them than learning the full capabilities of their chosen language (ColdFusion).
Point being, a little bit of "cleverness" in one place may save a lot of effort down the line. I've also never been too fond of that quote.
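For anyone who hasn't hit this particular wall, here is the difference sketched in Python rather than ColdFusion (the names and data are made up for illustration):

```python
# The "easier" structure: an arbitrary-length array of arbitrary-length arrays.
users = [["u1", "Ada"], ["u2", "Grace"], ["u3", "Edsger"]]

def find_name_by_scan(uid):
    # Every lookup walks the whole list: O(n) per query.
    for row in users:
        if row[0] == uid:
            return row[1]
    return None

# The "clever" structure: build a hash table once, then look up in
# O(1) on average.
name_by_id = {uid: name for uid, name in users}

assert find_name_by_scan("u2") == name_by_id.get("u2") == "Grace"
```

That dict is the "cleverness" in question, and it pays for itself the second time anyone has to query the data.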