How do we tell truths that might hurt? (E. Dijkstra, 1975) (utexas.edu)
66 points by Luyt on Dec 15, 2012 | 67 comments



> How do we tell truths that might hurt?

Well, if you are attempting to influence someone like me, not like this. I think Dijkstra's great reputation and 37 years of experience with programming are enough for anyone reading this today to take all of his statements as basically true; but if someone published a similar list dissing all your favorite programming languages... well, we've all witnessed internet flame wars.

The basic problem here is that the list is nothing but assertions. Dijkstra is 'right' about almost all of them, but there's no proof, or attempt to argue the point here. If you didn't already agree with him, or god forbid, actually use one of the languages, what you'd hear is Dijkstra basically saying you are stupid for using that language.

And humans have a very natural tendency when someone calls them stupid to assert angrily that they are not. And then the argument basically becomes a form of "no I'm not stupid for liking Apple products, you're stupid for liking Android." Which is entirely not the point of what Dijkstra is doing.

In my experience, the best way to get someone to change their mind is to show them a better way, and show them how and why it is better without ever insulting the old way or implying people who follow it are stupid.

Easier said than done, of course. But the insults are unnecessary and, more importantly, distracting. They tend to be more about the ego of the person hurling them than anything else. And stuff is already hard enough to figure out even without egos getting involved.


That's entirely fair.

The most disheartening thing is that what you described is a good way to convince somebody, and even that often doesn't work. (As, I'm sure, many advocates of functional programming know well.)

If your goal is to convince somebody of something, this rational approach may be the best in some sense, but it probably isn't the most efficient. Instead, I imagine a combination of some good points with a bunch of tactics borrowed from advertisers and behavioral psychologists is the most likely to actually convince people. I would love to believe that programmers are all rational and listen only to reason, but I've too often found that not to be the case.


Well, if you could absolutely convince anybody of anything, that would be akin to hypnotism and would amount to fascism.

So, let's assume that you only want to convince people of things that are indeed true.

Then things go awry: how do you know what's true? Maybe it's not, and they are rightfully not convinced. Maybe there is more than one way the thing can be seen. Etc.

Take something that is "obvious" for some people, like "PHP is a bad language".

It's not at all obvious to me that even that one is universally true. One could answer, e.g.: all languages have sore points so it's no big deal; if reversal of needle-and-haystack is your biggest worry about a library then it's pretty much fine; I can easily find programmers and that's what matters to me; it has never failed my projects; it works for Facebook; it has tools and communities I cannot find elsewhere, like Wordpress; etc. etc.

And PHP is as clear-cut a case as it can get. Fighting Fortran? Not so much.


I was pleased and surprised to find concise descriptions of several approaches to 'persuasion' on Wikipedia:

http://en.wikipedia.org/wiki/Modes_of_persuasion

http://en.wikipedia.org/wiki/Rogerian_argument


Breaking news: our computing technology is not perfect, and neither is our world.

He dismisses and complains about every piece of popular technology back then as if the world were about to collapse because, hey, look: these apes invented COBOL and FORTRAN.

What did he do? Did our world come to an end because of how terrible COBOL was? No: we learned some lessons, we paid the price, and now we're doing better. That is how it works.

In every piece I read from Dijkstra, it seems like he is having a hard time accepting the imperfect aspects of our computing technology. He wants the perfection of formal proofs to be everywhere in every program.


He exaggerated his points because he saw the big picture, which became reality: untrained people writing buggy, horrible software. He knew that would happen and tried to use his position to do something about it.

And are we doing better? Computers were doing important things back then, but now they are almost driving our cars, they are flying our planes and giving us radiation therapy. All that, usually, without formal proof and with some kind of 'but we use unit tests so ...' attitude. Outsourced if possible, because then it's cheap. Of course, humans generally make more mistakes than buggy software; however, his point was that we have ways to almost prevent this by properly training software developers and writing proper software. You cannot prevent everything as he suggested (but, again, he exaggerated to make a point); you can make it better by not calling people who did some online course programmers and by creating open and robust systems.

If you are young enough, I would be willing to bet that you will, at some time in your life, be seriously hurt or even killed by the consequences of not taking this seriously enough. You are probably losing money because of software bugs already; besides rounding errors, you (probably) have no idea how many bugs there are in the software which processes your insurance, taxes, etc., all written in that evil Cobol by untrained monkeys. I know this for a fact in the Netherlands, as I have seen it up close; it is probably true everywhere.


Hmm...makes sense. I see your point and indeed I do agree.

However, again, why didn't he do something practical toward the perfect programming languages that he seems to talk about?

Whatever technology you name, it is always possible to dismiss it by talking about another level of perfection, but what is that beyond imagination?

We could perform our surgeries in this and that way...

We could build our cities and buildings in this superior way...


As someone who was born more than a decade after this was written, I can't say I understand the context in which this was written. Would anyone care to elaborate for me?

Specifically, I'd like to know what opinions about Fortran, Basic, Cobol, PL/I and APL were common at the time.

Also, how was this received?


The easiest way to update it is to replace those with the closest current equivalents.

Fortran = C, Basic = PHP, Cobol = Java, PL/I = Perl, and there is nothing on the landscape that is really equivalent to APL.

So let me say more about APL.

APL basically started as math notation turned into a programming language. In order to type it you needed a special keyboard with all of the extra symbols. It evolved into an incredibly compact and versatile language, with a well-deserved reputation for being hard to read. In fact the classic challenge among APL programmers was that one would write a program in a single line, and the other would try to figure out what it did!

The Wikipedia article on the language is at http://en.wikipedia.org/wiki/APL_(programming_language). They offer the following example of an implementation of Conway's game of life in APL:

life←{↑1 ⍵∨.∧3 4=+/,¯1 0 1∘.⊖¯1 0 1∘.⌽⊂⍵}
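
For contrast, here is a rough sketch of the same computation in Python with NumPy; the function name and the wrap-around edges are my own choices, meant only to mirror the rotations in the APL version, not to reproduce it exactly:

    import numpy as np

    def life_step(board):
        # board is a 2-D array of 0s and 1s; sum the eight neighbours of every
        # cell by adding shifted copies of the board (wrap-around edges).
        neighbours = sum(
            np.roll(np.roll(board, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # A cell lives on if it has exactly 3 neighbours, or 2 neighbours and
        # is already alive.
        return ((neighbours == 3) | ((neighbours == 2) & (board == 1))).astype(int)

It is the same computation either way; APL just compresses the shifting, the reduction, and the comparison into a handful of symbols.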

Update: Apparently one or more of my comparisons are offending people. If you'd respond indicating which comparison offends you, I'll explain why I made the comparison.


The comparisons you've made are flippant ones without much serious weight to them. I didn't downvote you, but, for example, COBOL bears no resemblance to Java beyond "it's used for business applications." Unlike COBOL, Java has a thriving non-business ecosystem and is a suitable tool for a lot of varied tasks. And, perhaps more importantly, it is regarded as such--the opinions held of COBOL at the time do not look remotely similar to the opinions about Java today.

I can't speak to PL/I : Perl so I won't try, but FORTRAN : C is likewise unfair. BASIC : PHP is closer, but still denies the credit (yes, there's some) that the PHP language, core library, and community deserve.

(EDIT: I will totally agree with you that APL is cool, though. ;-) )


I said closest equivalents, not close. And, in terms of understanding a 1975 comment with 2012 language comparisons, I challenge you to do better.

As for Java, it, like COBOL, was intentionally designed from the start to subtly restrict programmers so that large groups could work together on extremely portable programs. Both became popular in lots of "boring business applications" that will live forever. And if you think that current opinions about Java are dissimilar from old opinions about COBOL, you need to get out of your echo chamber and learn what adherents of various scripting languages think about Java. If you're tempted to retort that there are more Java programmers than scripting programmers, and Java programmers like the language, a 1975 COBOL programmer could justifiably have said the same thing.

The Perl vs PL/I comparison is not original to me. A bit over a decade ago MJD took a 1967 description of PL/I, did word substitution, and came up with http://perl.plover.com/perl67.html. The description fits - both languages combined several previous ones in an idiosyncratic way, with a strong emphasis on making it easy to get stuff done. The result in both cases was a language which made it easy for "sysadmin types" to solve real problems, with solutions that in time could become problems of their own.

The FORTRAN to C comparison is, admittedly, unfair as languages. But both have the characteristic of being low-level languages that are not (by current standards) very expressive, with large established code bases, riddled with repetitive mistakes that cause problems.

You admit that BASIC to PHP is better. If you think that I have failed to give the PHP language, core library, and community the full iota of respect that they deserve, then I likewise believe that you fail to give Kemeny and Kurtz full respect for deciding in the early 60s that computing was going to be a universal right, and there needed to be a programming language, for non-programmers, that would let them unleash that potential. And then designing a language that successfully served that purpose for decades.


And, in terms of understanding a 1975 comment with 2012 language comparisons, I challenge you to do better.

How about simply: The web is broken, Windows is almost as bad, Unix is getting awfully long in the tooth, and Javascript is braindead.


And that productive people continue to get amazing stuff done with all of these technologies, and that starting over from scratch in any of these areas would be far harder than critics would like to admit.


>The comparisons you've made are flippant ones without much serious weight to them.

I find them mighty fine comparisons. He didn't want to convey the same language features -- he wanted to convey languages that draw similar kinds of criticism today and serve similar functions.


So, none of them? Again, never used PL/I, so I can't roll on that one, but I know guys who have been doing FORTRAN since the seventies and that culture and outlook bears little resemblance to the C community. Java's community is not nearly as buttoned-up and business-centric as COBOL's was (even the oh-so-wise "scripting people" aside); COBOL had literally-literally no other significant use case. And PHP, for all its many warts, is viewed as a language you use to ship products, novice or no. Actual products that people use, no less. The first time I can think of BASIC being readily usable for that was in the nineties.

They don't draw similar criticisms at all, unless you take the Dijkstra paper literally (and not even Dijkstra took the Dijkstra paper literally). It's a chance to slag on languages rather than a way to move the conversation forward; he challenged me to make better comparisons, and my point is that the number of factors that exist today is so much greater that a comparison will fail. To understand the Dijkstra paper you have to understand the languages, not short-circuit a poor comparison.

(And I do know all of them except PL/I, which sort of pains me, but...)


They don't draw similar criticisms at all...

I actually have heard similar criticisms to the ones that Dijkstra made of every one of the modern equivalents. For similar reasons. And I believe that my comparison touches on why each of those criticisms is paralleled.

I freely admit, though, that the weakest parallel is to Java. Programmers who are not enamored of deep object hierarchies despise those designs when Java programmers try to implement them in any language. But love of deep object hierarchies is not unique to Java (though it does tend to be extreme there). Nor is Java as crippled as COBOL.

As for COBOL and other use cases - I would be willing to bet large sums that it would have had many other use cases had the people programming in it had access to computers that could run it for a reasonable price.

And finally, I did not post that comparison to slag on languages. I posted it to provide context for Dijkstra's quote. In fact I, personally, have spent a lot of my professional life programming in Perl - which is one of the languages that I offered up for criticism. And I freely admit that the criticism is not entirely unfounded.


>So, none of them? Again, never used PL/I, so I can't roll on that one, but I know guys who have been doing FORTRAN since the seventies and that culture and outlook bears little resemblance to the C community. Java's community is not nearly as buttoned-up and business-centric as COBOL's was (even the oh-so-wise "scripting people" aside); COBOL had literally-literally no other significant use case. And PHP, for all its many warts, is viewed as a language you use to ship products, novice or no. Actual products that people use, no less. The first time I can think of BASIC being readily usable for that was in the nineties.

Do I have to repeat that it's not about them being the SAME, but merely an equivalent for the modern era?

The PHP analogy here is the novice targeted/available everywhere/easily run part.

The Java analogy is the enterprisey, business, all serious part.

The C analogy is the made-to-run-fast-but-too-low-level part, too concerned with control and raw speed over proper abstraction.

Etc etc.


Matlab and R could be seen as inheriting some of the features of APL; at least, APL was my first thought when I did some work in R many years after having written my first program in APL as a middle schooler...


>there is nothing on the landscape that is really equivalent to APL

I beg to differ: http://en.wikipedia.org/wiki/J_(programming_language)


Just watched a video about Array programming which mentions APL and J. It also mentioned a couple of interesting facts about Ken Iverson I'd not heard elsewhere:

1 - Iverson originally wanted to call his new language Iverson's Better Math but his company blocked this so he went for A Programming Language (APL) instead. NB. Iverson worked for IBM at the time :)

2 - Iverson passed away at a computer screen.

Video was very interesting so posted link on HN: http://news.ycombinator.com/item?id=4931929


Nice one.

But it is not widely enough known that just naming it would have clarified Dijkstra's comment about APL.


> APL basically started as math notation turned into a programming language.

The same is also true of Lisp. http://en.wikipedia.org/wiki/Lisp_(programming_language)

Interesting that a number of languages around this time started as mathematical notations, which someone then realized could be machine-evaluated.


fortran = perl.

Not because of their domains (which are very different) but because nobody notices that they are evolving, and people still rant about how ugly they are (because it was true of versions that are now 20 years old.)


I'd like to know why the author thinks that programming is one of the most difficult branches of mathematics, because I disagree.

Is it because they didn't have relatively easy-to-use languages such as Python back then?


Some math people I talked to found formal logic to be the hardest branch. Proofs about proofs are maddening. Reasoning about reasoning, and about things you can't reason about, is tricky.

Programming--at a high level--is just this with different syntax. Actually, that's both a simplification and an exaggeration. However, Dijkstra was very interested in formal verification, and that really is very much like the study of formal logic.

More generally, the study of programming languages and semantics (which is probably most of what he meant by programming as a branch of mathematics) is very closely related to formal logic. And while I personally think it's not nearly as scary as people make it out to be--please don't be afraid of type theory and formal verification!--most people do find it somewhat difficult.


The first time I understood a formal proof was with Dijkstra:

http://www.cs.utexas.edu/~EWD/ewd13xx/EWD1311.PDF

He is an exceptionally sharp thinker who is able to communicate his thought process with clarity. Even if you don't follow the math you understand how certain proofs can be abstracted from proofs made earlier.

I think you are correct: Dijkstra was exceptionally interested in formal verification, to the point where he wasn't willing to envision a computing world that was not formally verified. When he said that programming is more difficult than mathematics, he was probably envisioning the problems of formal logic applied to increasingly sophisticated programming structures.


There were (and are) plenty of ways to make some simple scripts that you could term 'programming'.

This isn't what Dijkstra is talking about - he's talking about formally correct programming which has gone out of style in favor of unit tests and 'move fast and break things'. Completely different perspective on programming - a scientific approach versus an engineering approach to problem solving.

Dijkstra would be very upset at calling hacked-together Python scripts programming, and he would probably have sent a letter similar to the one in the OP. ;)


No, it's because a larger amount of programming was more complicated back then. Resources were significantly more constrained, and converting database result sets to pretty HTML layouts wasn't part of the field yet.


No, it's not. Dijkstra believed in programming as a formal discipline. One of his more contentious claims is that most computing scientists who write programs (note "computing", not "computer"[1]) should use formal methods in addition to programming. In many cases this would mean proving your programs correct. In this sense you are not simply a "code monkey"; you are a mathematician generating proofs in algorithmic analysis.

Now, of course, he was correct that this is quite difficult, especially if you want to do it for everything. It turns out that the demand for programs which sometimes work is higher than the demand for programs which always work, so most programmers are not mathematicians.

However, many of Dijkstra's feelings are being revisited, with a modern twist. The C security disasters of the latter part of the previous century convinced a lot of people that we have become far too lax in our discipline.

However, we still recognize that formal proofs of correctness are far too onerous for ordinary programmers. Instead, we'd like to offer the benefits of formal analysis to programmers without any experience with formal analysis. This is the study of type systems.
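
A minimal sketch of that idea, assuming Python with a gradual type checker such as mypy (the function and its annotations are my own illustration, not anything Dijkstra proposed): the annotation is a small, machine-checked claim about the program, and the checker rejects code that violates it before the program ever runs.

    def average(xs: list[float]) -> float:
        # The annotation is a lightweight, machine-checked claim about this function.
        return sum(xs) / len(xs)

    average(["1.5", "2.5"])  # a checker such as mypy flags this call statically;
                             # without the annotation the mistake would only
                             # surface as a run-time error.

It is nowhere near a proof of correctness, but it is a piece of formal reasoning that ordinary programmers get essentially for free.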

I could go on, but this is a long topic that I really just need to sit down and write a blog post about to cover in any sufficient detail.

[1] Computer science is not the study of computers, it's the study of computing


> I could go on, but this is a long topic that I really just need to sit down and write a blog post about to cover in any sufficient detail.

Please do.


>Is it because they didn't have relatively easy-to-use languages such as Python back then?

No, it's because people (like Dijkstra, at least) didn't regard the stuff we do with Python nowadays as real programming, in the same way you don't regard office doodles as van Gogh-style art.


> Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians.

This is completely backwards. The majority of papers about programming are accessible to most good programmers, perhaps after a little study. The majority of pure mathematics papers are inaccessible to almost all mathematicians.

What programming achievement is comparable in difficulty, complexity, or scope to the Classification of Finite Simple Groups or the Poincaré conjecture?


You seem to be under the impression that being difficult to understand is a sign that you're working on difficult stuff. I disagree. It more often means that you've got so much jargon and such poor expository skills that you're not understandable. See http://bentilly.blogspot.com/2009/11/why-i-left-math.html for more on my perspective about math in particular.

What programming achievement is comparable in difficulty, complexity, or scope to the Classification of Finite Simple Groups or the Poincaré conjecture?

Let's compare the Classification of Finite Simple Groups with the Linux kernel.

The classification spans tens of thousands of pages written over decades by about 100 mathematicians. In the end, large key papers were so poorly reviewed that some of the most pivotal people in producing that classification came to the opinion that the whole could not truly be trusted, and began an attempt to redo the whole proof in a more understandable and verifiable form.

The Linux kernel is about 15 million lines of code from a few thousand developers (most of whom, admittedly, only contributed one patch) over a period of 20 years which is widely tested in the real world on everything from phones to massive compute clusters.

I agree, they are not comparable achievements. The Linux kernel is bigger, involved more people, and we have more assurance that it actually works.


You seem to be under the impression that being difficult to understand is a sign that you're working on difficult stuff. I disagree. It more often means that you've got so much jargon and such poor expository skills that you're not understandable

It's absolutely false to say that the Poincaré Conjecture or the CoFSG are hard because of esoteric jargon. The statements of both of these theorems are relatively simple, accessible to every mathematician. But the Poincaré conjecture resisted proof for a hundred years, and the CoFSG for at least fifty, depending on how you count.

Your article, I think, supports my point. One of the ways a "poorer mathematician" (to use Dijkstra's dickish term) may make an original contribution is to hyperfocus on an esoteric field, where the unsolved problems are easier simply because nobody has cared enough to solve them yet. Programming is different: an average programmer may still make an original and useful program. This illustrates that creating an original and useful program is a lot easier than creating original and useful mathematics.

The Linux kernel is bigger, involved more people, and we have more assurance that it actually works.

Absolutely, because it's far easier. Most of the Linux contributors could create a program comparable to Linux, given enough time: there is nothing in it so difficult that a good programmer might spend their lifetime working at it fruitlessly. But I am certain that I could devote my life to finding an alternative proof of the Poincaré conjecture, with little to show.


I think the entire statement is silly in that it attempts to compare two entirely different branches of mathematics; it would be almost like arguing about physics versus biology. Physics is the study of matter and energy. Biology is the study of life. Any attempt to assign some "order" to them or declare one "harder" than the other is subjective and pointless.

However, there are some factual problems with your reply which I want to address. The Classification of Finite Simple Groups was a large endeavor spanning over an entire century. It was not made "by about 100 mathematicians".

The fact that the current proof was not working is not sufficient to conclude that the result was a failure or that pure mathematics is somehow worse than computer science. It would be similar to writing some code and then suddenly realizing that you have to rewrite it if you want to get acceptable performance or to stop it from becoming spaghetti code. Neither of these sheds a bad light on the field of computer science.

The errors in key papers that you refer to are not much different from bugs, which of course are inevitable.


Let me address the claim of factual errors.

I pulled my estimate of the effort put into the classification from the second paragraph of http://en.wikipedia.org/wiki/Classification_of_finite_simple.... Do you have a better source that I should have used? If so, then cite it and update Wikipedia.

I pulled my estimate of the effort for the Linux kernel from https://www.linux.com/learn/tutorials/560928-counting-contri... and then extrapolated: if 1,316 people had patches in one major kernel release, then over all versions we probably had thousands of developers contributing code, but probably not tens of thousands.

The facts that I presented are the best that I have available.

On the overall question of the comparison, I did not choose what should be compared; I merely compared them by the most convenient criteria that I could, and came to the opposite conclusion from the previous poster.


I've already counted 90+ authors and contributors from a _brief_ history of finite simple groups, using [1]. This includes "et al.", which, following APA style guidelines, I have assumed indicates at least six authors. I also could have counted more using [2], but I believe this is sufficient.

I feel that the statement that "only 100" mathematicians worked on the classification of finite groups is disingenuous. There is clearly a disparity between contributing a kernel patch and dedicating your entire life to researching a mathematical topic.

As for the Linux kernel, I would like to point out that the original paper [3] states that 7,944 developers have contributed since 2.6.11 (which is more than your estimate, but important regardless). A paragraph later on is a little more telling:

>[D]espite the large number of individual developers, there is still a relatively small number who are doing the majority of the work. In any given development cycle, approximately 1/3 of the developers involved contribute exactly one patch. Over the past 5.5 years, the top 10 individual developers have contributed 9% of the total changes and the top 30 developers have contributed just over 20% of the total.

That is to say, 30 developers have made 20% of Linux. This is not unlike the classification of finite groups, where a small number of people have done a large majority of the work. It would be insane to say only 100 people have ever looked at finite groups. Indeed, proving something is exponentially harder than learning it. Similarly, you would not say that only 7,000 people use Linux just because 7,000 have hacked on the kernel.

As for your last comment, regarding how you did not choose the topic, I agree. But I think that this entire debate is silly: neither side is more important than the other, more difficult than the other, mutually exclusive, or so on. I do not agree with Dijkstra on this point, and I feel that his fame is no reason to excuse him for it.

[1] http://www.ams.org/journals/bull/2001-38-03/S0273-0979-01-00...

[2] A History of Finite Simple Groups, by FCC Doherty.

[3] http://go.linuxfoundation.org/who-writes-linux-2012


I would assume that "et al" hides a lot of the same people contributing to multiple papers. The Wikipedia estimate is likely wrong, but I don't think by an order of magnitude.

I agree with you on the Linux kernel. I even noted that about half only contributed one patch.

I suspect that I disagree with you about what Dijkstra's point was. When you're programming, there is nowhere to hide from the fact that your code doesn't run. By contrast, in pure math a surprising amount of sloppiness can be hidden in the fact that minor papers do not always get a completely rigorous review. Therefore, if you're inclined to that form of sloppiness, you're better off in pure math. (Which is not to say that all people in pure math are sloppy in that way.)

I don't think he's right - I've met too many bad programmers and bad computer scientists to have illusions that all are capable - but he does have a point.


There are plenty of mathematicians that don't program very well. Keep in mind, there are many "programmers" who are not really the sort of programmers Dijkstra is talking about. He's not talking about whipping together WP sites with a little jQuery to make dynamic menus.


What looks the most like traditional mathematics in computing is the η-conversion in the lambda calculus.

The η-conversion is any operation on expressions that yields an equivalent expression, on the condition that it is not just a rename, because that would amount to an α-conversion. It is also called mathematical derivation or what other people incorrectly call mathematical "proof".

Without denying the merits of being proficient at showing that expressions are equivalent through η-conversion, we can easily see that quite a few mathematicians only use this method to show that one meaningless and useless expression is equivalent to another, not more meaningful, one.


I'm not sure what this has to do with anything. I suppose you're trying to refute the notion that computing is related to mathematics because of this one trivial and somewhat strange example you bring up, which I suppose is to mean that you think this is really the only thing in computer science that resembles mathematics. I would just say that that is pretty odd.


Well, the mathematical method is pretty much typified by its propensity to engage in derivation of equivalent statements. This is simply what people do when they construct a "proof", which is of course not a proof, but that is another issue altogether. Alonzo Church says that derivation can entirely be done by (1) renaming (=alpha), (2) application (=beta), and (3) substitution (=eta). I think that Alonzo Church is right. That summarizes the only technique that pure mathematicians seem to be capable of, which is indeed a bit poor. So, I agree with Dijkstra: don't go into computing if that is all you can do.


Just like an artist only has the techniques: move pencil horizontally, move pencil vertically, move pencil toward or away from the paper. The art comes from knowing what to draw and how to combine these steps based on a vision in your head. And the real part of pure mathematics is envisioning new structures and new relations that nobody has used before, at which point the proof itself can become a detail. It's not through lack of trying that formal computer proof systems haven't been able to touch a milli-part of modern mathematics.

This is also a point brought up in the great guide: Advice to a Young Mathematician, http://press.princeton.edu/chapters/gowers/gowers_VIII_6.pdf

> We are all taught that “proof” is the central feature of mathematics, and Euclidean geometry with its careful array of axioms and propositions has provided the essential framework for modern thought since the Renaissance. Mathematicians pride themselves on absolute certainty, in comparison with the tentative steps of natural scientists, let alone the woolly thinking of other areas.

> It is true that, since Gödel, absolute certainty has been undermined, and the more mundane assault of computer proofs of interminable length has induced some humility. Despite all this, proof retains its cardinal role in mathematics, and a serious gap in your argument will lead to your paper being rejected.

> However, it is a mistake to identify research in mathematics with the process of producing proofs. In fact, one could say that all the really creative aspects of mathematical research precede the proof stage. To take the metaphor of the “stage” further, you have to start with the idea, develop the plot, write the dialogue, and provide the theatrical instructions. The actual production can be viewed as the “proof”: the implementation of an idea.

> In mathematics, ideas and concepts come first, then come questions and problems. At this stage the search for solutions begins, one looks for a method or strategy. Once you have convinced yourself that the problem has been well-posed, and that you have the right tools for the job, you then begin to think hard about the technicalities of the proof.


An efficient algorithm to compute lookahead sets for an LALR(1) grammar is seriously intricate. The closure computation algorithm is definitely non-trivial. I don't know about the "Classification of Finite Simple Groups", but LALR(1) lookahead computation results go into everything: every computer, tablet, or phone carries at least 10 different copies of such lookahead sets. Does the "Classification of Finite Simple Groups" actually lend itself to doing anything useful? Can it be applied?
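
To give a flavour of the machinery involved, here is a sketch (in Python, with a grammar encoding of my own choosing) of the fixed-point computation of FIRST sets, one of the simpler ingredients underneath LALR(1) lookahead algorithms such as DeRemer and Pennello's; the actual lookahead computation layers several more fixed-point relations on top of pieces like this.

    def first_sets(grammar, terminals):
        # grammar maps each nonterminal to a list of productions, each a tuple
        # of symbols; "" stands for the empty string (epsilon). Assumes every
        # nonterminal that appears has an entry in grammar.
        first = {nt: set() for nt in grammar}
        changed = True
        while changed:  # iterate to a fixed point
            changed = False
            for nt, productions in grammar.items():
                for prod in productions:
                    for sym in prod:
                        add = {sym} if sym in terminals else (first[sym] - {""})
                        if not add <= first[nt]:
                            first[nt] |= add
                            changed = True
                        # stop at the first symbol that cannot derive epsilon
                        if sym in terminals or "" not in first[sym]:
                            break
                    else:
                        # every symbol was nullable (or the production is empty)
                        if "" not in first[nt]:
                            first[nt].add("")
                            changed = True
        return first

    # Example: S -> A 'b', A -> 'a' | epsilon
    g = {"S": [("A", "b")], "A": [("a",), ()]}
    print(first_sets(g, {"a", "b"}))  # S: {'a', 'b'}, A: {'a', ''}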


I would classify the discovery of these algorithms as computer science, not programming. Had Dijkstra said that computer science is harder than pure mathematics, I would have characterized his point as "arguable." Perhaps the CS term had not come into wide use when he said what he did.

Regarding your question, some proofs have immediate applications, and some do not. The CoFSG falls into the latter category.

But when you ask if a piece of mathematics is useful, the only two fair answers are "yes" and "not yet." For example, quantum mechanics relies on group representation theory, which had been worked out earlier, with few applications. Another example is Grassmann's work on abstract vector spaces, which was considered somewhere between incomprehensible and useless for decades after publication. Now it's taught in first year linear algebra, and is central to physics and many types of engineering.

As for the two theorems I listed, the Monster Group (discovered during the CoFSG) has connections to string theory, and the Poincaré Conjecture may have implications for the allowable shapes of the universe. So, while it's unlikely, it may be that these mathematics end up telling us something about the very nature of reality itself.


I agree that there is no requirement for research results to be useful immediately. That would probably be counter-productive. However, there should be some hope that we will be able to apply them some day, even in a rather distant future ...


What sort of languages did Dijkstra really like?


He didn't use computers much, other than for email and browsing the web. Most of his writing was composed by hand using a fountain pen. He did work on ALGOL 60 and it is believed that he approved of the language.

Source: Wikipedia


In college I was reading a lot of the EWDs (particularly his trip reports, some hilarious stuff there) and IIRC he didn't like C/C++ much and thought that functional programming had some promising ideas. This was in the later EWDs, I'd say 1150+.

There are some fun quotations of him about APL and Basic.


I wonder what the uncomfortable truths that everyone is ignoring are in today's programming world?


HTML .. when PostScript was way better.

The many years spent with C and C++ when there are better alternatives around.

Microsoft Windows, OSX, Linux when Plan9 exists.

OOP.

NoSQL and its total misunderstanding of the power of having a declarative query language.

Cache coherency, ordering guarantees, where we should opt for less.


I think you confused 'truth' with 'things jlouis likes better than others' --- to be fair, Dijkstra seemed to suffer from the same distortion.


Out of interest, have you written much interactive PostScript?


No, but they could've extended that instead of creating a new monster?


I've done a fair bit of development inside Postscript (way back when it was one of my favorite languages) and its problem in this context is that it /is/ a programming language, and one that would have to be effectively sandboxed. HTML on the other hand is purely a markup language, is extensible, can be reflowed, and is much more natural for writing documents by hand.


That keeping "up to date" on programming is not reading the latest blog posts on updates to Backbone.js, or learning yet another Web based *MVC or ANTI-MVC framework.


Not sure if this is strictly programming-related, but: Eschewing traditional career-track employment in favor of joining a startup can be damaging, and frequently does not break even with respect to the amount of money and influence you would gain working in a more traditional position.


PHP.


Nah, that's the one that's popular ("pleasant") to recognize.

If I had to hazard a guess, I'd imagine Dijkstra would have a word or two to say about callbacks, given his famously strong beliefs about GOTOs....


Is there some uncomfortable truth about PHP that's not being shouted from the rooftops in every thread about PHP? If so, what is it?


Perhaps "Nothing else solves the deployment problem as well as PHP did."

(No, not even "git push to deploy".)


Perl.


    Besides a mathematical inclination, an exceptionally good
    mastery of one's native tongue is the most vital asset of
    a competent programmer.
Surprisingly true.


I love that one, and it always surprises me when I meet people who can't express themselves properly. There are some earlier EWDs where he meets often with German computer scientists (in particular one of the founding fathers of the field there, Friedrich L. Bauer, whom he calls "Uncle Fritz"), and it seems that he had a good understanding of German, too.

There is also an interesting video of Dijkstra from a Dutch television station (http://www.cs.utexas.edu/~EWD/video-audio/NoorderlichtVideo....)


If you are Dijkstra, with a zest that becomes its own attraction. He did a great deal of important work, but seems too often to be remembered for stinging epigrams.


For a supposedly hard-core science guy, he was not above name-calling and BS accusations based on pure opinion.


Indeed, looking back over that list, I'm surprised by the fact that almost everything on it is - and was at the time - simply wrong.



