E.W.Dijkstra: On the foolishness of "natural language programming" (utexas.edu)
119 points by hasenj on Nov 17, 2010 | 69 comments



A moment isn't such a long time anymore, not for compute clusters anyway.

If we don't keep trying to make AI, we're never gonna get there. I'm not saying data mining is AI. I'm saying human brains do data mining too, but differently, and figuring out how is what might come of all this.

Wolfram-Alpha-style natural language programming actually isn't that bad because it's a dialogue. The thing tells you its assumptions, so you're back to formal representations you can deal with. Natural language is mostly statistics, also known as guesswork. A machine can guess better than humans. Google proves that by suggesting "torrent" every time you search for a movie title.

Dijkstra fears a world where the lack of formalisms prevents progress. That's why he likes formalisms. I do too, maybe not for these exact reasons. Anyway, I take formalisms as a given because they've always been there my whole life.

Formalisms can be impractically formal. The level of precision required by machines is beyond even what most mathematicians are used to casually (i.e., pen and paper, or at a blackboard). If you had to do math like you write code, mathematicians would go bonkers. Include this, type-cast that, ... nah, everyone knows what this and that is supposed to mean, right?

A little NLP could make it easier to command computers. A little more NLP could turn machines into really useful slaves. AI would be opening a can of worms few in this world are prepared to deal with.


> Formalisms can be impractically formal. The level of precision required by machines is beyond even what most mathematicians are used to casually (i.e., pen and paper, or at a blackboard). If you had to do math like you write code, mathematicians would go bonkers. Include this, type-cast that, ... nah, everyone knows what this and that is supposed to mean, right?

Mathematics only looks formal: compared to software it is really another natural language -- because its purpose is to communicate between humans, whereas software is ultimately not about communication but design of artifacts.

The point about practicality still holds though, but since the purposes are different it must be something of a different kind of practicality to aim at.


This man knows way more about programming languages than most of us. Listen to him. I bet it isn't the first time he's had this discussion. See: Algol, Ada and others.


It's often the foolhardy and the ignorant that tackle problems that are supposedly impossible. And sometimes end up succeeding.

Wisdom is certainly something to draw from, but internalizing the limits imposed by someone else as a roadblock for oneself isn't the way to innovation.


> It's often the foolhardy and the ignorant that [...] sometimes end up succeeding.

Please name some examples. How often is sometimes? What is their probability of success compared to that of domain experts?


This will probably get voted up, but it just smacks of passive-aggressive argumentation.

History, specifically in science and math, is full of examples where people who actively sought to question the knowledge of more senior experts made discoveries that advanced the knowledge of humankind, and were then overlooked or ignored because they did not have the correct background. Institutions such as The Royal Society and The Geological Society were created exactly on this principle.

The idea that "you cannot question X because X is a learned man above your station" is what IMO destroys general interest in fields such as physics. You have to spend a career following the party line in the hope that one day you will be given enough rope to actually challenge anything.

How on earth would you put probability on something like that?


The existence of a black swan doesn't imply that swans are usually black.

A. "There are examples where newcomers to a field found a novel solution that senior experts have missed."

B. "It is often the ignorant who succeeds."

B does not follow from A. Not even in mathematics, where people used to say that you have to prove yourself worthwhile by the age of 25 or it won't happen.

> You have to spend a career following the party line in the hope that ...

This is a slightly different discussion but IMHO in many fields (e.g. medicine) contemporary science has little to do with genius or insight but rather is an industrial effort ... and that _often_ isn't fun.


A lesson on logic from somebody who equates "It's often the foolhardy and the ignorant that tackle problems that are supposedly impossible. And sometimes end up succeeding" to "It is often the ignorant who succeeds" seems ironic to put it mildly.


:-)


The model of the atom that was popular before the current model. The model of the atom that was popular before that one. The model of the atom before..., ok you get the point.

If you'd like more, A Short History of Nearly Everything by Bill Bryson is about 600 pages of examples.



From the page: "George Bernard Dantzig, a doctoral candidate at the University of California, Berkeley. ... George Dantzig (himself the son of a mathematician) received a Bachelor's degree from University of Maryland in 1936 and a Master's from the University of Michigan in 1937." Not exactly my definition of somebody ignorant.


Certainly not, but he was ignorant of the most important fact, that these were unsolved problems.

This is one of the greatest stories ever. If it weren't true it would have to have been invented... but it is true.


Just because someone does not know a solution to a given problem does not mean they think it's impossible. The teacher gave the problems as examples of something that seems solvable but that nobody has a correct solution to. Then someone with a lot of domain knowledge spent some time and solved them. IMO, that says more about the value of domain knowledge than tenacity. It's not like they spent 20 years coming up with a solution.

PS: There are a lot of long-standing math problems which people have spent very little time trying to solve.


I don't get your point. There's a difference between an unsolved problem and an unsolvable one? That's obvious. It's also irrelevant to the story.

The relevant distinction is between coursework exercises and open problems in the field. That's a pretty big difference. When students complete their homework, they don't typically get woken up by phone calls from excited professors telling them to write their work up for publication immediately.


The person who posted the example: presidentender

I can think of one:

http://www.snopes.com/college/homework/unsolvable.asp?a

Replied to my post: He was ignorant of the fact that the problems were supposed to be impossible. So he at least assumed the problems were thought to be unsolvable.

As to the gap between homework and open questions in the field, it can be fairly small in mathematics when the course is on the cutting edge. The field of Statistics was a lot more open back then, and there are still plenty of subjects where the gap between cutting edge homework and original research is fairly small.

PS: I once had a teacher suggest I write something up as original research as an undergraduate. The circumstances were a little different, but less than you might think. It was a lecture where he was describing an algorithm and I said "that seems slow, why not do X?", but it was the same basic concept.


He was ignorant of the fact that the problems were supposed to be impossible. That's in line with the parent comment's sentiment.


Sorry, no. The problem was supposed to be solvable.

Once again, yet to be solved is not the same thing as assumed to be impossible. In the history of mankind nobody has solved X^2 * (first thousand digits of PI) + (next thousand digits of PI) * X = 0. But, using modern techniques, I don't think it would be that difficult. And if someone 50 years from now could find this post and decide to waste their time, they might be the first person to solve this 50-year-old math problem, etc.
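
To make that concrete, here is roughly what "modern techniques" amounts to, sketched in Python (mpmath is just one arbitrary-precision library I happen to pick, nothing special about it). Since a*X^2 + b*X factors as X*(a*X + b), the non-zero root is simply -b/a:

    # Solve X^2*(first 1000 digits of pi) + (next 1000 digits of pi)*X = 0.
    # The roots are X = 0 and X = -b/a.
    from mpmath import mp, nstr

    mp.dps = 2100                            # plenty of working precision
    digits = nstr(mp.pi, 2010).replace('.', '')[:2000]

    a = int(digits[:1000])                   # first thousand digits of pi
    b = int(digits[1000:2000])               # next thousand digits of pi

    x = -mp.mpf(b) / mp.mpf(a)               # the non-zero root
    print(nstr(x, 50))                       # the root, to 50 digits
    print(nstr(a * x**2 + b * x, 10))        # sanity check: should be ~0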

PS: Examples of older and harder problems: http://en.wikipedia.org/wiki/Millennium_Prize_Problems note nobody is saying they are "impossible" just unsolved.


Let's suppose it's a really easy problem to solve.

Why the hell would anyone want to program in a natural language?

Suppose you could write a kernel in such a language, and one of the instructions was:

"Load the first 5 bits at address 0x323abc into the last 5 bits after address 0x89bbca"

Now, wouldn't it be easier to say 'shuffle_bits(5, 0x323abc, 0x89bbca);' where you control precisely what 'shuffle_bits' does?
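
For the sake of argument, here is a toy version of that primitive in Python. The memory model and the exact semantics of shuffle_bits are my own invention; the point is only that the behaviour is pinned down by code rather than by an English sentence:

    # Toy model: "memory" is a bytearray; shuffle_bits copies the low n bits
    # of the source byte into the low n bits of the destination byte.
    memory = bytearray(1 << 24)              # pretend 16 MiB address space

    def shuffle_bits(n, src, dst):
        mask = (1 << n) - 1
        memory[dst] = (memory[dst] & ~mask & 0xFF) | (memory[src] & mask)

    memory[0x323abc] = 0b10110101
    shuffle_bits(5, 0x323abc, 0x89bbca)
    print(bin(memory[0x89bbca]))             # 0b10101 -- exactly the 5 copied bits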


Obviously you wouldn't write kernels. You would write the equivalent of shell scripts for ordinary users. Imagine if ordinary users could automate their daily workflow or customize MS Word as much as vim or emacs users do with their editors.


Or even:

    load_first_five_bits_into(0x323abc)
    load_last_five_bits_into(0x89bbca)
Or if you do mappings a lot, I'm sure I saw an IP packet parser based on syntax like this:

    [----5----|----n----|----5----]
    [0x323abc |void     |0x89bbca ]
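
Something in that declarative spirit is easy to mock up in Python: describe the layout as data and let a small helper pull the fields out. This is just a sketch I'm making up (not the parser I saw), with an IPv4-ish example layout:

    # The layout is plain data; a small helper does the bit twiddling.
    def parse_fields(data, layout):
        """layout: list of (name, width_in_bits), MSB first; returns {name: value}."""
        value = int.from_bytes(data, "big")
        total = len(data) * 8
        out, pos = {}, 0
        for name, width in layout:
            shift = total - pos - width
            out[name] = (value >> shift) & ((1 << width) - 1)
            pos += width
        return out

    # First 16 bits of an IPv4 header: version (4 bits), IHL (4 bits), TOS (8 bits).
    print(parse_fields(b"\x45\x00", [("version", 4), ("ihl", 4), ("tos", 8)]))
    # {'version': 4, 'ihl': 5, 'tos': 0}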


you're actively trying to destroy the idea of natural language programming. you'll succeed because the idea itself means nothing unless someone implements it.

natural language programming means that you, as the human, concentrate on high-level aspects while the machine fills in the blanks (stuff you don't care about).

natural language programming... it's supposed to be a dialogue, not a specification. you say a few words and the machine creates a formal representation of that. then you're free to alter the assumptions made by the machine.

you're shaping something in broad strokes. you're designing something. you don't actually code something. you get your own private code monkey, you get to talk to him, explain your design, let him do the typing, and intervene when needed. now, that code monkey could be a machine.

i guess it takes a certain mindset to accept and rely on others (also machines) to do the grunt work. i certainly don't want to be doing repetitive grunt work if i could help it. it's fun now and then, when i have the choice. it stops being fun if i have to do it.


    copy the time field from cpuinfo to temp_time
    backup the low 5 bits of goodnameformemorylocation
    locA's low 5 to locb's low 5
Seems like there are a fair number of nice representations. Context is obviously important though: in a kernel, you probably want fairly implicit ways of referring to bits.


That is basically what COBOL looks like.


some people found error messages they couldn't ignore more annoying than wrong results, and, when judging the relative merits of programming languages, some still seem to equate "the ease of programming" with the ease of making undetected mistakes

This is interesting, I think - this is really one of the core parts of the argument between static and dynamic typing. I prefer bondage and discipline to free love when it comes to types - I find programming in dynamic languages mushy, ill defined and error prone. And yet, lots of people love them and manage to do amazing things with them. Although in general I agree with Dijkstra's essay, maybe there is more wiggle room for a "more natural" language than I'd thought.


That quote also stuck in my mind. I'm more of a dynamic-language type for most things, and his argument seems to go heavily against dynamic languages. By the way, I'd quote the sentence before as well since it adds important context.

It was a significant improvement that now many a silly mistake did result in an error message instead of in an erroneous answer. (And even this improvement wasn't universally appreciated: some people found error messages they couldn't ignore more annoying than wrong results, and, when judging the relative merits of programming languages, some still seem to equate "the ease of programming" with the ease of making undetected mistakes.)


Actually, I don't think this has to do with dynamic vs static typing. It is more about strong vs weak typing, i.e. implicit type casting.

In many dynamically but strongly typed languages (like Ruby), you would still get a type mismatch when you try to add a string to a number at runtime. It's just that with static typing, you would have realized your error much earlier at compile time.
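
For the record, the same thing in Python (also dynamic but strongly typed), with a made-up function name; the mistake is still caught, just not until the offending line actually runs:

    def add_totals(a, b):
        return a + b

    print(add_totals(1, 2))     # 3 -- fine
    try:
        add_totals(1, "2")      # nothing complains until this line executes
    except TypeError as e:
        print(e)                # unsupported operand type(s) for +: 'int' and 'str'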


While I think you're right, my guess is that he had strong static typing in mind when he wrote the article.

Ultimately they lie on a scale. With a dynamic, strongly typed language, you still leave plenty of opportunity to make undetected mistakes. The obvious example is changing the expected type of a method argument, while only updating a subset of the call sites.
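
A small Python illustration of that failure mode (the names are invented): say set_timeout used to take an int number of seconds and was changed to take a timedelta, but one caller was missed. Nothing flags the stale call site until it actually executes:

    from datetime import timedelta

    def set_timeout(delay):
        # Now expects a timedelta; used to expect an int number of seconds.
        return delay.total_seconds()

    set_timeout(timedelta(seconds=30))   # updated call site: fine
    set_timeout(30)                      # stale call site: AttributeError, but only at runtime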


I'm not sure what he had in mind at that time, but Dijkstra was never all that enthusiastic about type theory. He thought it was a promising and worthwhile avenue, but was suspicious of it being treated as a panacea.

http://www.cs.utexas.edu/users/EWD/transcriptions/EWD12xx/EW...

"Another thing we can learn from the past is the failure of characterizations like "Computing Science is really nothing but X", where for X you may substitute your favourite discipline, such as numerical analysis, electrical engineering, automata theory, queuing theory, lambda calculus, discrete mathematics or proof theory. I mention this because of the current trend to equate computing science with constructive type theory or with category theory."


> [Dijkstra] was suspicious of [type theory] being treated as a panacea

I'd be suspicious of anything being treated as a panacea.


=) Sorry, I was being sloppy there - couldn't come up with an accurate and un-platitudinous way to characterize Dijkstra's attitude. There are other places in his writings where he's suspicious of an overemphasis on types, I think his criticisms of Ada mention it for example.

Dijkstra was a big proponent of rigorous mathematical means of establishing program correctness in general, of course, but he didn't necessarily equate that with types.


I think a lot of people have the paradigm wrong. I think people conceive of Natural Language Programming as simply a change in syntax, which in itself would complicate stuff.

But if you take the stance that we as humans are Natural Language Programmers, the concept becomes easier to understand.

You can think of your boss at your software development firm as your programmer. He wants some job done and you program some approximation of what he wants done, then you modify it based on his inputs. It is a similar thing with Natural Language Programming: your boss didn't use complex symbols to get what he wanted done and didn't have to sit with you all the way through to get it done.

For me, it is not a question of whether this makes programming simpler but whether we can build a machine with the same capacity as a human.


The boss's instructions are confused and imprecise. The natural language dance in this scenario involves us repeatedly trying and failing and adding more communication until the various terms being used are precisely defined enough that the software can be written.

This is what Dijkstra was talking about: without a formal system we end up wasting a lot of time tightening up what we mean.

It may be an interesting field to see if we can make a computer do this dance but it isn't useful.


The current process is: the boss tells you, the programmer, and then you instruct the computer precisely. With natural language programming, the boss just tells the computer. There is no you now. Why is this faster? Because where you used to take 3 months to get the software done, the new system can do it in 3 seconds [the times are arbitrary; the general assumption is that it's less than a human].

Of course you are still going to have to specify what you want with both NLP and a human. So this time is constant and it is not relevant when comparing the two systems.

What I'm saying is that the NLP would be like a human but faster and less error prone.


An interesting point is that programming a human computer is in itself an acquired skill.

A "boss" with years of experience instructing programmers will get better results than an amateur. Experience with the particular programmer also improves results. An important part of this is knowing what to specify and what to leave to the programmer.

Like you started off saying, looking at this as a way of making programming easier for people who can't currently program is probably wrong. It's possibly useful as something to paddle towards, but I suspect it isn't a genuine raison d'etre.


The problem here is that there is 20+ years of acquired "state" that allows this to happen. Also, think of the hiring process, weeding out potential programmers with "bad" "state"; i.e. they are crap programmers.

If you want a machine with the same capacity as a human you had better have something that can learn at the same rate as a human, and then add 20 years of "life experiences" and education. Then.. _maybe_ you might have a good programmer.

Sorry to rain on your parade, but us meat sacks are better at doing that, evolution has stumbled upon some pretty cool hacks that make it cheap enough (think of a 20 year computer maintenance and energy bill!) to have a "sentient" being.


I don't want to get into the how. The how should be a very open question. Like there is a million and one ways to program something, so there are a million and one ways to create AI. Some implementations are more efficient than others. What you should be asking yourself is how do I make this more efficient than taking 20 years?


It would be good enough to produce just one, even if it took 40 years. As long as you could copy it afterwards.


Do you have any idea how expensive a child is for that same 20 years? Way more expensive than the computer maintenance and energy bill, my friend (I offer my beleaguered bank account as evidence) - and you can't copy the software after maturation, either.


What makes 'natural language' 'natural' is not really the appearance, it is what it is used for. A natural language is one used for communicating between humans. It only works the way it does because of what is on the other end of it -- another person.

And that is why software must be different. Software must always be an 'un-natural language' because its (ultimate, essential) purpose is different: it is not communication but design. It works the way it does because it is for constructing an artifact.

If you want to build complex things, that is equivalent to saying you will have complex requirements. The complexity cannot be squeezed out, substantially, by changing language -- as Brooks said in 'No Silver Bullet', there is a hard core of essential complexity.

That purpose of design seems to mean an essential dichotomy of simplicity vs. control. If you want more simplicity you have to give up some control, and if you want more control you have to give up some simplicity.

One opportunity is that it seems likely true that we over-specify much of our software: we are using more control -- hence complexity -- than really needed. But the solution is not in changing language -- that would be the result. It is in figuring out what to do behind that -- AI figuring out what you want without you saying so. Maybe one day, in particular limited areas . . .


I think we'll see human and machine languages get more alike in the future. We're already seeing a huge increase in the amount of nonfiction written and the level of literacy of the public. English spelling has become more standardized over the past couple centuries. We could say, humans must always communicate by grunting, because they are dumb animals. Or that speech recognition is impossible because machines lack a soul.

Dijkstra is brilliant, but he's also a curmudgeon who never likes anything.


Which boils down to: There's a reason that architects and engineers use schematics, rather than just writing instructions in prose.


"When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

— Arthur C. Clarke's First Law


To be fair, he doesn't so much say it's impossible as say it's a bad idea.


Exactly. The grandparent's comment is anything but relevant. While I do agree with it, it is clear it has nothing to do with this.


Arthur C. Clarke: Science fiction writer.

E.W. Dijkstra: Actual computer scientist.


Clarke was no technical slouch! http://lakdiva.org/clarke/1945ww/


Dijkstra said this in 1978, so he was only 48 at the time.


One could argue that he was an elder with respect to his field at that time.


This holds for everyone, not just elderly scientists. Impossibility proofs are by their nature more difficult than existence proofs.


NLP 1 : Natural Language Programming : ability to instruct a machine using your natural language

NLP 2 : Natural Language Processing : modeling and processing of the natural language phenomenon

NLP 1 requires advancing NLP 2 to a significant level of accuracy, which in turn requires the accuracy and care of the formal symbolism of lower-level programming automata. Having said that, progress has been slow, which oftentimes leads to denouncing the pursuit of research in the NLP field.

The article is 6 years old and a lot has happened in terms of both the processing power an individual has access to now (cloud computing) and the amount of data available (the online information explosion, particularly social media streams), which used to be a bottleneck earlier. This opens up a huge opportunity for breakthroughs in NLP by mathematicians, as mentioned by Yuri yesterday at the Web 2.0 Summit. AT&T's voice-activated remote ( http://www.research.att.com/projects/WATSONASR/index.html ) is an example of natural language programming that is achievable today.


Dijkstra died in 2002, so this is probably older than the 2004 date listed as the transcription date below the article. Looking at the date of other EWD notes in the same number range it was probably written in 1978.


I was thinking about lojban (http://lojban.org) when I read this. I mean we do have programming languages, which are unambiguous. Why should we not alter our "natural", i.e. legacy based, communication when dealing with computers?


> Why should we not alter our "natural", i.e. legacy based, communication when dealing with computers?

Or when dealing with each other, for that matter. In Lojban, it is grammatically incorrect for someone to give me vague directions like "we are at the pub to the left of the central plaza". They would instead have to say something like "we-not-you are now at the only pub to the left (when seen from the cinema) of the only plaza".

Imagine political debates where imprecise language and double meaning are much harder to express. Mmm...

.i mi batci le mlatu


> of the only plaza

Which plaza? Just because you said "the only plaza" doesn't make it unambiguous. There is certainly more than one plaza in any given city.


> Just because you said "the only plaza" doesn't make it unambiguous.

That was just me compressing it to fit somewhat into English. What it really would say is "the only plaza which fits the current context", as opposed to e.g. "the specific place called The Plaza" or "the only plaza in the universe".

The specific Plaza change wasn't the intended point, though. The "to the left of X while standing at Y" as opposed to "to the left of X" is the biggest clarity gain.

It is obviously possible to express ambiguity in any language, even Lojban, but you have to work harder for it.


How exactly is that any different from "the plaza"?

What's even "the current context"?


Again, the "plaza" part of the sentence is not relevant to the actual point I was making. I did not claim that specific change gave a huge boost in clarity.

> How exactly is that any different from "the plaza"?

It is different in that the ambiguity is explicit.

> What's even "the current context"?

http://en.wikipedia.org/wiki/Context : "Context is the surroundings, circumstances, environment, background, or settings which determine, specify, or clarify the meaning of an event." Example context: Me and the person who gives me directions commonly use that specific plaza as a reference point. Another possible context: Directions are related to my small town, which only has one proper plaza.


We understand these things, but they are technically ambiguous; saying "the-only-plaza" doesn't remove any ambiguity. "The current context" is also implied; stating it explicitly doesn't add anything whatsoever.

Actually, referring explicitly to the current context would probably just cause confusion; it implies that we both agree on the exact meaning/content of the "current context", which is often not exactly the case.


"The only plaza" is either unambiguous (in cities with one plaza) or incorrect (in cities with multiple plazas). Just because a language makes it impossible to be unambiguous doesn't mean it makes it impossible to be wrong.


Perhaps you might say "the assumed plaza" for explicit ambiguity?


Ah, yes. That is actually what the Lojban sentence would have said. My "translation" of that part was a bit off.



http://inform7.com/ is a successful counterexample, to some degree: it gets used, and it's much closer to the vision than older attempts like Applescript and COBOL. On the other hand, it's bad at the kind of programs Dijkstra liked to think and write about. I'd say Dijkstra expressed a basically sound but overly narrow viewpoint.


I like Inform, but it boils down to a kind of Prolog augmented with procedure-like sections, and I'm not sure that I'd prefer it to something more terse. I started to write a non-trivial game in it, and it was actually a lot of fun, but I had to look up the syntax a lot because it wasn't always obvious how to do something that was otherwise conceptually simple. For example, you could write something like

    The Lobby is a room. The oak table is in the lobby. The cake is on the table. The cake is edible. Instead of taking the cake, say "Best wait until the party starts before digging in."
But even though it's "natural language," it still _feels like_ I'm writing

    is_room(the_lobby).
    contains(the_lobby, the_table).
    is_on(the_cake, the_table).
    on_taking(the_cake, { print "You can't get ye cake!"; }).
That said, Inform _does_ manage to make programming non-scary to non-programmers, which is a very good thing. It's probably the fact that I expect programming languages to be unlike natural languages that I felt it was awkward. (I personally ended up rewriting the game in Scheme. Never finished it. Maybe I should go back and work on it at some point.)


Does a specification count as native-tongue programming (using this term as NLP seems overloaded)? If it doesn't, why doesn't it? Would it be better if everybody on a project expressed their ideas in a computer programming language?

This seems like an eloquent diatribe against AppleScript, but AppleScript is not what I imagine native-tongue programming to be.



And an analysis of Wolfram Alpha from a viewpoint similar to Dijkstra's: http://unqualified-reservations.blogspot.com/2009/07/wolfram...


If there is a significant enough demand, it will eventually be produced.



