Hacker style versus the Dijkstra style
44 points by mad44 on Feb 11, 2009 | 33 comments
Recently, I have been thinking about two opposing styles: the hacker style (rapid prototyping and incremental improvement) and the Dijkstra (academic?) style (think hard and get it right the first time).

Given a problem, the hacker style is: 1. Suggest a partial solution. 2. Improve the solution, adding to it to cover more cases. 3. Repeat step 2 as necessary.

In contrast, the Dijkstra style is: 1. Think hard to find the RIGHT method. 2. Justify that it is the right method; if that fails, throw it away. 3. Go to step 1 as necessary. 4. Build the solution using the method.

The hacker style gets you started and enables steady progress. Even though the resulting system is not always an elegant solution, you get the advantage of predictability and of always having a more-or-less working product.

The Dijkstra style keeps you from committing the sin of over-specialization and from settling for a complex or suboptimal solution when there is an elegant one that also covers the general case of the problem. The downside is that you may not be able to make any progress at all, since you refuse to accept a mediocre solution.

I am sure the HN community has a lot of success stories and arguments supporting the hacker style over the Dijkstra style. I am wondering whether you have had cases where the Dijkstra style saved you, and I am interested in hearing your arguments and anecdotes supporting the Dijkstra style over the hacker style.




Ok, but Dijkstra would be the last person to use a go to statement in step 3.


↑ Quote of the week.


I never ever upvote witty statements; I think that was the first time. Far too well done.


Point well taken :-)


HAHAHA nice one :D


The difference between the hacker style and the Dijkstra style is more apparent than real. Both rely on successive approximation.

See for example http://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EW... where Dijkstra goes, in painstaking detail, through the process of solving the "8 Queens" problem, building up the solution piece by piece.

A hacker would probably come up with exactly the same algorithm, though probably written more quickly in another language, and with a few more compile/run/debug cycles.
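
For concreteness, here's a hedged sketch of that kind of solution in Python (my own names and structure, not a transcription of the EWD note): it builds the placement up row by row and backtracks whenever a partial placement can't be extended.

    # Backtracking 8 Queens: extend a partial placement one row at a time.
    def queens(n=8, placed=()):
        """Yield complete placements as tuples of column indices, one per row."""
        row = len(placed)
        if row == n:
            yield placed
            return
        for col in range(n):
            # A queen on (row, col) is safe if it shares no column and no
            # diagonal with any queen already placed on an earlier row r.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(placed)):
                yield from queens(n, placed + (col,))

    print(next(queens()))            # first solution found
    print(sum(1 for _ in queens()))  # 92 solutions on the 8x8 board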

The thing is: the mode of attack - breaking down the problem into simpler subproblems, solving one at a time, and in such a way that you're pretty sure that each step is correct - is used successfully by both.

It reminds me of a study I read many years ago in Gerald Weinberg's The Psychology of Computer Programming. The investigators studied the work patterns of many good programmers, and one of the variables they studied was the number of runs per day (this was back in the day when one submitted one's program as a deck of punched cards to be compiled and (possibly) executed). They found that the good programmers tended to divide into two groups. The first (more akin to Dijkstra) mostly submitted a program which compiled cleanly and executed, followed a significant time later by another. The second group submitted a program, got either compilation or runtime errors, quickly submitted a fixed version, perhaps iterating a few times, then went off for a while till they went through the quick cycle again.

The point is: each group consisted of equally good (as far as could be measured) programmers. Both styles are usable, and both styles are necessary to have in your arsenal: you need to be able to think through a problem and prove your solution correct, and you need to be able to tackle it piecemeal and experimentally when it's too tough to tackle all at once.


Exactly. If you know what the problem is, the Dijkstra style is great: system software, compilers, etc.

If you are doing anything with business logic, that's where hacking takes precedence, because the problem you are trying to solve isn't well understood.


Dijkstra advocated thinking about a problem until it is well understood.


But often times you don't have all the information. You can watch how people currently do the task, but that might not be the optimal way once software starts being used. It's better to get them using the software so that they can suggest features, or perhaps even whole new ways of doing the task.

Not all problems can merely be thought about until they're understood; there are quite a few that require experiments to gather further information. That is the hacking approach: rapid experimentation.


Prototyping helps you in understanding it.


Problem is someone usually has to pay the bills while you sit there and think.

In certain cases you have this luxury. Most businesses, however, do not.


That's what pg has been saying all along.


The MIT approach versus the worse-is-better philosophy: http://www.jwz.org/doc/worse-is-better.html


The easy argument supporting the hacker style over the Dijkstra style is the whole "get it in front of the customer and make sure you are solving the right problem" idea. You can solve the wrong problem as elegantly as you want; no one will care.


"get it in front of the customer" is a great idea when you don't understand the problem. Now there are plenty of times when it's OK to not understand the problem, but if your building a driver and still don't understand the problem then something is wrong. Iteration as a process of discovering the question / need, but if the problems well understood it's not really the best solution.


A long time ago, in a galaxy far, far away, I spent a month reengineering a billing program. Previously, it needed 24 hours to issue the monthly bills; after the reengineering it finished in under a minute. It was an easy target; the previous guy had that "hacker" style of thinking, which resulted in a program that was a pile of quick fixes.


"Hacker-style," doesn't mean completely abandoning all planning and thoughtfulness. It means rapid prototyping, sure, and (of course) constant refactoring, not a constant deluge of slapdash quick fixes.

What you inherited was a garden variety mess, possible from any poor practitioner of any school.


That's because it was likely just a pile of quick fixes. At first, most accounting systems are nice and fast, but then one of the guys over in marketing needs this feature added, and one of the managers wants to be able to track his employees in it.

Over time what may have been a relatively quick, agile system has ground to a halt because of all these "Features" tacked on. Does this mean the original style of programming was wrong? Does this mean that every time a new feature or bug was found the entire system should have been reengineered? No. That's what they hired you to fix. It took you a month to recreate the software, with all of the current features built into it. Since the only other real alternative is to keep a developer on hand all the time to re-write the software when needed, the "hack some features on" method seems to be far more cost effective.


But - you had a perfect spec. You had a working program you could use as a reference implementation, and despite rewriting everything, you knew exactly what it needed to do.

The original implementer presumably did not. In reality, you were taking the "hacker" approach, but your iteration was to rewrite everything, because you knew exactly what was wrong with it. The first guy, if he was a true hacker, probably would have abandoned the fixes at some point and done the rewrite as well.


I suggest a bit of both. Start with "Dijkstra style" and finish with "hacker style". Adjust levels of each as appropriate for the project you are working on.


I prefer the opposite; hack together a prototype to get stuff moving. Encounter issues you didn't even think about during the planning stages. Eventually notice your code is a big morass of hacks that you don't want to look at. Enumerate all of the issues with the hacked-together software which could be done much better now that you've got hindsight and experience with the problem domain, and rewrite the software Dijkstra-style.


This relates to the "version 2.0" dilemma. Version 1 of a program works well enough to make money, so the team gets to make a version 2. They try to "redo it right" by making their product do everything conceivable (think emacs), but it's slow and buggy and, in practical ways, worse than version 1. Version 3 (where they would learn from version 2 how to make it elegant) doesn't get written, because the team gets split up after v2's failure. Another example of the v2 problem is Winamp.


I agree. Working everything out is the best way to get all the details into your mind. Once you know all the details, a good design should be much more obvious. You may need some extra coffee to finish the final version; it is tempting to leave it alone once you are intellectually satisfied, since you already have a working version. If you managed a decent modularization of the first version, you may be able to prioritize and redo only the important parts in the final version.


The Dijkstra argument is that looking at examples and learning about concrete cases may prevent you from thinking about the most general, abstract problem, for which you may devise an elegant solution. So according to the Dijkstra argument, the prototypes you create may bias or prejudice your thinking, and should be avoided if you want to get to an elegant, optimal, general solution.


That sounds right; I can think of some revamp/rewrite projects that felt stunted because we only really thought about the problem in terms we had already created in the first iterations.

However, in practice I find that programmers don't necessarily know what the (software engineering) problem is, what feasible solutions are out there (feasible != possible), don't know what the drawbacks to their plan are, and generally feel unmotivated if they don't start banging out code relatively soon. It seems really easy for a project to stay indefinitely in the planning stages if you want to make sure it's a "perfect" solution, and it's very hard to know that a solution is "perfect" without encountering its drawbacks through experimentation, i.e. coding.


I often do this without even thinking about it -- I'll start a new program by writing a working prototype, but I won't even run it through the parser until I feel that it is minimally complete and have "proven it correct". It lets me focus on writing and thinking instead of editing and the 'trivial' matter of whether it works at all.

Then I spend some time fixing stupid syntax errors and namespace collisions so it parses and works, init version control, and after that I do super-short iterations.


It seems that a good balance would be to rapidly prototype as a method of understanding the solution (in fact, for most proofs I've done, this is exactly the method: do a few specific cases and try and generalize them).

This then lets you start hacking right away, but without loss of direction. It will also help you find the general solution.


Let me repeat a comment I made in response to a similar comment above:

The Dijkstra argument is that looking at examples and learning about concrete cases may prevent you from thinking about the most general, abstract problem, for which you may devise an elegant solution. So according to the Dijkstra argument, the prototypes you create may bias or prejudice your thinking, and should be avoided if you want to get to an elegant, optimal, general solution.


Partly it boils down to the sort of problem you're trying to solve.

I find that if your problem is not very domain-specific, i.e. if you can strip out the labels and consider what you're doing as manipulations of relatively abstract objects (rearranging lists and graphs and so on), then up-front thinking is repaid many times over.

There tend to be lots of corner cases (what if this list is empty, what if this graph has no nodes, ...) and accidental infinite loops you can fall into. Thinking about it and specifying the algorithm up front forces you to take a coherent view on your problem domain.

If you approach this sort of thing iteratively (well, if I approach this sort of thing iteratively), I tend to find that even if I can get my initial use case working quickly, I haven't got the problem well enough established in my own mind, so I end up fixing things in an ad hoc manner, squashing bugs in one corner case only to find 5 or 6 cropping up elsewhere because, e.g., in half the code I assume X can never be 0, while in the other half I've been using 0 as a marker value for invalidity.
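
A contrived Python sketch of that inconsistency (the names are made up purely for illustration):

    # One half of the code uses 0 as a marker for "invalid input"...
    def parse_weight(field: str) -> int:
        try:
            return int(field)
        except ValueError:
            return 0

    # ...while the other half assumes 0 is a perfectly legal weight,
    # so invalid rows silently vanish into the total.
    def total_weight(fields):
        return sum(parse_weight(f) for f in fields)

    print(total_weight(["3", "oops", "4"]))  # 7, with no hint that "oops" was dropped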

Solving that sort of problem, I find, is much better done by thinking hard about it beforehand and working out what you're trying to do. Of course, in any case, you should surround your method with lots of test cases documenting what it's doing as well.

This is very different I think from the argument of agile development against waterfall-style development. The hard thinking in this case is at most a few hours - it fits in entirely with agility.


Debugging comes to mind: I think most will agree that it's much better to take the time to understand the underlying cause of a seemingly mysterious bug, and apply a patch that makes logical sense at the correct layer of abstraction, rather than patching and repatching the layer of abstraction where the symptoms appear, without understanding the root of the problem, just to get the code to compile (or get someone off your back or whatever).


Often, the best approach is a blend of the two, I find.

Identify the aspects of the problem that you don't understand fully, and which aspects are likely to change. There's usually some part of the problem that you do understand.

Then spend a day thinking up an elegant framework that allows you to experiment without making a mess. Small classes that can be swapped out or added and removed as needed. Modules that can be customized and moved around. A strategy pattern that lets you tweak your algorithms as you get more data. Whatever makes sense for your application.
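
As a hedged sketch of that last idea in Python (the ranking functions are entirely hypothetical; the point is that the surrounding code stays fixed while the strategy gets swapped as you learn more):

    from typing import Callable, List

    RankFn = Callable[[List[float]], List[float]]

    def rank_naive(scores: List[float]) -> List[float]:
        # First guess: use the raw scores as-is.
        return scores

    def rank_normalized(scores: List[float]) -> List[float]:
        # Later experiment: normalize once real data shows wild ranges.
        top = max(scores) or 1.0
        return [s / top for s in scores]

    def report(scores: List[float], rank: RankFn = rank_naive) -> List[float]:
        # The caller never changes; only the strategy passed in does.
        return sorted(rank(scores), reverse=True)

    print(report([3.0, 12.0, 7.0]))                         # naive ranking
    print(report([3.0, 12.0, 7.0], rank=rank_normalized))   # swapped-in strategy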

"Hacker style" is no excuse for sloppiness; nor is "Djikstra style" any excuse for slowness.


I second that. Actually, I think the OP's two styles are two extreme ends of one spectrum. Plus, the OP doesn't account for differences in the size of the problem domain.

Outstanding academic papers look perfect, and it seems as if the authors came up with the final version immediately, but in reality, many problems dealt with in academic papers are attacked progressively. For bigger problems, you'll find a series of papers progressing from special-case solutions to more general ones. Wiles gave a proof of only special cases of Taniyama-Shimura, since that's enough to prove Fermat's Last Theorem (so I heard; I can't understand the actual paper).

On the other hand, if you're writing a critical part of nonstop software and trying to fix a multithreading bug, you want to iterate over every possible timing combination to make sure things can't possibly break. It's much like the process of writing a math proof.

And then there's the problem domain. The Dijkstra style works well when applied to problems whose domain is well understood. That's why it requires so much thinking before actually writing solutions; you have to understand the problem clearly enough. For much software, the environment in which it runs is more vaguely defined; anything that interacts with humans has to incorporate that big unknown, unpredictable element called a human. Software that supports a business must adapt to the business process, in which many details are left to the judgment of skilled humans without any explicit rules. Software for such problems can't have a well-defined domain before you start writing in the Dijkstra style.


HackerMethod: SelectPartialSolution; GOSUB DijkstraMethod; REPEAT AddCaseToSolution; GOSUB DijkstraMethod; UNTIL AllCasesCovered.



