Hacker News
Systems Programmers Relax - Most Programming Advice Isn't For You (axisofeval.blogspot.com)
124 points by yewweitan on Nov 30, 2010 | hide | past | favorite | 34 comments



This is a huge problem with a lot of discussion about software development practices. Software spans an astronomically huge realm of scales and varieties, yet so often we talk about it as though it's a monolithic thing. As if you could talk about bacteriophages and pachyderms, or cap guns and moon rockets, in identical terminology without any loss of fidelity in describing either.

It is very true that software development techniques and practices should differ, perhaps even greatly, depending on the nature of the project. What matters most is having the experience and knowledge to know when and where to use one technique over another, rather than being a mindless sponge of advice.


I agree. But are K-12 teachers aware of this when teaching CS? I'm afraid they inadvertently reinforce monolithic thinking in the youngsters.


I don't think K-12 is the problem here. Usually the problems I'm teaching are trivially small (find max of a list, etc). The only other CS class we offer is the AP class, and it's taught to the test - but it's an algorithms and data structures class, not a software engineering class.

It seems to me like the problem is at the college level. My school offered just one software engineering class, which taught one monolithic method of developing software. A couple of semesters developing different types of software using different methods of development would have been nice.


As much as I love to rag on education, psychologically you have to be of a certain age to really get nuance at all, and no amount of attempting to jam "nuance" into a fourth-grader's mind is going to work, no matter what you do. Monolithic, rigid thinking is endemic to pre-formal-operational humans (to borrow Piaget's terminology and to give you something you can google if you like).


Or what about the businesses? Certainly not all programmers can be amazing at smashing out good-enough work; others are better at deeper, longer-lasting design.

I think there is an expectation that a 'seasoned' developer can be good at all types, but often that's not the case.

We certainly need a mix of both, and they absolutely need to get along (to some degree).


This is certainly a good counterpoint to the highly visible silicon valley status quo, but it's risky. Deep thinking might be the solution to a given problem; on the other hand, it might give you insight much more slowly than iterating on a prototype. One of the benefits of computers is that experimentation is easier and cheaper than in most other technical disciplines, so it's good to be able to whip up some experiments to glean practical information. Arriving at optimal solutions to difficult problems is not an endeavour that is well served by a monolithic approach.

In a way it's the contrapositive of one of my annoyances with TDD. Sometimes TDD advocates (along with anyone selling a methodology, I suppose) get overzealous and start making ridiculous claims like "TDD ensures program correctness". When pressed they may backpedal and acknowledge that unit tests are no substitute for mathematical proof. Yet there remains a dogged obsession with one particular technique. Take the experiment of writing a Sudoku solver via strict TDD (http://xprogramming.com/xpmag/OkSudoku). It's obvious that TDD doesn't help you write better algorithms. It's a good discipline for ensuring test coverage, it gives good sanity checks, and it can even help the modularity of your program, but it sure as hell doesn't inherently lead to better code.
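To make that concrete, here's a minimal sketch (the function and test names are mine, not from the linked article): test-first development comfortably drives the easy, checkable pieces of a Sudoku solver, but nothing in the tests suggests the backtracking search that actually solves a puzzle.

```python
# TDD-style: these tests were "written first" and pin down easy, checkable
# behaviour of one small piece -- but they say nothing about how to design
# the search algorithm a real solver needs.

def is_valid_unit(unit):
    """A row/column/box is valid if its non-zero digits are all distinct."""
    digits = [d for d in unit if d != 0]
    return len(digits) == len(set(digits))

def test_empty_unit_is_valid():
    assert is_valid_unit([0] * 9)

def test_duplicate_digit_is_invalid():
    assert not is_valid_unit([5, 3, 0, 0, 5, 0, 0, 0, 0])

def test_full_distinct_unit_is_valid():
    assert is_valid_unit([1, 2, 3, 4, 5, 6, 7, 8, 9])

test_empty_unit_is_valid()
test_duplicate_digit_is_invalid()
test_full_distinct_unit_is_valid()
```

The tests are great sanity checks for this helper; they just don't generate the algorithmic insight.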

So whether you spend years thinking things through before writing a line of code, or most of your time writing and rewriting tests to ensure perfect coverage, it's always worth reflecting on the effectiveness of the time spent and diligently avoiding cargo-culting methodologies.


I have to agree.

As a long time device driver and embedded systems programmer, I've found that thinking about the solution only gets you so far. Almost always, intelligent prototyping will make progress quickly. Especially when working with poorly documented hardware, or a system that will never be fully characterized (imagine a complex electromechanical device where the output lags the software commands by many seconds or milliseconds and not in an immediately obvious way), a quick experimental script is often the only way forward.


The biggest downside of experiment based engineering is that if you aren't careful you can easily slip into programming by coincidence rather than intentional engineering. This can lead to unstable systems that nobody understands.

The key question: when new requirements (an inevitability) force you to re-engineer a key element of your system (an inevitability), will you have to perform more "experiments" to figure out which random mutations of your previous design yield the desired results, or will you be able to mostly plan ahead of time how to change your design to have the desired characteristics?

If it's the former then you may be faced with the prospect of a change of requirements that you are incapable of coping with, causing your business plan to die. Many software companies have met that fate.


While very useful, mathematical proofs are not substitutes for unit testing, either.

"Beware of bugs in the above code; I have only proved it correct, not tried it." - Donald Knuth


After that whole thing I wrote, do you really think that's my opinion?


No: I wasn't trying to refute what you wrote. I was pointing out to the reader that, though you were arguing (rightly) that test cases are no replacement for proofs, the bugs that test cases catch may overlap with the bugs that a proof will catch, yet the former set is not a subset of the latter.

If I offended your sensibilities, my apologies.


I'm sorry, as someone who works on a production operating system that a planet full of people use, I respectfully disagree - the hard part of systems programming is making sure the application API is 100% semantically correct, since it is written in stone once you release it - architecture and programming advice is absolutely for you if you're one of these people.

Running usability tests with real developers on your API is extremely important to understanding how understandable and straightforward your design is - if you're an OS or a platform component, you don't get a 2nd chance to design an API; you do it right the 1st time or you live with the consequences for years to come.


I'm not sure you necessarily disagree with the article. He argues against the "do the simplest thing that could possibly work" principle. He basically says, don't release code that isn't properly thought through if you do something that many others will depend on. You are saying something quite similar, but you stress the importance of running usability tests _before_ releasing code as thinking alone might not be enough. I agree with that, but it's not the same as saying ship the simplest thing that _could_ possibly work. Quite the contrary.


you don't get a 2nd chance to design an API

I think that is exactly the author's point, at least if you think that the best way to get to the Right API is to "think about it really hard" instead of "write a bunch of crappy prototypes".


This is something I've been thinking about for a while now that I'm pursuing my own startup. Coming from a background in HPC, my papers were published using prototypes and proof-of-concepts. On the other hand, a product needs to work at all times. Hence, the idea of an MVP from a systems standpoint is rather fanciful -- if an MVP works for 1TB of data (or worse, 5TB of data), but doesn't work for 10TB or 100TB, then re-normalizing after the fact is a huge issue.

As a question to other startups based on novel systems, did you feel that an MVP was the right way to go? If not, what did you do differently so as to not labor in "almost, but not quite ready" limbo?


As someone who is building a startup that deals with slightly-more-than-usual data and who has been involved with various startups that similarly pushed the data-boundaries in later phases: Yes.

Make it work. Make it right. Make it fast. In that order.

Restructuring data is, in most cases, a smaller issue than it may seem. Running systems don't normally crash and burn overnight. More often you'll see it coming and have plenty of time to apply bandaids while transitioning to a new system.

Doing so is much easier when you can actually see and measure the bottlenecks, rather than trying to predict them beforehand.

In other words: Embrace the limbo. ;-)


I think MVP does not mean "crappy product". An MVP is just a version of the product (working, tested, etc.) that can collect the maximum amount of validated learning with the least effort. In other words, my understanding is that an MVP should cover the main use case in order to validate whether that is what your potential customers want (as my mentor used to say, "it must not suck"). So, in your example, if scaling up to 100TB is the key value proposition, then you have to have it. On the other hand, a CLI, a nice GUI, a nice installation, etc. are probably not part of your MVP.


I have to say that this article succeeds at saying nothing.

It starts with an invalid premise (that 99% of programming advice amounts to "Do The Simplest Thing That Could Possibly Work") and then "criticizes" DTSTTCPW by bringing up completely unrelated issues (thinking vs. coding ratios), along the way making vague and disconnected distinctions between systems and application programming (is MySQL a system or an application? What is the crucial characteristic that makes some programming advice applicable to one but not the other?).

The opposite of DTSTTCPW is either doing things that don't work or things that do work but are more complex than necessary. There really isn't room to disagree with DTSTTCPW.

Thinking vs. coding is a false dichotomy. At the end of the day I don't care if Richard Hipp thinks for 8 hours and types for 10 minutes or if he types for the whole day without thinking first, as long as SQLite is kick-ass software.

Personally I found that there's a (very small) limit to how much I can design up front (thinking phase) before I start implementing my ideas (coding phase). I invariably find that coding improves my understanding of the problem (often forcing me to update my designs) as well as giving me more ideas (it's not like my brain shuts off during coding - thinking is going on in parallel).

This is the core of the agile argument: we're not smart enough to think about everything up front, so the only realistic option is to start small and simple and build more complex things as we go and learn.

The author might be content with the fact he hasn't shipped his software in 9 years but I wouldn't take his opinions as relevant to what I aspire to do: writing useful software and shipping it.


I hate this divide between "applications" programmers and "systems" programmers. Really good systems programmers will either have worked on applications themselves or will work in close proximity with application programmers. And really good application programmers will have deep knowledge of the systems stack under their app.

Which category would you put Linus Torvalds into? He wrote Linux, but he also wrote git. How about Jeff Dean? He wrote MapReduce and much other core Google infrastructure, but he also played a large part in the indexing & serving system. Guido van Rossum? He started Python, but he also wrote Mondrian.


Two interesting links in here on the history of unix pipes that I wasn't aware of: http://doc.cat-v.org/unix/pipes/ and http://cm.bell-labs.com/cm/cs/who/dmr/hist.html#pipes

The second link shows this as a formerly valid command:

  pr <"sort input"< >opr>
Ouch!
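If I'm reading the old notation right (an assumption on my part, based on dmr's history page), that command is just today's `sort input | pr | opr`. A runnable modern analogue of the same pipeline shape:

```shell
# The pre-'|' syntax spliced commands together with '>' and '<':
#   pr <"sort input"< >opr>
# i.e. pr reads from a pipe fed by `sort input` and writes to a pipe
# into opr (an offline-print spooler). Modern spelling: sort input | pr | opr
# A self-contained demo of that shape (pr -t suppresses page headers):
printf 'banana\napple\ncherry\n' | sort | pr -t
```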


Best blog name ever.


I've seen this blog on HN a few times, and every time my first thought is 'I wish I could think of such a clever name for my blog/business/domain/etc.'. But so far, nada :(.


I accidentally downvoted, but I have to agree.


Upvoting is not for agreement. Upvote if you feel that the comment is well thought out, puts forward interesting points, or is otherwise worthy of the attention of fellow HNers. "Upvote" == "I want to see more comments like this".

Do not upvote just because you agree. I agree with the comment nominally, but I downvoted it, because it is a waste of space.

Simple test: if there were more comments like this, would HN be a better place? If yes, upvote. If no, downvote. If indifferent/not sure, move on to next comment.


> Simple test: if there were more comments like this, would HN be a better place? If yes, upvote. If no, downvote. If indifferent/not sure, move on to next comment.

I agree with you here. The problem is that this is not directly obvious by only observing the upvote arrow. It might communicate more things than this, like "I agree with this comment." So, unless you tell them (and keep reinforcing this meaning), different people will read it differently.

I would even argue that it would be a good idea to replace the upvote arrow with a link named "I want to see more comments like this". That would stimulate more well-thought-out, relevant comments, while agreeable/funny comments would still be there, but at the bottom of the page. These might be harmless, but they don't deserve lots of upvotes.


Usually I read HN comments before the actual article, and as a result I was unaware of the blog's title before reading his comment. It provided value to me, at least.


The crux of DTSTTCPW is defining the problem. It's difficult to work in the abstract if you don't have a very strong sense of what your goals are. You can see in the development of pipes that they had McIlroy's "Summary - what's most important" memo, probably a multitude of other such memos, a strong team and eight years. Most people have themselves, a couple of scratched notes and a few months... Getting lost in the dark abysses of the abstract in these circumstances is not fun. Suboptimal is your saviour, if not the world's.


"Thus, programming advice of the DTSSTCPW variety (i.e. 99% of programming advice) always makes me a bit uneasy. Instead of thinking - shouldn't I just be crankin' out code, putting it online, and blogging about it on LJ?"

Trying to think what LJ is, but drawing a blank. LiveJournal? (even though his blog is on blogspot?) Anyone know?


Yes, LiveJournal. Presumably he's trying to make what he's doing sound silly since LJ is, y'know, this wacky online diary thing.

I'm kind of amazed that blogs got significantly more respect, but LJ remained firmly below MySpace in being taken seriously. That takes doing, you know?


It's a reference to the Facebook movie.


In another 10 years people ought to be wondering what FB stands for.


For critical stuff the current mainstream agile thinking is naive.

Some years ago I also believed that debugging was evil! That your software must work in your head, on paper, and in the computer.

But that's mainly algorithm-oriented thinking; when your software is integrated with a lot of third-party pieces, it's impossible to progress without debugging.


I think the key, underlying, fundamental thing is applicable to all programming - as far as possible, write your code to be understood by other human beings.

To me all good practices stem from this - I often get annoyed by cargo-cult, dogmatic programming practices that remain detached from this 'golden rule of programming'...


It's important to think, but sometimes it's better to move, especially when I just think to think. "Don't think, but feel." -- BL. "For a fool, thinking is the same as resting." -- common Japanese saying.



