> Operating system kernels would be an example of that: The best test for a *nix OS kernel is whether it can run a shell. You need all the essential syscalls to do something sensible, and if any of the required parts doesn't work, the whole thing fails.

So start with something simpler. Start by making a kernel that can run /bin/true, that never reclaims memory, that only boots on whichever VM you're using for testing. You absolutely can start with a kernel that's simple enough to write in a week, maybe even in a day or an hour, and work up from there. See http://www.hokstad.com/compiler for a good example of building something in small pieces that you might think had to be written all at once before any of it could be tested.
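
To make "simple enough" concrete: a statically linked stand-in for /bin/true needs essentially one syscall, so a first-milestone kernel only has to get program loading and exit working. A rough sketch, assuming x86-64 Linux syscall numbering (build with gcc -static -nostdlib -nostartfiles):

    /* A minimal stand-in for /bin/true: no libc, no startup files.
     * The only kernel service it needs is the exit_group syscall,
     * which makes it a nice first milestone for a toy kernel.
     * Assumes x86-64 Linux syscall numbers (exit_group = 231). */
    void _start(void) {
        __asm__ volatile (
            "mov $231, %%rax\n\t"  /* SYS_exit_group */
            "xor %%rdi, %%rdi\n\t" /* exit status 0  */
            "syscall"
            : : : "rax", "rdi");
        __builtin_unreachable();
    }

Once a kernel can load that and handle its one syscall, you have a test that either passes or fails, and every feature after it can be grown the same way.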

> I spent the past 3 weeks doing exactly that, refactoring a code base. I knew exactly where I wanted to go, but eventually it meant working for about a week on code without being able to compile it, let alone test it, because everything was moving around and getting reorganized. However, now I'm enjoying the fruits of that week: a much cleaner codebase that's easier to work with, and I even managed to eliminate some voodoo code nobody knew the purpose of, except that it made things work and things broke if you touched it.

Which is great until you put it back together and it doesn't work. Then what do you do? I've watched literally this happen at a previous job, and been called in to help fix it. It was a painful and terrifying experience that I never want to go through again.

In my experience, with a little more thought you can do these things while keeping everything working at the intermediate stages. It might mean writing a bit more code, writing shims and adapters and scaffolds that you know you're going to delete in a couple of weeks. But it's absolutely worth it.
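
To make that concrete (all names here are hypothetical): a throwaway shim that keeps old call sites compiling while they migrate to a new interface can be as dumb as this:

    #include <stdio.h>

    /* New interface: load configuration from an explicit path. */
    int config_load(const char *path) {
        printf("loading %s\n", path);  /* stand-in for the real work */
        return 0;
    }

    /* Temporary shim preserving the old signature; delete it once
     * the last caller has migrated to config_load(). */
    int parse_config(void) {
        return config_load("/etc/app.conf");  /* the old hard-coded default */
    }

    int main(void) {
        return parse_config();  /* old call sites keep working unchanged */
    }

The shim costs a few lines, but it means the build stays green and testable at every step of the migration instead of only at the end.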




If you haven't already, you really need to see this post.[1] Starting simple and writing a test-driven algorithm is not necessarily bad. However, realize that you are really just turning the search for the optimal solution into a walk through a search space, where you have to assume mostly forward progress at all times. Not a safe assumption. At all.

And, because I love the solution, here is a link to my sudoku solver.[2] I will confess some more tests along the way would have been wise, though I was blessed with a problem I could just try out quickly. (That is, running the entire program is already fast. I'm not sure of the value of running the tiny parts faster.)

[1] http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-s...

[2] http://taeric.github.io/Sudoku.html


I've seen it, but it's just utterly alien to my experience. Partly the problems I encounter professionally don't look much like Sudoku; partly the things that are important in a large codebase are different from the things that are important in a small example. But mostly I think people realise when they're not getting anywhere - and if they don't, others will point it out. That's partly why TDD tends to be found in the same places as agile with daily standups - you get that daily outside view that stops you just spinning your wheels the way that blogger did.


I have seen exactly this style of thinking happen in a large code base. Some of it was my own, sadly.

The odd thing to me is that you say this style of problem doesn't happen in a large code base. But, to me, a large codebase is where this style of problem happens many times over. That is, large problems are just made up of smaller problems. Have I ever used the DLX algorithm? No. Do I appreciate that it is a good way to look at a problem you are working on? Definitely. I wish I had more time to consider the implications there.
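
(For anyone unfamiliar: DLX is Knuth's "dancing links" implementation of Algorithm X for exact cover, which Sudoku reduces to. The heart of it is that a node unlinked from a circular doubly linked list still remembers its neighbours, so undoing a trial move while backtracking is O(1). A bare sketch of just that move, with illustrative names, not from any particular implementation:)

    #include <stdio.h>

    struct node {
        struct node *left, *right;
        int value;
    };

    /* Unlink n, but leave n's own pointers intact. */
    static void cover(struct node *n) {
        n->left->right = n->right;  /* neighbours forget n ...        */
        n->right->left = n->left;   /* ... but n still remembers them */
    }

    /* Relink n where it was: the O(1) undo that makes backtracking cheap. */
    static void uncover(struct node *n) {
        n->left->right = n;
        n->right->left = n;
    }

    int main(void) {
        /* Three nodes in a circular list: a <-> b <-> c <-> a. */
        struct node a = {0}, b = {0}, c = {0};
        a.value = 1; b.value = 2; c.value = 3;
        a.right = &b; b.right = &c; c.right = &a;
        a.left  = &c; b.left  = &a; c.left  = &b;

        cover(&b);                       /* try a branch without b */
        printf("%d\n", a.right->value);  /* prints 3: b is gone    */
        uncover(&b);                     /* backtrack              */
        printf("%d\n", a.right->value);  /* prints 2: b is back    */
        return 0;
    }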

More subtle in your post, to me, is the idea that with the right people these problems don't happen. That leads me to this lovely piece.[1]

[1] http://www.ckwop.me.uk/Meditation-driven-development.html



