Developing When Your Computer is too Fast (dustin.github.com)
52 points by gsteph22 on Nov 12, 2010 | 31 comments



I don't believe I communicated clearly enough in the creation of this project.

This is not ``Yay, I don't ever have to worry about any other computer because I can simulate anything.''

This is me sitting next to my QA guy (who really understands what I did better than I do so far) and having him say, ``Why don't you just make it fast for the first ten minutes, then slow down socket operations briefly, then switch a pattern of speedup and slowdown on disk reads?''

I want to do more fault injection stuff. I'm being asked to do stuff like manipulate the data (sometimes), or lie about the results of certain operations.
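For the curious, the underlying mechanism is plain old library interposition. A minimal standalone sketch of that idea (this is the general technique, not labrea's actual code) that makes read() lie and report an I/O error now and then:

    /* fakeio.c -- standalone sketch of the interposition idea (not labrea's
     * actual code): make read() lie and report EIO on roughly 1% of calls.
     * Build: gcc -shared -fPIC -o fakeio.so fakeio.c -ldl
     * Run:   LD_PRELOAD=./fakeio.so ./your_program
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <errno.h>
    #include <stdlib.h>
    #include <unistd.h>

    ssize_t read(int fd, void *buf, size_t count) {
        static ssize_t (*real_read)(int, void *, size_t) = NULL;
        if (!real_read)
            real_read = (ssize_t (*)(int, void *, size_t))dlsym(RTLD_NEXT, "read");

        if (rand() % 100 == 0) {   /* ~1 call in 100 pretends the device failed */
            errno = EIO;
            return -1;
        }
        return real_read(fd, buf, count);
    }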


I understand your motivations better now. Being able to run these ad hoc tests in a repeatable manner isn't the same as firing up IE6 on a Pentium 4 box with 256MB of RAM.


Fun, but the good thing about slow computers is that they're also the cheapest; it's far more economical to just buy an old box for a couple hundred dollars (or even cheaper).


Sure, but then I have to carry around another computer or two when I want to work within a particular assumption.

Now, I can say, ``What if this box were really slow to respond to network requests 10% of the time?'' and just do it.
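As a toy illustration of that "slow 10% of the time" idea (illustrative only, not how labrea is actually configured), an LD_PRELOAD wrapper could delay a fraction of recv() calls:

    /* slownet.c -- illustrative only (not labrea's configuration format):
     * delay roughly 10% of recv() calls by 500 ms to mimic a box that is
     * slow to answer network requests some of the time.
     * Build: gcc -shared -fPIC -o slownet.so slownet.c -ldl
     * Run:   LD_PRELOAD=./slownet.so ./server
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <time.h>

    ssize_t recv(int sockfd, void *buf, size_t len, int flags) {
        static ssize_t (*real_recv)(int, void *, size_t, int) = NULL;
        if (!real_recv)
            real_recv = (ssize_t (*)(int, void *, size_t, int))dlsym(RTLD_NEXT, "recv");

        if (rand() % 10 == 0) {                              /* ~10% of calls */
            struct timespec pause = { 0, 500 * 1000 * 1000 };  /* 500 ms */
            nanosleep(&pause, NULL);
        }
        return real_recv(sockfd, buf, len, flags);
    }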


Carry around, repair, keep patched, etc. It's only cheap if your time is cheap.


I have been programming since the late '80s, and one thing I learned early was to always develop (or at least extensively test) on older hardware. My primary work machine is usually a generation or two old; at present I code on a Thinkpad T42, and there is a new T60 waiting to take over in a couple of months. If you're forced to develop in a "slow" environment, you learn to optimize your code from the get-go, all the time. You automatically rely on faster techniques as the norm instead of going back and fixing things later. As a result, I now frequently see my applications running at clients' offices and think to myself, "holy crap, that's fast." Another side benefit is that it keeps all my "fun" applications on a separate, more current machine, away from work.


I also started programming in the late '80s, and I use that technique too. Currently, my netbook is more powerful than my primary laptop. :-)

My laptop can also vary its CPU clock speed from 2GHz down to 200MHz, which makes it easy to run CPU-bound benchmarks of my programs.
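If it helps anyone, here's one hypothetical, Linux-only way to cap the clock programmatically; the sysfs path and the set of allowed frequencies vary by machine, you need root, and the governor has to honor the limit:

    /* capfreq.c -- hypothetical Linux-only sketch: cap cpu0 at ~200 MHz via
     * the cpufreq sysfs knob.
     */
    #include <stdio.h>

    int main(void) {
        const char *knob = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq";
        FILE *f = fopen(knob, "w");
        if (!f) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "200000\n");   /* value is in kHz, so 200000 == 200 MHz */
        return fclose(f) == 0 ? 0 : 1;
    }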


The point he's missing is that you don't have to develop software on the same computer you test it on. The best solution is to have a separate test computer that's appropriately slow. In fact, most people probably already have an old one lying around that they could use.

And regardless of speed, it's good to test your software on a variety of systems.


I certainly do test on many different systems, but not all of them.

This doesn't take the place of real-world testing. It takes the place of setting up a network and putting a modem between my client and server then attaching a tape drive to my server to see what happens.

So far, it's been quite useful, but I've got a ways to go.


I recommend doing the same thing for web development.

There is a nice Firefox plugin (Firefox Throttle) that lets you throttle uploads and downloads with a single button click. If, like most web developers, you do development on localhost, the plugin lets you see how your users experience uploading large files or downloading image-rich pages with network latency.

Unfortunately it's been pulled from the add-ons library search, but you can download it directly (for now) using the FTP URL in the comments here:

http://support.mozilla.com/en-US/questions/755876#answer-107...


Another site that rips off the HN comments verbatim?

Hi everybody, I'm doctor copyright-infringement!


That's Disqus (YC S07). Its "reactions" piece pulls in comments from sites like HN, Reddit, and Digg into the comments widget it puts on a given blog post.


I see. Does HN say somewhere that "anything you submit can be used for commercial purposes by YC startups"?


Interesting point, I googled for 'who owns copyright blog comments' and found this (Google cache, the site itself is down):

http://webcache.googleusercontent.com/search?q=cache:6GfoIFr...

I'd like to hear if anyone else agrees or disagrees with this article (it seems to be a bit of a grey area).


I think it was when talking to Jonathan from Plagiarism Today (http://www.plagiarismtoday.com) that I was told that forum posts should be regarded as the authors' own, legally copyrighted property. I may have some old notes on this around that I can dig up.

Some forums (and sites like YouTube) write Terms of Use/EULAs that have users sign over their content rights to the site owners (not for any nefarious reasons), although I doubt that these "agreements" hold up.

If I had any comments or forum posts that were copied verbatim in a manner that upset me, I would definitely pursue it legally. But I'm a fanatic like that.


Bleh. It looks pretty useless without threading.

EDIT: By threading, I mean conversation threading; this isn't a misplaced comment about the performance of La Brea.


I concur. I've never really found it useful.


It's useful in that it lets you know that there's a conversation going on somewhere else when you're looking at the comments on the page. It might be enough to say, ``active conversation as of x [unit of time] ago over here: [link]''


Nice work. But be careful about relying entirely on something like this. There are more than just seek time differences between hard disks and SSDs: they have different internal cache behavior, different latency variances, queuing differences (depending on how the devices are configured and how you're using them) etc.

This seems like it'll get you 80% of what you want. But it'd be useful to have an actual disk to test on as well.


To add to this:

I think this is a great idea, but having worked on benchmarking database structures in the past, I'd be wary of using something like this for any kind of real benchmark.

For one, trying to model a real disk gets very complicated very fast. Access time, for example, is a function of position on disk: injecting random delays while scanning a large chunk of contiguous data would be unrealistic, and injecting only a few delays into a random-access-heavy load would be just as unfair.

In short, trying to model complicated disk latencies is pretty hard, and if you're programming with some model of the disk in mind, building a disk-latency simulator under that same model may end up giving you a false sense of security.

For what it's worth, I'd favor getting a cheap hard disk and trying the load there instead.
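To make the point concrete, here is a purely illustrative toy (nothing like a real drive model) where the injected delay scales with the "seek distance" from the previous access instead of being uniformly random; it still ignores caching, queueing, and rotation, which is exactly why a cheap real disk is the safer benchmark:

    /* toydisk.c -- toy position-dependent delay for pread(), for illustration.
     * Build: gcc -shared -fPIC -o toydisk.so toydisk.c -ldl
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <time.h>
    #include <unistd.h>

    static off_t last_end = 0;

    ssize_t pread(int fd, void *buf, size_t count, off_t offset) {
        static ssize_t (*real_pread)(int, void *, size_t, off_t) = NULL;
        if (!real_pread)
            real_pread = (ssize_t (*)(int, void *, size_t, off_t))
                             dlsym(RTLD_NEXT, "pread");

        /* distance from where the last read ended, as a stand-in for seek cost */
        off_t dist = offset > last_end ? offset - last_end : last_end - offset;
        last_end = offset + (off_t)count;

        long delay_ns = (long)(dist / 100);        /* ~10 us per MB of "seek" */
        if (delay_ns > 10 * 1000 * 1000)           /* cap at 10 ms */
            delay_ns = 10 * 1000 * 1000;
        if (delay_ns > 0) {
            struct timespec pause = { 0, delay_ns };
            nanosleep(&pause, NULL);
        }
        return real_pread(fd, buf, count, offset);
    }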


I actually have a dev box that's an older Athlon XP with 1 GB of RAM that is surprisingly useful for finding performance issues. Things that don't show up on my monster desktop show up -really- fast on the clunker.

I'm a firm believer that developers should have old hardware to test on. :)


Wouldn't all this depend on your target customers? If your target customers are big businesses with fast computers, you're okay.

If your customers are average joes, then yes I would say you should have a test environment in place that closely represents those customers.

Even after all that, the question I have is: which set of customers brings in the most money for you? Let's say you have a mix of big businesses that run fast machines and average joes that run slow machines. If 90% of your money comes from big business, why bother trying to make the software work on slow machines? Just slap on a system requirement and it should cover them, so to speak.


Incidentally, speaking as a webdev who targets big businesses: they in fact usually have -slow- computers. Javascript in IE 6 is pokey, and typical use cases of Outlook and Excel don't need much firepower.

A joe-blow at home, on the other hand, is at least at the "corporate" level, or even higher if they're gaming or running a media center.


Great work, but I think a library to do the opposite would be more useful.


... to make your computer go faster than it really is?

That WOULD be pretty useful.

Or are you merely talking about libraries that do optimization for you?


The former!


I appreciate this sort of software, since our development and production hardware has vastly different performance profiles.

Development is on fast Intel single- or dual-core machines with slow disks, and production is on slow, many-core SPARC machines with fast disks.


What about developing or testing in a virtual machine?


VMs running on my box might not get a lot of CPU time, but will still get good IO performance.

It's going a bit beyond just simple performance, too. For example, I can add logging of reads on client network file descriptors like this:

https://github.com/dustin/labrea/blob/master/examples/lognet...


I can see this working, but honestly, wouldn't it make more sense to just use an older computer? They're typically cheap, and you don't have to suffer when you're not developing speed-critical code.


You wouldn't want to be compiling on an old machine.



