I don't believe I communicated clearly enough in the creation of this project.
This is not "Yay, I don't ever have to worry about any other computer because I can simulate anything."
This is me sitting next to my QA guy (who really understands what I did better than I do so far) and having him say, "Why don't you just make it fast for the first ten minutes, then slow down socket operations briefly, then switch to a pattern of speedups and slowdowns on disk reads?"
I want to do more fault-injection work. I'm being asked to do things like occasionally manipulate the data, or lie about the results of certain operations.
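To give a rough idea, here's what that kind of schedule could look like. This is just a Python sketch with invented names and numbers, not the project's actual interface:

    import random
    import time

    START = time.monotonic()

    def injected_delay(kind):
        """Extra latency (seconds) to add to an operation of the given kind."""
        elapsed = time.monotonic() - START
        if elapsed < 600:                       # fast for the first ten minutes
            return 0.0
        if kind == "socket" and elapsed < 630:  # then briefly slow socket ops
            return 0.25
        if kind == "disk_read":                 # then alternate fast/slow disk reads
            return 0.0 if int(elapsed) % 20 < 10 else 0.1
        return 0.0

    def maybe_lie(result, failure_rate=0.01):
        """Occasionally misreport a result to exercise error-handling paths."""
        if random.random() < failure_rate:
            return -1  # pretend the operation failed
        return result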
I understand your motivations better now. Being able to run these ad hoc tests in a repeatable manner isn't the same as firing up IE6 on a Pentium 4 box with 256MB of RAM.
Fun, but the good thing about slow computers is that they're also the cheapest; it's far more economical to just buy an old box for a couple hundred dollars (or less).
I have been programming since the late '80s, and one thing I learned early was to always develop (or at least extensively test) on older hardware. My primary work machine is usually a generation or two old. At present I code on a ThinkPad T42, and there is a new T60 waiting to take over in a couple of months. If you're forced to develop in a "slow" environment, you learn to optimize your code from the get-go, all the time. You automatically rely on faster techniques as the norm vs. going back and fixing things later. As a result, I now frequently see my applications running at clients' offices and think to myself, "holy crap, that's fast". Another side benefit is that it keeps all my "fun" applications over on another, more current machine, separate from work.
The point he's missing is that you don't have to develop software on the same computer you test it on. The best solution is to have a separate test computer that's appropriately slow. In fact, most people probably already have an old one lying around that they could use.
And regardless of speed, it's good to test your software on a variety of systems.
I certainly do test on many different systems, but not all of them.
This doesn't take the place of real-world testing. It takes the place of setting up a network, putting a modem between my client and server, and then attaching a tape drive to my server to see what happens.
So far, it's been quite useful, but I've got a ways to go.
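(For the curious: the modem-in-the-middle trick doesn't need hardware either. Here's a bare-bones Python sketch of the idea: a TCP relay that paces traffic at roughly 56k speed. The addresses and the rate are invented for illustration.)

    import socket
    import threading
    import time

    LISTEN = ("127.0.0.1", 9000)    # point the client here
    UPSTREAM = ("127.0.0.1", 8000)  # the real server
    BYTES_PER_SEC = 56000 // 8      # ~7 KB/s, roughly a 56k modem

    def relay(src, dst):
        """Copy bytes one way, pausing to stay under the byte-rate cap."""
        try:
            while True:
                chunk = src.recv(1024)
                if not chunk:
                    break
                dst.sendall(chunk)
                time.sleep(len(chunk) / BYTES_PER_SEC)  # crude pacing
        except OSError:
            pass
        finally:
            dst.close()

    def main():
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(LISTEN)
        server.listen(5)
        while True:
            client, _ = server.accept()
            upstream = socket.create_connection(UPSTREAM)
            threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
            threading.Thread(target=relay, args=(upstream, client), daemon=True).start()

    if __name__ == "__main__":
        main()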
I recommend doing the same thing for web development.
There is a nice Firefox plugin (Firefox Throttle) that lets you throttle uploads and downloads with a single button click. If, like most web developers, you do development on localhost, the plugin lets you see how your users experience uploading large files or downloading image-rich pages with network latency.
Unfortunately it's been pulled from the add-ons library search, but you can download it directly (for now) using the FTP URL in the comments here:
That's Disqus (YC S07). Its "reactions" piece pulls in comments from sites like HN, Reddit, and Digg into the comments widget it puts on a given blog post.
I think it was Jonathan from Plagiarism Today (http://www.plagiarismtoday.com) who told me that forum posts should be regarded as their authors' own, legal copyright property. I may have some old notes on this around that I can dig up.
Some forums (and sites like YouTube) write their Terms of Use/EULAs so that users waive their content rights to the site owners (not for any nefarious reasons), although I doubt that these "agreements" would hold up.
If I had any comments or forum posts that were copied verbatim in a manner that upset me, I would definitely pursue it legally. But I'm a fanatic like that.
It's useful in that it lets you know that there's a conversation going on somewhere else when you're looking at the comments on the page. It might be enough to say, "active conversation as of x [unit of time] ago over here: [link]"
Nice work. But be careful about relying entirely on something like this. There are more than just seek time differences between hard disks and SSDs: they have different internal cache behavior, different latency variances, queuing differences (depending on how the devices are configured and how you're using them) etc.
This seems like it'll get you 80% of what you want. But it'd be useful to have an actual disk to test on as well.
I think this is a great idea, but having worked on benchmarking database structures in the past, I'd be wary of using them for any type of real benchmark.
For one, trying to model a real disk gets very complicated very fast. For example, access time is a function of position on the disk, so injecting random delays while scanning a large chunk of contiguous data would be unrealistic, and sprinkling just a few delays into a random-access-heavy load would be equally unfair.
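To illustrate, here's a toy Python model (all constants invented) where the cost of an access grows with how far the head has to travel. A flat random delay gets both the sequential case and the random case wrong:

    import random

    FULL_SEEK = 0.010      # hypothetical full-stroke seek, seconds
    ROTATIONAL = 0.004     # hypothetical average rotational latency, seconds
    DISK_SIZE = 1_000_000  # positions on the "disk"

    def access_time(prev_pos, pos):
        """Cost of moving the head from prev_pos to pos and reading one block."""
        if pos == prev_pos + 1:
            return 0.0001  # contiguous read: no seek, negligible cost
        distance = abs(pos - prev_pos) / DISK_SIZE
        return ROTATIONAL + FULL_SEEK * distance

    # Sequential scan of 10,000 blocks: seeks are free, total is tiny.
    seq = sum(access_time(i, i + 1) for i in range(10_000))

    # 10,000 random reads: seek + rotation dominate the total.
    hops = [random.randrange(DISK_SIZE) for _ in range(10_000)]
    rnd = sum(access_time(a, b) for a, b in zip(hops, hops[1:]))

    print(f"sequential: {seq:.1f}s   random: {rnd:.1f}s")

Under this toy model the scan comes out orders of magnitude cheaper than the random load; a simulator that just sprinkles uniform delays would erase that difference entirely.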
In short, trying to model complicated disk latencies is pretty hard, and if you are programming with some model of the disk in mind, building a disk latency simulator under that same model may end up giving you a false sense of security.
For what it's worth, I'd favor getting a cheap hard disk and trying the load there instead.
I actually have a dev box that's an older Athlon XP with 1 GB of RAM that is surprisingly useful for finding performance issues. Things that don't show up on my monster desktop show up -really- fast on the clunker.
I'm a firm believer that developers should have old hardware to test on. :)
Wouldn't all this depend on your target customers? If your target customers are big businesses that have fast computers, you're okay.
If your customers are average joes, then yes I would say you should have a test environment in place that closely represents those customers.
Even after all that, the question I have is: which set of customers brings in the most money for you? Let's say you have a mix of big businesses that run fast machines and average joes that run slow machines. If 90% of your money comes from big business, why bother trying to make software for the slow machines? Just slap on a system requirement and it should cover them, so to speak.
Incidentally, I'm a webdev who targets big businesses, and they in fact usually have -slow- computers. JavaScript in IE6 is pokey, and typical use-cases of Outlook and Excel don't need much firepower.
A joe-blow at home, on the other hand, is at least at the "corporate" level, or even higher if he's into gaming or media-centering.
I can see this working, but honestly, wouldn't it make more sense to just use an older computer? They're typically cheap, and you don't have to suffer on one when you're not developing speed-critical code.