I think this is a great idea, but having worked on benchmarking database structures in the past, I'd be wary of using them for any kind of real benchmark.
For one, trying to model a real disk gets complicated very fast. For example, access time is a function of position on the disk, so injecting random delays while scanning a large chunk of contiguous data would be unrealistic, and injecting only a few delays into a random-access-heavy load would be just as unfair. A rough sketch of the kind of naive simulator I mean is below.
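To make the pitfall concrete, here is a minimal sketch (in Python, with made-up seek-time numbers; `naive_read` and the constants are hypothetical) of a simulator that charges every read an independent random seek. A real disk would serve a contiguous scan for roughly one seek plus transfer time, so this model badly penalizes sequential access and can invert the ranking of the data structures you're comparing.

```python
import random
import time

# Assumed seek-time range in milliseconds -- illustrative, not measured.
SEEK_MS_MIN, SEEK_MS_MAX = 3.0, 12.0

def naive_read(offset: int, size: int) -> None:
    """Simulate a read by sleeping for a random seek time.

    Unrealistic: the delay ignores where the previous read ended,
    so sequential reads pay the same penalty as random ones.
    """
    time.sleep(random.uniform(SEEK_MS_MIN, SEEK_MS_MAX) / 1000.0)

def sequential_scan(start: int, blocks: int, block_size: int = 4096) -> float:
    """Scan contiguous blocks and return the simulated elapsed time."""
    t0 = time.perf_counter()
    for i in range(blocks):
        naive_read(start + i * block_size, block_size)
    return time.perf_counter() - t0

if __name__ == "__main__":
    # Under this model, 100 contiguous reads cost ~100 seeks; a real disk
    # would charge roughly one seek for the whole scan.
    print(f"simulated sequential scan: {sequential_scan(0, 100):.2f}s")
```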
In short, modeling complicated disk latencies is hard, and if you are programming with some model of the disk in mind, building a latency simulator from that same model may just give you a false sense of security.
For what it's worth, I'd favor getting a cheap hard disk and running the load on it instead.