Probably because it would be impossible without having a dedicated physical disk for each slice on the machine, or some kind of really fancy network storage array.
Maybe that will change once SSDs become the default hardware. Without the bottleneck of seek time, latency should be much easier to quantify. Also, since you wouldn't be penalized for "context switching" (I know the term doesn't really apply to disk I/O; I mean switching between disk jobs often, which requires a head move on HDDs), you could maybe someday slice up SSD time like CPU time and guarantee it directly. (For instance, if the SSD is capable of 200 MB/s, your slice could be guaranteed 10 MB/s. Or something more technically realistic; I am but an amateur.)
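To make the idea concrete, here's a minimal token-bucket sketch in Python (hypothetical names, not any hypervisor's real API) of how a 200 MB/s device could be carved into guaranteed per-slice budgets, the way CPU shares are:

    import time

    class BandwidthSlice:
        """Token bucket: each slice earns I/O budget at its guaranteed rate."""
        def __init__(self, rate_bytes_per_sec):
            self.rate = rate_bytes_per_sec   # guaranteed throughput for this slice
            self.tokens = 0.0                # bytes this slice may issue right now
            self.last = time.monotonic()

        def consume(self, nbytes):
            """Block until this slice has earned enough budget to cover nbytes of I/O."""
            while True:
                now = time.monotonic()
                # accrue budget since last check, capped at one second's worth of burst
                self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

    # e.g. a 200 MB/s SSD split into twenty 10 MB/s slices
    slice_a = BandwidthSlice(10 * 1024 * 1024)
    slice_a.consume(4096)  # "charge" a 4 KiB write against slice A's budget

Obviously a real scheduler would live in the hypervisor or kernel rather than wrapping every write like this, but the accounting is the same.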