Yeah, we can't wait for diskstore. If you imagine the analytics use case for a moment: super high speed is great, but I'd imagine 99% of the read requests don't ask for anything older than a month. Older data could easily be pushed out to disk, saving us a lot of RAM. For now we can still operate pretty easily in RAM (we have a few machines dedicated to analytics and they're just storing counters or sets of small values), but it'd be great to know we can grow a lot more without needing to put more of our shards on their own physical machines.
We already run the latest from the 2.2 branch, I can/should go into how easy that is in a followup.
Yes, it can be a good use case, but the new set of design decisions comes with new limitations, nothing is for free :)
Example: with VM there were a number of problems, like keys having to stay in memory, super slow persistence, and so forth, but there was no speed penalty if you always wrote against a small working set.
Instead with diskstore the data is on disk and the RAM is just a cache: if you configure diskstore with 'cache-flush-delay 60' you are telling Redis that every dirty key must be flushed to disk within at most 60 seconds.
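As a sketch, the relevant part of redis.conf on the experimental diskstore branch looks roughly like this (the directive names below come from that branch and may well change before any final release; the path and cache size are made-up values for illustration):

```
# Enable the experimental diskstore backend
diskstore-enabled yes

# Directory where the on-disk key files live (hypothetical path)
diskstore-path /var/redis/diskstore

# Max megabytes of RAM to use as cache (hypothetical value)
cache-max-memory 2048

# A dirty key is flushed to disk within at most this many seconds
cache-flush-delay 60
```

A smaller cache-flush-delay bounds how much data you can lose on a crash, at the cost of more write I/O.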
If there are many writes it is easy to hit the I/O write speed limit, and the system becomes I/O bound.
So diskstore is surely a solution when there is a big-data problem where writes are rare compared to reads. If writes are very frequent, the total I/O has to be taken into account.
The ideal solution in your scenario is IMHO to keep the data for the latest N hours in an in-memory Redis instance, and move the historical data into a diskstore-enabled instance. This way you get a clear win: the diskstore instance will serve only reads, so it will provide the maximum benefit, while the in-memory instance will keep the usual predictable, low-latency characteristics of the default Redis configuration.
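The hot/cold split above can be sketched as a tiny routing function on the application side. Everything here is hypothetical (the window size, host/port pairs, and the function name are illustration only, not anything Redis provides):

```python
from datetime import datetime, timedelta

# Data newer than this window is served by the in-memory instance;
# anything older goes to the diskstore-enabled instance. 24 hours is
# an arbitrary example value.
HOT_WINDOW = timedelta(hours=24)

HOT_INSTANCE = ("127.0.0.1", 6379)   # default in-memory Redis
COLD_INSTANCE = ("127.0.0.1", 6380)  # diskstore-enabled Redis

def pick_instance(sample_time: datetime, now: datetime) -> tuple:
    """Return the (host, port) of the Redis instance that should
    serve a key whose data was recorded at sample_time."""
    if now - sample_time <= HOT_WINDOW:
        return HOT_INSTANCE
    return COLD_INSTANCE
```

Writes always land on the hot instance; a periodic job would migrate keys older than the window to the cold one, so the diskstore instance only ever sees reads.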