Agreed, the value add isn't very high, especially since the pricing is higher than EC2. Surely there are some economies that could be found on servers with no disks and reduced CPU requirements? I wonder if ECC could even be done away with if they checksummed all the stored values?
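Skipping ECC and checksumming values at the application layer is easy to sketch; a minimal (hypothetical) version just stores a CRC32 next to each value and treats a mismatch as a cache miss:

```python
import zlib

# Hypothetical sketch: store a CRC32 alongside each cached value so bit
# flips in non-ECC RAM can be detected on read.
store = {}

def put(key, value: bytes):
    store[key] = (zlib.crc32(value), value)

def get(key):
    entry = store.get(key)
    if entry is None:
        return None
    checksum, value = entry
    if zlib.crc32(value) != checksum:
        # Corruption detected; drop the entry and report a miss.
        del store[key]
        return None
    return value

put("user:42", b"cached profile blob")
assert get("user:42") == b"cached profile blob"
```

For a cache this is arguably fine, since a detected corruption just becomes a miss and a refetch; it wouldn't fly for a datastore of record.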
It would be a real killer if they had incremental pricing (per-GB-hour) with an adjustable high watermark / replica count, and a name/port-based endpoint that always worked and routed requests to the proper cache server.
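A stable name/port endpoint like that usually hides cluster membership behind consistent hashing; a minimal sketch (node names are made up) of how the endpoint could pick the owning cache server per key:

```python
import bisect
import hashlib

def _h(s: str) -> int:
    # Hash a string onto the ring's keyspace.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hash ring: the endpoint hashes each key onto
    the ring and forwards to the owning node, so clients never track
    cluster membership themselves."""

    def __init__(self, nodes, vnodes=64):
        # Virtual nodes smooth out the key distribution across servers.
        self._points = sorted(
            (_h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._keys = [p for p, _ in self._points]

    def node_for(self, key: str) -> str:
        i = bisect.bisect(self._keys, _h(key)) % len(self._keys)
        return self._points[i][1]

ring = Ring(["cache-a:11211", "cache-b:11211", "cache-c:11211"])
owner = ring.node_for("session:42")  # same node for this key every time
```

The nice property is that growing or shrinking the pool only remaps the keys owned by the nodes that changed, which is exactly what you'd want behind a fixed endpoint.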
I agree that the value add isn't very high for small-cluster situations, and I would much rather just have a caching API that charges on usage and lets you specify an amount of redundancy.
However, the automatic failover is very nice. From Vogels' blog: "Amazon ElastiCache automatically detects and replaces failed Cache Nodes to protect the cluster from those failure scenarios."
ElastiCache does seem to sit in an awkward middle ground between renting instances and paying for usage through an API.
Edit: after thinking about it more and reading some of the comments, I think an ideal setup would be an API to a memcached-like datastore with buckets so I can specify max-size, redundancy, expiration methods, etc. on a per-bucket basis. Even nicer would be all of that plus redundancy and HA across availability zones and regions.
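A toy sketch of what that per-bucket API might feel like (this is an invented interface, not any real AWS API): each bucket carries its own max-size, replica count, and TTL.

```python
import time

class Bucket:
    """Hypothetical per-bucket cache: each bucket has its own max-size,
    desired replica count, and TTL policy."""

    def __init__(self, max_items, replicas, ttl_seconds):
        self.max_items = max_items
        self.replicas = replicas      # redundancy, enforced server-side
        self.ttl = ttl_seconds
        self._data = {}               # key -> (expires_at, value)

    def put(self, key, value):
        if len(self._data) >= self.max_items:
            # Evict the entry closest to expiry (stand-in for real LRU).
            oldest = min(self._data, key=lambda k: self._data[k][0])
            del self._data[oldest]
        self._data[key] = (time.time() + self.ttl, value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None or entry[0] < time.time():
            self._data.pop(key, None)
            return None
        return entry[1]

sessions = Bucket(max_items=10_000, replicas=2, ttl_seconds=300)
sessions.put("user:42", {"name": "alice"})
```

The point of the bucket abstraction is that eviction and redundancy become per-workload knobs rather than a property of whichever node you happened to rent.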
> I think an ideal setup would be an API to a memcached-like datastore with buckets so I can specify max-size, redundancy, expiration methods, etc. on a per-bucket basis.
Which raises the question of why Amazon didn't adapt its S3 API to the task and then layer an optional memcached-compatible wrapper on top of it.
The problem is that you'd need to provide the other services as well. Having your database and application server in one datacenter but your memcached instance in another would limit the use cases because of latency and bandwidth constraints.
We recently got a step closer to integrating multiple clouds now that EC2 allows peering: http://aws.amazon.com/directconnect/ To really make it easy you'd probably want federated auth and billing, but I'm not sure that's in Amazon's interest.
At current RAM prices, one can build a cheap 2U SuperServer with 144GB of ECC DDR3 1333MHz RAM, a single quad-core 40W 2.13GHz Westmere processor, a 10GbE card, and a 4GB CompactFlash drive for ~$4,000. Assuming the server has a 3-year lifespan and costs another ~$4,000 over that period to power, cool, house, network, and administer, that works out to around $0.0021/GB-hr, given that 143GB of it is usable. Double that to ~$0.0043 for redundancy, and they could charge $0.02/GB-hr and make a healthy profit.
Edit: note that a lot of that margin gets eaten up by underutilized capacity; this is just a simplistic analysis.
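For anyone who wants to check the arithmetic, here's the back-of-the-envelope calculation spelled out:

```python
# Back-of-the-envelope check of the $/GB-hr figures above.
hardware_cost = 4_000          # 2U server with 144 GB of ECC RAM
operating_cost = 4_000         # power, cooling, space, network, admin over 3 years
usable_gb = 143
lifespan_hours = 3 * 365 * 24  # 3-year lifespan

gb_hours = usable_gb * lifespan_hours
cost_per_gb_hr = (hardware_cost + operating_cost) / gb_hours
print(f"${cost_per_gb_hr:.4f}/GB-hr")      # ~$0.0021 raw
print(f"${2 * cost_per_gb_hr:.4f}/GB-hr")  # ~$0.0043 with 2x redundancy
```

At a hypothetical $0.02/GB-hr price that leaves roughly a 4-5x markup over the doubled cost, before accounting for idle capacity.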