Hacker News
Guess What: iCloud Uses Windows Azure Services For Hosting Data (redmondpie.com)
114 points by martymatheny on June 14, 2011 | 80 comments



http://gigaom.com/2011/06/10/apple-icloud-microsoft-azure-am...

This article from GigaOM explains it better: Apple is likely using Azure / AWS as a CDN, because its own data centers are centrally located (this isn't confirmed, though - we still don't know the extent to which they leverage Azure / AWS).

I think it's a win-win for everyone involved - Azure makes money and bags a high-profile customer that validates their platform. Apple can leverage Azure's distributed datacenters to deliver a great experience for their users.

If this is true, I wouldn't say it's embarrassing for Microsoft - much like how Google Maps on the iPhone is a valuable asset for Google, not an embarrassment.


This is a win all the way around. Azure needs customers, and who better than Apple to stress it and force Microsoft to make it better? But I'd hate to be the Azure VP who gets the call from Jobs because something isn't to his liking.

At the same time Apple wins because they can focus on their core competency, UX, not cloud plumbing.

Is Apple or Samsung embarrassed because the chips, flash, and display are sourced from Samsung? Of course not. Samsung is proud that their parts are the ones Apple chose, and Apple sells the experience, not a collection of parts.


Apple often removes any mention of the suppliers on the chips, sometimes cheekily putting Apple logos on chips bought from others, so they appear to be somewhat embarrassed about it.

Similarly, Samsung seemed a trifle embarrassed when announcing that their Galaxy S II phone would ship in some territories with Tegra 2 chips rather than their own Exynos chips.


I'm nitpicking, but I wouldn't say Apple "removes" the logos; they just special-order the chips with their own logo instead.


Wouldn't this make a successful iCloud even more embarrassing for Microsoft? They have the technology, but can't market it or make it easy for consumers to use?


Of course not. It's akin to saying any successful Rails app is an embarrassment to 37signals because they had the technology to create it but didn't.


37signals takes Rails and makes the products they intend to make.

Microsoft has attempted to make a successful cloud platform - their commercials all proclaim, "To the Cloud!" - and yet, they fail, while someone else takes their technology and (potentially) succeeds.


>Microsoft has attempted to make a successful cloud platform - their commercials all proclaim, "To the Cloud!" - and yet, they fail, while someone else takes their technology and (potentially) succeeds.

I don't really think that MS has targeted Windows Azure at the general consumer market, though. It's basically the same as Amazon Web Services. Apple using that tech gives Microsoft an incredibly strong marketing tool, especially with companies that are trying to decide between Amazon or something else for cloud-based services.


AWS is cheaper. It matters less to me what companies the size of Apple do, as I'm not Apple.


Erm... Apple uses both, according to the article.


Not Azure per se, but Microsoft did try to bring the cloud to consumers. Does anyone recall Windows Live Mesh?

http://explore.live.com/windows-live-mesh?os=other

I'm not sure how much traction it ever got.


And yet they fail? They've got Apple using their service for iCloud, how is that a "fail"?


I think there is a difference. Microsoft has tried to build and market various synchronization/collaboration/cloud solutions for over a decade, without any success to show for it.

If Apple succeeds with iCloud - and that's a big if, given their own track record in the internet space - it ought to be pretty awkward for Microsoft.


Microsoft mostly marketed those to businesses, though -- Apple is marketing towards their base, viz. recreational computer users.


I'm not sure if Windows Live Mesh is their latest cloud offering or the one before that, or whatever, but Windows Live Mesh certainly is marketed towards regular consumers:

http://explore.live.com/windows-live-mesh


Yes, in the land where everyone is logical and only makes decisions based on pure reason, that's true. But on planet earth, people act irrationally.

From a strictly logical sense, it's meaningless. But symbolically, it says a lot. The fact that Microsoft failed at something and then got bested by someone using their technology is symbolic of how far Microsoft has fallen.


Microsoft shines when they provide infrastructure, not finished products (with a few exceptions).

Services like Azure are what Microsoft should be about.


I think Karl meant infrastructure, not technology.


Well, I think my reply stands. What I'm trying to say is that just because Apple made a successful product on top of Microsoft's infrastructure, it doesn't reflect badly on Microsoft that they didn't do it first. With Azure, their intention wasn't to build a cloud-based product; it was to build an infrastructure to support cloud-based products.


Your analogy would only make sense if 37Signals was actively trying to compete in the Rails startup's space.


I think a better analogy would be if Barnes & Noble built a more successful online bookstore than Amazon using Amazon's own AWS cloud.


Microsoft has 'cloud services' baked into Windows Phone, but doesn't have the market penetration to get the product recognition.

In the current release, contacts are automatically synced (I'm not sure about calendars), and photos, video, and I think Office documents can be set to sync automatically.

From my understanding there is deeper integration in the next release (Mango).

So Microsoft is there and doing it, but they aren't competing in the same marketing space as Apple, and it seems few people know Windows Phone even exists.


And how bad would it be if it weren't so successful? Apple gets to point fingers at Microsoft for not being able to keep up if things go south.


That would only matter if Apple was having infrastructure problems. If people just didn't want the product, those claims wouldn't make sense.


From a logical standpoint, you are correct. From a marketing or political standpoint (business politics, not government politics), Apple could blame Microsoft for iCloud failing, regardless of the truth, to a point.

To put it another way, I frequently tell people, "Businesses choose Java because if it fails, it isn't because they chose Java." This means that if the Director of IT says to build with the industry standard of Java (or .NET), it is not the Director's fault if/when the project fails. The project manager, the developers, QA, middle management, etc. failed to do it correctly. It is a safe choice since it is the industry standard and thus well proven. If it takes longer and costs more, it means it was a much more difficult problem than they expected, but since they used Java they got it working.

If, on the other hand, they went with Rails/Django/Node.js/Erlang/NoSQL/etc. and it fails, the Director chose to use "new and untested" technologies, so he/she is the root cause - regardless of the fact that the real problem could just be that realistic goals and sound management were never in place.

I am not trying to say Azure is industry standard or the safe choice, but the way to shift blame is the same.


Considering the amount of non-trivial work involved in porting your apps from one Rails version to the next, I would be surprised if the Director didn't get part or most of the blame for forcing them to use Rails. And saying that they could just stay on an old version is not valid: when bugs are discovered in the old version, most likely they will not be fixed because everybody has already moved on. Enterprise projects already have a high rate of failure, and building one on a platform that has a history of making substantial changes in every new version is a really, really bad idea.

An honest risk analysis of the enterprise project would raise a huge red flag over using a technology that continues to change so substantially in every version that it is not backwards compatible.


> When bugs are discovered in the old version most likely they will not be fixed because everybody has already moved on.

Is this a real problem? If you encounter a bug in an older version of Rails, why not just fix it and carry on as usual? Fixing bugs is part of the job of being a developer. If you are afraid to fix bugs, your application isn't going to last long no matter what language, framework, or platform you choose.


I'm talking about bugs in the Rails framework. I think very few people would like to have to maintain an outdated framework. And for the ones that actually do decide to maintain a framework that they did not develop themselves, it will cost them a lot of money.


> I'm talking about bugs in the Rails framework.

I think skidooer was as well.

> I think very few people would like to have to maintain an outdated framework. And for the ones that actually do decide to maintain a framework that they did not develop themselves [...]

How is maintaining an open-source framework that somebody else made different from maintaining a framework built in-house? At the end of the day, bugs will still be found and need to be fixed. Yes, your devs might be more familiar with your framework than with a third-party one, but I think that is only the case if you can keep your team small and prevent churn and specialization. Regardless of which direction you go, devs need to understand what is happening inside the framework - it can't be a magical black box.

> [...] it will cost them a lot of money.

More than writing something from scratch?


Well, I guess we just have to agree to disagree. When choosing a framework one of my criteria is that somebody else is doing all the work to maintain it so that I can benefit from their work. If I have to maintain it then what is the point? I'm trying to save time here, not add more time to my schedule.


You certainly make a valid point, but I must add:

Rails is a fast moving target because they are always looking for new ways to save you time. As I mentioned in a previous post, the Rails 1.0 API is painful compared to the current generation. You are saving massive amounts of time during development because the project has evolved so far.

Spending a few minutes patching a framework bug once every five years pales in comparison to the gains you are seeing in development time.

To each their own, but I'd rather have a framework that is better than a framework that knows it could be better but won't make the changes because they might break some several-year-old app.


I too was talking about the Rails framework. It's actually a really nice read. I've fixed a couple of bugs in it myself.

The thing to remember is that each major release of Rails is fairly well tested. The number of actual bugs you are going to encounter is low. The investment in fixing them is therefore going to be low, should you actually encounter any to begin with.

The Rails codebase is simply an extension of your own codebase. While it is not fun to fix bugs in any capacity, there is no reason to fear fixing bugs in third party code. It is no worse than fixing your own, which also costs a lot of money.

I've been using Rails since around the 0.8 release timeframe. I have seen many major changes along the way. They have all been positive improvements to my workflow and the theoretical issues you describe have never been real issues. I still have one app chugging away on Rails 1.1 and it works just fine. The only problem it has is that it is not nearly as fun to maintain because it doesn't have all the newer major improvements. It would be a sad day if we had to go back to, or were still using, the 1.0 API. The Rails people are doing the right thing.


And it was non-trivial to go from Java 1.4 to 1.5.


With all due respect, I don't know what you are talking about. Programs written for 1.4 run fine on 1.5. Every time there is a new version of Java, I install it on my machine and just continue working as usual. I would be really upset if I had to spend time trying to make everything work again with every new version. They deprecate methods and functionality, but they do not remove them. Java is not that much older than Ruby/Rails, and it is pretty stable. Ruby/Rails may be great, but I really don't want to go through the pain of having to port my code to the next version. I'd rather spend my time adding new features.

Even Android, which is many times younger, never does this. Everything is always backwards compatible. Android developers would be up in arms if they just decided to break all apps in a new version of Android. Granted, it could still happen because nobody is perfect, although it has never happened in any of my apps and I don't know if it has ever happened at all outside of the early beta period. When something is no longer supported, they deprecate the functionality but do not remove it.


1.5 did introduce at least one major backwards-incompatible change: a new keyword, which meant any variable named enum now broke the app.
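
For example, a toy snippet like this (the class and variable names are just made up for illustration) compiled fine under a 1.4 compiler but is rejected by javac 1.5 and later, because enum became a reserved word:

    public class LegacyCounter {
        public static void main(String[] args) {
            // Legal identifier in Java 1.4; a 1.5+ compiler rejects this line
            // because 'enum' is now a keyword.
            int enum = 3;
            System.out.println("value: " + enum);
        }
    }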

Installing a new Java version on your desktop to run some app is different from rolling out a new major version of the JVM/language for an enterprise application. There is a lot that goes into ensuring everything still works, including all dependencies, and that performance is acceptable. Maybe heap options need to be changed because of how GC changed.


OK, I stand corrected. Still, that is almost insignificant. But yes, nobody is perfect.


> Java is not that much older than Ruby/Rails

According to Wikipedia: Java = 1995, Ruby on Rails = 2004


It makes sense for Apple to outsource this, but I wonder if it is going to limit them down the road. I think Google has a tremendous advantage with their in-house expertise and infrastructure here. My instinct is that it's harder to get that right in the long run than it is to build acceptably user-friendly interfaces. The real connoisseur may continue to prefer Apple's UI, but Google's is good enough for the rest, and I won't be surprised if cloud integration becomes the key differentiator down the road.

Sun was right on the money when they claimed that "the network is the computer". They were just a decade too early.


There is open-source software (via OpenStack) to run a distributed blob store like this at scale.

My guess is Apple just doesn't want the hassle of dealing with it, and/or this is temporary while they get their own hardware set up.


That's like saying that Google's prowess with map-reduce is irrelevant because we also have Hadoop. This stuff is very hard to get right and nobody does it better than Google.


I wonder whether this is just a function of the beta while they are waiting for the paint to dry on their new data centre.


Dollars is dollars.


I don't think so.

Apple most likely want to concentrate on their core competency (making the iCloud product) and not have to worry about (more) data centers/storage/etc. to support the product.

As others mentioned, it sounds like a good win for both parties; Apple get to concentrate on the product directly, and Microsoft get to concentrate on their core competency (providing the hosting).


Just thinking – couldn't this be a temporary pre-launch solution because Apple's huge NC data center isn't fully operational yet?


And Apple is building Azure- or AWS-compatible APIs for iCloud to use in the new DC?


Probably. But in order to run on Azure, don't things have to be written in C#/.NET? That would suggest that their production stack will also leverage those technologies, potentially on Windows (unless Mono on Linux or BSD is actually mature enough for this sort of thing; I wouldn't know).


> But, in order to run on Azure, don't things have to be written in C#/.NET?

No, not at all. They also have support for Java, Ruby, and PHP.


More precisely, the Azure storage API is HTTP/REST-like, so it supports practically any language that has an HTTP stack.
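
For example, fetching a blob from a publicly readable container needs nothing beyond a plain HTTP GET. Here is a minimal Java sketch (the account, container, and blob names are made up; a private container would additionally need a signed Authorization header):

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class BlobGet {
        public static void main(String[] args) throws IOException {
            // Hypothetical storage account "example", container "photos", blob "img001.jpg".
            URL url = new URL("https://example.blob.core.windows.net/photos/img001.jpg");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            // Optionally pin the storage API version.
            conn.setRequestProperty("x-ms-version", "2009-09-19");
            System.out.println("Status: " + conn.getResponseCode());
            System.out.println("ETag: " + conn.getHeaderField("ETag"));
            conn.disconnect();
        }
    }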


I see. I mean, if they're just using the storage backend, I fully understand - they realize that solving the problem of storage provisioning and redundancy is not their core business, and that they should outsource it to somebody that's already very good at it.


No, you can actually run Ruby on Rails, and other languages on Azure. That's a goal for them, actually. It's not only access to the storage API.


To be clearer, Windows Azure can run any code that can run on Windows. So if you have x86/x64 code that runs on a Win2k8 OS, you're good to go. Most languages can do that, some with Cygwin. The major ones (Java, PHP, Python, Ruby, Erlang, etc.) have phenomenal Windows support out of the box.

For this next bit, see my disclaimer below. The blog post mentions windows.net, which is the domain for the Windows Azure storage services (blobs, tables, queues, etc.). Specifically, the headers mentioned seem consistent with a storage request made to the blob service.

Disclaimer: I worked on Windows Azure until a few weeks ago, on strategic adoption among other things, so I have to put this here. This is no comment on whether the OP's blog post is accurate or not, on any Apple-Microsoft relationship, etc.


You can just shell out to another process.


Methinks so. If not, what's the point of a huge data center?


Building a datacenter doesn't make you a datacenter expert. I'm not saying that Apple can't do it, just that they're not going to be able to do it as well as Microsoft or Amazon.


> Building a datacenter doesn't make you a datacenter expert

Actually, you should hire the experts before building the datacenter.


I imagine the SVP for cloud computing at Apple is happy to pay for highly redundant services so he doesn't invoke the Wrath of Jobs (TM). Steve Jobs probably pushed for multi-vendor redundancy so he could confidently say at the keynote that they stuffed up the MobileMe launch but wouldn't make the same mistake.


Your reply makes a lot of sense, considering we had a major Amazon outage just a few months ago. Redundancy is the way to go.


Didn't Apple just build a couple of huge data centers? I was under the impression they were supposed to be for iCloud.


Perhaps they haven't gotten all of said data centers live yet and needed to use Azure and AWS as a temporary measure.


Perhaps they have a bunch of Windows servers inside, but why windows.net?


windows.net is the root URL for all the Windows Azure storage services. Back in 2008, we knew that Windows Azure was going to have 'Windows' as part of its name. Since Microsoft already owned windows.net, it was an obvious choice. So we get blob.core.windows.net, database.windows.net, etc.


Windows Azure can be a number of things. Since they use AWS and the article shows HTTP requests, they might be using it as just a blob store or CDN, with redundancy across providers. I doubt they're running Windows Azure itself on their own servers, not that it wouldn't be pragmatic to.


Wait, so what was that billion dollar data center [1] Apple built in North Carolina for?

[1] http://arstechnica.com/apple/news/2011/02/apples-nc-data-cen...


My guess is that AWS/Azure is only used for short term storage.


I'm guessing the third party services are for data hosting, and the data center does CPU and database intensive processing.


Relying on their own data centers and their own cloud stack would have created a dependency between iCloud and the data-center-level stuff. If the data center project had been delayed for one reason or another... too bad for iCloud.

Also, it might not be a good idea to start testing your in-house cloud infrastructure with a high-profile product like iCloud that is likely to attract quite a few users in the coming months.

I think it absolutely makes sense to do the development and initial launch with outside services and then later on maybe migrate to your own data centers and own cloud stack.


Does it really matter what cloud services Apple uses? Yes, they have their own data centers, but why should it be surprising if they utilize others as well?

Microsoft is, in many ways, lots of little companies that all live together. If the Azure business unit makes money from Apple, it doesn't really matter if the Windows Live teams are failing in their products.

The oft-mentioned great thing about cloud services is not having to worry about managing hardware. Leave that to the people that are really good at it, and build your business on top.


This seems to be a logical multi-vendor play to me. Especially if Apple's data centers aren't geographically distributed, it needs to get other vendors involved.


Microsoft.com purportedly uses Akamai (running Linux) for load-balancing and caching their websites [1]. I don't see anything wrong with using appropriate technologies.

[1] http://news.netcraft.com/archives/2003/08/17/wwwmicrosoftcom...


I find it funny how everybody here tries to spin this to portray Apple in a positive light, even putting the blame on Microsoft somehow.

Face it, this would be a big deal for any other large company. When a tech giant builds its latest project on out-of-house infrastructure, how can this be good? At best, it is a waste of development resources (if it is temporary). At worst, it is a business-risk/reliability nightmare waiting to happen (if this is their final infrastructure).


You are aware that Apple doesn't own its own chip fabs? That its hardware is manufactured for it by third parties? That the building space for most Apple Stores is leased from malls and landlords?

Every enterprise builds on infrastructure owned by others. That is what contracts are for. What matters is the degree of lock-in and who controls the roadmap. And the whole point of modern "cloud" infrastructure is that you don't own the hardware, you don't have the capital costs of the datacenters, you have a nice temporary lease that you can nonrenew or perhaps even break if needed, and your internal architecture is probably generic enough to be ported to another cloud if needed.


I think this is all based on the VL2 paper, described here: http://perspectives.mvdirona.com/2009/10/05/VL2AScalableAndF... and the paper itself available here: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.156....

It boasts of some freakishly fast ability to shuffle 2.7Tb between 75 servers in 27 seconds...



Embarrassment or not, I'm surprised by that. This is a big deal and the first time we hear about it is after someone took a closer look at the headers.


Wasn't the big idea of the cloud that you aren't tied to a provider and can move your services anywhere?

Anyway, what other big cloud providers are out there -- ones that aren't direct competitors to Apple? Amazon's AWS is out of the question, as Amazon competes directly with Apple in the hot multimedia and e-book markets. Google's stuff is out, too, as Google is a major competitor in the mobile and multimedia markets. Microsoft doesn't seem to compete directly with Apple in the hot markets -- at least not until they squeeze some Windows Phones out of Nokia.


Read the article; it says Apple uses AWS too...


I think there's more to the story here because otherwise there was no point to Apple building the data centers.


I don't think the implication is that Apple has none of its own infrastructure, just that they've bought some pieces from third parties.


While this may just be temporary, I don't see why it would matter if it wasn't. Apple probably also uses Linux and FreeBSD for things.


Agreed. Even though Apple have their own data centres, they might use some Amazon or MS capacity just as a redundancy hedge.



