Racking Mac Pros (imgix.com)
388 points by zacman85 on May 6, 2015 | 316 comments



This seems risky from a business perspective: it's voluntary vendor lock-in.

What if Apple decides to change the Mac Pro form factor for the next iteration? Then you have to retool and are left with a bunch of incompatible chassis. What if Apple stagnates with hardware upgrades? You'd be stuck running obsolete hardware. What if Apple discontinues the entire Mac Pro line? Not to mention the price premium of Apple hardware itself, then the time and expense incurred to design and fabricate this.

The fact that their software depends on Apple's graphics libraries doesn't seem like a good justification for doing this. What it says is they are willing to throw a ton of money and effort towards (very cool) custom hardware, but are unwilling to hire a person to write optimized OpenGL shaders for Linux, which would work on pretty much any other server they choose to build/buy/lease/cloudify. Certainly there will be other "debt" to overcome, especially if much of your codebase is non-portable Objective-C or GCD, but that has to be weighed against the possibility of your only vendor pulling the rug out from under you. And looking at Apple's history, that is a very real possibility...

Owning your hardware like this makes complete sense if your core business is directly tied to the platform itself, e.g. an iOS/OS X testing service. But as far as I can tell, imgix does image resizing and filters... their business is the software of image processing, and they're disowning that at the expense of making unrelated parts of the business more complicated. Not a good tradeoff, IMO.


This was pretty much my thought as well. The results of this cost-benefit analysis make me raise my eyebrow, and, same as you, I can only assume that OS X permeates the infrastructure from top to bottom, to an extent that makes pulling it out too painful to even contemplate. Image processing shaders of this type aren't that hard to write.

If you're worried about them not matching some piece of client software exactly (Quartz Composer, Photoshop, etc.), you still have options. And those options - e.g., webapp for previewing/something else/etc. - you'll probably want anyway, for the benefit of designers that don't run OS X.

(The filtering aspect of the system I find a little surprising anyway - the idea of an image-focussed client-aware DPI-aware CDN makes sense to me (and I like it!), but something that does your Photoshop filters in the cloud sounds less compelling. I would have expected people to prefer to do that locally, and upload the result. But... while I've worked with many artists and designers, I'm not one myself. So maybe they'll go for that. And/or maybe a lot of their customers take advantage of the fact that the processing appears to be free. And I'm prepared to contemplate the possibility, however unlikely, that they might know their customer base better than I do.)


(full disclosure: I work at imgix)

Uploading pre-edited images takes time/resources, and in general a lot of our customers rely on us to do all of their image processing so that they don't have to.

Additionally, creating edited versions of images in advance presents two problems: 1) Any future site redesigns or edits must now be applied en masse to the existing images or risk older images not complying with the new scheme, and 2) Instead of only managing the one original source image in the origin, now we're talking about maintaining all of the different edited versions, which is very inefficient from a storage and image management perspective.

There are many advantages to applying all of the image transformations on-demand, rather than in advance. Keep in mind that we are not simply photo filters, but a full end-to-end image processing offering (which applies everything from simple Photoshop-style edits like cropping, face detection, color correction, and watermarks to automatic content negotiation and automatic resizing/responsive design) that works on the fly; this means that our customers can now make batch edits to their entire corpus of images through a few simple code edits.

This can be extremely cost-effective, and it also helps reduce page weight significantly.
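To make the idea concrete, here is a hedged illustration of the kind of URL-driven edit being described. The parameter names come from imgix's public API reference (linked elsewhere in this thread); the domain and image path are hypothetical:

    https://example.imgix.net/photos/team.jpg?w=640&h=360&fit=crop&crop=faces&auto=format

A site redesign that needs new dimensions, cropping, or formats then becomes a change to query parameters in a template, rather than a re-render of every stored derivative.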


I work at Hemnet, a Swedish real estate site, where we currently get about 300 000 - 400 000 new images each week. We recently investigated the on-demand approach, but found that JPEG decompression was the biggest time consumer in our use case of scaling fairly large images down to sizes appropriate for web and mobile. This made us decide to do scaling in advance for all our sizes, which resulted in overall CPU time savings.

It would be interesting to hear how imgix solves this, since you are arguing for the resource savings of the on-demand approach.


It is true that the time to fetch, render, and deliver an image the first time it is requested can be a bit longer, because of all the processing we're doing for that single request. But because we then cache that fetched image, all subsequent requests are delivered an order of magnitude faster (50th percentile: 45ms).

The majority of the time taken in the first request is actually spent fetching the image from the origin source, so once it's cached in our system it becomes a much faster operation; and of course delivering the cached image without it traversing our stack is even faster.

So yes, the initial request can take time, but all of the subsequent requests for that image are much faster than the alternative. And when you take into account that our service makes it possible to send the correctly sized image for the display (instead of loading a preset size and displaying it smaller), and optimal file types based on device and browser (WebP for Chrome, etc.), load times/page weights on all of those requests are significantly improved.

In general, anyone who serves multiple requests for their images over time will see a marked improvement in page weight/speed, compared to rolling their own solution with preset image sizes and JPEG-only delivery.


Then why not use rackmount commodity motherboard-based systems with dual Xeons, stuffed with GPUs?


skuhn has been replying to this all over the comments. For example:

https://news.ycombinator.com/item?id=9501800


Hackintoshes would make a lot more sense.


No, they wouldn't.

Look at the current state of Hackintoshes. People are having kernel panics and struggling to keep their machines running with current software. OS X moves pretty fast, possibly faster than Linux, and Apple builds it to support Mac hardware... the teams porting Hackintosh code have to support far more hardware variety, and they have fewer resources than, say, Linux.

Running Hackintoshes in production makes no sense.

And I challenge the claim that you would save money.

Looking just at the off-the-shelf costs of low-end hardware does not tell you the TCO of serious machines that need to be running all the time.

To get comparable hardware quality to Apple products you have to spend more, generally, when going with "commodity" hardware.

The idea that Apple is expensive is a myth, born of two things: people perpetuating it since 1980 (yes, this myth has been spread for 35 years) with the vested interest of rationalizing their dislike of Apple... and the fact that Apple doesn't compete at the very low end.

In production, TCO is much more about reliability and other things than initial hardware cost.


You mean the professional market that Apple is abandoning in favor of consumer devices.

I look after a mostly Mac environment, and OS X Server is hopeless. The lack of a migration path from standalone Macs to networked ones was the first shocker I found.

And our Macs are less stable than our Windows 8 box used for running Hyper-V VMs.


> People are having kernel panics and struggling to keep their machines running with current software.

Not here. My Mac is at about 11 days of uptime and it's under constant use. At this moment, I can't say it's less reliable than my Linux machines.

In this specific case, however, I'd consider ditching the enclosure and ducting cold air through the internal chassis/heatsink. A Mac Pro is, essentially, a heatsink with boards mounted on it, and I'd just let the chassis do that part.


Is 11 days of uptime supposed to be impressive?


It's hardly the uptime of someone who's struggling with kernel panics and random crashes. Last power down was when I embarked on a trip. The previous one was a system update. In fact, I never saw a kernel panic with this machine.


Here's my 10.6.8 Snow Leopard MacBook Pro laptop:

    sh-3.2# uptime
    11:28  up 117 days, 19:13, 4 users, load averages: 0.91 0.98 0.95


I think the part you quoted was referring to hackintosh machines only.

Still, there's no doubt in my mind that Apple are doing some "move fast and break things" OS development.


One really shouldn't blame Apple for OSX not working on a Hackintosh.


> People are having kernel panics and struggling to keep their machines running with current software.

A guy I work with has been running several heavily used hackintosh servers without issue. They have been very reliable and he's happily converted existing Linux servers to hackintosh. He's been doing this for a while and knows exactly what hardware to use.


Of course they would.

Legally? Yeah, good luck.

Edit: I think the more obvious answer to this is that they would rehouse these babies in a more convenient, albeit likely custom form-factor.


It is appealing in a way, but I also think it would put the business at legal risk in a way that is totally unconscionable. We cannot run a Hackintosh in production for a single moment, the long term ramifications could be immense.

The chassis we designed represents my attempt to re-house the Mac Pros in something more suitable for the datacenter.


Yeah, the legal risk just isn't worth it.

It'd be interesting to see someone rip apart a Mac Pro and build an entire form-factor around its setup.

Don't get me wrong, what you guys have done is extremely beautiful in all ways, but I can't help but think that if someone wanted to do this with, say, a Mini... say you take a 1U rack, drill some new holes into it... hmm.


The Minis are incredibly simple machines inside, and their airflow is not great. You could pop their logic boards out and run them without the chassis. With some thought it wouldn't be too tough to significantly improve their airflow for this environment.

I do have an existing rack design that holds 64 of them (and other people have gone denser, with operational compromises I prefer not to make), so there's no great impetus on my side to rip them out of their enclosures.

My Mac Mini rack design is shown in a little more detail in our previous article: http://photos.imgix.com/building-a-graphics-card-for-the-int...


Thanks for the detailed response. Really cool stuff you guys have going on.

What other companies utilize Apple hardware in this way at this kind of scale? While not "out of this world" in comparison to some of the big players who have tackled scaling, it's definitely significant considering Apple hardware.


I'm not aware of anyone who specifically uses Mac Pros (although I've heard some private rumblings). I suspect part of the issue is that the old form factor was not very rack-friendly, and people haven't gotten comfortable with the new form factor. Maybe this chassis design will help move this forward.

Mac Minis are a little more common than it might seem at first glance, particularly for use cases that some other people have outlined in their comments throughout this thread. Mozilla uses them to test Firefox builds on OS X for instance. I would imagine that places like Sauce Labs must have a Mac Mini farm to facilitate browser tests on OS X.

I'm not aware of any other service that operates in the same space as imgix that runs outside of EC2, so they definitely aren't using OS X there. I think in general there's a sort of disregard for the particular graphics processing benefits that OS X provides (as evidenced by some of the comments in the thread).

I would also be remiss not to mention Mac Mini Colo (http://macminicolo.net/), who do co-located hosting. imgix started out with them, and they did a great job.

There's another interesting use case where you need to have OS X (or iOS): when you want to display photos taken on iOS devices with their applied filters (the images are stored pristine, and the filters are applied on top when you view them). To recreate these photos exactly as they were on the device, you ideally need to render it within an Apple environment. You can probably imagine the use case for a service that stores a lot of user generated photography, in a world where iPhones are the most popular cameras (https://www.flickr.com/cameras).

I also heard through the grapevine today that a certain film studio is interested in getting one of these chassis to test out, because they saw this article. That's pretty exciting to hear, even though we don't profit in any way from the sale of these chassis.


Why pay 5-10X as much to host on AWS? It's not for free.

Hosts have a really nice markup, compared to hosting yourself. Hosts make a lot of sense for small companies who can benefit from the aggregated demand and capital costs being spread over many clients.... but not when you're at the level of building your own datacenter, or even using a full rack.

It's funny how, since 1980, people have been talking negatively about Apple as "vendor lock-in". For most of that time, the same argument was advocating vendor lock-in to Windows.

The thing is, when you build your system on an OS or hardware choice, you're accepting "vendor lock-in" to that platform. Build on Linux and you're locked into just Linux, unless you port.

There is little risk in being "locked in" to the largest, most successful company in the world. Plus, the dramatically lower costs, given the radically higher performance of Apple's technology for this particular service, more than cover the cost (in fact I think one Mac Pro probably replaces 4 or 5 Linux boxes doing this).

If you think optimized OpenGL shaders would do this, you're not understanding what it is that they are doing. You're just assuming it's a trivial problem; it is not.

Owning your hardware makes a great deal of sense when you are operating at scale.


>Why pay 5-10X as much to host on AWS?

You have no idea what the comparison is, and I don't either. But again, the criticism is around running a business off of a bunch of Apple "trash cans".

>advocating vendor lockin to windows.

Linux is no lock-in; Windows is lock-in via software, vs. Apple for both hardware and software.

>"vendor lockin" to that platform

Java, Scala, or any other JVM language protects you from that, and to a lesser degree Python and PHP do as well.

>There is little risk being "locked in" to the largest most successful company in the world.

Price gouging? Deciding not to support your platform anymore? Forcing you to upgrade?

>Build on Linux and you're locked into just Linux, unless you port.

Only that there are a bunch of Linux options to choose from; they are all open source, so you can do whatever you want as far as upgrade paths and support, and if you use JVM languages this isn't an issue.

>in fact I think one Mac Pro probably replaces 4 or 5 Linux boxes doing this.

There is no fact there, that's your delusional opinion.

>If you think Optimized OpenGL shaders would do this, you're not understanding what it is that they are doing. You're just assuming it's a trivial problem, it is not.

It's a CDN + image manipulation tool; you don't need 3D libraries. And if you use existing libraries or tools, it is quite trivial. Here is their API: http://www.imgix.com/docs/reference


Platform lockin isn't exactly vendor lock-in, but there's a kind of lockin nonetheless. You're going to be dependent to some extent on your platform whatever your platform happens to be.


>Platform lockin isn't exactly vendor lock-in, but there's a kind of lockin nonetheless. You're going to be dependent to some extent on your platform whatever your platform happens to be.

So are you making your own chips out of beach sand or something? /s After a certain point this gets ridiculous.

JVM and C/C++ (Python and other scripting languages to some degree) are the options if you want cross-platform environments.

But on a scale of suckiness:

1) Hardware lock in

2) vendor lock in

3) service lock in

4) OS lock in

5) app server lock in

6) framework lock in

7) Library lock in

8) programming platform lock in


Jeez, this post is brimming with strawmen. (Why am I even bothering...)

> Why pay 5-10X as much to host on AWS?

Nobody said anything about AWS...

> Hosts make a lot of sense for small companies

Sure, nobody is disputing their choice of colocating themselves.

> OS or hardware choice you're making "vendor lockin" to that platform

It is abundantly clear that the vendor lock-in refers to single sourcing your hardware. That problem is nonexistent on Windows, Linux, BSD, etc.

> I think one Mac Pro probably replaces 4 or 5 Linux boxes

Oh come on, now you're just talking crazy... see other posts in this thread for a cost/performance comparison.

> you're not understanding what it is that they are doing

On the contrary, I think I understand better than you. Do you perform a lot of image processing work on various platforms (including OSX and Linux)? I do.


> This seems risky from a business perspective: it's voluntary vendor lock-in.

If you want to run a business that builds/tests using the OS X/iOS ecosystem, this is the only way to do it legally; Apple's licensing terms enforce this. Otherwise we'd be running OS X on generic pizza-box servers, since Apple's hardware is truly overpriced and not built efficiently at all for the datacenter (it works fine on desks). Apple really gimped the 2014 Mac Minis, btw; they perform worse than the high-end 2012 Mac Minis.


You have to look at it as an entry barrier that protects them against competitors too. If they are the only ones offering this setup, they can charge very high prices and recover the initial investment relatively quickly.


> look at it as an entry barrier that protects them against competitors too

What barrier to entry? Their customers don't care that OSX is running under the hood. You can offer an image processing service using any platform today. Sure, on Linux it probably wouldn't be as efficient, but it doesn't have to be. Scaling is a Good Problem to have.

Basically, as you grow, it helps to take a critical look at risk factors and the technical debt which contributes to that risk. The longer you wait to pare down that debt, the more expensive it is, and the more exposed you are to that risk. A little more work up-front saves a lot of work later on.


(I'm the datacenter manager at imgix, and I wrote the article)

I completely agree with your concerns, and I'm constantly evaluating our business for operational risks and inefficiencies. There's a lot of stuff that I can't share in public about this, but what I can say is that the math works out (for now): OS X graphics processing is worth the downsides. It may not always be the case, and we're built in a flexible way where we can make a change when it makes sense to do so.


> OS X graphics processing is worth the downsides

How different is it? Aren't they dependent on OpenGL and Nvidia/AMD GPUs/drivers themselves? Wouldn't it make better sense, and be more efficient, to invest in becoming platform-agnostic and optimize that?

I only say so because it seems like imgix could massively benefit from such a move, and maybe look into other solutions which you currently can't consider (custom ARM silicon, PowerVR-based servers, professional AMD/Nvidia GPUs, etc.)


Without going into too much detail about our stack, there's quite a bit more to Core Image than OpenGL+graphics card driver.

Those are important components, and we're not talking about splitting the atom here, but Apple has had a number of smart people working on graphics technologies for a long time now. imgix also has a bunch of smart people working on this, but for a much shorter period of time.


> What if Apple decides to change the Mac Pro form factor for the next iteration?

It seems as though they're prepared for this. Version 2 of their process is already moving away from an existing Apple form factor to a new one. It isn't a leap in logic to expect that, should a new form factor be released, they'll modify their rack cases again.


Worse.

What happens if a random upgrade causes major performance issues, or worse, just flat-out breaks their use case?

Looking at you, PS3 clusters.


"What if Apple decides to change the Mac Pro form factor for the next iteration?"

Given recent history, that's not going to be for a number of years.

"What it says is they are willing to throw a ton of money and effort towards (very cool) custom hardware, but are unwilling to hire a person to write optimized OpenGL shaders for Linux, which would work on pretty much any other server they choose to build/buy/lease/cloudify."

Hardware will almost always cost less than engineers.


> Given recent history, that's not going to be for a number of years.

That is something that no one outside of Apple can say for certain. It doesn't even have to be a major change: something like rearranging ports, or adjusting the taper or extrusions on the chassis, would do it. Those kinds of adjustments happen all the time on consumer hardware, and most people don't notice, but they may be an issue if you're trying to fit into precision-machined slots.

> Hardware will almost always cost less than engineers.

For commodity, off-the-shelf hardware, absolutely. This is anything but, and still requires engineering effort to design, fabricate and assemble. And it's not always about the immediate dollars: sometimes a fundamental reworking means sacrificing short-term savings in favor of the long-term: flexibility, risk mitigation, reduced operational complexity, and cost over successive generations of hardware.


There are definitely risks, but I do want to gently re-iterate that we're not blind to them.

About the only thing that Apple could do that would render this chassis obsolete is to substantially change the exterior dimensions of the Mac Pro. Obviously if it's a different shape, we would have to adjust things.

If they kept the same shape but modified it somehow, the only dimensional change that would be truly tough to accommodate is an increase in circumference. This is the dimension with the least wiggle room built in, and it would cause some headaches. We would probably have to sacrifice some density by removing 1 chassis from the rack.

Otherwise, changes to ports or minor adjustments to the length can all be accommodated in this chassis design.


> There are definitely risks, but I do want to gently re-iterate that we're not blind to them.

Yep, I was just responding to the assertion that it wasn't a risk.

For what it's worth, it does sound like you've thought this through really carefully, and thanks for taking the time to explain so thoroughly and respond to everyone.


I have two conflicting responses to what I am seeing here ...

First, this is awesome. Just like I want to live in a world where people are paying picodollars for cloud storage[1], I also want to live in a world where a bunch of mac pro cylinders are racked up in a datacenter. Very cool.

Second, this is complete silliness. I'm not going to go down the rabbit hole of flops per dollar, but there is no way that you can't build a hackintosh 1U with dual CPUs and multiple GPUs and come out big money ahead. Whatever management overhead gets added by playing the hackintosh cat-and-mouse game is certainly less than building new things out of sheet metal.

Let me say one other thing: right around mid-2000 was when certain companies started selling fancy third-party rack chassis gizmos for the Sun E4500, which was the Cadillac of datacenter servers at the time. Huge specs on paper, way underpowered for the money they cost ($250k+), and the epitome of Sun's brand value. And there were suddenly new and fancy ways to rack and roll them.

This reminds me a lot of that time, and that time didn't last long...

[1] Our esteemed competitor, tarsnap.


Obviously, but running OSX on non-Apple hardware is a violation of its EULA.

I have contacted a lawyer about this (I wanted to run a Hackintosh in the office), and the language is very clear. The author of the software has the full power to license its use to you with any restrictions they find necessary, no matter how ridiculous. If Apple only sells you the license if you promise not to run it on a Thursday, you'll be in violation of their terms if you run it on a Thursday.


> you'll be in violation if you run it on a Thursday

Indeed, there is a JS library open-sourced by Microsoft to decode Excel files, named xlsx.js. In the license, it is written... that it cannot run on an OS other than Windows. That means even though it's JavaScript, a page hosting it cannot be viewed on a Mac or Linux.

Long story short, Stuart Knightley created a clean-room implementation named js-xlsx to do the same thing, without the lawyer strings attached.

https://github.com/stephen-hardy/xlsx.js/issues/8


Thanks for the mention but I only created JSZip[0], the library that powers the zip/unzip of js-xlsx[1], but not js-xlsx itself!

[0] https://github.com/Stuk/jszip [1] https://github.com/SheetJS/js-xlsx


This is correct, and a huge impetus for our use of Apple hardware. We simply cannot risk our business on saving some money at the expense of violating Apple's EULA.


> We simply cannot risk our business on saving some money at the expense of violating Apple's EULA.

I believe people are questioning that Apple hardware/software is a requirement of your business (and that it's not "some money", but a lot of money you'd be saving).

It's difficult to fathom Apple hardware/software being a hard requirement to operate any business (as in, you can't operate without it). Both Windows and Linux have a plethora of image utilities, audio tools, etc...

Sure, OSX might have some optimized image processing stuff, but couldn't the massive savings be used to scale wider with more generic hardware and still come out ahead?


Not at the moment. The cost overhead to OS X is minimized by my approach to datacenter design and operation, and the benefit of OS X's image processing software stack is maximized by the imaging engineers on staff at imgix who are making the most of it.

The math may not work out this way forever, and when it doesn't, we'll make a change.


Apple gave up on the server market. They don't care about number-crunching or scientific computing. Building a server-side business around OS X doesn't make sense to me. Did you try writing your image processing code for Linux and CUDA/OpenCL? Is there anything specific about the OS X frameworks that means you can't develop a non-OS X solution?


Depends on where you live. At least in Germany (and I think the whole EU), EULAs are meaningless.


I can't find that for EULAs proper (it would surprise me, too, as that would allow anyone to pirate any shrink-wrapped software), but EULAs cannot prohibit selling your license; that was upheld for software bought by download, too: http://curia.europa.eu/jcms/upload/docs/application/pdf/2012...:

"Where the copyright holder makes available to his customer a copy – tangible or intangible – and at the same time concludes, in return form payment of a fee, a licence agreement granting the customer the right to use that copy for an unlimited period, that rightholder sells the copy to the customer and thus exhausts his exclusive distribution right. Such a transaction involves a transfer of the right of ownership of the copy. Therefore, even if the licence agreement prohibits a further transfer, the rightholder can no longer oppose the resale of that copy"

You can even buy the right to download future updates:

"Therefore the new acquirer of the user licence, such as a customer of UsedSoft, may, as a lawful acquirer of the corrected and updated copy of the computer program concerned, download that copy from the copyright holder’s website."


Afaik, if the EULA is only shown after the sale has already taken place, it is basically meaningless around here. But IANAL and am only relaying what I heard on the internet.


The lawyer I spoke to was Dutch. There is some unclarity about EULAs sold to consumers; I think the general idea of the consumer protection is that the consumer should have seen the EULA before they paid for the product.

That said, it says on the box that the software is only for Apple hardware, and I think even only as an upgrade for an existing OSX install.

If you're a company, then EULAs are definitely binding, no matter where you are.


Although this depends on the country. If Apple said that you can't run your Mac Pro on Thursday, it would probably break some Australian Consumer Law.

Although the "Only run on Apple Hardware" would probably be fine.


Even if that were not the case, I doubt the performance of the OS X image pipeline is tuned for non-Apple hardware, if it supports it at all.


Non-Apple hardware does not differ in any meaningful way from Apple hardware. The performance of anything in OS X is perfectly tuned for any generic desktop computer. It also supports most hardware straight out of the box.


> any generic desktop computer

Any generic desktop computer with the same hardware. It sounds like they're using Apple's image pipeline, which I imagine would be designed around the specific graphics hardware in the Mac Pro. Sure, it could work on other hardware, but when you know exactly what hardware you're running on, you can do a lot of low-level optimizations you couldn't otherwise do.


Apple supports Intel, AMD, and NVidia GPUs. Their graphics pipeline has, over the years, supported substantially all of the graphics chips produced by those vendors in the past 10-15 years. Their current full feature set may only be supported on GPUs that admit an OpenCL implementation, but that's still every bit as broad as the generic desktop GPU market: about two microarchitectures per vendor. Apple's not getting any optimization benefits from a narrow pool of hardware, for GPUs or anything else. The only benefit they get along the lines of narrow hardware support is that they don't have to deal with all the various motherboard firmware bugs from dozens of OEMs.


The GPUs aren't made by Apple, and I assume that is what this code is using (because otherwise it's a huge waste of money.)


Which hardware is "Apple"?


The small subset of third party hardware that they ship, and optimise for?


(I'm the datacenter manager at imgix, and I wrote the article)

I've alluded to this elsewhere, but the math doesn't add up the way your gut reaction suggests. It's cheaper to go with commodity servers and GPUs, but not by a significant enough margin relative to the engineering costs.

Building things out of sheet metal is actually easier than migrating to Linux, for one big reason: we can pay someone else to do it, because it isn't part of our core competency. In fact, I'm pushing to open source the design of this chassis, in tandem with our design partner (Racklive). Not sure if it will happen, but I'd love to see it.


It strikes me that OS X is a solid server platform. Linux is always a more flexible choice, but if OS X works for you, it works for you. That being said, is sticking cylindrical Mac Pros on their side into square racks really the best solution?

There are 2 problems I see with this design:

1: You are placing the Mac Pros on their side, which may lead to premature bearing failure in the main cooling fan. Apple designed the cooling fan to be as silent as possible, which means they optimized the bearing and the fan to work in a vertical orientation. Bearings designed for thrust (vertical) loads may not work so well if run horizontally for a long time.

2: You are fitting triangular computers, wrapped in round cases, into square boxes, resulting in a significant loss of space density.

Considering that Apple is a huge company that owns huge data centers, combined with the fact that it would be simply stupid for a company that makes its own OS to run anything but that OS, and combined with the above-mentioned problems with using Mac Pros as server "logs" (because you can't call them blades), I would assume that Apple internally has OS X servers designed in the traditional blade configuration.

They may not sell or advertise them, but they MUST have them. Given that you guys are buying a ton of hardware, are located nearby, and would be actively promoting running Apple hardware, wouldn't it be wise to at least approach Apple and see if they would be kind enough to sell you some of those blade form factor servers they simply must have?

I may be completely wrong here, but Apple did brag about how Swift is the new language that's so flexible you can make a mobile app in it, or a desktop app, or even a full-blown social network. If that's the case, they must have some plans for the server market? No?

Anyway, in the end it's a cool design, but I would seriously consider at least stacking the Mac Pros vertically to avoid fan issues. You can actually get a tighter form factor that way as well, unless space is not the issue. And if it's not, then hell, what's wrong with just placing a bunch of Pros on an Ikea shelf in a well air-conditioned room? :)


1. That's certainly a possibility, and one that we won't really have hard numbers on for some time to come. However, the fan is about a $60 part, so provided that we don't have coordinated, catastrophic failures and that they live for at least a year, we're doing alright. Do note that Apple specifically says that the Mac Pro may be operated on its side. https://support.apple.com/en-us/HT201379

2. True, but 1U per server is not bad density by any stretch. For my app servers, they effectively occupy 0.5U; database and storage effectively occupy 1U. So this puts the Mac Pros on par with the larger server class. Were we to deploy renderers in conventional server chassis, a similar system would occupy at least an effective 1U if not a full 2U.

What Apple does internally is, of course, shrouded in mystery. I know some people there, and we talk to people when we can, but they just aren't the kind of company that is going to tell you how they make the sausage.

From what I've heard, and my sense from speaking with them over the years, they do not use OS X in production. They used to use Darwin and Solaris, and now almost exclusively use Linux (presumably Solaris is still around to run Oracle). They did use Xserves internally, but even at their scale it isn't worth building them just for their own use.


Fascinating, I had no idea Apple approves of using Mac Pros on their side. It would be interesting to find out what happens with the fans.

It's also fascinating that they are running Linux internally nowadays for their server-side stuff. What next, will I find out that all of the Microsoft data centers run Debian? :) Considering that they employ all of those Objective-C and Swift engineers, you would think they would want to leverage that workforce to write Obj-C or Swift backend code as well. For most backend tasks, either Swift or Obj-C is as good a language as any other.

Anyway, rackable OS X systems are a missed opportunity for Apple. They could sell them to a company like yours, or to movie production houses, and even design some libraries and make a play for the web app market with Swift. Not sure how successful that last one would be. As for economies of scale, they don't even need to manufacture or design the system: take an off-the-shelf rack-mount server from another manufacturer, fiddle with the casing a bit to give it that Apple feel, and load OS X on it. Perhaps the margins in server-side hardware are way too slim.


> fascinating that they are running Linux internally

Not really. For server software, why not run it on a mature, industry-standard server OS?

> What next, I find out that all of the Microsoft data centers run Debian :)

Apple don't sell server software, not really. MS does.

https://www.apple.com/osx/server/features/


It is entirely possible that they do use ObjC and Swift on Linux.


They sold Xserve rack-mounted Macs for many years. They stopped doing it, presumably because the market was not worth their attention.


I feel quite lucky to have got a few before they stopped selling them. I have three dual-CPU Xserves still running as our main app servers, and they've been some of the most reliable boxes we have.


And I don't believe that they only started using non-Apple Unix operating systems "nowadays". When OS X was too immature, during its development and long maturation, what did you think they were using?


I thought they were using NeXTSTEP, hence all the NS API calls in Obj-C. Back in 1989, I am guessing, NeXT would have built on some kind of Unix system first. Considering that OS X is a descendant of NeXTSTEP, I would think that before OS X, they would use it to run code, servers, etc.


Once upon a time Apple did try to make their own UNIX, A/UX.


"combined with the fact that it would be simply stupid for a company who makes their own OS to run anything but that OS"

Why would you assume that? There are a ton of things that Linux does better than OS X, and it would be extremely stupid for any company, regardless of size, not to use the right tool for the job. For example, even IBM uses, sells, and supports Linux instead of AIX or OS/360 on their line of servers and mainframes. I think that your assumption is just really old-fashioned.

Internally, Apple does use Linux, just as Microsoft uses a blend of OSes, supporting Linux on Azure, for example. I read that they actually use Linux as the host for their Hadoop service on Azure.


At its core, OS X is Unix. In what way would Linux be a better choice? I am not saying that Linux is a worse choice, but for a company that writes an OS as one of its core businesses, it only makes sense to run that OS in as many places as possible. For one, by running OS X as a server OS they would necessarily spend more time on development and improvement of the OS core. This would pay off in the long run by further improving the stability and reliability of OS X.

I am not arguing that OS X is the perfect solution in most circumstances, but it can be a good solution in many situations, especially if you are Apple and have the full source and the capability to adapt the OS as necessary.

Microsoft, especially nowadays, tries to be very cross-compatible, so it's not surprising that Azure supports Linux apps and guests. But Azure RUNS on Windows Server 2008, not Linux, not Unix.


Because it isn't really about the OS, it's about the software. OS X is fine as a server platform, but it doesn't have the same software and support ecosystem for data center usage. Apple dumped that market with the Xserve because it didn't work for them.

Red Hat/Suse/Oracle etc. all sell tailored solutions for that usage that are Linux-specific technology (mostly; some stuff gets ported to other Unix derivatives, but most doesn't). Sure, Apple could do all that too, but they don't want to. It isn't their market, so why sink money and effort into engineering OS X to do it when they can just buy high-quality products ready to go?


What tools is OS X lacking? From my experience, most development and server tools are available natively on OS X. It lacks support for containers, but that would be a worthy addition, and I would say worth spending money and time on. The rest is already there for the most part. Developing their server infrastructure further would allow Apple to make a play for the corporate market. Anyway, it's a silly argument; I thought they ran most of their backend on OS X, and it looks like I was wrong.


It's not small server stuff like Apache that they are missing. It's stuff like distributed failover, exotic driver support, SAN support, management, etc. Big datacenter stuff, the kind of thing companies like Red Hat make.

Those kinds of products are huge investments. Sure, Apple might be able to market towards the enterprise, but they simply don't think there is any money to be made. They used to have, for instance, the Xserve, which tried to stay afloat in that market but made little money. Since they canceled it, Mac OS has only been developed as a small-to-medium server (which it isn't half bad at). But big-time data centers are a different world.

For instance, as a very basic example, does Mac OS support InfiniBand or the more exotic high-speed Ethernet network interfaces? For InfiniBand, the answer is no, and in the other case the answer is "kinda, but not really."


My background: 7 Xserves still in production here in K-12 education, 1000+ users in Open Directory.

In the pipeline: migrating to the new shiny Mac Pros along with OS X Server.

Reasons: Thunderbolt 2 connectivity is amazing and works fine for connecting Fibre Channel RAIDs. OS X Server: though it's correct that the GUI got simplified a bit, it's the same server package, as complex as it always has been, yet easy enough to support. And if configured correctly, it's a solid workhorse for many scenarios: network accounts for lab use, a calendar and contacts server, and with some helper tools it works fine in heterogeneous environments, supporting huge numbers of users via LDAP... just to name some reasons. For 20 bucks, it's the best server OS to support Mac and iOS clients. And because the underlying foundation is UNIX, it's friendly with any networking stuff, such as RADIUS for your WPA2-Enterprise Wi-Fi needs... just to name a few.

One thing that is not quite right in the post above: SAN support exists via Xsan.


Ah, my bad. I thought Xsan had been retired, but it seems not.


> In what way would Linux be a better choice?

Well, it's a supported operating system on machines that aren't cylindrical.


Well, so is OS X. It runs on the Mac Mini, no? :)


Apple are probably using Linux via AWS & Azure: http://daringfireball.net/linked/2014/02/04/icloud-azure


I'm not trying to lure you down the flops-per-dollar rabbit hole... but when I was researching the Mac Pro before purchasing, putting together a computer with GPUs providing the same flop performance was something like 80-90% of the cost of the Mac Pro. The D700s are really good. This is just an anecdote based on my memory, so take it with a grain of salt, but the comparison is probably not as bad as you are imagining.


This is what I found as well, and allude to in other comments in the thread. Particularly when you look at GPGPUs like the NVidia Tesla, they are generally pretty terrible when priced per gflop, because they live in a niche market that wants to handle finite element analysis and supercomputing tasks.

Image processing doesn't require double precision, so we don't need GPUs tuned for it, which means we can use FirePros and similar workstation- or server-grade cards.


> there is no way that you can't build a hackintosh

Have you ever personally run a Hackintosh, full-time for a prolonged period of time?

It's anecdotal, but I can assure you that once you're used to how OS X and Apple hardware work together and never, ever, ever crash, using a Hackintosh is an exercise in frustration.

I had one of the known-best Hackintosh configurations in existence, and it didn't hold a candle to the MBP I had prior to it in terms of "it just works".

Sure, it was cheaper.

Guess what I did when that Hackintosh needed replacing? I walked in and dropped the coin on genuine Apple hardware without a second thought. I have never regretted it, and I'll never go back.


I'm surprised at how much attention 'hackintosh' is getting in this thread. It's a completely naive sub-topic. If you are a US corporation, 'hackintosh' is completely taboo; beyond taboo, it's illegal. If hackintosh is how some corporations run, yikes, let me know so I can never be their customer. If a company is that cheap with their hardware and their morals, I would hate to see how they treat their employees. (It's also naive to think that if a company saves money on their hardware by hacking the shit out of it, that money saved will be siphoned into workers' paychecks.)


Agreed. I had a hackintosh for a year and something always freaking broke or it wouldn't boot up.


I have been using a Hackintosh as my primary rig for going on 4+ years now. It can be frustrating if you are trying to use the very latest hardware, but I find the small issues a decent tradeoff.

It's not a matter of cheaper for me, but a matter of fitting my needs. I don't want to run AMD graphics cards, I need PCI-E, I want lots of internal storage, I want really high single threaded CPU performance.

I can't buy that from Apple in a desktop form-factor. So I have my Hackintosh.

That being said, I don't disagree that Apple hardware is nice. I have a rMBP 13 and intend on replacing it with a newer model Apple notebook soon.


> Have you ever personally run a Hackintosh, full-time for a prolonged period of time?

I did. But now I'm running Yosemite under KVM, with a VT-d motherboard, a dedicated video card, and a USB 3 hub.

You can get to a point where "it just works".


> You can get to a point where "it just works".

For months and months on end of heavy usage without a single restart or issue?

EDIT: to expand a little - I was developing/compiling all day long on my ~2008 MBP with it plugged into an external monitor, network, mouse, kb. I'd close the lid and walk home with it, then watch movies, torrent, develop some more, surf etc. Close lid, and repeat for months on end. The only time I ever restarted was for OS updates, I never had a single app even crash in ~2 years of doing that.

My hackintosh (and the Windows 7 & 8 HP machines here at work) don't hold a candle to that.


> OS X and the Apple hardware work together and never, ever, ever crash, using a Hackintosh is an exercise in frustration.

Well, all I can say is that there are no crashes and no causes for frustration on my end.

"It just works" for me is - I don't have to think about it, it does not get in a way.

As a bonus: the configuration of the VM can be put in a VCS, and the whole virtual disk can be snapshotted and reverted if needed.


That is your experience. A properly set up Hackintosh (with a custom DSDT and the proper kexts as needed) can be as reliable as a genuine Mac.

Regarding Windows machines, I've had desktops that would be used for months at a time (mostly rendering) without a restart and never crash.

A pretty good way to test for reliability is to let Prime95 and Memtest86 run for a week or so and see if anything fails along the line (obviously proper cooling is a must); many consumer machines will fail this test.


You sound pretty confident in it, so here's a question from a perspective more relevant to the discussion:

Would you found a company and make your primary product hackintosh servers? Are you willing to stand behind your 'perfect' configuration and give those customers years of support?

These guys are running a real startup. A vendor with that exact promise and a failed delivery could tank them.


Currently, no.

1. Apple's EULA does not allow OS X on non-Apple hardware.

2. Some major updates can break customizations and require some modifications (bootloaders, etc.) to be re-installed.

I have no problem helping a friend set up a Hackintosh when they want to save a few thousand dollars (I have set up a few already), with the understanding that they need to back up before doing any system updates and should expect things to break after updating.

While Hackintoshes work well for personal use, as long as you are somewhat techy and pick the hardware carefully (putting aside the EULA issue), they do not make sense for anything large-scale.


Sure, you can build such a hackintosh, but you're behind on OS X updates: you can't update automatically, since it may break some of the custom hacks required, so you probably can't use auto-update. Also, some of the drivers may have additional bugs due to the ever-so-slight hardware change. In the end, your system may freeze more often or display weird behaviour. All that apart from the EULA issue that was already mentioned. Maybe imgix already tested such hackintosh systems and realized that they're just not stable enough for continuous high workloads.


terhechte is right about this. To clarify: the reason you can't update automatically is that, although most hardware is supported just fine, there are many minor adjustments needed, mostly to text files but sometimes to binaries as well, to make OS X recognize your hardware. Every time an OS X update comes out, there are updates to drivers that overwrite your modified files.

An example of the sort of hack I'm talking about would be a graphics driver that says it's for the NVidia model E532D. Your graphics card is an E532E. You looked on the internet and found out they are exactly identical except for branding, so you dive into the driver and simply flip a bit to make OS X recognize it.


re your hackintosh idea: I have no idea what the current state of play is, but I looked at this in some detail when they came out, and there was no way you could build a directly comparable machine without spending very similar money. The GPUs alone would kill you, far more than any other component.


Apple fans often justify the prices with this argument, but in reality, there is no reason to buy a machine directly comparable in all aspects, just the ones I care about. And that can be had for a fraction of the cost.

It's unlikely that this company needs all the hardware features of the Mac Pro - probably just the beefy GPUs. Combine that with the power density problems (and higher monthly costs) of this solution compared to modern rack or blade servers, and it's far worse value.

Compare this also to John Siracusa's woes over buying a new Mac: he wants a graphics card powerful enough to game on that will remain useful for a number of years, and he wants to be able to get a retina display. He's stuck for now with a 2007 Mac Pro, as Apple doesn't sell a suitable machine.


Well sure, if you don't need the machine, you don't need it, and that's fine.

When I was comparing hardware, you really couldn't buy comparable cost/flop GPUs for any significant savings (and you'd spend more on some similar builds), which was my point. No idea if that's still true. The idea that you could get the same thing for half the money just wasn't true.

Your comment about John Siracusa's problem doesn't seem relevant to the OP, although it's something to consider if you were buying a machine for home use.


The GPUs in the Mac Pro are significantly more cost effective per gflop than the equivalent AMD Fire Pro server models.


That matches what I found, looking at this a while ago. Not perfect for every application, of course.


I had the exact same thoughts. On one hand, I love the idea of racking Mac Pros and having drawers full of Wera tools to work with, but on the other hand, it just seems like the kind of silly expenditure that comes with too much funding.


I've answered the concern about Wera hand tools before, but to reiterate my position: the cost of a $6 screwdriver doesn't matter if it helps you do your job.

Not having a screwdriver when you need it in a pinch is penny wise and pound foolish. At best you're now out 30 minutes while you drive to Home Depot, potentially during some sort of catastrophe. At worst maybe you simply cannot do the task that you need to do, because it's 2am and you're in Frankfurt. I've worked in a lot of datacenters that didn't stock basic tools to perform tasks, and frankly it sucked.

I keep a log of all of the purchases I made for the current datacenter build. Non-server / non-structural expenses account for less than $3000, which is less than the cost of a single server. This includes storage bins, carts, shelves, workbenches, chairs, supplies and tools.


Am I the only one to think that sheet metal is quite cheap to bend?


Indeed, it is. Servers are expensive; everything built for a datacenter is expensive. It's very easy to make a passive chassis for far less than conventional datacenter gear prices.

A lot of cost estimates have been thrown around (here and elsewhere). The highest that I've seen is $4000 per unit. That is simply absurd. The initial run of prototypes was far less per unit, and this was a small batch made to iron out the kinks. Economies of scale and design tweaks will drive this down even further.

The chassis design is actually quite elegant from a manufacturing standpoint. That's something that I hope will be made evident by follow-up posts that delve into more technical detail.


"but there is no way that you can't build a hackintosh 1U with dual cpus and multiple GPUs and not come out big money ahead."

Licensing costs, and legal costs when you get sued for violating the license?


Three possible reasons I can think of for doing this over using PCs or Linux servers:

1. Using the same operating system as the developers of the software, plus access to Apple's fantastic imaging libraries.

2. The Mac Pro, whilst expensive, is good value for money. The dual graphics cards inside it are not cheap at all. As servers with GPUs are fairly niche, this might actually be a cheaper solution.

3. The form factor. Even if you could create PCs that are cheaper with the same spec, they'll use more power, possibly require more cooling (the Mac Pro has a great cooling architecture), and will take up a lot more space.

I'd be very interested in hearing how they manage updates and provisioning, however. I can't imagine that'd be much fun on OS X but perhaps there's a way of doing it with OS X Server.


(I'm the datacenter manager at imgix, and I wrote this article)

1. Yeah, the OS X graphics pipeline is at the heart of our desire to use Macs in production. It's also pretty sweet to be able to prototype features in Quartz Composer, and use this whole ecosystem of tools that straight up don't exist on Linux.

2. I mentioned this elsewhere already, but it is actually a pretty good value. The chassis itself is not a terrible expense, and it's totally passive. It really boils down to the fact that we want to use OS X, and the Mac Pros are the best value per gflop in Apple's lineup. They're also still a good value when compared against conventional servers with GPUs, although they do have some drawbacks.

3. I would love it if they weren't little cylinders, but they do seem to handle cooling quite well. The power draw related to cooling for this rack versus a rack of conventional servers is about 1/5th to 1/10th as much.

In terms of provisioning, we're currently using OS X Server's NetRestore functionality to deploy the OS. It's on my to-do list to replicate this functionality on Linux, which should be possible. You can supposedly make ISC DHCPd behave like a BSDP server sufficiently to interoperate with the Mac's EFI loader.

We don't generally do software updates in-place, we just reinstall to a new image. However, we have occasionally upgraded OS X versions, which can be done with CLI utilities.


Why not disassemble the cylinders and re-assemble them into a rectangular chassis? I'm sure that would give you a denser layout... Sure, it would void the warranty and hurt resale value, but do you really care?


The whole machine's custom built to fit inside the cylindrical case... the best you could do would be to take the outer case off, and then you've just got a slightly smaller cylinder.

Electrically, everything's built around a round "central" PCB using a custom interconnect. You're not going to be able to reassemble the thing into a rectangle and still get a functioning machine (not without tons of custom design work, at least).

See https://www.ifixit.com/Teardown/Mac+Pro+Late+2013+Teardown/2...


This actually came up during the design phase, and it was tempting. However, you'd have to figure out how to connect the boards together, and you'd have to figure out where to put heatsinks and where to direct airflow.

Since we were able to get the Pros to the point where they effectively occupy 1U, there wasn't really any incentive to do a disassembly-style integration. Maybe if Apple announces the next Mac Pro comes as a triangle.

To your other point about the warranty and resale: we do care, but only a little. I budget machines to have a usable lifespan of 3 years, but the reality is that Apple hardware historically holds significant value on the used market for much longer than that. So if we can recoup $500-1000 per machine after 3 years of service, that would be great.


> The power draw related to cooling for this rack versus a rack of conventional servers is about 1/5th to 1/10th as much.

Do you mean your Mac Pros dissipate 1/5th to 1/10th as much heat as other x86 server hardware, or is there some other factor in play that makes your AC 5-10x more power efficient?


I understand "related to cooling" as Mac Pro's cooling in this setup is 5-10x more efficient.


Sorry, just some off-the-cuff math. We use Supermicro FatTwin systems for Linux stuff, and they run a lot of fans at much higher RPMs to maintain proper airflow relative to the Mac Pro design (which runs one fan at pretty low RPMs most of the time).

As a result, I'm calculating that the Mac Pros draw a lot less power for cooling purposes than the Linux systems due to their chassis design. However, serviceability and other factors are definitely superior on the Supermicro FatTwins.


What's so good about this OS X graphics pipeline that isn't on anything else? I'm now super curious.


Core Image:

https://developer.apple.com/library/mac/documentation/Graphi...

http://en.m.wikipedia.org/wiki/Core_Image

I'm not super familiar with it or the competition, but I assume this is what they're talking about.


So, it's basically the MESA Intel graphics pipeline?

EDIT:

For the downvoters and the unclear, the relevant bit talks about compiling exactly the instructions needed to change the image. As I understand it, this JIT recompilation of pixel shaders is effectively what was implemented in the mesa drivers for Intel chipsets.


Compiling the shaders is a big win, since it allows us to do almost all operations in one pass rather than multiple passes. The service is intended to function on-demand and in real time, so latency matters a lot.
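To make that concrete, here's a toy sketch (emphatically not our production code) of a chained Core Image pipeline. Nothing executes until the final render call, at which point the framework concatenates the filter kernels and compiles them into a single GPU program -- one pass over the pixels:

  import Foundation
  import CoreImage

  // Hypothetical input path; the filter names and keys are standard Core Image.
  let input = CIImage(contentsOf: URL(fileURLWithPath: "/tmp/in.jpg"))!

  // Build up a recipe: a Lanczos downscale followed by a sharpen.
  let scaled = input.applyingFilter("CILanczosScaleTransform",
                                    parameters: [kCIInputScaleKey: 0.25])
  let sharpened = scaled.applyingFilter("CISharpenLuminance",
                                        parameters: [kCIInputSharpnessKey: 0.4])

  // The fused, JIT-compiled shader actually runs here, in one pass.
  let context = CIContext()
  let rendered = context.createCGImage(sharpened, from: sharpened.extent)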


Thanks for replying and thanks for the article too - great read with some fantastic photography.

Really interesting to hear how you provision servers, had no idea that OS X Server came with tools for that, but it certainly makes sense. I wouldn't have thought Apple would have put much time or thought into creating tools for large deployments, but glad to hear that they have.


Thanks, the photography was done by our lead designer, Miguel. I am super impressed at what he's been able to capture in an environment that can easily come off as utilitarian and sterile.

He has some other work online that you might enjoy, not related to Macs or imgix: http://photos.miggi.me/


What's the noise level like with these machines? The typical pizza-box servers aren't exactly quiet.


They're pretty much silent relative to datacenter stuff.

One of the goals of the next revision is to have LED power indicators (maybe plugged in to the front USB ports) or LCD panels built into the front of the chassis. Right now you actually can't tell that the rack is powered on unless you walk to the hot aisle and look at the power readouts, it's that quiet.


Is fan failure reported through management APIs?


We wrote a little tool to probe SMC and graph the output, so we know CPU temp and fan speeds and whatnot. If a fan were to fail, it shows up as 0 rpm speed (in my experience thus far), so we can tell and take the host offline.

Even if you can't see when the fan itself has failed, the CPU core temp should eventually go out of the acceptable range without any forced air at all, which is also helpful to determine that hardware maintenance is required.

So far nothing has actually failed on any of our Mac Pros though. When and if that happens, the entire Pro will get swapped out as one field replaceable unit, and then put in the repair queue.
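The decision logic itself is trivial -- roughly the following, where the two read functions are hypothetical stand-ins for our SMC-probing tool (the real thing talks to the SMC via IOKit, reading keys like "F0Ac" and "TC0P"):

  // Sketch only; readFanRPM/readCPUTemp are not a real API.
  func readFanRPM() -> Int { return 1800 }      // stub value for illustration
  func readCPUTemp() -> Double { return 55.0 }  // stub value for illustration

  func hostIsHealthy() -> Bool {
      // 0 rpm means the fan died; a runaway core temp catches anything
      // the fan reading misses. The 95C threshold is made up.
      return readFanRPM() > 0 && readCPUTemp() < 95.0
  }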


BSDPy, AutoNBI, and Imagr provide a bleeding-edge OS X deployment solution that runs entirely on Linux. OS images can be generated with AutoDMG, and Munki will keep them configured and updated afterwards.

Pop into ##osx-server on freenode if you want to talk to the devs.


Thanks, I was aware of AutoDMG and Munki, but the rest are news to me. We'll check them out.


> It really boils down to the fact that we want to use OS X,

How the hell did you guys get funding to do this? I can't imagine any sane person wanting to put money behind this. Could I have their contact information?


The real question to me is: why would anyone fund doing this in EC2?

Here's the quick math on cost per gflop, including all network and datacenter costs:

  Mac Pro: $5/gflop
  EC2 g2.2xlarge: $21.19/gflop


Not sure where you got ec2 out of my comment.

I also think you need to redo your math on the price per gflop for a Mac Pro, you seem to be at least half the price of my back-of-the-envelope work. Unless you have some crazy good supplier.


Exposing more detail behind this math is unfortunately not something that I'm ready to do, but I'm pretty comfortable with it in broad strokes. EC2 really is that much more expensive, when you factor in things like network bandwidth.

As I noted elsewhere, I mention EC2 because all of our (funded) competitors run there. We can split hairs over whether I could save 10% on Linux systems vs Mac systems, but the elephant in the room is all of the companies trying to make this sort of service work in EC2. You can't do it and make money at the same time. Even if you can make money at small scale, you will eventually be crushed by your own success.

My overriding goal for imgix's datacenter strategy (and elsewhere in the company) is to build for success. To do that, we have to get the economies of scale right. I believe we have done so.


The choice isn't between a Mac Pro and EC2. You can rack up x86 boxes chock full of GPUs far more easily than Mac Pros.


I mention it because AFAIK, all of imgix's direct competitors run in EC2.


How long will it take to amortize the costs of the hardware based on EC2 g2.2xlarge savings?


Not certain if I understand your question, but I'll take a shot at answering:

I expect a useful life span for any datacenter equipment of 3 years. A Mac Pro's list price is about $4000. We pay less, but I'll use public figures throughout. Using equipment leasing, I can pay that $4000 over the 3 year period, with let's say a 5% interest rate and no residual value (to keep this simple). So over 3 years, I spend $4315 in total per machine to get 2200 gflop/s.

Over 3 years with EC2, a g2.2xlarge is $7410 up front (to secure a 57% discount) for 2300 gflop/s.

So I can pay over time, save $3100 over a 3 year period, and probably still resell the Mac Pro for $500 at the end of its life span. That's pretty compelling math to me. There are costs involved with building and operating a datacenter, and that evens things out a bit. What really kills EC2 though is the network bandwidth costs. It is just insane.
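Lining those same numbers up:

  Mac Pro, leased @ 5% over 3 yrs:  $4,315  (2200 gflop/s)
  EC2 g2.2xlarge, 3 yrs reserved:   $7,410  (2300 gflop/s)
  Difference:                       ~$3,100 per machine, before resale value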


It'll be REAL f'in expensive in EC2, that's for sure.


The Mac Pro isn't a great value in the datacenter space. It's a single socket server that's limited to 64 GB of RAM. It's not unusual anymore to throw GPUs in rack mount systems; most of them already have the PCIe bandwidth necessary to support 4 big GPUs so it's often just a matter of getting the right riser cards.

Compare a Mac Pro to an HP DL560 that can hold 4 8-core Xeons (32 cores total) and over 200GB of RAM along with a few FirePro or Titan GPGPUs, and the HP will give you far greater density (though a rack mount system with 4 8-core Xeons and 4 GTX Titans would be a power and cooling nightmare!). That said, the Mac Pro isn't as far behind as I would have expected.

But OS X also kicks ass at multithreading, especially if you use Apple's graphics libraries. It's entirely possible they get much greater performance from OS X than a Linux or Windows based solution could provide.


No sane person is putting GTX cards into a configuration with that level of power density, you'd have reliability issues from day one. This use case is exactly why Nvidia makes Tesla cards.


OS X does not have NUMA. It has some nice libraries for multi-threading, but that doesn't really matter that much when you're saturating your memory bus because the CPUs are doing too many cross-zone memory requests.


NUMA is irrelevant anyway because there are currently no multi-socket OS X machines. Multiple cores on the same package share a memory controller.


While OS X doesn't have something like Linux's NUMA interface to explicitly lock a thread to a core, 10.5 shipped a thread affinity API which allows you to help the scheduler make better placement decisions:

“OS X does not export interfaces that identify processors or control thread placement—explicit thread to processor binding is not supported. Instead, the kernel manages all thread placement. Applications expect that the scheduler will, under most circumstances, run its threads using a good processor placement with respect to cache affinity.

However, the application itself knows the detailed caching characteristics of its threads and its data—in particular, the organization of threads as disjoint sets characterized by their association with (affinity to) distinct shared data.

While threads within such a set exhibit affinity with each other via shared data, they share a disaffinity or negative affinity with respect to other sets. In other words, a set expresses an affinity with an L2 cache and the scheduler should seek to run threads in a set on processors sharing that L2 cache.”

https://developer.apple.com/library/mac/releasenotes/Perform...
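A rough, untested sketch of what calling that API looks like (the affinity tag is only a hint to the scheduler, and THREAD_AFFINITY_POLICY_COUNT is a C macro Swift doesn't import, so the count is computed by hand):

  import Darwin

  // Hint that this thread belongs to affinity set 1; threads sharing a
  // tag should be scheduled onto cores that share an L2 cache.
  var policy = thread_affinity_policy_data_t(affinity_tag: 1)
  let count = mach_msg_type_number_t(
      MemoryLayout<thread_affinity_policy_data_t>.stride
          / MemoryLayout<integer_t>.stride)
  let kr = withUnsafeMutablePointer(to: &policy) { p in
      p.withMemoryRebound(to: integer_t.self, capacity: Int(count)) { ip in
          thread_policy_set(mach_thread_self(),
                            thread_policy_flavor_t(THREAD_AFFINITY_POLICY),
                            ip, count)
      }
  }
  // kr == KERN_SUCCESS when the kernel accepted the hint.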


The datacenter manager chimed in with a comment 9 minutes ago or so defending the decision (with facts) at the same level as your comment, if you want to check that out.


Well, they're fitting 4 sockets and 8 GPUs in the space that's normally used by 4 sockets only.

Also, if you're trying to sync raw images between OS X clients and the cloud, then you're going to need OS X servers in the cloud.

It'll greatly complicate the clients' workflow if they can't use their built-in raw converters.


At the scale imgix is going for, and given they're already doing a lot of custom architecture work, something like Supermicro's GPGPU chassis [1] would allow the same server density, plus use GPUs and CPUs that are 1-2 generations ahead of Apple's offerings. Regarding raw images, you don't need OS X servers to do that, just programs that can read the raw formats. That could be a windows box, or an OpenCL-enabled program like darktable [2]. Really the biggest issue here is engineering time for porting the app, and given the costs of the hardware they're using, I'd take a good hard look at how long it would take to port the software; I'd bet that they'd save money after deploying a few boxes.

[1] http://www.supermicro.com/products/system/2u/2028/SYS-2028GR...

[2] https://www.darktable.org/


(I'm the datacenter manager at imgix, and I wrote this article)

I mentioned this elsewhere, but considering alternative solutions was definitely a part of this project. Supermicro's GPGPU chassis was one of them, as well as some of the 2U FatTwin options (which we use for all of our other system types).

While it would probably have long term cost savings, it definitely isn't something that we could realize within deploying just a few systems. It would be a pretty time- and labor-intensive process on the software side, in order to save labor on the operations side that isn't particularly problematic for us. So maybe in another few generations of our image renderers this will make sense, but it doesn't today.


Every raw image processor is different. You could use a different raw processor (which would complicate a client workflow), but the results would look different from the native OS X client raw image processor.

If you want a solution that exactly matches OS X client, you need OS X.


Right; which is why I was surprised that OS X stacks up as favorably as it does. But I wouldn't call it a great value since it's still much more expensive than a traditional rack mount setup. From a density perspective it's really not bad at all, which is a testament to how well-engineered the Mac Pro is.


I'd say it's not even as expensive as something like IBM's 4-socket rack servers like the x3850 X6.


In fairness, 4 socket servers are pretty serious money. Just the difference in cost for 4 socket capable Xeons alone puts them out of reach for many use cases.

  E5-2658 v2 (dual cpu): $1440 per part
  E5-4650 v2 (quad cpu): $3616 per part
As a result, I stick to 2 socket servers for Linux machines. I think the scaling out paradigm just works out a lot better, particularly for Internet services.


Yeah, except IBM's x86 hardware has always been stupid expensive for no good reason other than it's IBM. And didn't they spin off their xSeries server business to Lenovo once the market settled on HP systems that cost half what IBM's did?


Even an HP system like the 8-socket DL980 G7 is going to be as expensive.

None of these servers are going to be cheap.


Also the DL980 G7 is an unholy piece of crap. HP doesn't know how to build or fix them. I've gone through countless service requests on just a dozen machines or thereabouts.

It's the worst piece of any kind of hardware I've ever used, hands down.


> 2. The Mac Pro, whilst expensive, is good value for money. The dual graphics cards inside it are not cheap at all. As servers with GPUs are fairly niche, this might actually be a cheaper solution.

I'd actually qualify this ever-so-slightly by saying "It's a good value for money if you need the specific features it offers." Which it evidently does for the OP! But many of us would prefer something with, say, one video card, one mainstream-ish desktop processor, and one mechanical hard drive, at way lower cost.


Yes, you're absolutely right. We use conventional Linux servers for application, database and storage for exactly this reason. The Mac Pros are a good value for an image rendering machine, but not for general purpose server stuff (in my opinion).

It's also a bit dear for use as a desktop machine, but it is pretty nice to have one hanging out on your desk for a few weeks.


From the site:

"Building on OS X technologies means we’re dependent on Apple hardware for this part of the service, but we aren’t necessarily limited to Mac Minis. Apple’s redesigned Mac Pro seemed like an ideal replacement, as long as we could reliably operate it in a datacenter environment."


Given all of the effort spent to use Quartz's graphics operations, I was curious as to how they actually performed. I opened an account and tried out the upsampling, and was a bit disappointed.

http://chen.imgix.net/rose.png?w=560

What other upsamplers look like: https://github.com/haasn/mpvhq-upscalers/blob/master/Rose.md

Looking at the other operations available, I fail to see what is done better by Quartz than just by imagemagick.


Upsampling is a pretty unusual operation, though. A more useful comparison would be something like the common website task of scaling an image down to a thumbnail and adding a bit of sharpening and auto contrast/levels, etc.


I assume 99% of users are using downsampling, to get thumbnails.


CHROME ON MAC USERS: Use the imgur links instead of the 0x0 links, some users seem to be reporting crashes related to TLS.

The downsampling also isn't that great.

Original image: https://raw.githubusercontent.com/haasn/cms/master/rings_lg_...

Downsampled with imgix: http://chen.imgix.net/rings_lg_orig.png?w=400

Downsampled with imagemagick: https://0x0.st/1-.png http://i.imgur.com/Nvl7tAm.png

Downsampled with imagemagick, gamma correct: https://0x0.st/1i.png http://i.imgur.com/Hrm4COb.png

Note how the luminance becomes square in the center (step back a bit if you can't see it), and also the edge pixels on the imgix version.
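(For reference, the linear-light version was produced with something along these lines -- exact flags vary by ImageMagick version, and the filenames are obviously illustrative:

  convert rings_lg_orig.png -colorspace RGB -resize 400x400 -colorspace sRGB out.png

i.e. convert to linear RGB, resize, then convert back to sRGB.)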


Just FYI both of those 0x0.st images crash Chrome (Version 42.0.2311.135 (64-bit)) for me and several colleagues...

You don't even have to click the link, just simply get Chrome to load it into memory

edit: looks like something to do with Chrome's pre-fetching and https cert parsing, I think they're literally parsing the "0x0" string within the cert as a memory location


Ironically enough, it looks like the crash only happens on OS X.


Opening the 0x0.st image crashes Chrome on Mac. Just Googling 0x0.st causes whole browser to crash!


What about this downsampler? I think it's a bit better http://res.cloudinary.com/rancloud2/image/fetch/w_400/https:...


That one looks extremely close to the imagemagick one, slightly less aliasing and slightly more blur - it's down to a matter of preference.

Most importantly, it doesn't have the horrible box window that the imgix resampler has.


Warning: clicking on the 3rd link in your comment repeatably crashes Chrome (the entire browser, not just the tab) for me.


As I recall, the basic idea was for something lighter-weight than spinning up and spinning down imagemagick.

Then again, one wonders why not just use FreeImage or something?


Is running imagemagick really that intensive?

Isn't running shaders intensive as it needs to be compiled on the fly and handed off to the GPU driver?


Switching between shaders is orders of magnitude cheaper than spinning up and spinning down a process.


How does FreeImage compare to ImageMagick?


It handles image loading, unloading, and basic manipulation options.

I imagine, especially if the traffic is mainly for downsampling, it'd be sufficient. If it's not, then writing some custom code to do the image transforms on a GPU and bring them back shouldn't be that gnarly--and if you can afford to stick a shitton of macs in a data center, you can afford a graphics programmer to get that done.


Building on OSX seems like it must add a ton of complexity to your workflow, despite getting access to some of Apple's GPU-optimized image code.

Then again, it's often cheaper to throw silicon at problems than people. If you have in-house expertise in Apple's graphics libraries, that might be cheaper than hiring someone who could write the whole thing to run under a lower-cost Linux solution.

Alternatively, OS X might give you automatic access to patent licenses for some of the more expensive image formats.

Have they ever blogged about why they've gone down this path?


(I'm the datacenter manager at imgix and wrote this post)

From a pure hardware perspective, I would love to move this part of the service to Linux systems with GPUs. I spent some time evaluating this before we committed to the Mac Pro solution -- built some prototype hardware and did a cost analysis. It just wasn't the right move, because of the engineering cost for us. OS X's graphics pipeline is really strong, and we've built a lot of cool things with it. There is no analog whatsoever on Linux -- we would have to commit a lot of resources to re-build what we already have, and it would in the best scenario not be a customer-visible change. As a lean startup, we have to be ruthless with the work we do: if it doesn't move the needle for our customers, it's probably not the right thing to do right now.

So instead, I've spent some time (and engaged with partners like Racklive) to get the Mac Pros to be as operationally acceptable as possible. This rack design and the chassis we designed go a long way towards achieving that goal. Airflow is taken care of, and the rack hits my power quota almost exactly (at full load). Cabling and networking and host layout follow our patterns from our conventional server racks. USB and HDMI ports on the front allow me to easily use a crash cart.

The lack of IPMI is my biggest operational headache. We have individual power outlet control and can install the OS over the network, so that's something at least.

The OS itself is also challenging. I'm not a fan of launchd. Finding legitimate information about how to do something on OS X is pretty tough, given that most of the discussions are focused around desktop users (who may be prone to pass on theories of how things work rather than facts). We've gotten it to a point where things work pretty well -- we disable a lot of services, run our apps out of circus, use the same config management system as on Linux, and so forth. We treat the Macs as individual worker units, so they're basically a GPU connected to a network card from the perspective of our stack.
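For a flavor of what that looks like, a circus watcher is just an ini stanza like the following -- a hypothetical sketch, not our actual config (the binary path, process count, and directory are made up):

  [watcher:renderer]
  cmd = /usr/local/bin/render-worker
  numprocesses = 4
  working_dir = /var/imgix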


> who may be prone to pass on theories of how things work rather than facts

This is the biggest nightmare about working with OS X, to me.

Any forum discussion you find on Macrumors or the Apple forums is hilariously misguided with pathetically bad "theories" on why something isn't working and how to fix it.

"Zap the PRAM!" can be found in any/every thread, and that's a mild example.


Zapping the PRAM is a pretty frequent joke around the office.

There are some OS X groups that are more focused on automated deployments for IT type stuff, so those can often be a source of more enlightened discourse, even though it still isn't exactly catering to our niche.


Zap the PRAM is the mac version of "Have you tried switching it off and on again?"


(you should get Apple promotional sponsorship while you're at it. this blog post is great for the mac pro, which is a fantastic piece of hardware.)


This should be at the top of the article, to hush folks like myself who might be curious from the get-go why another approach wasn't taken and why the Mac Pros were selected. Thanks for the insight as to the why.


It was something that I struggled with while writing the article -- you don't want to introduce your solution as "well, this sucks in various ways, but keep reading...". It does boil down to viewing things pragmatically though. The Macs present challenges, but by our math, they're worth it relative to the cost of doing something else and the benefits they provide.

And we actually run a lot of them in production, so I've figured out how to do it and not pull my hair out constantly. That's something I'd like to write on as well, but it would be in a different medium. More technical depth, less pretty pictures.


You do want to introduce your solution with "this has problems A, B and C, but we went with it because of X, Y and Z." It becomes much more interesting, because you're explaining not just your solution, but the problem space. Just the solution is usually less interesting than why the solution, even with its downsides, solves your problem.

By the way, thanks for clearly, completely and patiently responding to people in this thread.


Agreed, I didn't want to sugar coat things either. I think there is a bit of time given over to the downsides, but finding the right balance is tricky, and perhaps I erred by focusing too strongly on the "this is awesome!" side of things.

I want to explore the design decisions around the chassis in a follow-up, and we have one interview in the can already with the industrial designer. Hopefully that article will be a little faster to get out; this one was written about 3 months ago.

The other angle that I'd love to explore in a more in-depth article is how we actually do this stuff in production, and what we've learned about it. This would delve more into the ugly OS X stuff that we painted over to get things nice and pretty in production.


In some cases, you have no choice.

One project I worked on required proprietary software that only ran on OS X: it would take a video, perform waveform analysis on the audio, and output a properly timed closed-captioned master, with the text having been provided separately.

This was of course a small project, and only had a few Mac Minis rack mounted for the task, but I can easily see situations similar where you're tied to the platform for one reason or another.


Another is that Macs have their own raw image converters, and if you're trying to sync photos to the cloud with raw files, and you want to match images with Mac users, you're going to need OS X in the cloud.

If you don't have OS X in the cloud, then you're going to have to write your own raw image converter, and that means you can't sync with the OS X client native raw converter, complicating the workflow...


You're violating OSX license terms if you attempt to run OSX "in the cloud". That was the first idea I wanted to explore when the project was brought up, but for various legal reasons, we just bought the hardware. (Sidenote: Nothing is more frustrating than sitting in a technical meeting where an attorney has joined in, and explains you can't do something legally even though its the perfect technical solution).

Not to say you can't run OSX virtualized...


You actually can (legally) run OS X "in the cloud", but it has to be on Apple hardware... which kind of defeats the purpose in this case :) But it can be quite useful for development / testing purposes.


Yes you can.

http://www.apple.com/legal/sla/

"(iii) to install, use and run up to two (2) additional copies or instances of the Apple Software within virtual operating system environments on each Mac Computer you own or control that is already running the Apple Software, for purposes of: (a) software development; (b) testing during software development; (c) using OS X Server; or (d) personal, non-commercial use."

might be different for each release though


Hey, I'd like to know more about your speech-to-text project, if you're willing to share.


The software product was called MacCaption (http://www.cpcweb.com/download/MacCaptionBetaVerInfo.htm).

A process would drop a video file and a text file in a directory, and then a script would execute the MacCaption binary for each file with a list of parameters to get the result we wanted. A captioned video file, as well as a WebVTT caption file, would be the results of the process. Those were then put into another workflow for dissemination.

Straightforward, although MacCaption was a terrible product to work with. They're owned by Telestream now (www.telestream.net/captioning/compare.htm).


I'd be interested in this, too. I love OS X as a desktop, but always find it fascinating when such a polished "prosumer" solution that generates so much profit turns out to be also the best bang-for-the-buck in a large deployment such as this -- despite the costs of designing your own rack solution!


>>it's often cheaper to throw silicon at problems than people

Except they needed to build and maintain that silicon.


It's really kind of mind-boggling that Apple makes and sells the Pro, which can be upgraded to a really nice high performance GPU workstation, but then doesn't sell the same hardware in rack mountable forms for clusterable computing.

I'm sure they've performed some kind of market analysis for this, but there are enough differences between OS X and Linux solutions that, for people who use HPC solutions (a growing market), a cleaner path from OS X to HPC would be very helpful.


(I'm the datacenter manager at imgix, and wrote this post)

It is pretty frustrating. We've joked around about how Apple will probably announce a new Xserve at WWDC next month, now that we've done the work to get the Pros happy in production.

I don't really see them re-entering this space though. Apple already has a LOT of businesses that they are clearly bored with. iPods, the Thunderbolt Display, their mice, and so on. They seem to be unable to get engineering motivation behind "unsexy" products, which I definitely think a new Xserve would classify as.

Plus, just making it rack mountable wouldn't necessarily cover our use case. What if it didn't have GPUs, or couldn't fit the ones we wanted? A lot of server class GPUs can't fit in a 1U enclosure, they need 1.5U or 2U chassis for airflow and heatsinks and whatnot.


I think the problem isn't "unsexy", but service and scale.

Buyers of rackmounts require a totally different kind of service. It's not just about the iron, it's a largely separate operation from the consumer PC business. You don't exactly take your Xserve to the Genius Bar...

There simply isn't enough demand for Xserves to make it worth the investment for Apple. (As far as I remember, many companies that bought the original Xserves phased them out again because Apple couldn't deliver that kind of service.)


Also true. Apple is not a server company, and they never will be.

I try to lean on vendor support as little as possible, because it does me no good to point a finger at a vendor when something goes wrong -- I just want it fixed, even if I have to do it in-house. But you still need someone to go back to when push comes to shove, and I just don't see Apple being set up for that kind of support.

In fact, Apple isn't even set up for the kind of purchasing that goes along with it. They're a really old, staid organization when it comes to the sales structure. We wound up going with a VAR rather than direct, simply to improve the experience.


I get the sense that very few people that want GPU clusters, want them all the time. Most want, either at the low level, to rent some g2.2xlarge EC2 boxes, or at the high level, to pay a full-stack "render-farm-as-a-service" company.

One can certainly imagine Pixar or whoever having a data-centre of Macs, but at their scale, where they also write all the software for their rendering pipeline, they can easily make that software cross-platform such that developers can test-render on a Mac, then grid-render on a Linux farm without any friction.


There used to be a ton of 3rd-party solutions for racking Apple products. Marathon Computer made a number of them, including replacement top brackets for a G3-style system: http://www.macobserver.com/news/99/june/990610/marathoncompu...

I used to have a tape measure from Marathon that was marked out in U, but I haven't seen it in years. They were a pretty cool company at the time.


They used to. No one bought them.


Apple used to sell the rack mounted Xserve, but discontinued that a few years ago.


Yeah, that was years ago. IIRC there were all kinds of administrative headaches with those, though I never dealt with them myself.


>> but discontinued that a few years ago.

4 years ago.



They were last updated in 2009 though, so those Xserves they were selling in 2011 were pretty dated.


Having worked with some rack mountable Apple servers, I have a feeling they either don't care about having their hardware installed in the datacenter or don't know how to do it well. I believe it's a case of not caring.

I personally felt it was a disgrace to see the Apple logo on Apple's rack mount servers.

Considering how rarely rack mounted equipment is replaced versus consumer hardware, I can see why.


Sonnet sells this workaround that puts 2 Mac Pros in 4U http://www.sonnettech.com/product/rackmacpro.html


(I'm the datacenter manager at imgix, and wrote this post)

Yep, there are lots of other options out there. I considered at least 4 or 5 off-the-shelf ones before committing to designing and building our own.

In Sonnet's case, it is super expensive and no denser than this: http://mk1manufacturing.com/store/cart.php?m=product_detail&...

We're able to achieve twice that density, which put it right on target with where I wanted to be. 44 of 48 switch ports utilized, almost all CDU outlets utilized, and ~13kW out of 14kW utilized under load.


Yeah I saw it was half the density. Too bad they're so expensive. On the plus side... you can actually buy them :)


True, but now you can buy our chassis from our rack integrator (in sufficient quantities), and I'm hoping they'll be able to open source the design. If you're in the market, contact me.


Apple excels at human-facing industrial design and software user experience.

Neither of these is relevant in a rack-mounted environment running heavily custom-written backend/batch software with no user interface.


The first cluster I managed was an SGE cluster running on rack mounted Xserves.


The mechanics of this are pretty neat. But the photography in the article is incredibly distracting. Does every shot have to be at an off-kilter angle? If this is a story about engineering, how about some head-on shots of the engineered thing.

I get that the Mac Pro is a beautiful object, but this isn't about the mac. It's about the rack, and none of these photos let me understand it in one shot.


We added another wider shot of the chassis, but here are two others that didn't go in to hopefully give you a better sense of how the chassis actually looks:

https://www.dropbox.com/s/6jwqwxsu50zvhrw/_1090440.jpg?dl=0

https://www.dropbox.com/s/n7rej2uusmg2a3w/_1110402.jpg?dl=0


I agree, I think. To me the interesting part about putting Mac Pros in a rack is integrating its relatively unusual approach to air flow.

None of these pictures really show how that is accomplished here. In fact many of them seem to be deliberately hiding that specific aspect.


(I'm the datacenter manager at imgix, and I wrote this article)

I had originally intended there to be a totally disassembled chassis with an airflow overlay on top, but it turned into a lot of work. All of the chassis were already assembled by the time we took the pictures.

The high level view is that air is drawn in to the vent on the front right, which has a separate channel that all 4 Pros sit in. They are sealed in place, so the air has to pass into each Pro's air intake to go anywhere. The other side of the chassis is open to the back of the rack and holds each Pro's exhaust vent.

I'll go through the photos we took and see if there's something that would help to illustrate this better.


They're really nice pics and you did a great job explaining everything!


I was bothered by the extreme depth of field in images next to the text.


Do the two pictures at the very end, which are a profile and rear view of the rack, convey what you're looking for?


Apparently you can fit a round peg in a square hole and achieve high efficiency density while you're at it.


Is there a summary somewhere that explains what makes "OS X’s graphics frameworks" worth going to all this trouble?


We haven't done any in-depth technical articles yet, and there's the worry of giving away our secret sauce, but it is something that I'd like to explore more in the future.


This seems really crazy to me. I get it, when you're a startup sometimes you end up with bubblegum and scotchtape solutions like this and sometimes that really makes the most sense on many levels.

But usually you keep that to yourself! To me, this reads sorta like: "Well, it was really hard to find someone who knew how to build a replacement bridge across the creek. We were pressed for time, and Bob didn't know anything about bridges, but luckily, he used to be in the Air Force and we have a bunch of venture capital. ... So we bought a helicopter instead. We only cross a few times a year, so for now we're coming out ahead and it works out for us. Plus the pictures are nice..."


Wow, learn something new every day. I thought everyone who did image processing and cared about performance used NVidia cards for the CUDA libraries. I never knew Apple's [GPU image libraries](https://developer.apple.com/library/mac/documentation/Graphi...) made AMD a competitive choice.

It is much more expensive, though a lot less engineering work, than buying some used Teslas on eBay: http://www.ebay.com/sch/i.html?_from=R40&_trksid=p2050601.m5...

or even brand new


Going with a Tesla solution would actually be way, way more expensive (when bought new). A Tesla K20 is 3520 gflop/s for $2900 (and then you need a server to put it in). The Mac Pro is 3500 gflop/s for $4000.

The Tesla card does have a significant advantage in terms of double precision math, but that isn't the kind of workload we're doing. If we were to go with GPUs on Linux systems, the NVidia GRID card or AMD FirePro server cards are probably a better fit. Or maybe even NVidia Quadro or GTX, although they don't have the proper fan layout and there would be some tears shed over getting the power sockets cabled.


How come these used Teslas are so cheap? (e.g. $150 for an M2090) http://www.ebay.com/itm/NVIDIA-Tesla-M2090-6GB-GDDR5-PCIe-x1...


I theorize that it's because they're server grade equipment, and the used market for server gear is not that large. Most established businesses don't want to risk buying something that straight up doesn't work or will fail later, even at a 50-75% cost benefit. It just isn't worth the time spent dealing with it.

If you're a one person startup, then you do what you have to do to survive. Eventually you get to the point where free stuff actually costs you more than just paying for it in the first place.


I'm really impressed by the quality of engineers on this forum. It's amazing, it seems that just about everyone here knows how to do skuhn's job better than he does!


I think the big picture is that this looks like a joke, and everyone's having fun trying to articulate their feelings about it.


Some previous discussion here about using OS X, mounting Mac Minis, etc: https://news.ycombinator.com/item?id=8138791


Thanks for posting this. This is the second article in the series (although it took forever to finalize).

We're also working on a third, which I think will be in the format of an interview with the Mac Pro chassis's designer.


Seems like you could have gotten higher density going vertical instead of horizontal. It would have been 50% taller (6U instead of 4U) but it could have held 100% more Mac Pros.

A Mac Pro is 9.9 inches tall and 6.6 inches in diameter. 9.9 / 1.75 = 5.65 and 6.6 / 1.75 = 3.77 https://www.apple.com/mac-pro/specs/


This was one of our initial ideas for the design, but it boiled down to an airflow concern. There are some products that do this, such as http://www.h-sq.com/products/mprack/index.html

If you look at how the airflow works on that shelf, I think you'll see why I don't have confidence in that solution. The air paths to each system seem to be based on wishful thinking.

We also didn't need to go that dense after considering each host's power draw at full load. I design towards a 208v/3ph/50a circuit on each rack, and 44 Mac Pros at full load (plus a switch) are about 13.5kW in my testing. So we would need to build for 60A circuits, or not completely fill the rack, to make the vertical orientation worthwhile.
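For anyone checking the math on that circuit, with the standard 80% continuous-load derating:

  sqrt(3) x 208V x 50A = ~18.0 kVA
  ~18.0 kVA x 0.8      = ~14.4 kW usable per rack

So 13.5kW of Mac Pros fits with a little headroom to spare.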


That product you linked to isn't all that bad, provided that you run a front plate to actually block off the rest of the cold aisle and force the air to flow to the hot aisle. But that doesn't seem to be included anywhere. So I agree that it's wishful thinking.

The reality of the power budget makes the most sense really. There's no point in cramming extra units in if you're going to have to rewire for them. Systems engineering!


H-Squared's product isn't terrible by any means, but I see it as phase 1 of at least a 2 or 3 phase solution. If you were running one shelf of Pros in a rack, it wouldn't matter much -- but at 10 racks of 88 Pros each, you'll run into cooling issues unless you put more work into it.

On the topic of density: our chassis was originally specced to support 6 units rather than 4. I vetoed that because it would require a second top-of-rack switch, and would have been too power dense for our current site design.

44 turned out to be the magic number this time around. The design is also flexible enough that if the specification changes dramatically in future Mac Pros, we can tweak as necessary to achieve ideal density.


I suspect that the sideways orientation is motivated by cooling. The mac pro sucks air in the bottom and shoots it out the top, so if you stack them, the top machine is sucking in the bottom machine's hot air. In this orientation, the whole rack is sucking air in from one exposed side and pushing it out the other exposed side; much easier to engineer around.


You're right, it is entirely about airflow.


Except that this would put the hot chambers next to the cold ones of the next layer.


Would it really have killed Apple to keep on making rack mountable OS X servers? I bought and configured quite a few of them back in the day and was quite fond of them.

I realize it's not the Apple Way™ but considering just how bizarre and niche the current trash-can Mac Pro line is, it hardly seems more niche than that.


Apple doesn't even make standalone desktops anymore. The Mac Mini 2014 is half as fast as the Mac Mini 2012!


At this point, Apple should just license OS X for VMWare installations on non-Apple hardware so we can skip all this foolishness.


They would probably run their own OSX Based EC2 competitor before doing this


I'm an Apple fan (well, actually a NeXT fan who went with the flow), and I can think of no way I'd trust an EC2 competitor from Apple given their history with the cloud. I don't doubt you're right, but I just couldn't see using it.


It's deep in the comments and I really like the sentence:

x0054: "You are fitting triangular shaped computers, wrapped into round cases, into square shaped boxes."

And place them horizontally. And without additional fans!

And surprisingly, if you read skuhn's answers here, for them it all still makes sense, financially.

And also surprisingly, Apple says it's OK to use the Mac Pros horizontally:

https://support.apple.com/en-us/HT201379

Fascinating.


I wish that I could share some of the internal cost analysis that was a big part of the decision process; I've dropped breadcrumbs here and there, but exposing the whole thing just isn't a possibility.

Physically, the Mac Pro itself is really densely constructed. Even with some empty space inside our Mac Pro chassis, the solution is effectively 1U per 2 GPUs. That's pretty dense, and it hits our power target for the current site design, so going denser would only lead to stranding space ahead of power (which leads to cost inefficiencies).

But, let's consider some hypothetical configs with list prices that I just looked up. Anyone can do this, and these are not reflective of my costs (you can always do better than list). In reality, I would do a lot more digging on the Linux side, but this is a reasonable config that is analogous in performance and fits into my server ecosystem.

I'm excluding costs that would exist either way: the rack itself, CDUs, top-of-rack switch, cabling, and integration labor are all identical or at least very similar. Density is very similar, so there's no appreciable difference in terms of amortized datacenter overhead.

  Mac Pro config (4 systems in a 4U chassis):
    - 4x Mac Pro ($4600)
      - Intel E5-1650 v2
      - 16GB RAM
      - 256GB SSD
      - 2 x D700
    - Our custom chassis

  Capex only: $0.70 per gflop

  Linux config (4 systems in a 4U chassis):
    - SuperMicro F627G2-FT+ ($4900)
      - 4x Intel E5-2643 v2 - 1 CPU each ($1600)
      - 8x 8GB DIMMs - 16GB each ($200)
      - 8x 500GB 7200rpm (RAID1) HDD - 500GB RAID1 boot drive ($300)
      - 8x AMD FirePro S9050 - dual GPU ($1650)

  Capex only: $1.03/gflop
For comparison, I'll give EC2 pricing as well. It's a tad unfair, since we aren't including on-going maintenance and electricity for the Mac or Linux options -- but 3 years of power is also not nearly equal to the cost of a server. EC2's pricing becomes truly atrocious when you consider network costs -- there is simply no comparison between 95th percentile billing and per-byte billing.

  EC2:
    - g2.2xlarge @ 3 year reserved pricing discount ($7410)

  Instance operating cost only: $3.23/gflop
The Linux config for sure offers many more hardware options and greater flexibility -- and it also requires us to rewrite our imaging stack that is working out pretty well for us and our customers.

I firmly believe that we've made a pragmatic and sensible choice for our image rendering platform today. imgix has a number of smart and talented people constantly evaluating and improving our platforms, and I'm confident we will keep making the right decisions in the future (regardless of how nicely the Mac Pro may photograph).


It is hard not to think that a great deal of time and money would have been saved by removing this dependency: "Parts of our technology are built using OS X’s graphics frameworks, which offer high quality output and excellent performance."


Unless of course you had to re-implement most of OS X's graphics framework on Linux, which could take both a lot of time and money.


The video and imaging pipeline on OS X is light years beyond anything you could roll yourself in a reasonable timeframe. It's really good stuff.


I imagine more advanced things like face recognition and such are not so simple, but from my experience writing a raw converter, a lot of image processing is far simpler than you'd expect.


A lot of the complexity comes down to not just doing the operation, but doing it correctly and quickly. ImageMagick does multiple passes, for instance; this is sub-optimal for both quality and speed.


If that's what you want, that's a fairly ingenious solution. If you look at https://macstadium.com/mac-pro you'd think you could just build the racks with the Mac Pros opening back and front. But each one is 6.6" wide, so you can only fit two across; three would be more than 19" (and you can't put 19" of equipment inside a rack). This way you can squeeze four into the same space.

And since the cylinder is 9.9" high, you could squeeze in quite a few external hard disks as well if you needed to, although that would require some fans to help move air. You perhaps have 6-8 inches of free space; one 3.5" HDD is 5.75" long, so you could stand it on its shorter 4" edge and put in 6, taking 1" from the 19", and probably have two banks of this to arrive at 3 disks per Mac Pro while still leaving 4-6 inches for airflow. It might not be impossible to squeeze in 6 disks per Mac Pro, but the cooling would need to be very impressive for that.


For our use case, the disk portion doesn't live on the Mac Pros (we have Linux systems that act as storage servers).

We toyed with open shelf type solutions that would let us mount the systems front-to-back, but as you noted, anything above 2 Mac Pros across won't fit in a 19" rack. We also thought about mounting 23" rails in our standard cabinet, but ultimately settled on this chassis and orientation.

One of our early design ideas: https://www.dropbox.com/s/15u19aivay4hfiu/2014-01-13%2017.14...


This is interesting -- they actually manage to get greater density out of this setup than many traditional rack mount systems offer.

And to those questioning "Why would you use such expensive systems when commodity hardware is just as fast at half the price?" I would reply that the Mac Pro isn't all that expensive compared to most rack mount servers. If you're talking about a difference of $2000 per server, even across a full rack you're talking less than $100k depreciated over 5 years.

Though Apple is sorely lacking a datacenter-capable rack mount solution. I've always felt they should just partner with system builders like HP or SuperMicro to build a "supported" OS X (e.g. certified hardware / drivers, management backplane, etc.) configuration for the datacenter market. It's kind of against the Apple way, but if this is a market they remotely care about, channel sales is the way to go.


> This is interesting -- they actually manage to get greater density out of this setup than many traditional rack mount systems offer.

If they are GPU limited...

A full 4U rack of Mac Pros is 8 AMD Fire GPUs (6GB VRAM each), 256GB main RAM, 48 2.7GHz Xeon cores (using the 12-core option), and 4TB of SSD. 10G Ethernet via Thunderbolt2.

Let's set aside differences in GPU and processor performance; we're just looking at the base stats. All for about $36K USD, not including the rack itself.

An alternative is the SuperMicro 4027GR-TR:

http://www.supermicro.com/products/system/4U/4027/SYS-4027GR...

So, maxed out, you've got 8 Nvidia Tesla K80 cards (dual GPU), 1.5TB RAM, 28 2.6GHz Xeon cores, and a lot of storage (24 hot-swap bays). That's in a 4U rack too.

Call it about $13K USD for the server, and $5K per GPU. Plus a little storage, call it about $56K USD with 10G Ethernet.

The SuperMicro system is designed to be remotely managed. Each GPU has double the VRAM of the AMD Fire ones (12GB vs. 6GB).

I don't know the exact performance figures of the AMD Fire vs. the Kepler GK210, but I'm sure the Fire isn't nearly as good. And you've got twice as many Nvidia chips on top of that.

At some point its going to get cheaper to re-write the software...


The Tesla K80 didn't exist when I started this project, but to do some quick math:

  K80 gflop/s: 8740
  2x FirePro D500 gflop/s: 3500

K80 runs about $4900 a card, whereas the entire Mac Pro (list price) is $4000. So it's 2.5x the performance at easily 2x the cost if not more.

You're right that there is a cost advantage to going with commodity server hardware, but I don't think it's as great as most people think in this particular case. It's also far from free for us to do the necessary engineering work, and not just in terms of money. It would basically mean pressing pause on feature development at a crucial time in the company's life, and that just isn't the right move.


> 2x FirePro D500 gflop/s: 3500

That 3500 gflop/s figure is for the D700. It is instead 2200 for the D500.

http://www.amd.com/en-gb/solutions/workstations/d-series

> K80 runs about $4900 a card, whereas the entire Mac Pro (list price) is $4000. So it's 2.5x the performance at easily 2x the cost if not more.

The 6GB VRAM version with the D700 costs another $600 USD each.

The K80 has 12GB VRAM per GPU (24GB total per card).

If your code can use the additional memory, that is a huge difference.

Anyway, 3500 gflop/s times 8 is 28 tflop/s for the Mac Pros.

With 8 K80s, you're at 70 tflop/s. Single precision. So that's double the raw performance, and double the memory. Actual performance for a given workload? I wouldn't care to say.

I'd be concerned about thermal issues too. I wouldn't be surprised if the Mac Pro gets throttled after a while when running it hard. The kind of server you can put the K80 in usually has additional (server-grade) cooling.

I'm not disrespecting you guys, if you've got a solution that works, and makes you money, more power to you!

But I stand by my claim that at some point, it will be cheaper to rewrite the software for the render pipeline. Not this year I guess, and who knows, maybe not next year either.


Sorry, I do have this evaluation in a spreadsheet somewhere (except against the Tesla K20, K80 wasn't out then), but I just quickly looked up the Mac Pro specs. We do use the D500, so I should have quoted those gflops. There is a benefit to off-the-shelf GPUs, but I don't see it as a make-or-break kind of situation for imgix right now.

I agree that some day in the future, it does seem like it will make sense to bite the bullet and rewrite for Linux. It probably won't solely come down to a cost rationale though, because there are a TON of business risks involved in hitting pause on new features (or doubling team size, or some combination thereof).

Fundamentally I don't believe in doing large projects that have a best case scenario of going unnoticed by your customers (because the external behavior has not changed, unless you screwed up), unless you absolutely have to.

The real reason to migrate to Linux would have to be a combination of at least three things:

  1. Better hardware, in terms of functionality or price/performance
  2. Lower operational overhead
  3. The ability to support features or operations that we can't do any other way
Much more likely, we would adopt a hybrid approach where we still use OS X for certain things and Linux for other things.


> We do use the D500, so I should have quoted those gflops.

Well now I'm curious as to why you aren't using the D700s. The extra gflops seem like a good value to me. Approximately 60% greater GPU performance for a 15% increase in cost, everything else being equal.

But you probably have to get some work done, rather than answer random questions from the Internet. :-)

Good luck!


It is intriguing, and we have one D700 Mac Pro for test purposes. At the time we ordered the Pros for the prototype rack that is the subject of this article, we found that other parts of our pipeline were preventing us from taking full advantage of the increased GPU performance. So we ratcheted down to the D500.

Keep in mind that either of them offer significantly higher gflop/s per system than the best GPU ever shipped on a Mac Mini (480 vs 2200 vs 3500).

However, we have fixed bottlenecks in our pipeline as we identified them, so it is probably time to re-evaluate. I actually just had a conversation with an engineer a minute ago who is going to jump on this in the next few days. Higher throughput and better $/gflop is always the goal, just have to make sure we can actually see the improvement in practice.


Actually, I realized that we were both wrong on the math.

2200 gflop or 3500 gflop are the specs for just one of the Fire Pro cards. Whoops, I was writing a lot of comments that day.

So a Mac Pro with D700 GPUs has 7000 gflop/s and runs $4600 (list), whereas the Tesla K80 has 8740 gflop/s and runs $4900 or so. Since you still need a whole server to go with the K80, I stand by my thinking that it's not a great deal. We also don't need 12GB of VRAM for our use case, so that's a bit of a waste.

In Nvidia's product line, price/gflop is not at its best in their highest end cards. AWS uses the Nvidia GRID K2, for instance. You're paying a lot for the double precision performance in the Teslas, and imaging doesn't need it.


> it will be cheaper to rewrite the software for the render pipeline.

You don't even have to rewrite it; Linux ImageMagick + OpenCV can handle the use cases of cropping and resizing trivially. They can keep the rest of the code (device mappings and CDN-related, I guess) unless that was implemented using Objective-C (which is another thing that I would think is crazy).


Cropping and sizing are just two (common) operations that imgix can perform. There's a lot of other stuff as well: http://www.imgix.com/docs/reference

Not to say that it's totally impossible to do these types of operations on ImageMagick, but it wouldn't work nearly as well as our current solution does. ImageMagick is a shockingly awful tool for use in server-land for a variety of reasons, some of which are handled better in GraphicsMagick. IM was the bane of my existence at more than one previous company.


Most of the armchair folks on these threads haven't internalized Fred Brooks, especially regarding diminishing returns as the team size grows.

You, as the server guy, hiring a couple of people to figure out how to squeeze another 10% of value out of the system by hacking hardware is not fungible with hiring two more devs to try to avoid racking custom hardware. As if two devs could pull that feat off anyway.


So for $56k USD you get a system that is roughly twice as fast as the 4x Mac Pro solution... but also costs twice as much? The numbers actually don't work out too badly. At least, better than I would have initially assumed. Density is really the only area where the 4x Mac Pro solution loses hands down.


There's also local storage. The SuperMicro box can host a lot more storage locally than the Mac Pros can do easily (you'd need external Thunderbolt2 drives), and it can make sense to run RAID-10 or something to get more speed.


Unfortunately for many who wish otherwise, it's very clearly a market they do not remotely care about.


I think the opposite is true. They made the rack mountable Xserve for almost a decade.

The fact they discontinued it shows that it's clearly not a market - customers didn't want it in enough volume to justify the product.


It's not that there wasn't a market, it's that the Xserve wasn't sold in the way that companies that buy lots of rack mount gear handle procurement. If they certified specific configurations of commodity hardware for OS X and sold them through existing reseller channels as a new SKU, it would be much easier.

That said, there's probably not much of a market for it anymore since we've gone a few years without an OS X rack mount machine and people have found other solutions.


Just last night I was asking why, with so many mobile app companies, no one is building their server side in Objective-C. Wouldn't that have the same personnel advantages as Node.js (supposedly) offers the web world? I haven't looked to see if there is a decent Objective-C web framework, but if it's just an API I guess you don't need too much.

I mean I can think of lots of reasons to stick with Rails/whatever (and that's what I'd do), but I'm surprised it is quite so unheard of. You'd get much better performance. Skipping garbage collection with ARC would be awesome. Coding time is still pretty fast, and it's not as unsafe as C/C++.

Just a crazy idea for anyone about to start a mobile app company. :-)


This seems like a company destined to fail:

1) Massive premium for compute

2) They're at the mercy of Apple, a single completely unpredictable vendor.

3) Apple changes its form factors to the latest "design" far too frequently

4) Apple sucks to manage en masse


>Apple changes its form factors to the latest "design" far too frequently

Going on past history, I don't think they will have to worry about the Mac Pro being updated too often.


1) They did a cost analysis and it was alright, I can't imagine high-end servers with dual GPUs being much cheaper or much more expensive myself.

2) This isn't the 90s where Apple was at risk of folding and going away.

3) They actually don't, other than phones. Go back to all their Pro desktop lines starting with the PowerMacs. The previous Mac Pro case lasted quite a long time and was derived from the PowerMac G5's.


The longevity of the previous Mac Pro form factor definitely gives me hope, although one can never be sure when it comes to Apple.

Consider MacRumors' lifespan data for the various Mac Pro models: http://buyersguide.macrumors.com/#Mac_Pro

The previous form factor (silver tower) lived from August 2006 to December 2013. If we see that kind of longevity out of the black cylinder form factor, I'd be thrilled (although preferably with more internal updates). However, there's nothing stopping us from adapting our design to whatever new models come out.

We have current rack designs for Mac Minis and Mac Pros now, and we can add a third if the need arises.


> 2) This isn't the 90s where Apple was at risk of folding and going away.

No, but they are seriously unpredictable. Just ask anybody who built expensive workflows around Final Cut, only to find out the new version wasn't backwards compatible with project files.


Isn't this ridiculously expensive? Couldn't you achieve the same thing using cheaper PCs?


(I'm the datacenter manager at imgix and wrote this article)

Nope, it's not ridiculously expensive. The GPUs in the Mac Pro are actually an exceptionally good value per gflop (when I last did a comparison a few months ago). GPUs that will work in servers are not cheap -- a comparable AMD FirePro S7000 is $1000, and the Mac Pro has two of them.

There's the cost of having these Mac Pro chassis fabricated, but they're passive hunks of metal with some cabling run. Nothing too expensive there, and economies of scale are on our side.

The Mac Pros are at least 5x more cost effective than Mac Minis (per gflop, total operating cost), and they're substantially more cost effective per gflop than doing something like EC2 G2 instances. My estimate is that moving to Linux servers would save us about 10-15% per gflop, but that could easily be eaten up by the engineering time needed to migrate.
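To make that trade-off concrete, the kind of back-of-envelope involved looks like this; every number below except the 10-15% figure is a made-up placeholder, not our actual spend:

    # Hypothetical break-even for a Linux migration. Only the savings
    # percentage comes from the analysis above; the rest are placeholders.
    annual_hw_spend = 500_000       # made-up fleet spend per year
    savings_rate = 0.12             # midpoint of the 10-15% estimate
    migration_cost = 2 * 150_000    # e.g. two engineer-years, made up

    annual_savings = annual_hw_spend * savings_rate
    print(f"Break-even after {migration_cost / annual_savings:.1f} years")
    # Break-even after 5.0 years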


>Couldn't you achieve the same thing using cheaper PCs?

They say "Parts of our technology are built using OS X’s graphics frameworks, which offer high quality output and excellent performance". So they couldn't achieve the "same thing" in the sense of running their software on racked computers, because it won't run on PCs, and if you're thinking about expense you'd have to consider the cost of making the software run equivalently well on PCs.


I'm really curious about any study/comparison between OS X's graphics frameworks vs. other open/closed source solutions available. How is 'output quality' measured?


At this scale people do all sorts of ridiculously expensive things, like purchase six+ figure software licenses, or buy hardware from OEMs that charge an arm and a leg for "server" quality when it's no different from those charging 33% less.

So that's to say: if there's an actual use for OS X at this scale, it's far less financially crazy than a lot of things that go on in data centers.


All discussed in the article:

"Parts of our technology are built using OS X’s graphics frameworks, which offer high quality output and excellent performance... Building on OS X technologies means we’re dependent on Apple hardware for this part of the service, but we aren’t necessarily limited to Mac Minis."


Ahh gotcha.. Makes sense.


Except it doesn't make sense to base a server architecture around this in the first place, so, no.


In a word: yeah!


This reminds me of the scene from The Matrix where all those humans are suspended in cords as their energy is being drained off by a dark power.


How do you remote-provision OS X? I mean, how do you get to the boot menu to choose network boot? With the crash cart?


The crash cart is the method of last resort, and it came into play a fair amount while we were figuring out how to do this.

The better solution is to have a NetRestore server on the network and configure the Macs to boot from a particular server with the bless and nvram commands (sketched below). Then, on the server, you control whether the image gets served based on some external logic (in my case, an attribute in my machine database).

At the moment, NetRestore is running on an OS X Server machine hanging out on the network, but integrating it with our existing Linux netboot environment is on my to-do list.
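If it helps anyone picture the client side, the bless step boils down to something like this; this is a sketch of the approach rather than our exact tooling, and the server address is made up:

    import subprocess

    # Point the Mac at a NetBoot/NetRestore server. The bsdp:// address
    # below is a made-up example; see the bless man page for details.
    NETBOOT_SERVER = "bsdp://192.0.2.10"

    subprocess.run(["bless", "--netboot", "--server", NETBOOT_SERVER], check=True)

    # After the reboot, the machine requests its image, and the server
    # side decides whether to actually serve it (in our case, based on
    # an attribute in the machine database).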


Have you considered a thin-imaging solution like DeployStudio?


I think our solution is pretty similar in concept. We just deploy the base OS (with enough config done to ssh in later) via NetRestore. Then whatever packages or setup tasks are required are handled in a post-install step using Ansible.


Really interesting post; the Mac Mini rack looks insanely cool.

This has me and a colleague wondering what Apple runs in their data centres. Can anybody hazard a guess? Is it Apple hardware with OS X? Is it custom/third-party hardware running *nix? I seem to remember somebody mentioning Azure not too long ago.


I mentioned this in another comment, but my understanding (without having worked there) is that they do not use OS X or their own hardware internally.

I think that things can be quite different between orgs though, with some adopting a more enterprise-y appliance setup (NetApp Filers, InfoBlox, etc.) and others building services more like an Internet company (Linux servers and open source based services).


A beautiful demonstration by people who like macs and don't understand geometry or economics.


Reminds me of a tube amp.


Right?

Server hardware evaluation by how well it comes across in photos.


This seems like an odd choice to me.

I think OS X has been the best all-round desktop OS for many years now, but what does it give you as a server that a Linux-based system can't, and that's worth the trouble of custom racks, vendor lock-in and high costs?

In fact, if you're working with OpenGL, OS X can be frustrating since it only ever supports an OpenGL version a few years behind the latest release - IMO one of the platform's biggest drawbacks.

Then again, I've seen some pretty strange errors on server machines doing GPU-heavy work on linux machines with Nvidia cards, and it's probably easier to get support on a standardised SW/HW system such as the Mac Pro...


It would be interesting, once these have been in use for a while, to see some stats on the relative temperatures (+ fan speeds) of each machine within the enclosure.

I can imagine the dynamics of 4 machines scavenging air from a single chamber, with an opening on one end, will result in the machines nearer the warm aisle having to work harder to keep cool...

I also wonder what kind of ducting could be implemented to minimize this effect.

Anyway, a very cool project ending in what looks to be a fantastic end product. I wish I had the chance to work on something like this!


I'm graphing this type of data, but it's too early to draw firm conclusions. I was mainly concerned that the upper, rear units might not fit within my desired thermal envelope, but so far it hasn't been an issue at all. This might make it into our follow-up post, which is intended to explore the design decisions behind the chassis.

If heat or airflow did become a problem, we could add fans to the chassis (either in the intake tunnel or along the exhaust vent). The ideal solution is probably to also attach a chimney to the rear of the rack, but so far it hasn't been necessary.


> I wish I had the chance to work on something like this!

I should have added, for anyone reading: you do! imgix is hiring, and if you don't see a job description that appeals to you, just reach out and let's see. I'm writing ops-type job descriptions right now.


Ever since the new Mac Pros came out, I was curious how people were going to solve the rack problem posed by the crazy round design, especially with regard to cooling.


Having just finished rolling out a largish-scale Thumbor implementation, I'm wondering: has anyone compared it to imgix?

The feature set I required is served by both equally, so for me it mostly comes down to performance/DDoS prevention/cost. I'm unlikely to modify what I've just done since it's working fine, but for the future I'd love to know if anyone has experience with this.


I'm getting the how, just not the why


It's designed for believing Apple fans, silly.


Wow, that's cool and yet doesn't seem to be the best power vs cost solution.

Vendor lock-in is a bitch.



We use MK1's Mac Mini shelf (pictured at http://photos.imgix.com/building-a-graphics-card-for-the-int...). I just wasn't in love with their Mac Pro shelf design, so we went a different direction.

Keep in mind also that 8 Pros in 7U works out to 48 in a 44U rack (six shelves, 42U). So it's a pretty similar density, but I don't think it is as ideal in terms of airflow. Instead, it's more ideal for working on the systems individually (such as in a colocation environment), but that isn't a particular concern of ours.


I'm sorry, but those look absolutely ridiculous. That being said, I want one.


Apropos of nothing, but when you stuff a bunch of Mac Pros in a box, they begin to look like enlarged vacuum tubes/capacitors. I can almost imagine them being "screwed into" the rack chassis.


You do have to employ a bit of a twisting motion to remove them, since they have some gasketing in place. I wanted to add a dry ice smoke machine and blue LEDs, but alas...


> Parts of our technology are built using OS X’s graphics frameworks, which offer high quality output and excellent performance.

I'm really curious about any study/comparison between OS X's graphics frameworks vs. other open/closed source solutions available. How is 'output quality' measured? Is it really that great and unique? I find it hard to believe that simple image operations like cropping/blurring/masks implemented in an OS X framework are significantly faster and of 'better quality' than the same algorithms implemented on Linux/Windows. Not to mention that you can accelerate your computation using CUDA/OpenCL on Linux practically seamlessly. But again, a citation is needed here.
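To make 'quality' concrete: for resizing, most of the visible difference comes down to the resampling filter (plus colorspace handling), which is easy to compare yourself on any platform. A minimal sketch using Pillow, nothing OS X-specific, with made-up filenames:

    from PIL import Image

    # Downscale the same image with a cheap filter and a high-quality one;
    # the Lanczos output keeps detail with far less aliasing.
    img = Image.open("photo.jpg")                 # made-up input file
    size = (img.width // 4, img.height // 4)

    img.resize(size, Image.NEAREST).save("cheap.jpg")
    img.resize(size, Image.LANCZOS).save("quality.jpg")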


Very well organized datacenter indeed, clean and well designed.


Thanks! A previous post in the series went into a little more detail about the datacenter itself: http://photos.imgix.com/building-a-graphics-card-for-the-int...


Heads-up: Using the default Iceweasel (Debian's Firefox) User-Agent, the content of the article doesn't show up. If I switch to a Firefox User-Agent, it does.


Thanks, the article is hosted by Exposure (one of our awesome customers), so I'll pass this note along to them.


If this is for anything other than PR, then good God...


What a horrible solution! The power-to-thermodynamic-waste (i.e., heat) ratio must be terrible.


Definitely clicked the link thinking someone had rack-mounted an army of MacBook Pros.


I didn't know about the cylindrical Mac Pros. They are quite attractive.


As a recent Mac Pro owner, those things are beautiful beasts.


Great read. Nice website. Great photos.


Apple's licensing terms force abominations such as this.


Mac porn :)


LOL


This is a joke, right?



