
The "insightfulness" of these points is overblown, especially if you've worked at Google, where the solution to "our bloated application server takes 20 minutes to recompile" is "then use our distributed compiler that runs on 100 machines," and the solution to "sometimes a worker machine takes a long time to come up or doesn't come up at all" is "then use redundant workers and fire up 200 machines."

I wouldn't be surprised if Google does pull off some snazzy new real-time architecture to use internally, but so far I think their strategy of farming out execution to huge numbers of crappy machines, while innovative and very successful, has pretty much exactly the problems you'd expect it to have.
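For what it's worth, the "redundant workers" half of that is just hedged execution: run the same task on several machines and take whichever copy finishes first. A minimal sketch in Python, assuming a generic task function and replica count (nothing here is Google's actual infrastructure):

    import concurrent.futures

    def run_hedged(task, replicas=3):
        """Run the same task on several workers; return the first result.

        Stragglers and machines that never come up are masked, at the
        cost of doing the work `replicas` times over.
        """
        pool = concurrent.futures.ThreadPoolExecutor(max_workers=replicas)
        futures = [pool.submit(task) for _ in range(replicas)]
        done, pending = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in pending:
            f.cancel()                # drop copies that haven't started yet
        pool.shutdown(wait=False)     # don't block on stragglers
        return next(iter(done)).result()

The cost is exactly the one you'd expect: the work gets done `replicas` times over whether you needed the insurance or not.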




> the solution to "our bloated application server takes 20 minutes to recompile" is "then use our distributed compiler that runs on 100 machines,"

This is a bit disingenuous. The app servers are bloated because every library is rebuilt from source and statically linked, which avoids deployment problems and keeps apps from running against massively outdated libraries. Continuous integration is expensive, but we can afford it.
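Concretely, the model is "build everything from source at head": the build walks the transitive dependency graph, recompiles whatever is stale, and statically links the lot. A toy sketch of that rebuild rule, with a made-up graph representation and staleness check (not any real build system's API):

    def rebuild(target, deps, is_stale, compile_one, built=None):
        """Rebuild `target` and its transitive deps from source at head.

        deps:        dict mapping each library to the libraries it uses
        is_stale:    predicate: must this library be recompiled?
        compile_one: compiles one library from source
        """
        if built is None:
            built = set()
        if target in built:
            return
        for dep in deps.get(target, ()):
            rebuild(dep, deps, is_stale, compile_one, built)
        if is_stale(target):
            compile_one(target)    # every library rebuilt, not fetched prebuilt
        built.add(target)
        # the final binary then statically links everything in `built`

That is where the bloat comes from, and also why nothing ships against a library that is years out of date.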


You've shifted the problem to the network and to disk storage. Distributing compiles, objects, executables, debug images, packages, etc. can take longer than a compile on a single machine unless you are quite careful.
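A quick back-of-envelope version of that warning, with made-up numbers rather than measurements:

    # When does shipping build artifacts beat just recompiling locally?
    object_bytes    = 2 * 1024**3   # say 2 GiB of objects, debug info, packages
    link_bandwidth  = 100e6 / 8     # 100 Mbit/s link, in bytes/sec
    local_compile_s = 20 * 60       # the 20-minute build from upthread

    transfer_s = object_bytes / link_bandwidth
    print(f"transfer {transfer_s:.0f}s vs local compile {local_compile_s}s")
    # ~172s vs 1200s: distribution wins here, but double the artifact
    # size or share that link among dozens of workers and it doesn't.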



