The Cloud Is Raining Cash on Amazon, Google, and Microsoft (bloomberg.com)
115 points by adventured on Oct 23, 2015 | hide | past | favorite | 76 comments



Microsoft must be minting money from their subscription and Azure services. We replaced three servers with Office 365 online, Azure virtual machines, and Azure AD. It has several advantages, but cost is not one of them. Anyone who says moving to the cloud saves you money is likely wrong.


If you're just replacing hardware, yeah, cost isn't an advantage. Because you still have your datacenter (or not enough hardware to have ever required one), you still have your IT department (or not enough systems to have ever required one), you still have to have your devops team (or not enough software/instances to have ever required one), etc.

The advantage of a lot of these is that you don't need any of that. You can have just a dev team, nothing else, and be up and running at any scale. At that point cost comparisons may come out in favor of a cloud solution (or at any rate are so close that the tradeoffs become acceptable, hence their popularity).


> You can have just a dev team, nothing else, and be up and running at any scale.

Has anyone done this in practice at significant scale? Every shop I know of with a big cloud deployment has armies of devops people managing the deployment, and further reserve armies on pager-duty standby in case something blows up. They're certainly doing different things than if you had an in-house datacenter (less hardware maintenance, more cloud orchestration), but I'm not convinced the sysadmin/devops headcount has actually gone down.


There are plenty of examples at this point. A few I can think of off the top of my head -

TwitPic relied on the cloud and operated as basically a one- or few-person shop the entire time. At its peak, before Twitter came up with their own solution, it was a relatively large service.

Instagram, Imgur, and Reddit all had/have (Instagram of course moved to FB) small teams operating at vast scale with the help of AWS.

Slack has probably benefited a lot from leaning on AWS for scaling purposes, given their rapid growth. I'd place a bet that they have managed to achieve their scale with a relatively small team managing their infrastructure.


You mean Imgur and Reddit that keep going down and have always been notoriously bad at staying up? Not a good example at all.

Also, TwitPic didn't reduce their sysops; they simply kept the headcount low, and AWS didn't do that for them. A server that serves much more complex stuff to many more people than TwitPic can be run by one person; see Stack Overflow.

There are a lot of people out there who don't realize what can be done with one or two dedicated servers and one or a handful of good developers without ever mucking around with cloud. Cloud can just add yet another point of failure to a small business if it wasn't worth it in the first place.


I'm not sure where you're getting the assumption that Imgur "keeps going down". They maintained a 99.99707% uptime last year.

Sure, it's not 5 9's, but it's a far cry from "notoriously bad at staying up".


If you're just using a cloud provider as a remote VM farm, you'll still need devops, sure.

If you're using an automatic container scaling solution, such as AWS' Elastic Beanstalk, you can still benefit from devops, but I'd argue you don't need it; all the difficulty resides in structuring your application to be able to handle that environment, a problem for your software devs. The devops burden is low, and the time to communicate what a developer needs to the devops is likely going to trump the time taken for the developer to just do it.

If you're looking to use a containerless solution using cloud resources (like AWS' Lambda, API Gateway, and Dynamo to create a CRUD app), you don't need devops. All your difficulty resides in reducing and/or handling state between functions (well, and any other shortcomings in the actual implementation of the service); again, a software problem.
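
To make that concrete, a handler in that style is just a function receiving the API Gateway event. Here's a minimal sketch in Python with boto3; the table name, fields, and routing are made up for illustration, not taken from any real deployment:

  # Hypothetical Lambda handler behind API Gateway, persisting to DynamoDB.
  # The "notes" table, its "id" hash key, and the request shape are illustrative.
  import json
  import uuid

  import boto3

  TABLE = boto3.resource("dynamodb").Table("notes")

  def handler(event, context):
      if event.get("httpMethod") == "POST":
          item = {"id": str(uuid.uuid4()), "text": json.loads(event["body"])["text"]}
          TABLE.put_item(Item=item)
          return {"statusCode": 201, "body": json.dumps(item)}
      # otherwise treat it as a read of a single item by ?id=...
      note_id = (event.get("queryStringParameters") or {}).get("id", "")
      found = TABLE.get_item(Key={"id": note_id}).get("Item")
      return {"statusCode": 200 if found else 404, "body": json.dumps(found)}

There's no box to provision or patch in that picture; deployment and scaling are the platform's problem.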

Basically, Amazon at least (which is what I have the most experience with) seems to be looking to remove the need for devops by creating standard workflows and mechanisms to bind services together with arbitrary code, and to scale out inherently. The remaining devops burden is sufficiently small, and so tightly integrated with the nature of the software involved, that it's often more effective to just have the devs handle it. Sufficient amounts of that glue code might turn it into a devops role in its own right, but what I meant by scaling out is a particular app handling a given amount of load.

In a classical datacenter environment, moving from an app on one box to an app that spans many is something both the software devs and devops have to concern themselves with, but in the cloud it's mostly just a dev consideration; if the app is written to handle multiple copies of itself, spinning up those extra copies should be close to, if not actually, trivial.

That was all I was saying: the move from one to many no longer requires devops, because "how do I make sure all of these boxes are set up properly, get deployed onto, are kept in sync, share load between them, etc." are problems that cloud providers have provided tools to solve, and what they leave out doesn't require dedicated devops to address.


Last I heard, Snapchat runs entirely in the cloud: http://www.businessinsider.com/snapchat-is-built-on-googles-...

There's over 400 million Snapchats sent per day. I'd say that's pretty big scale, all done in the cloud.


I'm not saying you can't run a large business in the cloud, what I'm skeptical of is whether you can run a large business in the cloud without devops staff, having only developers while the cloud 100% takes care of devops for you.


That's not how I interpreted the comment you responded to. I took it to mean that you don't need IT staff, which is true. When you use cloud services, someone else is yanking dead drives and replacing them, someone else has a 24/7 on-call rotation for dealing with power outages and fiber cuts, and someone else is responsible for hardware at all levels and the lower-level software stuff. That's how I understood that comment.

Now you're talking about DevOps, short for Development Operations, which is a developer role, not an IT role. DevOps people automate your build chain and that kind of stuff. No one is saying that using cloud services means you don't need DevOps, though there are some cloud services that will handle at least part of what is traditionally in the realm of DevOps for you.


Hiya. One of the authors of the article here. We also reported this week that Azure alone likely did about $400m in revenue for Microsoft in the June quarter, versus about $1.8 billion for Amazon. http://www.bloomberg.com/news/articles/2015-10-23/microsoft-...


You're not alone there. Microsoft is especially expensive for what you actually get, but even the more common players such as AWS and Rackspace are highly cost-ineffective in many situations. I really wish people would stop calling it 'the cloud' and call it what it is: 'outsourced hardware'.


And a whole bunch of other capabilities. My company couldn't exist if we had to buy & configure our own servers, storage, switching, load balancers, databases, monitoring &c. Characterising the AWS & Azure clouds as "outsourced hardware" demonstrates a real lack of awareness of the services that are available. As a third-party offering, they are about as far from old-style mainframe bureau computing as you can get, in every dimension from the architectural to the commercial.

Cloud computing means the entire DC is programmable, and I use it as such.

Moreover the unprecedented level of automation means I can spend a lot more time on creating customer value rather than faffing around with admin. The shift I've seen in the last three decades* has been phenomenal. Teams aren't smaller but they are vastly more productive.

* yes I have been in tech that long :~


If you host your own servers, or even use PaaS wherever, and you don't have an API and automation framework, then yes - you will see benefits from having those things provided to you. If you already have these things, and you're not spending a lot of time making sure they continue to exist, then you have a lot more freedom than if you lock into the tools that your 'cloud' provider has given you.


What lock-in? I'm deploying standards-based applications to standards-compliant platforms. Cloud services are simply saving us heaps of time & money. There's no loss of freedom, far from it; the disposability of cloud infrastructure provides enormous opportunity for adaptation and change.

In 100% of my experience to date, "cloud lock-in" is a myth trotted out by server huggers and hardware salesmen. Some SaaS providers may be data prisons, sure, but that's a different conversation.

If the economics of establishing and operating off-cloud resources ever made sense for us, we'd go for it, but it looks increasingly unlikely.


One example I've seen: teams that have spent significant time on tools like CloudFormation, which (as far as I know) can only be used on AWS. Another would be when you need decent storage performance. We have several cases where we quite easily use 20-40K IOP/s, and doing that kind of work on the current cloud offerings is very expensive and usually involves significantly increased complexity; if, say, you need this in your database layer, you suddenly have to scale horizontally while maintaining consistency and durability, which is difficult. We can provision 6TB of networked 1M 4K random read IOP/s storage in a highly redundant, load-balanced form that's easy to upgrade and scale for less than $500/month, and it has little to no management overhead. Now, while this may not be what your average startup requires, it opens up a world of opportunity in how and what you do with your data.

Edit: I should note that we do, where appropriate, use 'cloud' services including Rackspace, AWS, and Azure. Azure has had significant performance issues and has a lot of provable downtime, especially due to internal network routing and DNS problems that they fail to acknowledge, and we've found their support to be a joke if you know what you need / are doing; even their own O365 service has weekly outages that can take several minutes to resolve. Rackspace's support has been good, but they do have a lot of small outages, again often network-related. AWS has been alright but very costly unless you're doing either very small deployments or, at the other end of the scale, massive horizontally scalable deployments, and their storage performance is woeful. For our mission-critical or high-performance deployments, using our internally hosted platform is significantly faster and almost always cheaper. Our uptime across the platform is fantastic and it generally 'just works', while we watch our cloud-hosted services suffer from inconsistent performance and service 'blips'.


Huh? Networked I/O at 1M random iops for $500/month? I feel like at least one of these numbers is off. Can you detail your setup a bit more? How many machines or drives are you striped across and how much of your 10gige link are you assuming you can dedicate to this?

(I'm genuinely curious but I feel like there's a missing upfront cost that's not being included here)


True for CloudFormation. When spinning up multi-cloud systems it pays to invest in a tool like Terraform. Currently we are running a hybrid Google Cloud / AWS deployment and it helps keep the infrastructure consistent.


Ever used spot instances? If you use "cloudy" strategies and shop your work to the cheapest AZ's, and only run the jobs when the spot price is right, you can run some pretty nice instances on AWS for cheaper than any other provider. However, if you just want a rack of servers running 24/7, regardless of utilization, you might as well go back to the colo.
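
For instance, shopping the spot market programmatically is only a few lines of boto3; in this sketch the instance type, price threshold, and AMI are placeholders, not recommendations:

  # Hypothetical sketch: find the cheapest AZ for a spot instance type and bid on it.
  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")

  history = ec2.describe_spot_price_history(
      InstanceTypes=["c4.xlarge"],           # illustrative instance type
      ProductDescriptions=["Linux/UNIX"],
      MaxResults=20,
  )["SpotPriceHistory"]

  cheapest = min(history, key=lambda h: float(h["SpotPrice"]))
  print("cheapest AZ:", cheapest["AvailabilityZone"], cheapest["SpotPrice"])

  if float(cheapest["SpotPrice"]) < 0.10:    # only run the job when the price is right
      ec2.request_spot_instances(
          SpotPrice="0.10",
          InstanceCount=1,
          LaunchSpecification={
              "ImageId": "ami-12345678",      # placeholder AMI
              "InstanceType": "c4.xlarge",
              "Placement": {"AvailabilityZone": cheapest["AvailabilityZone"]},
          },
      )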


Compared to what? Running a rack of servers? Sure. Running a 3 datacentre presence with low latency interconnect and enough spare capacity to deal with failure? I'm not so sure.


  Anyone that says moving to the cloud saves you money is likely wrong.
Evidently you've never worked at a place where database disk space costs $31,000 for a terabyte. :)


> $31,000 for a terabyte

What kind of storage are we talking here? Even 1TB server SSDs don't cost nearly that much.


Factor in geo-replication, HA/redundancy, tiers of backup, archiving, ACL management and auditing.. I could well believe it.


Well, you're probably saving on manpower though, right?


Sometimes people do; often they don't. We learnt that dealing with 3rd-party vendors is a very time-consuming and costly ordeal, especially when you realise that they don't care about your business, only about your money.


Makes sense. After EC2 evaporates our cash it has to precipitate somewhere…


Invest in AMZN to pay for your EC2 bill - feedback loop.


Pretty sure Amazon's Q3 earnings can be attributed to the site I launched on EC2 that accidentally had a little bit too much power, load balancing, and backup instances. Fun bill.


Amazon's customer support tends to be pretty awesome and forgiving. I would definitely try giving them a call and see if you can get some of your cash back.


They also put automatic limits on resources that would explode your billing, and alert you if something goes bonkers.


I am very thankful for the Cloud2Butt extension when stories like this appear.


Come on, reporters. IBM has a cloud too!


"At IBM, the future doesn't look so bright. Shares dropped to a five-year low after the company cut its profit forecast earlier this week."


Strange, considering IBM invented the utility model: http://www.computerworld.com/article/2578752/it-management/i...

The thing to remember about the utility model (think electrical utilities) is that once you control the supply you control the price. During this adoption phase the prices will remain low, but they will likely go the way of electricity costs if adoption reaches critical mass. I think right now they want to get the message out that the "cloud" is profitable because I am hearing a lot of "trough of disillusionment" from decision makers.

For the startup or web company the cloud has a lot of attractive use cases, but for the enterprise data center it's a bit more sketchy. And today it's not hard for sysadmins to build their own private cloud. The only thing that will tip the scales for the cloud and attract the enterprise data center is if we can no longer purchase computing hardware. I certainly hope that day never arrives, but look at the common light bulb. We've been fooled before.


I was surprised they neglected to mention SoftLayer. I consider it one of the major players in cloud services, and IBM is expanding it at a crazy rate.


Azure, EC2, and Google Cloud are overpriced in most cases. You can do the same with cheaper (and often similarly reliable) VPSes managed with tools like Puppet/Chef, Consul, Vault, Docker, etc. Plus you avoid lock-in. Your stuff is yours and can be deployed anywhere.

I don't see the allure of the costlier cloud other than the "nobody ever got fired for" factor so common in enterprise purchasing. Amazon is the only one that might have a stronger case for it based on its huge managed service stack, but much of that is not too terribly hard to duplicate with other tools and more a la carte services. There also really isn't a reason you can't use some of Amazon's stuff while also using more commodity options.

On a more principled level I'm starting to see huge proprietary cloud as a potential threat to the open Internet. It's not quite there yet but at some point I could see it, especially with the walled garden plays you see around IoT.


One unique thing about Google Cloud is that most managed services like Load Balancer, PubSub, Datastore, BigQuery etc do not charge you for variability and high availability. AWS and Azure WILL charge you 10x to scale up and another 3x for redundancy. Because Google's managed services are often based on Google's internal stack, they just scale. Good luck scaling Kafka to millions of messages per second - with PubSub you get it out of the box. PubSub, BigQuery, and others are geographically highly available out of the box. These things are difficult to replicate on EC2, and nearly impossible on players like Digital Ocean.
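
As a rough illustration of the programming model, publishing with the Pub/Sub Python client looks something like this (project, topic, and the "seq" attribute are made up; this is a sketch of the API shape, not a benchmark):

  # Hypothetical publisher sketch using the Cloud Pub/Sub Python client.
  from google.cloud import pubsub_v1

  publisher = pubsub_v1.PublisherClient()
  topic_path = publisher.topic_path("my-project", "events")   # placeholder names

  for i in range(1000):
      # attributes must be strings; here we attach a sequence number for the consumer
      future = publisher.publish(topic_path, data=b"payload", seq=str(i))
  future.result()  # block until the last publish is acknowledged

The scaling, replication, and broker management all live behind that publish() call rather than in infrastructure you run.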

Edit: BigQuery, for example, allows you to rent 10,000 cores for 5 seconds at a time. This type of stuff is impossible to do with VMs at all.


Not really related to your bigger point, which I have no opinion on, but Kafka & PubSub have different delivery contracts; Kafka's are generally stricter. Therefore comparing the scalability of the two is somewhat problematic.


Can you elaborate on that? PubSub is a fully-managed service, which means that Google SREs are on call making sure things are up. In addition, Pubsub has "guaranteed at-least-once message delivery". In a sense, Google's SREs guarantee delivery.

PubSub is also a GLOBAL service. Not only are you protected from zone downtime, you are protected from regional downtime. Is there an equivalent to this level of service anywhere in the world?

I'm not too familiar with Kafka's fully managed service, but Kafka-on-VM is a whole other ball game. YOU manage the service. YOU guarantee delivery, not Kafka.


Kafka promises strictly ordered delivery; PubSub promises mostly ordered. The differences between those promises are what drive PubSub's ability to scale throughput and global availability.

From an availability standpoint, I don't disagree with anything you mention, but the difference between the consistency models means that PubSub is solving a different set of problems than Kafka, thus my opinion that comparing them is problematic.


That's a fair point. But remember, Kafka promises this as long as the underlying VM infrastructure is alive and well. PubSub completely removes this worry, or even the concept of VMs.

There are several ways to look at it, but I'd opine that a "mostly ordered" fully-managed truly-global service that's easy to unscramble on the receiving end is more "guaranteed" than something that is single-zone and relies on the health of underlying VMs that YOU have to manage.

edit: Kafka and PubSub have a lot of overlap, but they each have qualities the other one doesn't. I suppose you gotta choose which qualities are more important for you.


If you can design your protocol such that it can work in a mostly ordered fashion, I'd highly recommend that you do. It opens up your choices for technology stack tremendously. But, if you require ordered delivery, your choices start shrinking dramatically.

Also, just so we are on the same page. Kafka is a software product that can be run on hardware or VMs, not a managed service. Possibly, you are thinking of the Amazon Kinesis product which does offer a managed service with strict ordering.


Agree on first point.

No confusion on second point. My argument was that Kafka adds significant complexity and delivery risk because it's software that you must run on hardware/VMs, rather than a fully-managed service. You have to pay a whole lot of eng time to make Kafka truly "guaranteed delivery" because there's always risk of underlying hardware/VM/LB dying.

Pubsub guarantees delivery regardless of what happens with underlying infrastructure. In a sense, the bar has been raised dramatically.


> PubSub is also a GLOBAL service. Not only are you protected from zone downtime, you are protected from regional downtime. Is there an equivalent to this level of service anywhere in the world?

Could you point to some of the documentation that describes more about its reliability model and SLA? I glanced through the documentation and couldn't find out any information about this.

It seems like a service that has this kind of global availability would have to make a trade-off in latency for writes and potentially reads. If it's a multi-region service, then all writes need to block until they're acknowledged by at least a second region, right? It seems like that will add latency to every request and may not necessarily be a good thing. Similarly, at read time, latency could fluctuate depending on which region you query, and whether your usual region has the data yet. I'm just speculating though, not having read any more about the service. It does sound nice to have the choice to fall back to another region and take the latency hit, instead of an outage. On the other hand, regions are already highly available at existing cloud providers (with zones being a more common failure point).

Is PubSub mature? The FAQ suggests that you should authenticate that Google made the requests to your HTTPS endpoint by adding a secret parameter, rather than relying on any form of HTTP-level authentication.

> If you additionally would like to verify that the messages originated from Google Cloud Pub/Sub, you could configure your endpoint to only accept messages that are accompanied by a secret token argument, for example, https://myapp.mydomain.com/myhandler?token=application-secre....

This feels rather haphazard. If I'm exposing an HTTPS endpoint in my application that will trigger actual behavior upon the receipt of an HTTP request, then of course I "would like to verify that the messages originated from Google Cloud Pub/Sub", so that they're not coming from some random bot or deliberate attacker who happened to learn my URL.
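
For what it's worth, the check the FAQ suggests boils down to something like this (a Flask sketch; the route and secret are placeholders, not anything Google documents beyond "check a token"):

  # Hypothetical push endpoint that only accepts requests carrying the shared
  # secret token in the query string, per the FAQ's suggestion.
  import hmac

  from flask import Flask, abort, request

  app = Flask(__name__)
  SECRET_TOKEN = "replace-with-a-long-random-secret"  # placeholder

  @app.route("/myhandler", methods=["POST"])
  def myhandler():
      token = request.args.get("token", "")
      if not hmac.compare_digest(token, SECRET_TOKEN):
          abort(403)  # request didn't come from anyone who knows the secret
      message = request.get_json(force=True)  # the Pub/Sub push payload
      # ... handle the message ...
      return "", 204

It works, but it's essentially a bearer token in a URL, which is why it feels haphazard compared to proper HTTP-level authentication.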


PubSub is not a product I work on, so I apologize for the lack of detail:

- PubSub is used by Google internally to power everything from Android notifications to Hangouts messages. So it's certainly proven.

- A lot of your questions are answered in docs:

https://cloud.google.com/pubsub/

https://cloud.google.com/pubsub/docs

You can always reach out to me, and I can get you in touch with a PubSub SME.


I didn't see anything in the docs that touches on those subjects in detail (I did skim the docs looking for sections and pages that might contain answers to my questions before I posted), but please point me to the page that does if you know of one and I'd be interested to read it! I trust that your perceptions and information are accurate, but cite-able and reference-able information is also valuable.


I see this: https://cloud.google.com/pubsub/subscriber

In the "Delivery contract" section:

"For the most part Pub/Sub delivers each message once, and in the order in which it was published. However, once-only and in-order delivery are not guaranteed: it may happen that a message is delivered more than once, and out of order."

So it is at-least-once delivery as far as I see.
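
So the receiving side has to tolerate duplicates and reordering itself, e.g. something along these lines (a library-agnostic sketch; it assumes the publisher attached its own monotonically increasing "seq" attribute, which Pub/Sub does not do for you):

  # Hypothetical consumer-side dedup/reorder for at-least-once, mostly-ordered delivery.
  seen_ids = set()          # message IDs already processed (bounded in real life)
  pending = {}              # seq -> payload, buffered until its turn comes
  next_seq = 0

  def handle(payload):
      print("processing", payload)

  def on_message(message_id, attributes, payload):
      global next_seq
      if message_id in seen_ids:
          return            # duplicate delivery, drop it
      seen_ids.add(message_id)
      pending[int(attributes["seq"])] = payload
      # flush everything that is now contiguous and in order
      while next_seq in pending:
          handle(pending.pop(next_seq))
          next_seq += 1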


If you need 10,000 cores for 5 seconds, that'd be awesome. My post was aimed at the 95% who don't.


If you have a SQL query that takes 50,000 core-seconds, it's probably more useful to execute that query using 10,000 cores in 5 seconds rather than 10 cores and 5000 seconds, especially if cost is the same. Even better if you never have to spin up a VM or worry about scale. This benefit is tangible and applicable to anyone who runs SQL. The reason this isn't prevalent is because it's economically and technologically prohibitive. BigQuery tips that scale in the other direction.

Point is, higher-level cloud-native services unlock very interesting use cases that are applicable for both small-scale startups and large companies, use cases that are impossible with just VMs.
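
And the "provisioning" step for that burst of cores is essentially just submitting the query; a sketch with the BigQuery Python client (project, dataset, and table names are made up):

  # Hypothetical sketch with the BigQuery Python client; names are illustrative.
  from google.cloud import bigquery

  client = bigquery.Client(project="my-project")

  query = """
      SELECT user_id, COUNT(*) AS events
      FROM `my_dataset.clickstream`
      GROUP BY user_id
      ORDER BY events DESC
      LIMIT 10
  """
  for row in client.query(query).result():   # fan-out/fan-in happens behind the scenes
      print(row.user_id, row.events)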


I'm not really disagreeing (much), but very few things fit those criteria. More common are simple problems so overengineered that they sprawl across two Amazon availability zones when a straightforward implementation could serve the whole customer base off a $20 a month VPS. This is more depressingly common than you think. Also depressingly common is a 50,000 CPU-second operation that could be a 1 CPU-second operation with a few indexes and a smarter algorithm. AWS adds a lot of carbon to the atmosphere cranking through crap code. Trust me, I've seen it.

What Amazon and kin have done is offer developers a new sexy way of over engineering. The AWS stack is the new Java OOP design patterns book. Yes, there is occasionally a time when an AbstractSingletonFactory is a good thing but I guarantee you most of those you see in the wild are not those times.

The real genius was to build a jungle gym for sophomore programmers to indulge their need to develop carpal tunnel syndrome where everything bills by the instance, hour, and transaction. If Sun had found a way to bill for every interface implemented and every use of the singleton pattern they would have been the ones buying Oracle.


Likewise, but I think you're getting into the philosophical, not the practical. You may choose to live in a single-CPU world for your database, but you're simply disqualifying yourself from a whole lot of interesting use cases. Index+algo only solves a sliver of analytic use cases. And, ultimately, I'm afraid you're creating a world where you cannot effectively understand the shape of your data and you cannot effectively test your hypotheses, so you go with gut feel. And, perhaps more importantly, you cannot create software that learns from its data.

Your argument can be summarized thus: do not give people incredible computing capacity at never-before-seen economic efficiency, because they will use it inefficiently. I'm afraid this argument gets made every time the world gets disrupted technologically (horse vs car, anyone?).

Edit: I might argue that if "carbon footprint" is your concern, then economies of scale + power efficiency should tilt the scale towards cloud, no? AWS is certainly on the dirtier side, but there are other, greener clouds.


I'm not saying what you think I am saying. The thread was about how the cloud is immensely profitable, and I'm saying that a good chunk of that is built on waste and monetization of programmers' naive tendencies to overcomplicate problems.

I am not arguing that there are no great use cases for these systems. But I would be willing to bet that those are less than half the total load.

It's like big trucks. How many people who drive big trucks actually need big trucks? Personally I like my company's Prius of an infrastructure. :) And of course we've architected it so it can be a fleet or an armada of Priuses if need be, with maybe just a bit of work but if we get there I will be happy to have that problem.


If availability and scale are not important, and you can tolerate having to engage a human in the event of a hardware failure, then sure a $20 VPS might suffice. You could also run a single virtual machine in one zone in the cloud.

But I think you might underestimate the number of use cases that legitimately benefit from and desire a greater degree of reliability and automation. When one of my machines dies, I don't want to be notified, and I don't want to have to do anything about it. I want a new virtual machine to come online with the same software and pick up the slack. Similarly, as my system's traffic grows over time, I want to be able to gradually add machines to a fleet to handle my scaling problem, or even instruct the system to do that for me.
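
That self-healing, grow-the-fleet behavior is roughly what an auto scaling group gives you; a sketch with boto3 (group name, sizes, and the launch configuration are illustrative, and other providers have equivalents):

  # Hypothetical sketch: a self-healing, manually resizable fleet via EC2 Auto Scaling.
  import boto3

  autoscaling = boto3.client("autoscaling", region_name="us-east-1")

  autoscaling.create_auto_scaling_group(
      AutoScalingGroupName="web-fleet",
      LaunchConfigurationName="web-launch-config",   # assumed to exist already
      MinSize=3,                                     # dead instances are replaced to stay >= 3
      MaxSize=20,
      DesiredCapacity=3,
      AvailabilityZones=["us-east-1a", "us-east-1b", "us-east-1c"],
      HealthCheckType="EC2",
  )

  # later, as traffic grows, just raise the desired capacity
  autoscaling.set_desired_capacity(AutoScalingGroupName="web-fleet", DesiredCapacity=6)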

Plenty of use-cases may not require this, but I'm not convinced that the majority of systems in the cloud do not. Every system benefits from reliability, and it's great to get it cheaply and in a hands-off way. In the cloud, I can build a system where my virtual machine runs on a virtual disk, and if there's a hardware failure, my VM gets restarted on another physical machine and keeps on trucking without my involvement. As an engineer and scientist, I can accomplish a lot more with a foundation like this. I can build systems that require nearly zero maintenance and management to keep running, even over long time scales.

I don't think I disagree with you that some people overengineer systems, but I think I disagree with you about how much effort it requires to achieve solid availability and a high level of automation. It's not a lot of effort or cost, and it's a huge advantage. Once I build a system I never want to touch it again.

A certain segment of users are adopting these technologies because they want to be prepared to scale. One of the advantages of "big data" products even for small use-cases is: all successful use-cases grow over time. If you plan for success and growth, then you may exceed the capabilities of a traditional technology. If you use a "big" technology from the beginning, then you can be confident that you'll be able to solve increases in demand by scaling up, rather than by rearchitecting. As these platforms mature and become easier to use, the scales begin to tip, and they no longer require more engineering time than the alternatives; a strong hosted platform actually requires less time in total, especially when you consider setup and maintenance. Many of these technologies do an excellent job "scaling down" for simple use-cases too. While they have been difficult to use, they're getting easier. For example, MapReduce-paradigm technologies are becoming fairly easy with Apache Hive, and fast with Spark. They're becoming easier to set up due to hosted variants like AWS's ElasticMapReduce or Google Cloud Dataproc, etc.


I'm not saying you shouldn't make the capability available, but I wish more people would stop to ask, "do I need this?"

Since I do data analysis and machine learning (sometimes), a common one I see is people using "big-data analytics" stacks when they don't have anything remotely in the range of a big-data problem. Everyone really seems to want to have a big-data problem, but then it turns out they have like, single-digit gigabytes of data (or less). And they want Hadoop on top of some infrastructure to scale a fleet of AWS VMs, so they can plot some basic analytics charts on a few gigs of data? They would be better served by the revolutionary new big-data solution "R on a laptop". But somehow many people have convinced themselves they really need Hadoop on AWS.

Though I haven't used it yet, BigQuery does seem interesting in comparison, because it at least seems like it doesn't hurt you much. The Hadoop-on-VMs thing is objectionable rather than merely unnecessary, because you get this complex, over-architected system for what is not a complex problem. BigQuery at least seems like, at worst you end up with basically a cloud-hosted RDBMS with scaling features you don't need, which isn't the end of the world as long as the pricing works for you.

edit: Just to clarify, I'm not the person you were replying to, just someone who also has opinions on this. :)


I agree with you :)


One is welcome to use staff that cost $20k per month (e.g. DevOps engineers who understand those four technologies well enough to use them in production) to shave ~50% off one's AWS bill, but one needs minimally two to three of them, so your friendly neighborhood insurance company should probably pay their $15k or whatever a month without blinking.


In many (perhaps most) areas of the US, DevOps staff do not cost $20k per month. For example, 92% of Boston rates are less than half that[1].

This implies that a $40k per month bill for AWS[2] could pay for three DevOps engineers and save approximately $10k per month in the vast majority of the US.

1 - http://www.indeed.com/q-Devops-Engineer-l-Boston,-MA-jobs.ht...

2 - derived from your statement that a staff of $20k per month would save "~50% off one's AWS bill"


Not to quibble, but fully loaded costs and salary costs can be dramatically different.


Quite true regarding company (fully loaded) costs.

IMHO, a reasonable estimate for the fully loaded cost per employee (excluding facilities expenditures) is approximately 1.4 * ES, where "ES" is the employee salary.

The "three DevOps engineers and save $10k" estimation was based on working backward from the 92% of available jobs in Boston being less than half of the stated $20k per month cost. Assuming a Gaussian distribution where 0.5 * $20k per month represents the high end of two standard deviations (since Boston ranks quite highly in S/W salary nationally), most DevOps engineers will be paid roughly half of that as well.

This yielded an estimation of $6.5k per month per DevOps employee or $19.5k per month for three.

Since all of this was off-the-cuff, I figured it best to throw in a bit of "fudge factor" and present a $10k per month savings.

As always, YMMV and I could be completely wrong about all of this :-).


It's not that hard, and if you are so huge that your devops takes a team you have a good problem. If you are a startup then architect your software so it can be scaled but otherwise just stick it somewhere and worry about product market fit. You can decompose and refactor and distribute once your product has enough users for it to matter.

A lot of the difficulty also comes from over engineering and premature scalability obsession. You often just don't need all that. I swear over engineering is the bane of software and devops these days. We've gone from java factoryfactorysingletons to "how many distributed systems fads can I use in one stack?"


Does anyone trust Google enough to build a business on their product? They've shown a propensity for terminating projects when they no longer feel like supporting them. I'd hate to have thousands of servers relying on Google's services and API's and have them say "Oh hey sorry, we're shutting down, you have 90 days to migrate starting tomorrow".


What popular paid service has Google terminated? Any?

We built our recommendation engine for Recent News (https://recent.io/) on Google App Engine in Python. There was some tricky engineering involved in making sure that it would work inside that particular environment, but it's paid off in terms of scalability. We're not worried about Google shutting down App Engine; in fact it's being continuously improved.



I never used Google Maps Engine myself, so I'm not really familiar with it. But it looks like Google was trying to consolidate and improve its paid maps offerings, not discontinue them:

http://www.gearthblog.com/blog/archives/2015/01/google-maps-...

> The move should be seen as Google transitioning customers to already existing alternative products, especially Google My Maps (formerly Maps Engine Lite) which has come of age and now has most of the important features of Google Maps Engine

A better example would be if Google were to discontinue its paid Google Maps API?



Given that Snapchat's business proposition is based on deliberately ephemeral data....


Not anymore. They recently transformed into a PPV DVR.


What cloud platform services have they shut down? Project churn in other parts of the business doesn't mean I shouldn't trust their container service or load balancers.


Wave. Of course that depends on your definition of cloud platform services.


Migrating off Google would be much less painful than trying to migrate off of Amazon's large group of proprietary services. e.g. Google's Container Engine is based on Kubernetes.


> They've shown a propensity for terminating projects when they no longer feel like supporting them.

The only companies that don't terminate projects when it no longer serves a strategic business purpose to support them are companies that instead terminate the projects because they go out of business.

Google is, if anything, unusually good at providing warning and a migration path off a product when they decide to terminate it.


Will have to keep this in mind as our company contemplates BigQuery.


I work on BigQuery. I am a little biased, but "canceled product" is FUD, especially for paid services at Google. Especially for BigQuery, which has no equal.

Feel free to ping me if you want more info :)


Can you let the powers that be know that Google Cloud Storage is expensive compared to CloudFront & S3 :)

Cloud Storage treats retrieving an object through its HTTP URL as a class B XML request, priced at $0.01/10,000 ops. This is 1.3x-2.5x more expensive than CloudFront and S3 respectively. I think this is the only Google Cloud service that is more expensive than the AWS equivalent.


"Don't use Google to run your business because I'm sad Google Reader got shut down"


Yes, more so than Microsoft. And please let go of this Google Reader meme you like to perpetuate. I doubt you even used it.



