How would you build a video streaming service without a CDN? A CDN doesn't just solve the content distribution problem - it also solves the bandwidth problem.
With a 1Gig connection you can only serve ~45 concurrents streaming 1080p. If you wanted to support more than 45 concurrents, you would need:
1. To distribute content to separate instances
2. A routing infrastructure to reliably route connections to that content.
And then, congrats, you have just built your own CDN - albeit in one datacenter. If you don't want your users on the other side of the country constantly complaining that your site is slow compared to Vimeo, you would have to build it on the other side as well.
If you believe your hodgepodge CDN is going to be much cheaper than CloudFront (on the order of 1 Euro/month, at which point you could support ~1.5M views/month), then you should probably skip the whole video sharing nonsense and get rich becoming a CDN provider :)
My point is, you don't need an expensive global CDN if your end goal is to optimize for ad revenue within the U.S. market. I don't know why you are stuck on the CDN, and it sounds like you are using it as a hammer to achieve scalability.
I'm stuck on the CDN because you need something like a CDN to serve any meaningful number of concurrents. The numbers I quoted for CloudFront were for the US.
If you believe otherwise, please enlighten me on how you can build an HD video streaming service for 100 concurrents without a CDN. Like I mentioned, on a 1GbE connection you can serve a theoretical max of 45 users on one node. Where do you go from there without something that looks like a CDN?
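For anyone who wants the back-of-the-envelope math behind that ~45 figure, here is a tiny sketch; the 22 Mbps per-stream bitrate is my assumption for a high-quality 1080p encode, not a measured number:

```python
# Rough single-node capacity estimate; adjust the bitrate for your own encode.
link_capacity_mbps = 1000        # one 1GbE uplink, ignoring protocol overhead
bitrate_1080p_mbps = 22          # assumed per-viewer bitrate for 1080p

max_concurrents = link_capacity_mbps // bitrate_1080p_mbps
print(max_concurrents)           # -> 45 viewers per node, best case
```

Drop the bitrate to a more typical 5-8 Mbps 1080p encode and you get a few hundred viewers instead, but the conclusion is the same: a single node on a single uplink caps out very quickly.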
Good article. I want to offer my thoughts on a couple of things from my personal experience.
> If the change deployed is small, there is less code to look through in case of a problem. If you only deploy new software every three weeks, there is a lot more code that could be causing a problem.
That's relative. Pushing out an accumulated batch of small changes once a week will most likely have the same end result. The difference is, if you commit more than one breaking change you are dynamically expanding the window of service degradation. One release with three breaking changes is better than three broken pushes.
> If a problem can’t be found or fixed quickly, it is also a lot easier to revert a small deploy than a large deploy.
It is also harder to revert two non-consecutive deploys out of three.
> If I deploy a new feature as soon as it is ready, everything about it is fresh in my mind. So if there is a problem, trouble shooting is easier than if I have worked on other features in between.
Personally, I favor stability over easier troubleshooting. This works for some products and not others.
> It also frees up mental energy to be completely done with a feature (including deployed to production).
Anecdotal evidence, but my team would usually catch and correct bugs when they had to come back and green-light a production push. Engineers that ship clean and fast are rare.
> All things being equal, the faster a feature reaches the customer, the better. Having a feature ready for production, but not deploying it, is wasteful.
Something like this would usually be pushed out manually to align with other non-engineering parties within your company. Pushing broken features to the customer faster is not a good thing, unless you can assume a 100% success rate, which is not possible.
> The sooner the customer starts using the new feature, the sooner you hear what works, what doesn’t work, and what improvements they would like.
This depends on the stage of the company, the product, and your customers.
> Furthermore, as valuable as testing is, it is never as good as running new code in production. The configuration and data in the production environment will reveal problems that you would never find in testing.
All of the environments I govern match production 1:1 (sans data sanitization) in every way possible. I feel pretty strongly about this: if you can't test your code without pushing it into production, you should not be automating anything.
> Continuous delivery works best when the developers creating the new features are the ones deploying them. There are no hand-offs – the same person writes the code, tests, deploys and debugs if necessary. This quote (from Werner Vogels, CTO of Amazon) sums it up perfectly: “You built it, you run it.”
Don't compare a start-up to Amazon. Amazon has dedicated teams to govern the process, and you will most likely not replicate that. Also, hiring people that 'just send it' without doing damage takes money, time and a lot of training. It's expensive.
> One release with three breaking changes is better than three broken pushes.
Why? With each of those pushes you have one thing to check, and if it is messed up, only one thing to revert. With a batched release you have multiple things to check, and you are reverting other people's working stuff when you have to revert. Even worse, you have to choose between reverting slowly (but checking every feature) and possibly having to revert a second time because there was another bug you missed!
> Personally, I favor stability vs. easier troubleshooting. This works for some products and not others.
I don't understand. If you make the same number of changes with the same number of breakages, is packing them into a smaller window really more stable? Even worse, the more time it takes you to fix those breakages, the less uptime you have... the opposite of stability.
> All of the environments I govern match production 1:1 (sans data sanitation) in every way possible. I feel pretty strongly about this, if you can't test your code without pushing it into production, you should not be automating anything
I agree with this! But then why are you advocating for staging to diverge further from production while waiting for a big release?
I skimmed the original story, and one thing that stood out to me is that the author threw a 'bomb' named collaboration into the mix AFTER firing the so-called Rick.
What the author fails to understand is that the problem could have been addressed with collaboration as well.
I caught a 'scrum-master' CTO wannabe who wrote an article (omitting my name) about how happy he is that I was gone because I was hard to manage. This guy showed up, hung his scrum-master certificate on the wall, and promoted a (fresh out of college) junior developer to management because he had been there a year longer than me, then proceeded to enable and reward the most idiotic technical decisions I have ever seen while the rest of us were battling scalability problems.
He never talked to me; he would sit in a room with his new director of engineering (1 year of non-management experience, seriously) trying to come up with a strategy on how to do things, and then try to run with it without getting any feedback.
Obviously, I shot them down, and it got to the point where they would come up with this stuff (no communication) and could not provide any details (why will this work? why is it better?).
At that point I simply quit and never looked back. They probably did collaborate a lot more after that. And by collaboration, I mean circle-jerking whatever ideas sound great and forcing them on junior developers who do not know any better.
I'm sure it was dinged for not using mild-mannered business vocabulary to describe the former workplace. But I've seen some form of this play out in real life enough times to tell your assessment probably isn't too far off, and you were right to get out of there.
Someone who's actually dedicated, and not just willing to say "dedication" in an interview, needs either a position that won't burn them out or the knowledge of when to quit. Having neither is a disaster waiting to happen.
I'm not entirely sure that's true. I have no evidence; I just don't think any company would publish a blog post saying "We had staffing problems, now we're buggered". Instead it's "We had staffing problems, we fixed them, now everything is amazing".
I also don't think things would have gotten this bad if the original dev team could have picked up more of the slack, but I don't have experience in dysfunctional workplaces. Still, the "We had one guy with all the domain experience, then we fired him and all our other devs magically became amazing" thing doesn't sit right with me.
They failed at fighting Rick, they failed at politics, and they quite clearly are not ready to be leaders. They did not fail at programming once the above was solved by the new leader.
Yeah, because that framing is imo wrong too.
1.) In all likelihood, Rick was not that much of a genius, although he might have had somewhat more knowledge initially and probably was not a bad programmer either. There is nothing to show he was a genius.
Here I will make a guess: Rick belittled other people and criticized everything they did, and simultaneously had more initial knowledge. That made everyone assume Rick is a genius and the others are low-skilled. The politics follow from there. Note that the other people did not slack at work, nor take three-hour-long lunches (I assume). Them working overtime would have fixed nothing.
A locally praised genius belittling other employees has predictable consequences: to management, the others look significantly less skilled than they are (yeah, management is to blame too), and those people know it. It also means those people will learn not to have ideas and not to show initiative, because they get insulted for those and because those invariably turn against them. As I said, it is very predictable.
2.) The other people did not turn from people who had learned to be passive into people who suddenly got agency and autonomy. Nothing in their programming skills changed; only the people management of the company's leadership changed.
And note how them not being arrogant still means they get comparatively very little credit for technical talent or skill. (I see a pattern in that.)
It was the same project. Requirement and expectations management was done better. Communication about current status was done better. The descent into trouble described in the original article is not just "too much work". It is bad decision making.
And I stand behind this 100%. Non-technical management can't really do the above, and where they can, it is only because technical staff feed them accurate information about difficulty and scope. Someone making up requirements in his head and then coding something much more complicated is not a mark of genius. It is a mark of someone who doesn't listen.
Great question. I built something similar without a moisture sensor; the key to obtaining proper data is to know (and control) the exact amount of water you elect to distribute.
The self-learning (ghetto A.I.) software that I wrote would try to predict the next (optimal) watering event. After a couple of iterations you can start to tell how much weight the water adds and how fast the plant consumes it. Plus, the soil will usually outweigh the plant by a considerable margin.
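Not the actual code, just a minimal sketch of the prediction step, assuming a scale under the pot and a controller that logs the weight after each watering; the names and numbers are made up for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical readings: (timestamp, pot weight in grams) from a scale under
# the pot, starting right after a controlled watering event.
readings = [
    (datetime(2024, 5, 1, 8, 0), 1450.0),
    (datetime(2024, 5, 2, 8, 0), 1415.0),
    (datetime(2024, 5, 3, 8, 0), 1382.0),
]
dry_weight = 1250.0  # weight observed just before the last watering

def predict_next_watering(readings, dry_weight):
    """Estimate when the pot returns to its pre-watering weight, assuming a
    roughly constant evaporation/uptake rate between readings."""
    (t0, w0), (tn, wn) = readings[0], readings[-1]
    hours = (tn - t0).total_seconds() / 3600
    rate = (w0 - wn) / hours          # grams lost per hour
    if rate <= 0:
        return None                   # not drying out; no prediction possible
    hours_left = (wn - dry_weight) / rate
    return tn + timedelta(hours=hours_left)

print(predict_next_watering(readings, dry_weight))
```

Each watering event gives you another pair of points, so the rate estimate (and therefore the predicted next watering) keeps improving, which is the "couple of iterations" effect mentioned above.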
> Cool service but it ends up being like 2x the price of leasing.
You are comparing apples to oranges. Your $1500/month lease payment (from the past: a different year, package, and car) is an obligation that requires a 24/36/48-month commitment.