My company just had to do it today in response to a critical security issue we identified in production.
Saying that nobody should ever do it isn't realistic; sometimes things go wrong on a Friday, and sometimes you can't afford to wait three days and watch them get even worse.
It's a good rule to live by, and like all rules it can be overridden when the circumstances require.
It's good to instill in your team that Friday deployments should be done only in special circumstances, or they become a habit (and people inevitably end up working Saturday). But yes, sometimes it's necessary.
IMO, if you deploy on Friday, you are promising to be available for fixing it on Saturday... which makes it not really a Friday in the sense in which it's meant.
"Don't deploy when you won't be around to support it" is a better description, but it doesn't roll off the tongue as easily.
If it has the potential to take down your site? Unless you have someone who can fill in for you, absolutely.
Really, it will come down to a cost/benefit analysis. Are your chances of the site going down due to being hacked over the weekend higher than the chances of a last minute update taking down the site?
The answer is almost always no (much to a sysadmin's chagrin). If the answer is yes (i.e. another heartbleed), then you are probably going to be working through the weekend anyways.
I'd rather have a robust infrastructure and enough confidence that deploys won't break anything that we simply stop paying attention to the day or time!
Friday is the best day to deploy to production. If something screws up, you still have the weekend to solve the issue before the big bosses are there.
On weekdays you have the added pressure of other work.
(Of course it depends on whether your business is loaded mostly on weekends or evenly across the week. In most businesses I've known, the weekend is the lower-load, lower-traffic period.)
In my experience this strategy has not worked out very well. When we tried to deploy risky changes during low-traffic times, we would end up creating time bombs: scale issues stayed masked until a high-traffic period rolled around.
This also creates a kind of "brain latency" for developers, I think. I'm coming at this from the operations side of things, so maybe my observations are a little biased here. I have observed that if people deploy changes and something breaks immediately, the correlation is very clear and they can generally fix the bug pretty quickly. If they deploy and it breaks 72 hours later, any number of things could be the culprit, especially in a fast-moving environment (times a billion percent if it's a microservices architecture without a strong devops culture, which most of them are). Debugging then takes much, much longer. This is made worse if the person who deployed the change is not quickly available when their thing breaks, and it makes being on call for someone else's unproven feature very stressful.
So instead I think it's better to make sure deployment and build systems are rock solid, and deploys are as accessible and as idempotent as possible. ChatOps-type systems are good here. Then you can roll out big changes during peak traffic and be confident that you can quickly revert if it goes badly, and that the changes are reliable under load if it goes well. I also think it's critically important that big changes sit behind rollout flags, so that you can dial traffic up or down at will. This is also useful when introducing new cache systems or something like a CDN, if you need to warm up slowly.
This is a better approach, I think, than trying to use the time of day to modulate user traffic. I would rather developers control traffic to their feature themselves, with the person deploying the change keeping their hands on the wheel until they are confident they can take them off. That way people can do stuff independently, and everyone can trust everyone else to deploy and still feel safe.
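To make the rollout-flag idea concrete, here is a minimal sketch of a percentage-based flag check. It assumes a simple in-process config; the names (ROLLOUT_PERCENT, is_enabled, the "new_cache_layer" flag) are hypothetical, not from any particular feature-flag library.

    # Minimal percentage-based rollout flag, as described above.
    import hashlib

    # Dial this up during peak traffic, or back to 0 to revert, without a redeploy
    # (in practice this would live in a config store, not a module-level dict).
    ROLLOUT_PERCENT = {"new_cache_layer": 5}

    def is_enabled(flag: str, user_id: str) -> bool:
        """Send a stable slice of users to the new code path."""
        percent = ROLLOUT_PERCENT.get(flag, 0)
        # Hash flag + user so the same user always lands in the same bucket.
        bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
        return bucket < percent

    def fetch_page(user_id: str) -> str:
        if is_enabled("new_cache_layer", user_id):
            return "served via new cache"  # unproven path, small slice of traffic
        return "served the old way"        # known-good fallback

    print(fetch_page("user-42"))

Dialing the percentage up (or back to zero) is what keeps the deploying developer's hands on the wheel, independent of what time of day the deploy happened.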
[Client, Friday 4PM]"Hello, client here. I know it's Friday 4pm but we messed up and did X. Could you deploy Y fix, thanks!"
[Client, Friday 4PM] "We are having a big sale this weekend we told nobody about. Could you quickly deploy a fix where all the product's prices are red and bold? That shouldn't take you long, right?"
[Project manager, Friday 4:45PM] "Hey team, X just released an important security fix for Y platform. I need you to deploy it right now or the client could get hacked."
And my own darn fault for using platform Y from team X that would release a security update on a Friday. Yes, fix it. But if this isn't a one-off, get off of platform Y.
If platform Y regularly releases security updates on Friday, that's poor management by X and I would reconsider using platform Y. 0-day issues of high severity should be the exception.
Reminds me of when my boss got pissed at me because we were discussing moving all production deployments to after 5pm and I replied something along the lines of "I'm not going to be on the hook after hours to fix problems other people caused".
At one place I worked I put a 4pm deadline on deploys, because I was tired of the devs seeing the end of the day as their deadline, tossing it over the fence to me, and then sodding off at 5. Invariably, it led to me trying to hunt someone down at a time when everyone was commuting home or similar. 4pm left enough time to do the deploy, determine if something went wrong, and get the relevant dev started on the fix.
Because missing three days of velocity will definitely cause your growth hacking to fall off the hockey stick and reduce the tempo of your disruption of unique hackathons for people with a left little toe deficiency?
None of that. The reason has nothing to do with "startup culture" or head-in-the-clouds buzzword crap; quite the opposite.
The changes are pushed and working in staging, the tests are passing, QA is done. Why hold off? So that you get your Good Practices™ badge? I'd say it's better not to have to deal with last week's work on a Monday, if at all possible.
I think being down to earth, and keeping your good judgement is key. I don't recommend making world-shattering changes on a Friday, but even then, well, it really depends on the circumstances.
Surely you can envision some scenario where it makes sense?
* It's the holiday weekend and your new website with its curated comparison feature needs to go live
* It's the end of the quarter and having this is the only way you can sign a deal now (enough of your partners will be bound to quarters that this is possible)
* You're in the business of live sentiment analysis from TV video and a critical bug needs to be fixed before this weekend's Presidential Debate or your news channel partner will be pissed
Well, what do you do if you have a project submission on Saturday morning and someone finds a bug on Friday afternoon? You fix it and hope it doesn't break anything else, that's what :P