This post is apropos of a recent topic, "PR process killing morale and productivity" [1].
The Google code readability program is talked about very little outside of Google and, as far as I know, is replicated nowhere else.
Within Google, it is a cult. There is never any discussion or action taken to walk any part of it back. The bar simply gets ever higher, year after year. The C++ style guide itself is massive and authoritarian. Are you working on an embedded system? Too bad, you must follow best practices meant for high-volume production services to a T.
When I got C++ readability 4 years ago, it felt like a hazing. It was an utterly opaque system, accountable to no one, where a random readability reviewer had a 10% chance on each review of insisting that you make some major refactor of code you didn't write.
Even your level of progress was inscrutable, being measured in little "+"'s next to multiple dimensions. Occasionally, you'd get another "+" and wonder when you'd finally be done. And then one day, after many months, thousands of lines of C++, hundreds of review comments, the readability program hive mind opaquely decided that your hazing was done, and you are now free to write whatever code you want, whether it adheres to the style guide or not.
Some languages at Google have more than a year of backlog to even enter the program to start getting readability reviews and make progress towards readability. The answer is always to "add more readability approvers", yet the bar for that is even higher, and the time demands of being such an approver are incompatible with most teams at Google.
In any case, I don't intend to litigate Google's readability program in a public forum. But I provide it as an example of how "style guides" can and will go too far and consume far more resources than necessary. And how there is naturally no review process or escape hatch to roll back excess in this area.
Linting and style guides are not of "critical importance". No business objective will go unachieved because some checkins use tabs and others use spaces.
Whatever problem style issues might cause can be resolved before lunch by running a formatter and committing the result.
> Linting and style guides are not of "critical importance".
This is simply false, as attested by the huge volume of comments in this thread by those with actual professional experience working on real-world software projects.
You're also oblivious to the problem domain, because otherwise you'd understand that the critical problems are not whether a space should be at the left or at the right of a symbol, but all the churn that is required to manually address style problems in PRs.
Try to think about the problem. You post a PR that screws up all formatting. It takes time for a team member to review a PR. Once you start to get reviews, you notice comments pointing out failures to comply with a specific style. Whether you take the passive-aggressive path of waiting for some other team member to review your code or do the right thing and fix the problems you introduced, that requires another round of PR reviews. The time each iteration takes is time your work's merge is delayed. Now think about how many hours per month you waste just because you can't manage to format your code properly.
I’m not sure if I’m understanding you correctly, but how on earth would a pull request even make it to the review state if it fails to lint in the pipeline?
I sort of agree with GP in that the discussions are a waste of time. I also agree with you that you should simply automate it through tools. Styling doesn’t have to be a democracy or a matter of personal preference; all styles work, it’s all about picking one and then forcing everyone to use it. Of course you do it through a much more involved process than I make it sound like here, but ultimately someone with decision-making power is going to have to lock down the process so no further time is wasted on it.
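To make the "automate it" point concrete, here is a minimal sketch of such a pipeline gate. It assumes a C++ repo with a checked-in .clang-format, clang-format on the PATH, and origin/main as the base branch; all of that is illustrative, not any particular team's setup:

    #!/usr/bin/env python3
    """Fail CI if any changed C++ file is not formatter-clean (illustrative sketch)."""
    import subprocess
    import sys

    def changed_cpp_files():
        # Files changed relative to the (assumed) main branch.
        out = subprocess.run(
            ["git", "diff", "--name-only", "origin/main...HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [f for f in out.splitlines() if f.endswith((".cc", ".cpp", ".h"))]

    def main():
        files = changed_cpp_files()
        if not files:
            return 0
        # --dry-run --Werror makes clang-format exit non-zero if a file would be reformatted.
        result = subprocess.run(["clang-format", "--dry-run", "--Werror", *files])
        if result.returncode != 0:
            print("Formatting violations; run clang-format -i on the files above and recommit.")
        return result.returncode

    if __name__ == "__main__":
        sys.exit(main())

With a gate like this, style nits never reach a human reviewer: the author runs the formatter locally, the pipeline enforces it, and the review can focus on substance.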
Easy: don't focus on the cell phone; focus on the driver's focus.
Mandate driver attention monitoring, and require that prolonged inattention (i.e. N seconds continuous, or N seconds of the past 3 minutes) results in escalating consequences, e.g. a warning chime, followed by activation of the hazard lights, followed by cutting the throttle and engaging lane keeping assist if available, and with several minutes continuous inattention, automatically call emergency services if possible.
Ideally, phone manufacturers and car makers could work together so that the driver inattention chime automatically locks the phone and tells the driver to pay attention (or tells the passenger to tell the driver to pay attention).
Hardly big brother to say that you should be required to pay attention when driving, and this all stays on-vehicle until you've been gone so long that it must be a medical emergency.
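As a sketch of how simple the escalation logic could be (every name and threshold below is hypothetical, not any real vehicle API):

    from enum import Enum

    class Action(Enum):
        NONE = 0
        CHIME = 1
        HAZARD_LIGHTS = 2
        CUT_THROTTLE_LANE_KEEP = 3
        CALL_EMERGENCY = 4

    # Hypothetical cutoffs: seconds of continuous inattention before each stage.
    # A rolling-window variant could instead sum inattention over the past 3 minutes.
    STAGES = [
        (3.0, Action.CHIME),
        (6.0, Action.HAZARD_LIGHTS),
        (10.0, Action.CUT_THROTTLE_LANE_KEEP),
        (180.0, Action.CALL_EMERGENCY),  # several minutes: assume a medical emergency
    ]

    def escalation(inattentive_seconds):
        """Return the strongest action whose threshold has been exceeded."""
        action = Action.NONE
        for threshold, stage_action in STAGES:
            if inattentive_seconds >= threshold:
                action = stage_action
        return action

Everything stays on-vehicle until the last stage, matching the "it must be a medical emergency by now" escape hatch above.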
One trick, if you can get away with it, is to ensure that you are always estimating for a fixed scope exclusive of unknown unknowns.
You should not provide an estimate for "feature X implemented", but rather for "feature X engine". If you discover additional work to be done, then you need to add "existing code refactor", "feature X+Y integration", etc. as discovered milestones.
Unfortunately, you need that nomenclature and understanding to go up the chain for this to work. If someone turns your "feature X engine" milestone into "feature X complete" with the same estimate, you are screwed.
------
There is a related problem that I've seen in my career: leadership thinks that deadlines are "motivating".
These are the same people that want to heat their home to a temperature of 72F, but set the thermostat to 80F "so it will do it faster".
I was once in a leadership meeting, where the other participants forgot that I, lowly engineer, was invited to this meeting. Someone asked if we should accept that deadline X was very unlikely to be met, and substitute a more realistic deadline. To which the senior PM responded that "we never move deadlines! Engineering will just take any time given to them!"
Engineering, in that case, gave the time back when I left that team.
Setting the thermostat to 80F WILL bring the room to 72F faster than if you set it to 72F on most ovens/AC devices, unless the thermostat is located far away from the device.
Also, many engineering teams WILL take any time given to them.
But instead of making estimates and plans into hard deadlines (when facing the engineers), managers can make sure the organization is ready for overruns.
And as the estimated completion time approaches, they can remain reasonable and understanding as long as the devs can explain what parts took longer than estimated, and why.
Part of this is for the manager to make sure customers, sales and/or higher level managers also do not treat the planned completion time as a deadline. And if promises have to be made, customer facing deadlines must be significantly later than the estimated completion time.
> Setting the thermostat to 80F WILL bring the room to 72F faster than if you set it to 72F on most ovens/AC devices, unless the thermostat is located far away from the device.
The thermostat is meant to be far away. This isn't a valid analogy if the thermostat is measuring the temperature of the heater rather than the room.
> Also, many engineering teams WILL take any time given to them.
Agree, engineering teams are not single-stage heaters. They can make more progress toward the goal by working harder (in the short term), or reducing quality, or reducing scope.
But holding hours/week, quality and scope equal, engineering teams aren't going to implement faster because the deadline is sooner. If there is actual slack in the schedule, they will tend to increase scope (i.e. address tech debt, quality of life improvements, plan better).
It might seem that engineers take all the time given to them because most engineering orgs tend to oversubscribe engineering (which makes business sense, since engineering is expensive).
> If there is actual slack in the schedule, they will tend to increase scope (i.e. address tech debt, quality of life improvements, plan better).
If all your engineers will make use of most of the slack for such purposes, rather than unproductive activities, you've either been very skillful or very lucky in hiring them (or at least their team leads, etc).
A lot of engineers (myself included) will produce more if there is at least a moderate amount of pressure applied towards delivering something useful relatively soon.
Too much pressure can definitely have the reverse effect, as it introduces more long-term technical debt than is saved in the short term. Also, intermittent periods of low pressure can lead to innovation that wouldn't happen if the pressure were constant.
Still, most tech teams I've worked with will tend to become a bit too relaxed (leading to shorter days, longer breaks, more chatter/social media, etc) if they are presented with delivery dates 1 year in the future for tasks that "feel" like they're only going to take 3-6 months.
Which may very well lead to the task taking 15-18 months instead of the 10-12 months it really takes due to those unexpected complications nobody explicitly thought about.
This also transfers to "agile" development, and in many cases "agile" can make these issues even worse, especially for deliverables that require months to years of effort before anything economically self-sustained can be released. (For instance, if you need to replace a legacy system, the new system isn't delivering net benefit until the legacy system can be shut down.)
> Setting the thermostat to 80F WILL bring the room to 72F faster than if you set it to 72F on most ovens/AC devices, unless the thermostat is located far away from the device.
What logic is that based on?
Most devices are just bang-bang controlled, on or off - so setting it to 80 or 72 makes no difference.
Some rare inverter devices may use PID to ramp down as they approach the setpoint, but that's not common.
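A toy single-temperature model of that on/off behaviour (all rates made up; it deliberately ignores sensor placement and radiant effects):

    def simulate(setpoint_f, start_f=60.0, minutes=120):
        """Bang-bang heater: full power below the setpoint, off above it."""
        temp, history = start_f, []
        for _ in range(minutes):
            heater_on = temp < setpoint_f
            temp += 0.5 if heater_on else -0.1  # crude heating / heat-loss per minute
            history.append(temp)
        return history

    # With the sensor measuring the same air the heater warms, the minute at which
    # the air first reaches 72F is identical for a 72F and an 80F setpoint: the
    # heater is simply "on" either way until 72F is passed.
    t72 = next(i for i, t in enumerate(simulate(72.0)) if t >= 72.0)
    t80 = next(i for i, t in enumerate(simulate(80.0)) if t >= 72.0)
    assert t72 == t80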
I'm going to assume we're talking about an oven below, but the principle also applies to AC:
A thermostat attached to a device will measure the temperature near the device, which is typically a bit higher than elsewhere in the room, even if the sensor is at the air intake.
Also, even when the air temperature in a room reaches 72F, the walls may still be cold. This means that the temperature experienced by a person in the room will be lower than 72F, since the person will be exposed to less infrared radiation than if the room had been at 72F for a longer period of time.
So, if the goal is to reach a stable 72F (as felt by a human), the fastest way is to turn it to maybe 80F, and then turn it down when the temperature feels about right, or even a bit later (due to the thermal mass in the walls, furniture, etc).
If instead the temperature is set to 72F from the start, the oven will start to switch on and off quite frequently as the air near the sensor reaches ~72F, and the felt temperature in the room will approach 72F asymptotically.
I live in an old house in a place that can get very cold, and I know this first hand.
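Here's a toy two-node (air + walls) version of that argument; every coefficient is invented purely for illustration, not a calibrated model of any real house:

    def simulate(setpoint_f, hours=6, dt_min=1):
        """Bang-bang heater on an air sensor, with walls that warm up slowly."""
        air, wall = 60.0, 55.0  # cold start, colder walls
        felt = []
        for _ in range(int(hours * 60 / dt_min)):
            heater_on = air < setpoint_f
            heat_in = 1.0 if heater_on else 0.0  # degF per minute into the air
            air += heat_in - 0.03 * (air - wall) - 0.01 * (air - 40.0)  # losses to walls/outside
            wall += 0.01 * (air - wall)          # walls have a lot of thermal mass
            felt.append(0.5 * air + 0.5 * wall)  # crude "operative" (felt) temperature
        return felt

    def first_minute_at(history, target_f=70.0):
        return next((i for i, t in enumerate(history) if t >= target_f), None)

    # The 80F run pushes more heat into the walls early, so the *felt* temperature
    # crosses the comfort threshold well before the 72F run does; the trick is to
    # turn the setpoint back down once it feels right.
    print(first_minute_at(simulate(80.0)), first_minute_at(simulate(72.0)))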
It's not legal to yell "Fire!" in a crowded theater. Why is it legal to yell "Hydroxychloroquine cures COVID" on the crowded internet? Is it the immediacy of the harm? Maybe it's not necessarily legal, just untested in court?
It is legal to shout "fire", more or less. Or at least there's no prior restraint.
I wouldn't suggest it, because there's plausibly various laws that apply where you live, and it would be expensive to get them thrown out. Unless there's an actual fire, of course.
I'm in an adjacent space, so I can imagine some of the difficulties. Basically live streaming is a parallel infrastructure that shares very little with pre-recorded streaming, and there are many failure points.
* Encoding - low-latency encoders are quite different from storage encoders. There is a tradeoff to be made between the frequency of key frames and overall encoding efficiency. More key frames mean that anyone can tune in or recover from a loss more quickly, but they are much less efficient, reducing quality. The encoder and infrastructure should emit transport streams, which are also less efficient but more reliable than container formats like MP4.
* Adaptation - Netflix normally encodes their content as a ladder of various codecs and bitrates. This ensures that people get roughly the maximum quality that their bandwidth will allow without buffering. For a live event, you need the same ladder, and the clients need to switch between rungs invisibly.
* Buffering - for static content, you can easily buffer 30 seconds to a minute of video. This means that small latency or packet loss spikes are handled invisibly at the transport/buffering layer. You can't do this for a live event, since that level of delay would usually be unacceptable for a sporting event. You may only be able to buffer 5-10 seconds. If the stream starts to falter, the client has only a few seconds to detect and shift to a lower rung (a toy rung-selection heuristic is sketched after this list).
* Transport - Prerecorded media can use a reliable transport like TCP (usually HLS). In contrast, live video would ideally use an unreliable transport like UDP, but with FEC (forward error correction; a toy example follows this list). TCP's reaction to packet loss halves the congestion window, which halves throughput, so the client would effectively have to trash the connection and shift to a lower bandwidth rung.
* Serving - pre-recorded media can be synchronized to global DCs. Live events have to be streamed reliably and redundantly to a tree of servers. Those servers need to be load balanced, and the clients must implement exponential backoff (sketched after this list) or you can have cascading failures.
* Timing - Unlike pre-recorded media, any client that has a slightly fast clock will run out of frames and either need to repeat frames and stretch audio, or suffer glitches. If you resolve this on the server side by stretching the media, you will add complication and your stream will slowly get behind the live event.
* DVR - If you allow the users to pause, rewind, catch up, etc., you now have a parallel pre-recorded infrastructure and the client needs to transition between the two.
* DRM - I have no idea how/if this works on a live stream. It would not be ideal for all clients to use the same decryption keys and receive the same streams with the same metadata, since that would make tracing the source of a pirate stream very difficult. Differentiation/watermarking adds substantial complexity, however.
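Here is a toy version of the rung-selection heuristic alluded to in the adaptation/buffering points; the ladder, headroom factors, and buffer threshold are all made up for illustration, not any real player's logic:

    # Bitrate ladder in kbit/s, highest first (hypothetical rungs).
    LADDER_KBPS = [8000, 4500, 2500, 1200, 600]

    def pick_rung(measured_throughput_kbps, buffer_seconds):
        """Pick the highest rung the connection can sustain, with more headroom
        when only a few seconds of buffer remain to absorb a mistake."""
        headroom = 0.75 if buffer_seconds > 5.0 else 0.5
        for rung in LADDER_KBPS:
            if rung <= measured_throughput_kbps * headroom:
                return rung
        return LADDER_KBPS[-1]

    # Example: ~3 Mbit/s measured and only 4 s of buffer -> drop to the 1200 kbps rung.
    print(pick_rung(measured_throughput_kbps=3000, buffer_seconds=4.0))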
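And a toy XOR-parity example of the FEC idea from the transport point; real systems use far stronger codes (Reed-Solomon, RaptorQ, etc.), and this one can only recover a single lost packet per group:

    from functools import reduce

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def make_parity(packets):
        """One parity packet per group: the XOR of all data packets."""
        return reduce(xor_bytes, packets)

    def recover(received, parity):
        """If exactly one packet in the group was lost (None), rebuild it."""
        missing = [i for i, p in enumerate(received) if p is None]
        if len(missing) == 1:
            present = [p for p in received if p is not None]
            received[missing[0]] = reduce(xor_bytes, present, parity)
        return received

    # Example: a group of 4 equal-sized packets, packet 2 lost on the wire.
    group = [bytes([i] * 8) for i in range(4)]
    parity = make_parity(group)
    damaged = group[:2] + [None] + group[3:]
    assert recover(damaged, parity)[2] == group[2]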
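Finally, the exponential backoff mentioned under serving; the constants are illustrative, and the "full jitter" choice is just one common way to keep millions of clients from retrying in lockstep:

    import random

    BASE_DELAY_S = 1.0
    MAX_DELAY_S = 60.0

    def reconnect_delay(attempt):
        """Capped exponential backoff with full jitter for the Nth retry."""
        capped = min(MAX_DELAY_S, BASE_DELAY_S * (2 ** attempt))
        return random.uniform(0, capped)

    # Example: the average retry delay roughly doubles per attempt until it hits the cap.
    print([round(reconnect_delay(n), 1) for n in range(6)])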