> As engineers we're measuring everything, all the time right?
Is this mostly a joke? I've written plenty of APIs, but I've never load-tested one in isolation, never had to respond to underperforming latency or throughput metrics, and never gotten any feedback on software performance at all. In most cases I don't know who uses the code I wrote, and no metrics from them ever make it back to me personally. For exactly 95.2% of what I've delivered, I couldn't tell you that I improved Foo by Bar units even if you held a gun to my head.
This is after years of delivering LOB software to clients in banking, manufacturing, and oil & gas industries.
I think it depends on work culture. We recently merged with an American corporation, and their engineers are obsessed with measuring, pilots, A/B testing, RFCs, etc. It's so slow to work with them, even on the smallest features. Poor 10% of users who miss out on awesome features for four months because we "need" a control group.
I think the parent poster's point is that even when the work culture is obsessed with measuring, the people doing the A/B testing and carefully studying the benefit to the company are often quite separate from the people implementing the change. The information gets used in some decisions and flows to various layers of management, but the developer who built the feature won't necessarily even get a message when it's eventually chosen for widespread deployment or abandoned after four months of testing against a control group, much less see the data on its estimated financial impact.