> Having a clearly defined prioritization process can help ensure all your team members understand how decisions are made and give them the confidence to actively participate.
"The boss said we have to do X" is a clearly defined prioritization process. Now go tell your team members that's how decisions are made (note the passive voice) and tell them to get on it already ("give them the confidence to actively participate").
That's the definition of the previous animal in the list: the boss says so, and nobody else needs to bother discussing.
This is a valid way to make decisions, but it does away with any upsides of discussing things or basing decisions on data.
Another problem is when a team of the "big guys" actually discusses things and someone co-opted into the discussion doesn't feel like saying a word ("nobody will listen anyway"). That person could be, e.g., a subject matter expert who could offer valuable, decision-changing input if encouraged to participate. It takes a conscious effort to make such people feel welcome to share their opinion, which was the very reason to invite them to the meeting in the first place.
Thank you for the feedback! I agree, I failed to elaborate on this; it's definitely not what I meant. I've now extended that section a bit and added that "The boss said we have to do X" is exactly what leads to a team of disillusioned RHINOs.
This is needlessly adversarial. Thinking about your colleagues as assailants and parasites certainly doesn’t help your product development process. Alternative titles include:
“Why everybody sucks but you”
“How to point out problems and suggest vague, empty solutions”
“We asked GPT-3 to write an inane thought leadership piece”
Thanks for the feedback, I didn't see it that way and it was never my intention. I've now added a short intro paragraph which hopefully eases this tension.
What bothers me with this kind of culture is that it quickly becomes a gatekeeper for ideas you don’t like. Like an idea? Push for it without data. Don’t like an idea? Ask for more data.
Excellent point and potentially good material for a separate post; I totally agree. On that note, almost any kind of culture can be (mis)used to create gatekeepers.
Agree, it's just mumbo jumbo to most people. Being really data-driven in many realms falls into the Complicated or Chaotic domains (in terms of the Cynefin framework), so being truly, scientifically data-driven is often a matter of a PhD thesis. The only real recourse many companies have is intuition and trial and error, which according to Cynefin is the most effective way of operating in the Chaotic domain.
Genuine question – what's the alternative to trying? Being data-driven is definitely hard but imho teams should be commended for striving towards it.
Even imperfect shifts towards data-driven decision making can be valuable as well (e.g. maybe you have great data for optimizing your onboarding flow, even if which features most improve retention, or database performance, are still black boxes).
I think part of the issue is that rampant innumeracy makes half-assed attempts at being data-driven actually worse than nothing. Without clean data we fall back on the old standard: gut feel plus pet theories. That's often not great, but it almost always correlates with reality better than chance. Poorly executed data collection and analysis, on the other hand, drives many teams at full speed right off a cliff: all the randomness of flying blind with the confidence of feeling certain.
Even in an example that simple, we lack the data we need for that number to matter much, and the fact that it's a concrete statistic makes it easy to wield that imperfect data as a cudgel when prioritizing one feature over another.
In particular, you'd probably want to know how a feature would impact future sales, survivorship curves for current customers, and survivorship curves for whichever kind of customer would sign on with the new feature (treating a single feature in isolation for simplicity). This is especially important when comparing multiple features to put on the roadmap because it's easy for one idea to be simple to imagine and better than the status quo (hence asked for by many customers) but be nowhere near an optimal solution to whichever problem is being solved.
Having that kind of customer insight is better than not having it, as long as the data is used appropriately -- without additional data it can't do much more than guide or refine gut feelings and insights, and attempting to do otherwise is a recipe for an inferior product.
Great point, though the post never intended to cover this topic. I think there's great merit to the 'gut feeling' of field experts, even when they cannot support it with data at the moment, maybe because the work is exploratory at that stage or simply too chaotic. Experience and intuition from experts tenured in the relevant areas is priceless, but it doesn't have to come with hierarchy or power-based authority. When people naturally respect someone, they will listen. No need to force it.
I clearly failed (by not elaborating) on this point in the post: I never meant forcing fake data where it's not possible. I'd still rather have this conversation:
A: We should definitely do X!
B: Why? Why not Y?
A: Because I'm the authority in the room
B: Well we talked to sales and customers and Y clearly has Z impact, on the other hand we saw no evidence of customers needing X.
There should be analyses of product misses in a well functioning product org, though getting to a concrete “root cause” can be a lot harder and may not need to be the goal.
You should at least be trying to understand, to the extent possible, why something didn’t work, and then using that to better understand the problem you were originally trying to solve.
Data-driven works well for cheap services with many customers. Try it with expensive services with few customers and your experiment quickly becomes statistically worthless and expensive.
Also, data-driven never works for a product. If you sell a product, your customers do not want to be monitored. They also do not want to randomly receive an inferior version of the very product they paid for just to satisfy your research goal. So your only realistic options are to offer a second product and see how it sells, or to pay for expensive surveys.
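To put a number on "statistically worthless": here's a back-of-the-envelope sample-size calculation using the standard normal-approximation formula for a two-proportion test (a sketch; the 10% to 12% conversion rates are invented purely for illustration):

```python
from math import sqrt, ceil

def sample_size_per_arm(p1, p2):
    """Users needed per arm of an A/B test to detect a shift in
    conversion rate from p1 to p2 (two-sided alpha=0.05, power=0.80,
    normal approximation to the two-proportion z-test)."""
    z_alpha = 1.96    # z-value for two-sided alpha = 0.05
    z_beta = 0.8416   # z-value for power = 0.80
    p_bar = (p1 + p2) / 2
    n = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
    return n

# Detecting a modest lift from 10% to 12% conversion:
print(ceil(sample_size_per_arm(0.10, 0.12)))  # → 3841 users per arm
```

With a few dozen expensive-service customers in total, you are orders of magnitude short of even this modest experiment.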
A/B testing is but one way of gathering data, and it usually comes after the decision to actually invest in building _something_ (even if only a prototype/MVP). There are other kinds of data, such as sizing the opportunity through market research, competitor analysis, or simply asking your customers - among many others. This shouldn't be trusted blindly either, sure. Never said that.
Yeah, it's too bad it went with initialisms instead of memorable animal characteristics. A couple work both ways: the Hippo throwing their weight around, the Wolf doing their own thing.
You touch upon two great points that I failed to point out:
1. These are behaviors in certain situations, not necessarily personality traits.
2. I should have been explicit that the article is not (only) about developers. In fact I've mostly seen these behaviors from product managers and VPs.
A long time ago, I was briefly involved with an organization in which management of software projects worked like this:
* Lots of projects were started and promised to stakeholders. Money was, most of the time, not really an issue because of the organization's size and various kinds of cash flow.
* Marketing was always over-promising what could be done. Lots of buzzwords. Huge weight on design and leaflets.
* The organization was a matrix organization with perhaps 100 people serving 30 or 40 projects. Developers were regularly pressured or shanghaied into taking on more projects (the more senior developers were participating in perhaps twenty or thirty projects), which of course took focus off the running ones.
* One's participation usually started with an electronic invitation to some kind of kick-off meeting, in which one was more or less expected to participate and was told about one's work package. It was considered inappropriate, almost rude, to say no. There would be no documentation or tasks defined in writing. People would meet a few times to vaguely discuss how things were going (using some kind of agile task chart) and to point out deadlines, but with nothing really concrete about the technical side.
* The idea was, I believe, that with so many people doing so many projects, it would help enormously if people joined different projects and gathered knowledge from them in a synergy effect. One was always allowed to ask more experienced people for help. This happened mainly by submitting support tickets into a big bug-tracking system; each ticket was assigned to a team working on the topic.
* The main idea for coping with all the projects at the same time was to modularize everything so as to make it reusable across projects. Unfortunately, there was very little time to define interfaces or APIs between these components.
* There was also the minor problem that it was somehow hard to retain people. One reason might have been the payment structure, or the career opportunities; there happened to be companies in the immediate neighborhood which were paying substantially better. Still, some die-hard senior staff members were valiantly holding the flag. Unfortunately, those senior members were severely overloaded and never even got around to documenting basic processes, which further increased their workload.
* When working on a project, sooner or later one would run into a couple of issues with no obvious answer. One would file them in the ticket system. A few would be answered, but not all, because of the overload of the other members. Then one would become blocked, and the best thing one could do was work on another topic, which was heavily encouraged by the creation of ever more tasks and projects. Now, think of a distributed system with lots of locks and messages and procedures in some kind of spaghetti graph. Because, past a certain point, other people would depend on one's own contribution, the whole project could deadlock without anyone really noticing. How would people have noticed? Everyone was very busy! Usually, such a stalled project would be kicked back into motion two or three months later when some deadline was approaching or some report was due.
* When these issues were brought up with lower managers, it was rather evident that they were aware of the problem. However, such conversations all too often ended with the suggestion to pick up yet another project. So it felt a bit as if critical feedback up the command chain was not possible, or at least strongly undesired.
* Surprisingly, the result of all these efforts did not matter that much. It was somehow good enough to keep the organization going. One reason was surely that software was not an important product of the organization, or at least the top managers held the view that it was not a main priority.
With the distance of a few years, I wonder whether this structure was perhaps actually caused by some kind of micro-management. It was clear to me that most developers would have hugely preferred to work on a much smaller number of projects. What would support this view is my experience of how strongly organizations are generally shaped by their top decision-makers.
On the other hand, I've come to the conclusion that most apparent dysfunctions in organizations, or in parts of them, might have deeper causes, and in a way good reasons, rooted in how the organization procures resources. What happens is inevitably also an optimization of this procurement of resources and of what is perceived as important.
Anyway, I'd be curious whether this kind of management fits some known pattern, and what the best course of action would be for a new developer in such a situation. (I was pretty conflict-avoidant at that age, and I left that organization rather quickly, in spite of what could have been, at a technical level, rather interesting work.)