For the record, I agree. We definitely make much better decisions when we're aware of our biases, and we can only be truly confident we understand those biases through rigorous statistical analysis. Such supporting data is usually unavailable, though, particularly at the individual level.
My point is only that many claims subject to much the same biases are routinely made on HN (about hiring, about software architecture, about understanding technology/futurism) and treated with far less skepticism. That said, I'm inclined to agree that the standard should be more skepticism in all cases, rather than less, with the caveat that it may only be practical up to a point.
Although, to be honest, I'd love to go into an interview with "I am X standard deviations above/below the mean of developer productivity in Java software analysis, compared against a sample of Y developers selected by procedure Z and measured by metric W; I am usually more productive working on teams of size n-m (p < 0.01)."