What makes you think that your judgements are any better than theirs? How do you judge performance? Do you look at code? Or do you simply go by how many feature check boxes they get through in a given time frame?
edit: and I don't ask this to be insulting, but judging performance is a very difficult task and I've never seen a reliable methodology for doing it. Thus, I'm naturally extremely skeptical when someone claims that they can do it better than others. Especially so when it includes broad generalizations about a group of people.
I'm usually one of the strongest developers on my team, even when I'm managing. It's one of my strengths and why I stay with small companies. It also means I'm qualified to judge the technical skills of my developers; I know how long the work should take.
People who think judging performance is hard have a broken development methodology or no technical chops.
That's also why I included the comment on my own rankings; I have some pretty serious weaknesses that were ranked as areas of strength by my peers and reports.
>People who think judging performance is hard have a broken development methodology or no technical chops.
And this is exactly why I asked the question. Here you have judged me, since I said judging performance was hard, on both my development methodology and technical chops. Yet you know nothing about me at all.
The problem of evaluation is an important one, and one I'm always interested in reading more on. Unfortunately the strategy of taking one's self as a gold standard isn't exactly what I was looking for. It does seem obvious that being an expert in your domain is a necessary trait to be able to evaluate others, but it's not at all obvious that it is a sufficient trait.
The idea of multi-source feedback seems intuitively good, despite the conflicting research on it. That more input will expose biases or inaccuracies is a decent hypothesis. On the other hand, judgment from a single person seems highly error prone. Whatever faults that one person has will never be exposed and reviews seem almost guaranteed to have bias.
In the end, I'm left with no idea why you so quickly discard the reports of your peers.
I've never had a problem judging performance when I was doing my job right. Any time I've been unsure it's because I was either not managing the team properly or did not have enough technical knowledge to evaluate the estimates and claims (e.g. this took longer because of X and Y).
I know you can manage performance in a way that allows evaluation even without the technical chops to judge the work itself, because I have seen technically weak peers do a good job of it.
I didn't discard the reports of my peers. I noticed a trend of the peers who liked me rating me higher in areas of weakness (as perceived by both myself and my boss) than my skills deserved.
Data that contains some bias is not useless if you can adjust for the bias.
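To make "adjust for the bias" concrete, here is a minimal sketch of one common correction, mean-centering each rater's scores so a rater's overall leniency or harshness doesn't distort comparisons between people. The names and numbers are made up purely for illustration; this is one possible approach, not a claim about how anyone in this thread actually does it.

```python
from collections import defaultdict

# Toy peer-review scores: rater -> {ratee: score on a 1-5 scale}.
# Names and values are hypothetical, used only to illustrate the idea.
ratings = {
    "alice": {"bob": 5, "carol": 4, "dave": 5},
    "erin":  {"bob": 3, "carol": 2, "dave": 3},
    "frank": {"bob": 4, "carol": 3, "dave": 2},
}

# Adjust for leniency bias: subtract each rater's own mean score,
# so only how they rank people relative to each other remains.
adjusted = defaultdict(dict)
for rater, scores in ratings.items():
    rater_mean = sum(scores.values()) / len(scores)
    for ratee, score in scores.items():
        adjusted[ratee][rater] = score - rater_mean

# Average the adjusted scores per ratee.
for ratee, scores in sorted(adjusted.items()):
    avg = sum(scores.values()) / len(scores)
    print(f"{ratee}: {avg:+.2f} relative to each rater's baseline")
```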
Most managers are unable to adjust for their own biases, though.
I'm especially skeptical of managers who evaluate themselves as having above average technical ability after they have asserted how poorly most people evaluate their peers. The necessary conclusion of such an assertion is that people therefore haven't a rational basis on which to judge their own skills relative to others.
>People who think judging performance is hard have a broken development methodology or no technical chops.
I suggest you reflect on this. It seems to me that you've convinced yourself that you are good at certain things but there isn't really any evidence supporting your beliefs.
When I was younger, I actually used to think like you - that perf evaluation is easy and a good engineer (e.g. me) would be able to evaluate others easily. Over time, I've realized this is an incredibly complex and nuanced issue because everyone's strengths and weaknesses are so very multifaceted. I am more or less convinced that people who think they have all the right answers simply have a lot of blind spots.
I've been managing teams for more than 20 years. Performance evaluation is easy: somebody is either performing well or they are not, and a good manager can tell. One of the reasons I like Agile is that if it's done right, the whole team can tell who isn't performing.
Performance analysis is incredibly complex and nuanced. Judging if somebody is performing poorly because they are unskilled or not managed properly or having marital problems is hard. Improving performance is even more difficult because it requires the analysis and a remediation plan.
This kind of thinking throws me into infinite loops. I think I'm not such a good developer, which makes me think that I must be a good developer because I recognize my limitations, which makes me think that I'm not such a good developer because I think I'm a good developer, and so on.
HN has a real problem with people who self-identify as good programmers ... there is always a bunch of "no you're not" and "you only think you are" and "people who say they are good usually suck" type responses. It's a pretty weird reaction for a tech site where many good programmers hang out.
The problem is that companies are looking for engineers, while you are talking about this in terms of programmers. In most companies, it is about shipping and making a difference to the bottom line. It doesn't really matter if you are a good programmer if you are not getting things out the door and ensuring future development and maintenance will be pain free. For that, programming is a tool, just like planning, estimation, design skills, and being likable (which helps with mentoring new developers, etc.). The proportions might differ, but the evaluation has to be of the whole. Net results are also based on the big picture - what is your impact on the product? If we went by programming ability alone, managers would make base pay, which they don't. And as you grow more senior, the requirements change too.
And I don't know your scale for 'good' programmer either: what do you mean by that? Are you saying you are as good as Linus Torvalds, or are you saying you are better than your current peers? I am assuming it is the latter - which probably means zilch on a site like this. Many folks here receive that recognition, and over time they come to the conclusion that it doesn't mean much. Which is why you get a lot of naysayers. They aren't doubting you, they are just doubting the general sentiment, having been there and done that themselves.
I don't think that is it. If you self-identify as a good engineer/developer/programmer you get the same response. It's the self-identifying thing that seems to be the problem.
Look at this thread, there are several who are very explicitly doubting me in no uncertain terms. Even doubting the idea that I'm good compared to my current peers.