How Do You Know a Developer Is Doing a Good Job? (professorbeekums.com)
217 points by beekums on Feb 10, 2017 | 212 comments



Back when I worked at Boeing, I was once compared poorly against another engineer who was always down on the factory floor when called, any time, whenever there was a problem with his stuff. The thing was, I was not called down to the factory floor because my stuff worked.

I'd say a developer is doing a good job when you, as a manager, are not having him come in at all hours to fix problems with his stuff.

Other signs of a good developer are that you don't have to micromanage him, and that he's more of a partner helping you get things done than a subordinate you've got to spell things out for and check up on.

I.e. when he's on the job and you can relax and concentrate on other things, you've got a good 'un.


> I'd say a developer is doing a good job when you, as a manager, are not having him come in at all hours to fix problems with his stuff.

That's also what a developer who ships nothing, ever, looks like. There are quite a few of those. A developer who gets the most done, tackles the hardest work on the most important and most frequently used tools, integrates with the most problematic third-party systems, and has specialist knowledge is likely to be called on a lot.

Ultimately you can't make any judgement of the developer without a detailed understanding of the specifics of the work. This applies to every possible developer productivity metric, which is why it's so damned hard.


> Ultimately you can't make any judgement of the developer without detailed understanding of the specifics of the work.

You've nailed it. This is why I would go so far as to say that someone with no programming experience can't effectively manage developers.


You can manage them, you just can't judge their work. I'm a team lead and I'm lucky in that my direct supervisor was a programmer for a long time, still runs a SaaS in his spare time, etc., so he gets it. But he has colleagues who were CPAs, management consultants, etc. before they started managing their current dev teams.

They rely a lot more on their technical leads to tell them how the other devs are doing at a technical level. They're perfectly capable of managing their professional development without having an in-depth knowledge of enterprise architecture or C#.


> They rely a lot more on their technical leads to tell them how the other devs are doing at a technical level.

So the leads are de facto managers in that they're assisting the manager in managing the other developers. I think this can work, but I don't think it contradicts what I said. It just means the leads wind up doing some managing even though that's not what their titles say.


I've met one programming manager with zero programming experience (though he was a mechanical engineer). His team loved him, they got real work done, he prioritized things for them effectively, and he helped insulate them from the b.s.

So, while it's rare there are certainly managers out there who can.


> and he helped insulate them from the b.s.

This reminds me strongly of yesterday's Dilbert: http://dilbert.com/strip/2017-02-09


This is also why remote working isn't more popular.


Can you say how this is related? I am trying to get into remote development (after 20 years of software experience).

How are the other two points mentioned earlier related to remote working? I am curious.


If you can't accurately assess the productivity of the developers by monitoring their output, you will default to metrics you can understand - usually that means monitoring the time spent in their seat.

From what I've seen coding managers seem much more open to remote work than non-coding managers.



Well, remote working does make checking on the remote worker harder, so it's more difficult to know what they are actually doing, but judging whether their tasks are difficult (and similar things) should be achievable just over chat.


I have noticed that people who review your code have no issue with remote work, and can easily assess your state of mind and personality traits without much extra help like video or visits. Remote work only makes things harder for the people who aren't reviewing your work.


Sadly you are right. In person, an unskilled manager can tell if it looks like you are working hard, even if they don't understand what you are doing. Remotely, they can't even tell that, and are effectively blind.


Bingo. Well said


At the company I work for we've had a few developers who would have done a lot less damage if they hadn't written any code at all; instead we spent months afterwards untangling the code they did write.


Those are bad enough - but I have met one developer who was technically competent (I would definitely have passed his code in a review) except that he didn't care how anything else in the company worked, or even about the logical constraints of the real-world data (for instance, the concept of a brand was useless because the different companies providing data treated it so differently). So what he spent a month working on was, in the end, nice code that did what it was ostensibly supposed to do but that could not be integrated with anything else; integrating it would have meant destroying several years' worth of work.

Was he a good developer? Looking at his GitHub repo I would have said yes... talking to him I said yes... if I had examined what he built without any context I would have said yes, but really he was something of a company destroyer whom I would never recommend to anyone else.


Yes, exactly. There are developers who can create code that works, that looks fine, that works for a certain context; however that context is completely separate from reality.

Conceptual damage to an application is a lot harder to fix than the damage of 'ugly' code. You can spend years fully recovering from fundamental flaws in the concepts of your application.


This rings true. There are those who tackle the low-level plumbing and the core problems, and take them on with the rest of the team. Then there are those who only take on surface-deep issues. It comes down to intentions and subject matter.


What they're shipping is a much easier metric to track automatically though.


I'm considered slow(ish), but I have very few bugs returned. Most (but not all) of the faster devs have many bugs (often trivial, but sometimes a show-stopper) and spend time going back and forth with QA.

My thinking is, QA isn't there to find my bugs. They're there to make sure I don't have any. I seem to be in the minority on that thinking.


I also consider myself on the slower side (I've never objectively measured, but it's how I feel), but I try to not let any errors leave my local environment. Sometimes I wonder if it all depends on how you came up programming. My early jobs were all in finance dealing with money and payroll systems. Mistakes meant people didn't get paid properly, and led to lots of clean up work for everyone involved. This led me to think hard about every change, and multiple ways to test the change.

Even today it would never occur to me to make a change and push without testing said change, although I see people do it all the time.


> Even today it would never occur to me to make a change and push without testing said change, although I see people do it all the time.

How is this even a thing? I don't understand it, yet I come across it time after time.

If you weren't self-educated, did you just throw together assignments without testing them? At work, how can you just write code without testing the functionality?

I've seen people working on webapps who didn't bother to navigate to the page they changed. I'd pull in code, go to test my change, and find that the page looks like a GeoCities atrocity. Then I'd start rolling back changes only to find the code I pulled was the culprit.

I can understand writing a unit test that doesn't cover as much as one thinks. Not taking 15 seconds to visually inspect a visual change? That's unacceptable.


> At work, how can you just write code without testing the functionality

Consider this scenario: a developer is tasked with making an update to a report. He does not have reporting services available locally (long and complicated setup), and neither does he have access to the data he will be reporting on (because of data privacy regulations). He makes the change to the report, checks in the code without any testing, and it then blows up in QA. This is a common scenario that I have seen many times before, especially when developing for systems or with data that developers don't have full access to.


As professionals we should demand the tools to do a proper job. That is NOT too much to ask.


There are only two kinds of scenarios where I've seen people do this and where it might be vaguely acceptable:

1. Where it is impossible for the developer to test the change himself. For instance, if the bug is not reproducible by the developer due to some difference between the production and development environment that he lacks concrete information about (e.g. external customer). In which case, the developer should test that it didn't break his own system in some additional way and should at least warn the customer that there's no guarantee it'll work.

2. If you are under some contractual obligation to ship on a particular date, and the thing you are shipping is going to be a giant bug ridden turd anyway due to time constraints caused by poor project management. This is not ideal at all, and not something to take any pride in doing, but it may be passably acceptable to cut corners like this if management tells you to because otherwise your employer is going to lose money for not delivering. Especially if you have a "defect release" planned in the contracted schedule later on anyway (e.g. because all parties to the contract already know the schedule is going to produce a turd and planned accordingly).


There are edge cases where that could make sense.

I've made live changes in production systems without testing them. If the production system is down, it's probably not going to get more down. Some of those changes are made on the live system and then back-ported into the release process. (Sometimes we aren't even sure which supporting index would help enough to come back, or which query to "neuter" to get site functionality mostly back. In cases like that, you might need to do development in production.) I've approved someone else shipping binaries built on a developer desktop to get a production site back functioning more quickly, and then committing the changes and re-releasing from the build/deployment pipeline. We've pushed changes to prod without QA review. There are times to follow the measured, careful, prudent approach to development (most times) and there are other times where a meter of $10K/minute suggests that a lower-latency change process is more appropriate and higher EV for the company.

That said, I've seen far more instances of no-reasonable-excuse events where code that was checked into master couldn't possibly compile, people doing an "svn resolve; svn commit" without actually resolving anything and checking in the <<<<<<< ======= >>>>>>> conflict markers along with both conflicted sections, etc.

"There, I fixed it!"


I'm with you - QA should be a double check, a fall-back; if you haven't checked, how is that going to be possible? The risk with coders leaning on QA is that they've outsourced their conscience; and the only people really testing and checking the code don't know it very well. Not good.

FWIW, I think dropping QA for periods to shock those who are starting to lean on it is a good idea; but it should still be there most of the time.


I have "slowed down" a fair bit, especially at the start of projects where i don't "do" a lot except think through scenarios. I write a lot better code than I used to.


I struggle with this sometimes. There's a line to walk between getting stuff done quickly and getting it done well. If you stray too far to either side of that line, you're going to drag everyone else down.


Ha! Resonates. Used to have a group of comm protocol engineers at our company, the most famous and popular with sales and customers. Reason: their code broke all the time, and they flew places to hack it until it worked. Looked like heroes most of the time. Meanwhile, our group got an award for least-bugs-in-a-release-ever. Well, recognized anyway, no money or promotion or even a plaque. So it goes.


Yeah, exactly. If your stuff "just works", there's a danger that the manager will assume that what you're doing is easy. I, though, regard people whose stuff "just works" as solid gold.


Joel Spolsky summed it up nicely a few years ago: a good developer is one who is smart and gets things done.

https://www.joelonsoftware.com/2006/10/25/the-guerrilla-guid...


This is exactly the problem with this industry - Blanket statements and buzzwords.

An engineer can get things done quickly but that doesn't mean it's done correctly and won't break (and someone else will have to fix it).

To complicate things more, a feature might seem to work perfectly, until a certain point when you realize that it's all wrong at an architectural level and you have to rewrite everything.


> An engineer can get things done quickly but that doesn't mean it's done correctly and won't break (and someone else will have to fix it).

If something doesn't work then it's not done.

> To complicate things more, a feature might seem to work perfectly, until a certain point when you realize that it's all wrong at an architectural level and you have to rewrite everything.

If the requirements have changed then a new piece of work is needed. That doesn't invalidate the previous work; it just means things have changed.


There is a difference between something not working and something prone to breaking in the future. Changes are fine, but your architecture may be the one that accommodates changes with ease or the one that makes any change extremely painful.


There's a difficult balance between "future-proofing" and "over-engineering". I'm not sure which is actually worse. Given how often I've seen a client happily run a prototype of some code in a production environment for years, I tend to err on the side of thinking future-proofing is a bit of a waste of time. This is especially true if there isn't a clear roadmap that states when something is going to change, and the cost of changing isn't already committed to the project.


IME the best architectural designs are usually the result of heavy retrospective refactoring, not up-front design.

Thus future proofing isn't just a waste of time, it's actively counterproductive. The predicted requirements are rarely correct, while the extra code remains a liability.


> IME the best architectural designs are usually the result of heavy retrospective refactoring, not up-front design.

Right. And this applies to other disciplines too. "Heavy retrospective refactoring" is exactly what a writer does each time they go back and make edits to a piece. Connections between elements such as characters, themes, writing style, and dialogue are felt out only by emitting draft after draft.

Even this comment went through many edits to get into this shape. These two sentences alone have been rewritten ten times.

In my experience there's also value in imposing a framework once one has done enough emitting of raw material. For code this might be looking for ways to extract common logic out of several similar functions. For writing this might be considering how two characters have "interacted" so far and what other details ought to fill out their relationship (without considering whether or not all those details will manifest overtly in the work.)


This is where experience comes into play though. Not every bit of software is totally unique and built on first principles. If you're working at some place and built similar software at a previous place, part of the up-front design is going to be driven by retrospection on previous projects.


But wait a minute - shouldn't most of one's future-proofing be making sure you CAN easily alter a design later if need be (including better documentation), especially in the ways you can already see change coming?


Cleaning up existing code will make it easier to alter designs later and that's something you should continually be doing as you implement features and fix bugs, but that's not pre-emptive architecture.


As I say elsewhere here, I've killed projects assuming that. Pulling out bricks from the bottom and replacing them is sometimes so hard, you really have a dead project.


Seconded. Premature design is worse than premature optimisation.

http://wiki.c2.com/?YouArentGonnaNeedIt


There may be, and often are, valid business reasons why it's more important to get X working now, in a way that's prone to breaking in the future, than to spend more time doing it properly; and that choice is not really up to the developer, who often wouldn't have enough information or context to make an informed decision. A developer who always takes the time to ensure that things are done right is (in most industries) doing suboptimal work by definition, since one shouldn't always do so; it's a tradeoff.


I once wasted a helluvalot of money obeying just such an order, and not doing the extra coding I knew would prevent inadvertent huge purchase X. Turned out the boss just didn't know how to do complex logic or how to listen, or think ahead. The break, when it inevitably came, was catastrophic, but had the good effect of removing said boss (I was long gone.)


I agree with this sentiment of weighing short- vs long-term needs. However, if you are working in a medium to large place, not everyone is benevolently doing things just for the sake of the company. Many times managers demand that software quality be sacrificed for their own career climbing, even if it will be a liability to the company as a whole.


Prisoner's Dilemma. A very real thing in management structures - one historical example: middle managers under pressure from the top who cheated on the X-rays testing the work on a silo for a nuclear plant. The big boys didn't know. Killed the plant and, I think, the company.


That's where the "smart" comes in.

It wouldn't be smart to build something that is un-maintainable or prone to breaking in the future if that's what was needed, would it?


But all systems - not just software - eventually hit some sort of internally derived limit.

At some point, you have to go with what you know. You can't expect perfect knowledge on the part of those in the past.


And a great developer is one who is done and gets things smart :-)

http://steve-yegge.blogspot.de/2008/06/done-and-gets-things-...


I'm still bummed that he never posted the last 2 parts of his Programmer's View of the Universe.


Well that just shifts the goalposts from one completely unquantifiable thing to another. You may as well say a good developer is one who isn't bad.


That is a completely subjective, relative measurement. How does one teach another how to objectively tell if someone is smart and getting stuff done?


A common problem is that these people (who just do their job and make no trouble) are often seen as useless - you don't hear much about them anyway, so what are they even getting paid for? And the fact that their stuff just works makes many people think they were simply lucky to work on much simpler stuff than others. You know, when code is done and works, especially high-quality code, it looks very simple; buggy spaghetti code looks a lot more complicated, and someone writing it usually has an easy time convincing a non-technical manager that he is working on very complex stuff...


> ...someone writing it usually has an easy time explaining a non-technical manager that he is working on very complex stuff...

This drives me right up the wall. I've also found that these people are masters of stringing together jargon, so even when you sit them down to try and pass on experience it becomes too mentally exhausting trying to parse the Markov chains that fall out of their mouths.

"It seems this call here was inherently looped, boosting the complexity to at least theta x to the logx due to some improperly keyed database API tables. This poses a serious security risk."

"Okay... great... but if you could stop embedding style elements into your pages anyway, I'd really appreciate it."


Yup. The CMM of "firefighters" is immature and drama/chaos-oriented. Solid engineering is at its best when it goes largely unnoticed, because it just works like a reliable utility (uptime FTW).


I'm in the same position currently. I'm not available on weekends or for extra hours during normal working days, while some people are working extra hard and bringing up how much refactoring needs to be done on their code. Except the extra hours are spent fixing their broken mess, and the refactoring is largely from self-inflicted injuries. Not that there aren't things I would refactor in my own code, but it's considerably less, and I do it incrementally when I revisit those sections instead of pounding my chest about how much work it all was. But the latter is praised.


I had a manager that rated you principally on how many mistakes that you made.

Someone who did half as much work and made half as many mistakes was a much better employee as far as he was concerned.


Nobody ever got a bonus for putting herculean efforts into extinguishing a fire that didn't start.


I agree with the first point. The second point, I think, is the difference between a junior and a senior programmer.


> are not having him come in at all hours to fix problems with his stuff.

Is it stuff that he wrote or work he inherited?


I have this problem. I have no solution.

My experience tells me that the best developers are:

- generous with their time to other devs

- not defensive about their code

- able to solve complex problems fast, and explain those solutions to other people

- able to focus really intensely

- enthusiastic about learning new things

- lazy about boring stuff (to the extent that they will spend 4 hours looking for a shortcut that saves them doing 1 hour of boring stuff)

but some of the most productive programmers I've worked with treated the entire thing as a job that they did from 9 to 5 to pay the bills, and their real interest lay elsewhere

and some of the best programmers I've seen were the hardest to manage, and ultimately were a net negative to the team

Coding is creative - there are an infinite number of ways to solve any given problem. This is why it's so hard to predict and manage, and resists quantification.

<edit for formatting>


A good developer adds business value.

A bad developer subtracts business value.

Ultimately, in a business setting, those are the only two metrics that matter.

Selection is a management problem, and like many management problems the median level of understanding is primitive and often dysfunctional.

Not many businesses understand how many flavours developers are available in, nor how to match those flavours to the team/project roles they need at that time. So they robo-hire with standard CS whiteboard hazing, or interview for "cultural fit", or some other nonsense.

The hard-headed hiring question is always "Will this person make the business more money, and if so, how?"

This may sound like a recipe for a sweat shop, but it really isn't. There's a lot of possible variation in "how", which includes making smart architectural choices, having good team lead skills, raising the company profile through social presence, and so on.

The strongest possible code is always a good thing, but sometimes it's fine if it's eclipsed by other qualities. You're hiring a person with many possible skills, not a cog in a machine that you're planning to spin as fast as possible until it breaks.

It's up to management to work out how to get value from that. And there's a lot of creative potential in management that traditional work cultures overlook.


Problem with this is it's hard to account for risk.

What makes a good insurance policy?


To clarify:

"Will buying this insurance make the business more money, and if so, how?" is a little more nuanced, since it requires consideration of risk, and dependent, unknown factors.


I second this! In my experience the best, or rather most productive, programmers get stuff done by focusing on customers over elegant code. Mostly just because they want to get something done so they can either avoid further meetings or work on their personal stuff.

The "best" programmers you mentioned, from my experience, always end up souring the whole team. Most of the time, they can't seem to tell the forest from the trees.

With that in mind, one of the things I tell my team we always need to hire more of is developers with social skills. Being effective at working with a team is 100x more important than being a good developer. As an example, I have spent weeks in meetings debating architecture. Each meeting I would make my same suggestions and then the rest of our "best" developers would debate. By the 6th or so meeting regarding the architecture I just presented a prototype of the project. I won the debate by acting, not by trying to find an ideal solution.


Good attitude for startups which will die after barely releasing an MVP. Others will learn that maintaining is where the most time is spent, and that chasing customers' whims without any regard for how it all fits together will lead you to the situation where new features are no longer possible to implement and the old ones start to break in random places. But hey, at least you get to talk about it for hours with all those wonderful social skills you have.


I think it is a false dichotomy. "Best" programmers at least do nothing before they have a plan and a high-level design (good or bad). Productive programmers simply get things done either way, good or bad. I've seen many times how we "productively" produced over-engineered, unreadable spaghetti that works. Yeah, I can write even straighter single-page Perl golf too to show off selling prototypes, but that's not what we are paid for as a company. That can be out-managed by a detailed design specification, but then you have to have expertise at the code level (easy) and spend half your time on the task yourself (hard).

Both types can do good or bad for you.

Edit: in other words, smart and gets things done, not or.


There are lots of ways to go wrong, of course, but make very sure anyone you hire with high social skills also has high ethics (there's an inverse correlation.) Because a good actor can trash almost any tech business, if they have enough knowledge to pass. The truly competent techies will be outmatched.


>>Being effective at working with a team is 100x more important than being a good developer.

Having such a constant in your mind would be indicative of never having had to do something 'hard' (algorithmic, latency-related, extracting every single bit of performance because you have to).


When you can't find a clear answer... maybe you're asking the wrong question. Creating code is - in my opinion - a service, not a product. If you keep paying for a service as if it were a product, it will continue to be inefficient and open to opportunistic behaviour. What about changing the reward model, then? What if you, the programmer, got paid some fixed price to cover expenses, and then a fee for each week (or month) your software runs without a flaw? Or a fee that continues as long as the software keeps working (with your occasional maintenance and fixes, of course). This would make the customer happy (he has a working program) and would reward the best programmers more. The main issue with a one-time assessment of a programmer's job is, in fact, that you don't always discover software flaws immediately... it takes time.


One quality of good developers not often brought up is empathy. It's absent from this article and all of the comments thus far.

Empathy is quite important in a vast array of creative endeavors. Development is not an exception in my opinion. It's the trait that allows developers to know what a person needs even if it isn't what they're asking for.

Being able to put yourself into the shoes of another busy person working in a domain you may have little familiarity with is difficult. Trying to solve technical problems for that person when they are not technical is harder. Efficiently using that person's time and getting to the root of what the actual problem is, not necessarily what they ask you to do, is harder still.

Getting to the point where you know how to ask the minimal amount of questions to understand the actual problem that needs solving takes practice and experience. It requires good listening and comprehension skills, and comes from caring enough in a person's problems to get to the root of the issue; an exercise in empathy.

By getting to the root of the problem you end up with better, more useful software and solutions, with fewer iterations. Even having the conversations to achieve these understandings requires knowing your tools well enough to gauge what's possible, an innate curiosity to care enough to dive into whatever it is that needs solving, and enough experience to know when to say "yes," "yes, but," "no," or "I don't know," to ideas. And to not only say it, but mean it, and know it to be true.

Additionally, developers who are good at actually solving problems for people must be creative enough to improve upon that situation without being told how.

All of these traits are excellent separately, but combined produce developers who solve problems and create value, not developers that just write code.

Empathy is a trait of some of the best developers I've ever met or worked with and a lack of empathy has been a strong indicator of mediocre, bad, or hard to work with developers.


Empathy is just the newest word for certain people to feel that they're better than everyone else (who lacks empathy?).

After a while it gets tiring keeping up with it all. I sometimes think I should try to identify the newest word to add to the word bingo software developers seem to feel the need to employ to differentiate themselves from everyone else.


This is a remarkably dismissive comment. Empathy is not just a new buzzword; it matters a great deal. Some of the best advice I have ever received as a developer was to "think like the client". That is a difficult thing to do, and I do not view it as a reason I am better; I view it as something I am constantly subpar at.

But thinking like your customer is, by definition, empathetic. Intimately understanding a customer's needs and reactions to your software will not help you write academically superior code, but it will help you create better products.


You would think someone who was good at this newfangled "empathy" stuff would've managed to start off their comment in a more positive way.

It's all necessary for writing software, but when someone starts telling me one specific aspect of it is more important than the others, then yes, I absolutely dismiss them. It's code for "I can't write good code, but it's alright because I'm a people person and that makes me more important than the ones who CAN write code".

It's just more egotistical bullshit that's more about the person making the statement than anything else.


This is a great point. I think a good developer has respect for others' code and perspective; he/she doesn't assume that the previous developer did a bad job writing the code in the first place. Many times the code/solution that looks better now may not be good enough in the long run. A good developer is also one who writes code/logic in such a way that future developers will understand it easily. Sometimes having good comments in the code can easily increase the longevity of the program.


Not only does it help with customers, but also with other developers. A few lone wolves only scale so far.

I've thought this for a while; it's encouraging to see someone else draw the same conclusion.


Programming is a really difficult skill to acquire and it tends to attract a lot of people that have below average social skills.


except for you, right? You're the special snowflake drowning in the sea of below average social skills.


Actually no, I have a really difficult time with social skills, and I know a lot of other programmer types who struggle in the same areas I do. Where did I imply that I'm better than anyone else?


Been a coder and engineer for 12+ years, now a manager of a team of 8.

I evaluate my team on their ability to ship regularly and with a consistently low bug count. I encourage communication regularly and pair programming and all that. And I am around and I listen and interact.

It is pretty easy to tell when someone isn't pulling their weight or is having problems contributing, based on the interactions within the team. People naturally express frustration (as well as praise)... you just have to be an active, hands-on (though not necessarily micromanaging) manager.

In addition, I specifically look for growth mentality when hiring and cultivate it within the team. That way I can be confident that when a weakness or need for improvement is identified, the person will work on it. It's my job as manager to properly address it and motivate the person.

In my experience, KPIs and other measurables (including even basic timesheets) are always gamed by developers. And anyway there are so many intangible and abstract aspects to doing this job, especially as you advance in your career, that it's arguable that the most important parts of the work aren't even measurable in any real sense in the first place. That's the art.


There is quite simply no way to quantify in a scientific way how "good" a developer is.

Whether or not a developer is "good", and "how good", is purely subjective.

This is the core problem at the heart of all recruiting, and no matter how hard people try to quantify, test, assess and interview, it is still not possible to make an objective statement about how "good" a developer is.

Part of the issue is that "good" depends on a vast array of factors: the task at hand, the developer's experience with the specific technologies, how they feel today, who the judge is, whether the developer is interested in this specific task, etc. etc. - it goes on and on.

And one of the worst traps to fall into is to think that a developer's "ability to code" is in some way a really good measure of how good that developer is, i.e. how quickly you can reverse an array, sort a dict of arrays, etc. To my mind this is only a certain portion of what goes to make a great developer, and not even necessarily a very large or important portion. So many other things are important in making "a good developer" that assessing substantially on code seems misguided.

Most importantly, "good" to me, is definitely not the same as "good" to you, because the important things for you are likely completely different to the important things to me.

I wrote a post about some of the characteristics of a great software developer, but even this does little to nothing in terms of providing a "science" for the evaluation of how good a programmer is.

http://www.supercoders.com.au/blog/50characteristicsofagreat...

The best description of a good programmer that I have ever heard is from Joel Spolsky, who suggested employing people who are "smart and get stuff done". Interestingly, almost no employers take this approach to recruiting.


The way those line numbers on the left wrap is irrationally irritating.


It's just to look nice, not code.


There is no reliable method yet. But who is to say there won't eventually be one?

You point out that there are numerous different dimensions of good coding. Consider an IQ test: it has many different categories, but since these strengths are correlated, you can compute an overall "score" for something as abstract as intelligence.

Why should we not, if we dedicate effort, be able to make something similar for coding? Aptitudes are already measured very effectively in many other disciplines (LSATs, MCATs, Putnams, SAT II).
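
To make "compute an overall score" concrete, here's a minimal sketch of how a composite could be built from per-category scores - the categories, candidates, and numbers are entirely invented, and this assumes you already trust the per-category measurements, which is of course the hard part:

    # Hypothetical sketch: combine correlated per-category scores into one
    # composite, the way simple aptitude composites are built.
    # All names and numbers below are invented for illustration.
    from statistics import mean, stdev

    scores = {
        "alice": {"algorithms": 82, "debugging": 75, "review": 88, "design": 79},
        "bob":   {"algorithms": 60, "debugging": 72, "review": 55, "design": 64},
        "carol": {"algorithms": 91, "debugging": 85, "review": 90, "design": 88},
    }
    categories = ["algorithms", "debugging", "review", "design"]

    # z-score each category across candidates so the scales are comparable
    stats = {c: (mean(s[c] for s in scores.values()),
                 stdev(s[c] for s in scores.values())) for c in categories}

    def composite(person):
        # the composite is just the average z-score across categories
        return mean((scores[person][c] - stats[c][0]) / stats[c][1]
                    for c in categories)

    for person in scores:
        print(person, round(composite(person), 2))

None of which answers the objection that the per-category scores are themselves subjective - it only shows that the aggregation step is the easy part.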


Yeah. You don't want the same guy working on medical devices as you do in a SaaS startup, probably.


The most important thing from my perspective is how well he or she develops in the context of the entire team. In no particular order a good developer will:

1. raise issues when other developers suggest solutions (even if just to play devil's advocate) to encourage discussion

2. be willing to have his or her opinion overruled by consensus, understand why, and proceed with contributing to implementation of it as well as if it hadn't been

3. accept constructive criticism in reviews of his code (and can make counter-arguments when appropriate without becoming defensive)

4. improve his or her craft by acknowledging and correcting errors in code and design when he or she makes them (nobody is perfect, and every experience level of engineer will make mistakes in judgement, and errors in logic, structure, and design occasionally), but generally make fewer such errors as time passes

5. seek out opportunities to share expertise, experience, and advice with co-workers

I'd say if most of these points are hit, a developer is doing a good job.


I'd be very afraid of mentioning any of your points to anybody who needs them mentioned. Well, #4 is a little more resilient to misunderstanding than the other ones, but even it can lead to horrible decisions.

Anyway, that makes me question the original theme. The article's answer is as good as "if you know how to evaluate developers, you can evaluate developers", and yours isn't much better than it; yet I've never seen a better answer.


Well, I disagree that this is equivalent to your description of the article's answer. But I take your point.

I'm not sure there's a universal, objective way to evaluate what "doing a good job" means for any job that isn't ruled almost strictly by some quantitative measure (e.g. "output is X widgets a day").

None of the items on this list have absolute measures ("plays devil's advocate at least once a month"), or even well-defined relative measures ("makes 10% fewer mistakes after being corrected") for exactly that reason. They're heuristics, and I'm skeptical applying anything more rigorous is reasonable to attempt.


I get your point, marcosdumay: if you tell your peons how to game you, some will game you. But others may learn and adapt.


This article is right about one thing: managers should avoid mechanical metrics (lines of code cranked, bugs fixed, etc.) as a way of measuring developer competence.

Many developers are good at figuring out constraints and optimizing around them. It's a game for us. We're more productive when we're playing that game on behalf of our businesses and customers than on our own behalf. But if you award us points (dollars) based on optimizing metrics for ourselves, we'll probably do it. At the very least we'll have to spend mental energy resisting the temptation.

These kinds of metrics inevitably generate perverse incentives.

It's true that developers should remember that debugging code is harder than creating code. We should avoid creating code that is right at the limit of our cognitive capacity to understand. When we do create code like that, we've created code that's a stretch to debug.

There's a management corollary: don't use the most ingenious incentives you can imagine to motivate your krewe. You'll have a hard time debugging the incentives. And your krewe may exploit your buggy incentives in the meantime.

And, in both cases, imagine what will happen when your super-clever code or incentive scheme is handed off to your successor.

Oh? You want to keep doing exactly what you're doing now for the rest of your career? OK, go for it.


> These kinds of metrics inevitably generate perverse incentives.

Precisely. There's a post that goes around about finding good programmers, which ended up concluding that good developers almost inevitably make many commits with small per-commit changes. I had some issues with the methodology (their outlier-exclusion looked like it might force the result), but the outcome made intuitive sense.

I would be horrified if anyone used that metric to judge employee quality. Even assuming it's flawless as a retroactive assessment, it's still trivially gamed. As soon as something that blandly quantitative affects people's work outcomes, it's ruined by the fact that optimizing for the metric is more efficient than optimizing for good work which happens to meet the metric.


Remember, though, that the MMPI personality test actually trolls for those gaming the test (usually diagnosing them with personality disorders). It does that by tempting people to exaggerate their virtues, for example. So if you're willing to dive into the details of those commits, trumpeting that metric might be a nice honey trap. You don't want the serious gamers and players in your workplace.


Yeah - basically all working real-world systems include "and don't get clever" as one of the rules. If you can't refine your rules to be unbeatable, just put in a ban on beating the rules!

There's an old story about a Soviet pin factory that was judged by number of pins, so it made huge numbers of tiny, useless pins. In response they were switched to a weight metric, so they switched to making giant, 100 pound pins. It's not a true story, because whoever tried it would have gone to prison immediately. You don't actually need to legislate exactly how to behave, because you can just demand good faith and punish people who don't offer it.

I might sound sarcastic there, but it's pretty true. Employee metrics can work on a code of "don't cheat, jerk" as long as the cheating is detectable.


I think I often get rated poorer by management compared to others because I tend – by nature – to focus deeply on complex problems (Donald Knuth-style), whereas a lot of other developers communicate more, answer emails promptly, go to a lot of meetings and manage to cut 'n' paste together some code to show results the very next day. So you do get respect from fellow developers for your skills and experience, but managers don't really value you.

Nowadays there seems to be more and more focus on quick results ("profit") in development rather than an effort to have a more lasting solid base to build on; a lot of software is in eternal "prototype" phase, like the "under construction" state of web sites around 2000. I guess it echoes the general business mindset of quick short-term profit over lasting quality.

I think that pretty consistently the wrong people are chosen or promoted to lead or manage development projects; they are rarely the most knowledgeable developers - if they know anything technical at all - and even more rarely do they possess the kind of personality that can actually stimulate and motivate others. I guess that's the curse of being more of an engineer as opposed to a business person; you tend to end up in a position where the people higher up don't necessarily share the same motivations and goals, but have more control over your destiny than you'd like.


Have you read the RethinkDB post mortem? http://www.defstartup.org/2017/01/18/why-rethinkdb-failed.ht... It explains well why time to market is very important.

I used to be in the "correct but slow" camp. Now that I better understand business, I'm more aware of what needs the correct solution and what needs the timely solution, and this has greatly helped my career.


If you haven't read the "Worse is Better" article, I think it's the best expression of the value of just getting something simple out the door.


Yet I remember choosing a data structure too hastily and that killing a project a couple of months later. By the time the error was realized, there wasn't time to reboot; project dead. You want a minimum viable product, and both words create very difficult judgement calls.


It sounds like your assumption about other developers copying and pasting is lumped in with a bunch of very positive things (asking for help, helping others, being a teammate, being invited to meetings about complex problems). My personal opinion is that software is nuanced, and approaching every issue you run into with the assumption that it is important may be a mistake.

The two second localization change shouldn't be "I have decided to rewrite localization." Similarly for new features, getting an MVP and releasing it internally is often better than writing the best XYZ from the start.

With that being said: there is a time for being very safe and cautious. Sometimes the work you're doing is the core piece of the business -- that deserves respect, lots of extra thought, a full suite of tests, etc.


In terms of engineering problems, make sure you aren't making things more complicated than strictly necessary. Most software gets rewritten because constraints/requirements/teams change over time, in unexpected ways. YAGNI.

Also, usually your job isn't to solve an engineering problem, it's to provide business value. It's easier to claim great engineering value (because all the requirements/constraints are in your head) than to actually solve a customer need, where someone else gets to decide whether you are providing value.

Oh and of course people who are more visible to higher-ups get promoted more!


I think a huge problem with YAGNI as a principle is that it is - in itself - as subjective as most of the points in this whole discussion.

I've read my fair share of code authored by people whose definition of "done" seems to be "it compiles and doesn't fall over the first time I look at it", ignoring various obvious edge cases that could be triggered randomly. Cleaning up messes like this always costs more than the shoddy implementation saved in the first place. Usually those clean-up sessions start when some strange edge case occurs on a critical production system.

On the other hand, the approach is fine if the code only runs once or twice and is then discarded. In that case ironing out all possible bugs is clearly a waste of time.

So, YAGNI means different things in different contexts. So, I'd say that being aware of said context is a very crucial skill to have as a developer.

---

When it comes to business problem solving, I couldn't agree more, though. There are a lot of "solutions" that could have gotten so much better if someone tried to figure out the problem first.


People spend a lot of time demonstrating their sexual fitness for reproduction by displays of prowess. It's just what we do. Intellectuals and engineers aren't exceptions, they're often very naive about this tendency and the biological roots of it. I agree there. I've sometimes said that universities are very important because they teach people to meet the irrational demands of cloistered professors in order to succeed; thus preparing them for business. But there are times, you would probably agree, when you have to argue strenuously for engineering problems to be funded for engineering reasons, before they crash the business. Building a bomb because that's what management said they wanted isn't what you're paid for either.


The world is a social place. Who you know is much more important than what you know, a lot of times. Unfortunately I think this means that the personality types who seek management jobs are often gaining their position by personality, not qualifications. For someone who values competency and technical skill, the world is a frustrating place. Of course there are companies out there that make an effort to go against this paradigm but I feel like those are exceptions to the rule.


This can start at the top. If you're in a company that exists because the founder is a great actor and was able to bullshit, social engineer and gladhand a hapless angel or VC into funding them; then they'll likely hire similar shiny people for other management positions below them, too. Chances of success: poor.


Are these complex problems causing more trouble to the business than the ones that are simpler to solve?


A point, but if nobody thinks about them, you'll never know.


I would say the first step to competency in recruiting and managing developers is if you can tell when a developer is doing a terrible job. That's more important and easier, and more prone to objective measures. Someone who commits 100 lines a week may be better than someone who commits 200 lines, but the one who hasn't committed in a month may need some help getting things together.

With hiring as difficult as it is, developers who make any reasonable positive contribution are probably worth keeping.

From there, as a manager, the best thing to do is focus on getting the best out of each developer, and keeping the business profitable enough to be able to give raises that would make anyone happy.


> but the one who hasn't committed in a month may need some help getting things together.

They might, but on the other hand they might be the one who decided that the project needed documenting properly, or who is researching and designing the next feature.

Not all useful work involves writing code.


This is a good reminder that performance on any metric needs to be tied to decent communication. A problem like this can easily be fixed with "hey, what're you working on?" "oh, this ream of important documentation", but far too often that never happens.


I manage a data science team, and currently I measure on 4 axes:

- Technical (ability to effectively solve problems)

- Business (picking which problems to solve)

- Productivity (things done to improve workflows, automate processes, etc. of themselves & team)

- Team (teaching, hiring, training)

This works pretty well. I find that these skills are multiplicative, so someone's impact on the team is reasonably well-approximated by taking an average of these scores. I also find many things you might expect, e.g. variance in technical skill is pretty high, talented engineers usually develop high technical skills before high skills in the other domains, senior engineers tend to be force multipliers by being exceptional at the non-technical skills, etc.
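
As a rough sketch of what "multiplicative" means here (the names and 1-5 ratings below are invented, not any real rubric): for reasonably even profiles, the product of the axis scores and their simple average rank people the same way, which is what makes the average a workable approximation.

    # Hypothetical multiplicative impact model over the four axes above.
    # Names and ratings are made up for illustration only.
    from math import prod
    from statistics import mean

    team = {
        "dev_a": [5, 4, 4, 3],  # strong all-round
        "dev_b": [4, 4, 3, 3],  # good, slightly uneven profile
        "dev_c": [3, 3, 3, 3],  # solid, even profile
    }

    for name, ratings in team.items():
        impact = prod(ratings)   # multiplicative impact
        approx = mean(ratings)   # the simple average used as an approximation
        print(f"{name}: product={impact:3d}  average={approx:.2f}")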


Meh, this is almost exactly the formulaic industry-standard "solution."

The challenge here is that a human is evaluating all of these traits (you). Of course you think you're not biased, but research shows all humans are [e.g. more likely to promote people who remind them of themselves, without being threatening].

I think much like if an engineer made code and said "it's working great" without any kind of external monitoring to validate, asking a manager to promote without external validation is completely a crap-shoot.


I think what you label business is the most important and also hardest skill of these. It is also immediately relevant for productivity and team/training if you consider "learning as a whole"[0] a good approach (which I do).

[0] See: "Making Learning Whole" by David Perkins


> The risk with this approach is that the criteria are vague from a developer’s point of view. They can’t really know what their manager is thinking. There is now an incentive to focus on learning to sell themselves to their manager instead of focusing on becoming a better developer.

Yeah, no kidding. Welcome to white collar work.


You can measure:

Junior Developer: By how much more someone else was willing to pay them.

Intermediate Developer: By how screwed you are after they leave.

Senior Developer: By how screwed you're not after they leave.


This is true and very unfortunate. Nobody realizes when a senior is working hard at preventing bugs and complexity, making code understandable, or refusing to implement unnecessary features.


This is a really interesting statement. I interpreted it several different ways before understanding what you mean.

Yes, I agree with all of these.


Managing a team as a tech lead (that is, I also develop a lot).

I primarily use two techniques; simple project management and communication.

Our developers are not forced to work on the project nor are they assigned by anyone else; they are in the team because they want to work on it (and they can leave at any time). The project management is simple; we have a backlog and engineers are (mainly) free to choose what to work on. We have priority indicators and developers tend to work on things they are already familiar with, but in general one can choose what sounds interesting.

How do we (we are two tech leads) determine if engineers are doing a good job? On the code level, we are doing reviews (everyone can review everyone else's code) and give feedback. We are talking a lot with each other, asking about progress, blockers and new ideas. We also talk about technology in general. We encourage our engineers to look at other things and try new ideas.

And the whole project management is transparent; we as tech leads do most of the communication with the business so that developers can focus on their stuff but we report everything in great detail to them.

We do not use metrics like bug counts. Everyone writes code with bugs. What I recognize is whether engineers can take responsibility for their stuff or not. We "teach" them that mistakes are nothing bad; they are encouraged to say "Hey, that's my fault, I will fix it".

I think we couldn't use this approach with 20 or 30 developers, but in a small team it is really working for us. And as I see it, our developers have a great time at work and a developer-friendly environment.


I like this. So if the question was "how do you tell which sled dogs are really pulling", your answer would be: you can't, if you're standing on the sled. But if you're one of the dogs, it's a lot easier to tell.


Only a really good, really savvy manager can tell - the manager needs to be more knowledgeable (especially architectural knowledge) than any of the engineers working under them, and that almost never happens. I've worked for maybe 10 different tech companies in my career so far and I've only had one manager whom I considered to fit the description.

Most of the time, a manager might be really good in a very narrow area but they don't have the variety in experience or critical thinking ability to provide good leadership; they often just follow trends and patterns without actually thinking them through - They deal with everything as though it were inside neat little boxes and regurgitate blanket statements made by 'thought leaders' as the basis of all their decisions.


I believe you need to continue writing code to stay in the engineering mindset. Otherwise it's very easy to lose touch.

It would be a superhuman manager who could maintain their engineering practice while also managing all the soft skills (negotiation, politics, communication, presentation) that a manager needs.

In my opinion, the skill sets are diametrically opposed. One deals with precise certainties, the other with soft human ambiguities.


A software developer is doing a good job when his bottom-up programming skills are good. The most common mistake a beginner makes is working top-down. The way you know someone works top-down is that he does not, or does not want to, rewrite existing code, whereas he should be spending a big amount of his time doing exactly that.


I used to say I work one month a year writing code. The other 11 are spent debugging, supporting, documenting, packaging, controlling. Not as extreme these days with throw-away web projects but still a small fraction of time is spent on the original draft I believe.


> Can they write good code?

This is the most important, but it's also impossible to judge for non-technical managers, or even technical ones who were never that good themselves.

Are there companies out there that do independent audits of developers/code quality? I've always wondered if there is a business opportunity there.


So, this sort of devolves to economics, at least where I work. What I mean is, the people that pay for the code don't care and don't care to know. And this is entirely rational from their point of view. If it appears to work, it's good.

To audit the code accurately (as opposed to merely giving out aesthetic advice), in most cases would involve learning the domain, and this is what's expensive.

To the business folks, we're a black box. Stuff goes in, stuff comes out. And it costs. Paying someone else at least as much to prove that we did it according to 'best practice' (I think this is what you're suggesting?) would be anathema.

There might be a business there but it's not mass market.

It's probably also worth noting that all of the above applies to other domains too. Substitute 'marketing' quality for 'code' quality. In some real senses, "the world is everything that is the case."


> To audit the code accurately (as opposed to merely giving out aesthetic advice), in most cases would involve learning the domain, and this is what's expensive.

There are a lot of things between aesthetics and domain knowledge that are auditable. A lot of code bases you can walk into and see a million n+1 queries right away, along with the architectural patterns that lead to them. Another canary is the test complexity that reveals various anti-patterns like active record. Another is whether IoC is used correctly; a lot of code bases will use the container for creating POCO objects, for example. These are all very common and quickly discoverable.
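
Purely as a sketch of the kind of thing a reviewer can spot in minutes (table names and data are invented, plain Python + sqlite3 so it stands alone): the n+1 shape next to the single-join version you would rather see.

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.executescript("""
      CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
      INSERT INTO authors VALUES (1, 'ann'), (2, 'bob');
      INSERT INTO posts VALUES (1, 1, 'intro'), (2, 1, 'followup'), (3, 2, 'notes');
  """)

  # The n+1 shape: one query for the parent rows, then one more query per row.
  def titles_n_plus_one():
      out = {}
      for author_id, name in db.execute("SELECT id, name FROM authors"):
          rows = db.execute(
              "SELECT title FROM posts WHERE author_id = ? ORDER BY id", (author_id,))
          out[name] = [title for (title,) in rows]
      return out

  # What you'd rather see: one query with a join, grouped in memory.
  def titles_single_query():
      out = {}
      rows = db.execute(
          "SELECT a.name, p.title FROM authors a"
          " JOIN posts p ON p.author_id = a.id ORDER BY p.id")
      for name, title in rows:
          out.setdefault(name, []).append(title)
      return out

  print(titles_n_plus_one())
  print(titles_single_query())  # same result, one round trip instead of N+1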

> So, this sort of devolves to economics, at least where I work. What I mean is, the people that pay for the code don't care and don't care to know. And this is entirely rational from their point of view. If it appears to work, it's good.

I wonder if you could go in and say "this project is costing you $10 million" and we can give you an early indication of whether it's going to be worthwhile or be a turd.


   > There are a lot of things between aesthetics and domain
   > knowledge that are audit-able.
I don't disagree with you. The pertinent question is 'does anyone care'?

   > A lot of code bases you can walk into and see a million
   > n+1 queries right away and the architectural patterns
   > that lead to them. Another canary is the test
   > complexity that reveals various anti-patterns like
   > active record. Another is correct IOC usage, a lot of
   > code bases will use it for creating POCO objects for
   > example. These are all very common and quickly
   > discoverable.
Again, I don't disagree. You're right, possibly even correct. But we're talking business plans, yes? So feel free to buy that expensive suit (or hire a beard) and see if you can explain these issues, and why they matter, to the people with the money.

"If dogs could talk, perhaps we would find it as hard to get along with them as we do with people."


I get what you mean, but I do think the care factor could be broken down a bit better. Some simply don't care; talking to them is a waste of time. Some don't care because they can't: they don't have the visibility into things, so it's pointless for them to worry about it.

The final group does care, often because their arse is on the line, and they are often a CXO. These are the ones that love Agile/scrum because they think it'll identify problems early and/or they don't trust their devs.

Whether a solid business case could be sold I'm not sure, but I think everyone that ever hired a scrum consultant would be the target market.


There could be. But then again, how would you prove that you are a good coder (and/or a good judge of one)?


In my hypothetical business plan we'd be selling to the higher ups that wouldn't know anyway.


Just charge a lot of money and wear nice suits.


That's where the plan falls down. I'd have to partner with someone else that looks good and talks to management while I play the part of the disheveled geek.


Like being a full time scrum master?


Is it really the most important? I'd question that. If you have a guy who writes brilliant code, but isn't on board with the company philosophy, or is acerbic and difficult to work with, or something like that, he won't be as effective as a journeyman better at the non-code aspects of the job.


Programmers are like art: you can't exactly pinpoint why a particular painting is good but you can usually still put different paintings in rough order of how good they are. It's never that one brush stroke that reveals the quality but the complete picture.

With programmers, there are indirect, subtle signs which may or may not give clues. Sometimes they only validate some good or bad aspects of the programmer.

I think that to really know, the only way is to work with the programmer and see what works and what doesn't. Generally the team does have a good notion of who is good and who is truly great, although a manager might want to take the information from that source with a grain of salt.

However, it's easier to pinpoint programmers who aren't good, or not completely good. The guy who doesn't get things done. The guy who breaks more stuff than he fixes. The guy who rewrites everyone else's code as the first step in any project, or the guy who is always asking others for help with simple things. Or the guy who is actually a good programmer but always ends up doing something a bit different from the important things that need doing. Or the good programmer who is relentless in insisting he's right because "that's what the facts are" while completely ignoring social issues, such as understanding that a dictatorial attitude doesn't fly well with other programmers. These guys still provide some value, but you can see these shortcomings from far away. Then there are the guys who actually produce negative value, i.e. they take away others' time merely by being employed.


Once upon a time, I was refused a salary raise because I was told that a few of my commits had really critical bugs. At the same time another colleague was promoted to team lead because their app was flawless. A small difference here: I had worked on multiple complicated legacy parts of the platform used by lots of people, and the other team was building a single web API from scratch... I quit a month later...


Gregory Brown has fairly recently picked up the torch publicly reminding developers that the actual development is the smallest part of their job:

https://twitter.com/practicingdev (on hiatus)

"the thing that motivates me is inspiring developers to think holistically about software dev"

"The value you bring to your work is not measured in how much code you produce, but in helping other people solve real problems."

etc.

Semi-obligatory book plug: https://amzn.com/dp/B01LYRCGA8

--

Specific to this discussion, a big part of success as a developer is effectively communicating that one is doing a good job... if anyone has to ask this question, said developer has already lost!

The OP's question is definitely relevant to managers, and I appreciate the practicality of the discussion for both perspectives - what to do and/or what to look for.


Do we really need to know, in a non-subjective way? In a 1000-person company, how many people have their performance calculated in a non-subjective and even vaguely accurate way?

It must surely only be in sales, where sales volume is a great measure, and I think sales people accept that they are allowed and expected to learn the cut-throat techniques to gamify the situation and win. If we want to force the corresponding personality traits that make such measurement reasonable onto developers, then OK, I think story points achieved (aka velocity) is an equivalent measure we can take.

An experienced, twisted senior dev would know how to maximise their score there and make a junior dev look atrocious in comparison, but if it's OK for sales then it's OK for devs?

(my actual opinion - being subjective and relying on the opinion of good managers is fine. The developer personality matches this.)


Best way I've seen is to have peer code reviews. Have different people from across the team review each other's check-ins. Every. single. check-in. It can be a quick "buddy check", or a longer code review for a big meaty feature. It's totally fine if a junior dev reviews a senior dev's check-ins. This has the side effects of cross-training, catching bugs, preventing things from getting territorial, communication, and learning.

If everyone is looking at everyone's code, the team will form a pretty accurate opinion (at least within the team's context of what makes good code.)


My metrics are the following:

- being able to estimate how long the work will take*
- being able to deliver projects on time
- effectively communicating their status
- quality of the work delivered
- deliverables meet all stated business and architecture requirements
- being able to function effectively on the team

*If a developer comes to me with an estimate that seems really low or high, that's a good time for a conversation to see the reasoning behind their estimate so it can be adjusted if needed. This is a metric that also scales with experience; I expect more from senior members of the team.


I generally agree with those measures, but 1 and 2 (estimate and deliver on time) are only really possible when:

  * Developers and teams have quite some slack (c.f. 20% time)
  * The code and architecture are of very high quality and well documented (i.e. no accidental complexity)
  * There are very few external interruptions
and probably a few other factors. So, for example, in most legacy projects, it's practically impossible to give any meaningful estimates (i.e. estimates that are correct within reasonable ranges most of the time).

I wrote a few blog posts on estimating some time ago; those also touch on the issues I mentioned here. For example: http://devteams.at/a_spectrum_of_effort_estimates


This largely amounts to the question: "How do you tell if someone is extremely conscientious or just puts on a show of being so?"

The best answer is, wait a couple decades. Of course, that's not the answer you want, you want to know now. Close code reviews (plural) may tell. Ratios of kinds of bugs may also be a tell (if they're committing more dumb syntactic mistakes that debuggers easily catch they're probably good, and spending their time preventing ugly and subtle logic errors.)

But to the extent that algorithm choice and design is part of their job, that may not be enough.


These criteria fall short and are an example of begging the question[0]. To say that "they are a good team member" as an answer to "how do I know if they're doing a good job", doesn't really help.

I have worked with many developers who fit all five of these criteria, who never get any work done.

I like to break projects into small, medium, and large. A large project takes 4-6 weeks, a medium takes 2-3 weeks, and a small takes less than 2 weeks. A developer is doing a good job if they're completing two large projects per quarter. They're doing great if they're completing more than that. Instead of two large projects it could be 4-5 medium projects, etc.

Many projects are smaller than a "small", so your team should be bundling those together into some kind of focused "project" or "sprint" that is targeting a certain area and achieving a user-centric goal.

I also like to measure user satisfaction by sending surveys about new features. Does it solve a problem for you? Is it better than what we had before? Was it buggy? Score from 1-5.

These metrics aren't perfect, but in my experience they go a long way towards measuring developer performance quantitatively.

[0] http://begthequestion.info/


When you say projects, do you mean finished applications or are you talking about features?


Either, but usually features.

When it comes to building a brand new app, you need to break it into multiple large projects. The first one might be "create a new Cordova app with just a login page", since bootstrapping all the Cordova stuff might take a while. The next project would be "add features X and Y to the app, and deploy in the app store", etc.

I constrain projects to no larger than 4-6 weeks because I work in startups, where priorities shift constantly. You need to be able to finish Phase 1, and potentially say "another important priority came up, we need to push back Phase 2 by 2-3 weeks while we complete this other medium project".


The answer is simple: the manager should be a good developer, so the manager has the tools to appraise the quality of the code; otherwise they will be a bad manager.


This is a little circular though. If you are not able to identify good developers and your criteria for promotion is development ability then you stand a solid chance of promoting someone who just looks like a good developer to the position where they're judging other developers. That can be disastrous.


How is this different than evaluating any other white collar job, other than perhaps sales? Accounting, marketing, lawyering, HR, etc, all suffer from the same issues: no great quantifiable metrics, activities that are hard to tie to actual revenue, gameable metrics.

Maybe it is just me, but I haven't read any "how do you know if your marketer is a good one" posts, but I see many on how to evaluate developers.


It's rare to have your marketer leave and then, three years later, have your company collapse into bankruptcy because they did something a bit unusual way back when. That's different.


That's exactly my thought. I see a lot of articles about some "mystery of development" that seem to apply equally well to other careers.


I realised on the train today that outside of our bubble, 'developer' means 'of housing'. I was, for a good moment, very confused by a fellow passenger's newspaper headline...


I used to work in a nonprofit and they all use "development" to mean "fundraising."


I was hoping the article had an answer, but it really just asked the question and pointed out why some obvious answers are not good answers...

But peers tend to know who the good developers on the team are. They know whose code they don't mind taking over. They know who to go to with questions and problems. And those are the developers doing a good job -- the ones that the other developers trust.


The article harps on how it would be nice to have an objective and simply measurable metric. I'd refer to the classic statistics quote "The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data." which seems to be the case here.

There's probably a consensus that reading code is harder than writing code. You're not going to get a good evaluation of a developer's output without looking at that output, and it's not going to be simple, there are no shortcuts - if you need that information, you have to have a competent developer look at that output and they'd then be able to tell how good or bad the quality is, and how the quantity of work relates to the time spent. This can be done in code reviews.

Other than that, you'll simply have to accept that you'll use subjective metrics - which is also fine, this isn't the only domain where accurate measurements are hard/expensive enough to do them sparingly.


And hope the "competent developer" doing the evaluation isn't going to reject superb code because he isn't good enough to know why it's so different.


A good developer just goes for the low-hanging fruit and proudly heralds his competence to management, showing how much (s)he cares about the market.

In doing so, a lot of plumbing and dull, complex problems are set aside, left for the rest of the team, who are seen as incompetent, to fix.

When your managers are not good coders themselves, the good developers are the ones who do the courtship.


The problem seldom lies in the implementation, but in the people and the communication.

Programming is not a solo task (unless you do something alone, for yourself, without anyone ever using it). It's all about communicating your deadlines, issues, successes, problems, estimations, risks and decisions to the team/client.

Programming itself is a form of communication. You're usually writing the code for the next developer that's going to have to read it (which might be you in a few months).

As a manager/client if you're not sure if you have a good or a bad programmer, you certainly don't have a good programmer. If you have a programmer that always keeps you up to date and makes sure you understand what's going on, you do have a great programmer in your hands.


Always relevant to this discussion:

https://medium.com/@_princeb_/the-parable-of-the-two-program...

(This isn't the original source, I realize.)


this


The very best devs are not always using code as a 'solution'. Also, going back to your customer (internal/external) and reviewing their 'needs' goes a long way toward understanding the core issue they are facing.

Very often, in my experience, requests made to devs are 'solutions' the customers think they need for their core issue (which you have not yet discovered).

A very good dev will go past that initial request and ask questions to understand the core issue. Many times the initial request does not solve the core issue, and the customer needs to be made aware of this, because it has a huge impact on the final solution and on your business productivity.


But the impossibility of getting such straight, logical answers from most civilians/customers is a big part of why Agile exists; only when they see something can most customers be clear, at least about what won't work. It's not even a question of their being vague - they will often give you flatly false statements because they've never been trained to be precise, to think about edge cases, or even to know what intellectual honesty is.


Also known as the XY problem [1]. It's almost tragic how often the best opening question to a request is "what are you really trying to do?". The fact that it is a widely-known Stackexchange anti-pattern is pretty telling.

[1]: http://meta.stackexchange.com/questions/66377/what-is-the-xy...


Do peer reviews, the other developers know. And they want to work with good people.


I think this is a great approach, but it may backfire in smaller companies. If I have a coworker who is my senior and I'm asked how the code review with him went, I'm not going to rat him out and say he's a terrible coder. I'll say something like "it was really interesting to do a code review with ____. His design choices may not have lined up with my ideas, but the review went well." Or something like that. Maybe I'm alone, I don't know.


> it was really interesting to do a code review with ____.

This reminds me of the famous "what the British mean":

-What the developer says : "This is very interesting"

-What the developer thinks : "That is clear nonsense"

-What the manager understands : "They are impressed"


I'm sure you have company. Amongst that company: people I've fired for doing just this. [unprintable] with my information and you're gone. The only reason to do this is that you don't yet have another job lined up where you're allowed to do what you're paid for, but you're doing your best to find one.

I sympathize with your plight, but if you can't do the job, you can't keep cashing the checks.


Good developers are 'hardly working' and bad developers are 'working hard'.

Good developers are striving for the simplest solution in the realistic time frame in the given context and then they just go do it, which is what I mean by 'hardly working'.

Bad developers are at it day and night to further screw up an already screwed-up system; they are 'working hard' at the screw-up!

And as not everything is right in this world, the 'hardly working' ones either quit or have to make way for 'working hard' ones.


I guess the question really is "how do you know, without looking deeply at his code" (either because you don't have time, or because you're not a software developer). Which is a problem everyone has in every craft.

How do you know a plumber is a good plumber, if you know nothing about pipes? Well, you won't until he's done: does it leak or not, and how long until it leaks again?

Now, on the other hand, if you are a software developer, then simply look at the code in detail, and you'll know immediately.


In situations where agile/story-points make sense, I suppose developers that accomplish more story-points are "better".

In academia, articles that are referenced more frequently are "more important". For page ranking, pages that are referenced more frequently are more important. I wonder if there's an equivalent with infrastructure programming? If a developer builds a subsystem that is used frequently, does that indicate the developer has provided substantive value?
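
As a toy sketch of that idea (the dependency graph below is invented): treat "module B depends on module A" like a citation of A, then either count in-links or run a few PageRank-style iterations over the graph.

  # Hypothetical internal dependency graph: module -> modules it depends on.
  deps = {
      "billing":  ["core", "auth"],
      "reports":  ["core"],
      "admin_ui": ["core", "auth", "reports"],
      "auth":     ["core"],
      "core":     [],
  }

  # Simple "citation count": how many other modules depend on each module.
  in_links = {m: 0 for m in deps}
  for mod, uses in deps.items():
      for used in uses:
          in_links[used] += 1
  print(in_links)

  # A few PageRank-style iterations (damping factor 0.85).
  n = len(deps)
  rank = {m: 1.0 / n for m in deps}
  for _ in range(20):
      new = {m: 0.15 / n for m in deps}
      for mod, uses in deps.items():
          if uses:
              for used in uses:
                  new[used] += 0.85 * rank[mod] / len(uses)
          else:  # module with no dependencies: spread its rank evenly
              for m in new:
                  new[m] += 0.85 * rank[mod] / n
      rank = new
  print(sorted(rank.items(), key=lambda kv: -kv[1]))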


> In situations where agile/story-points make sense, I suppose developers that accomplish more story-points are "better".

Not really. Story points are cost estimates--a subjective judgement of how much effort a story will take. They're a prediction tool, not a productivity measure. It's very easy to increase the # of story points you deliver just by increasing your estimate and possibly even sandbagging. Not good.

If story points were value estimates, created by the business folks rather than the developers, they'd be better measures. But even then they'd be subject to gaming. It's easy to increase velocity in the short term by incurring technical debt. Say, not writing enough (or good enough) tests. Or not helping out co-workers. Or not spending time understanding business context. Or...


You are not the one who chooses which subsystem you will work on. If someone writes a mandatory logging wrapper which is then used in every single subsystem, that doesn't mean anything. If someone writes the self-driving algorithm for the car, which works on its own but is not used by other subsystems, that means nothing either.


In the real world being a good developer is not about programming ability. A great programmer needs to absolutely engage in managing perception in the everyday working environment.

The reasoning is this: programmer A addresses issues 2X as fast as programmer B.

Then, as things average out, programmer A will generate 2X more issues than programmer B, all else being equal.

Managers, clients and fellow programmers will almost always see programmer A as a problem and programmer B as a hero in this situation.


I don't agree with your choice of words here. Being a developer is absolutely about technical ability, combined with the people skills needed to use that ability within a team. That is, after all, the purpose of the specialization.

That a developer would benefit from other specific people skills doesn't mean those skills become part of being a great developer. A developer would also probably benefit from being better looking, due to human psychology, but what does that have to do with the profession itself?

Rather, this means that the industry favors politics over development, which is a problem with the industry. The fact that it is not great developers who will ultimately prosper in this environment should alarm you.

Perhaps that's the answer to the question of: "Where have all the good developers gone?" - nobody wants them.


I would argue that people feel "a developer is absolutely about his technical ability", but when you look at it you will find that this is a feeling rather than something which is in any way quantified, as hinted at in the article.

Very rarely does management evaluate a developer's problem-solving skills, or rank the quality of their code, or rank their eagerness to learn, for example... management does not do this.

What they do is judge "good team members" (and "is that developer nice to me" is very important) and "can I trust them", which goes back to "frequency of positive interactions" / "how often do I have to deal with issues".


Good point, namely that doing less is the best way to avoid creating bugs. Which screws up many a metric. But you need scare quotes around "good developer."


Reminded me of this great post: "Are Your Programmers Working Hard, Or Are They Lazy?"

Key point: "To managers I would say, judge people by results, by working software, not by how hard they appear to be working."

http://mikehadlow.blogspot.se/2013/12/are-your-programmers-w...


Set up an initial target (delivery points, for example) and adjust the target after each delivery. What matters most is the performance of the group in delivering more points per week/15 days... During this process you will observe the guys that are really engaged and contributing to deliveries, and others that don't fit in that group. If you have more teams working, you can reallocate people and check what works better.


I think comparing one person to another isn't the best way to judge. It's exactly what's wrong with our education system too.

Setting objectives and comparing against them has worked well for my team. Each member has a different style, capability and motivation for work. Challenging them on such attributes brings out the best in them. For those who fall short, there needs to be a penalty, of course.


http://paulgraham.com/gh.html

"With this amount of noise in the signal, it's hard to tell good hackers when you meet them. I can't tell, even now. You also can't tell from their resumes. It seems like the only way to judge a hacker is to work with him on something."


The answer is better managers.

They need to understand the history of the software, the current state of the software and the problem domain of the software. They need to understand the business priorities of the software, so they can evaluate the trade-offs proposed to them by their developers.

I don't think this is a job that a non-technical person can do well.


so, you say you need a person operating a vehicle.

you get a lorry driver and put them into an f1 car and expect them to set good lap times. or you hire a rally driver and put them into a taxi. it might even work - at least for a short time.

developing software isn't a narrow field any more. there are specialists better suited to different tasks. you might hire an excellent developer, put them on a task unsuitable for them, and they'll fail hilariously.

another interesting thing i noted is the quality of the project lead: a bad manager can have a bigger influence on some developers than others. some just don't care and happily do the work as told, others' productivity is crippled because they're forced to work in a way that doesn't feel comfortable to them (while they'd otherwise do a stellar job).


Level 1:

They ship code that works.

Level 2:

They ship code that works on time.

Level 3:

They ship code that works on time and meets the objectives.

Level 4:

They ship code that works on time and meets the objectives with minimal guidance.

Level 5:

They ship code that works on time and meets the objectives without any guidance whatsoever.

Level 6:

They redefine the job of programmer in some way that is material and long lasting.


# Managing 101

1. Help set goals for your employee.

2. Evaluate progress toward said goals.

3. Provide insightful praise and criticism.

4. If progress is made, then goto 1. Else goto 5.

5. If problem is short-term (goto 1) or long-term (goto 7).

6. Evaluate termination. If no termination, then 1. Else 7.

7. Terminate relationship with employee.


Looks like you've got some dead code there at 6.

Also I think the point is that 2 is not nearly as straightforward as it sounds, and probably requires better knowledge of the problem and solution than the employee has.


Untested idea: Do regular user surveys asking, "how reliable is the product?" This would provide a measure of how buggy the software is that can be tracked over time.


How productive is the developer? That is, how many tasks do they complete successfully over a period of time? The definition of a completed task is something that doesn't have bad bugs that affect the product negatively, and doesn't introduce very much technical debt.

Then, how technically challenging can the task get while the developer maintains the same level of productivity?


You need a manager doing a good job to answer that question. But then a new question arises...


A major reason we cannot define what makes a programmer "good" is that we have thus far been unable to define the practice of creating software as a proper engineering discipline [1]. It is difficult to even enumerate the necessary expertise of a "good programmer" in a general way that transfers everywhere - it is a combination of computer science principles, knowledge of specific tools / languages / frameworks, programming idioms, domain knowledge, certain communication and interpersonal skills, etc. Further complicating things, certain types of expertise are useful in some problem domains and useless in others (i.e. COBOL programmers or kernel hackers will have to gain a different sort of expertise to become effective web developers).

This means that one type of programmer can be "good" at rapidly gaining new forms of expertise. Another may be crazy good at a problem domain he/she has been hacking at for years, and have irreplaceable knowledge. Yet another may be able to do both well, but be stuck at a cushy job for 30 years and never realize their full potential for jumping around and building other great things (like some of my old coworkers).

This lack of clarity in the definition of a "job well done" for software teams is a huge source of pain and lost productivity in companies, especially when the stakeholders and the developers themselves are unable to interpret the signals of failure from their teams. There's no real silver bullet [2] to immediately improve the situation, but there is certainly lots of data that can inform one about a dire situation in a team before it's too late to fix it.

- Code itself has lots of empirical data [3] that can predict team performance in projects with enough of a track record (a rough sketch of extracting one such signal appears after this list).

- Analysis of team composition and practices [4] can uncover deficiencies and lack of confidence in a team, pointing at other underlying project risks.
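
For instance, and only as a rough sketch (the six-month window and what counts as "a lot" of churn are arbitrary, and churn is just one signal among many), file-level churn can be pulled straight out of version control:

  import subprocess
  from collections import Counter

  # Lines added + deleted per file over the last six months, via `git log --numstat`.
  log = subprocess.run(
      ["git", "log", "--since=6 months ago", "--numstat", "--format="],
      capture_output=True, text=True, check=True).stdout

  churn = Counter()
  for line in log.splitlines():
      parts = line.split("\t")
      if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
          added, deleted, path = parts
          churn[path] += int(added) + int(deleted)

  # The most-churned files are where review effort (and risk) tends to concentrate.
  for path, total in churn.most_common(10):
      print(f"{total:8d}  {path}")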

Currently (as the article described), all of this information is synthesized and estimated in an ad hoc manner by managers and stakeholders. What really needs to happen is for this data to be routinely collected and delivered to managers, stakeholders, and team members.

Here at Mindsight (http://www.mindsight.io), we are trying to tackle such issues. We're building technology that collects all this data and empowers a manager and his/her team to synthesize reports to stakeholders as frequently as their CI system publishes builds.

We may still have a ways to go in evaluating programmers in the general sense. However, there is still much that can be done to help individual teams understand their true progress in a project. Keep an eye on a future Show HN from us!

[1] http://repository.cmu.edu/cgi/viewcontent.cgi?article=2967&c...
[2] http://worrydream.com/refs/Brooks-NoSilverBullet.pdf
[3] https://www.microsoft.com/en-us/research/wp-content/uploads/...
[4] https://wweb.uta.edu/management/Dr.Casper/Fall10/BSAD6314/BS...


Great post. It seems a lot of the difficulty in judging quality of work is that we have no metric to define what quality software is in the first place. People talk of code being 'elegant', or readable, or efficient, but these are just the opinions of other developers, and not easily measured.


The Deming solution - re auto workers - was to constantly rotate all workers through different teams, then statistically judge (after accumulating years of repair records for individual cars) which individuals consistently dragged their teams down. You need a lot of data for this to work, obviously; but it does work.
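
A toy sketch of the statistical step (all numbers invented; in practice you would need years of rotations, many more rows than people, and proper controls): encode team membership per project and regress the outcome on it.

  import numpy as np

  people = ["ann", "bob", "cam", "dee"]
  # One row per team/project rotation: 1 = person was on that team.
  X = np.array([
      [1, 1, 0, 0],
      [1, 0, 1, 0],
      [0, 1, 1, 1],
      [1, 0, 0, 1],
      [0, 1, 0, 1],
      [0, 0, 1, 1],
  ], dtype=float)
  # Outcome per project, e.g. repairs (or defects) per unit shipped.
  y = np.array([3.0, 2.5, 6.0, 2.0, 4.5, 5.0])

  # Least-squares estimate of each person's association with the outcome;
  # a consistently high coefficient is the "drags the team down" signal.
  coef, *_ = np.linalg.lstsq(X, y, rcond=None)
  for name, c in sorted(zip(people, coef), key=lambda kv: kv[1]):
      print(f"{name}: {c:+.2f}")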


Simple solution that gets close to the social truth: ask team members to rank everyone else. Average the rankings. It works surprisingly well and for teams of up to 10 people or so differences in rankings are mostly noise.
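
A minimal sketch of the averaging step (the rankings are invented; 1 = best, nobody ranks themselves):

  from statistics import mean

  rankings = {
      "ann": {"bob": 1, "cam": 2, "dee": 3},
      "bob": {"ann": 1, "cam": 3, "dee": 2},
      "cam": {"ann": 1, "bob": 2, "dee": 3},
      "dee": {"ann": 2, "bob": 1, "cam": 3},
  }

  # Average the rank each person receives from everyone else.
  avg = {
      person: mean(r[person] for r in rankings.values() if person in r)
      for person in rankings
  }
  for person, score in sorted(avg.items(), key=lambda kv: kv[1]):
      print(f"{person}: {score:.2f}")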


That's not the worst thing to do, but it is gameable in the same way a lot of the proposed methods are. The developer who brings donuts to morning meetings is likely to score very high on this measure. Also, given the diversity challenges a lot of teams have, it's possible unconscious (or even overt) prejudices will affect scoring.

It's something to add to the toolbox, and a consistently low score for one person is a sure sign of trouble, but it does have some of the same shortcomings the other approaches do.


I would (and expect of my peers too) only value the donut-bringing as a minor strength compared to their technical skills and interactions with me. Which I think makes a little game-ifying ok: If someone wants to make a conscious effort to be the donut guy instead of the never-say-hi-in-the-hallway guy, they've earned a few points.

The amount of points to award is the tricky part; maybe it depends on company culture. How much value does the lovable team player bring, and how much value does the awkward quietly-fix-anything person bring?


I've seen peer ranking work well for hiring when people in the team are strong and are motivated to hire even stronger ones.

It doesn't work at all when the team members are not so strong. They tend to value people who don't question "best practices" and don't take bold, unconventional positions. In essence, groupthink wins.


Works if your culture is good. Only. (Doesn't necessarily preserve that culture though.)


articles with a question in the title rarely answer that question


And that is just fine. The idea is to highlight a topic for discussion, which this article did a great job of, as reflected in the discussion in this thread.


> "developers can be confident that they will rewarded for doing the right thing, not looking like they are doing the right thing"

From my professional experience (10 years in 3 companies), this is one of the biggest issues in software development businesses. You're either too small, in which case the manager doesn't have the time or skill set to effectively evaluate the developer's efficiency, or you're too big and there are too many layers between the developer and the manager. The situation where your direct supervisor is also a person who is authorized (and able!) to evaluate your performance happens very rarely.


"Maybe number of bugs fixed is the answer?"

What about the number of bugs never found? I.e. reward those producing the least buggy code. And an incentive to write code with fewer bugs aligns with business goals at least to some extent.


I'm going to flip the question, because we get a little more insight with it upside down.

How do you know a developer is doing a bad job? Is it because they don't code a lot? They sleep late? Maybe wear the wrong clothes?

I agree with the author that in an organization of any non-trivial size, all we have is subjective data, since effort and value are so completely decoupled in tech.

Unless you know you're going to be funded for life, every tech group has a difficult question it must face sooner or later: how do you deal with incrementally decreased funding, where somebody has to be released?

The best I've got is that the team has to vote somebody off the island. I don't like it, but it's a decision that must be made, and managers aren't the ones to make it. I would add to that: the team includes the developers, the managers, and the customer/users that interact with and receive value from the team.

It's an open question whether or not you decide to do this on a regular basis or just save up and whack somebody with a vote all at once. I don't like forced ranking, but I would prefer a rolling average of votes over a long period of time as opposed to an all-at-once vote. For some reason it seems more humane to me.


Asking the team to vote on who to fire seems like it'd be awful for morale and team trust.


Completely agreed.

What is the alternative? Drawing straws? A game of chance?

Should the team be responsible for downsizing itself? If not, how does that happen?

Perhaps the team force-ranks itself once a month or so using a 360-like method -- but the results are never read and kept secret until/if they are ever needed?


Drawing straws would be better for morale I think. So yes, if those were the two choices I'd prefer drawing straws.

I think management should make the decision though - even if they have to pick someone at random. The heart of management's job is to make this kind of large-scale decision and carry the responsibility for it.


I want to make sure I understand you.

You are working in a team of, say, six. You are working alongside some business folks and helping them make money and support their family. Your boss, let's say, is a nice guy but an old mainframe IT guy who doesn't really grok what kind of things the team is doing.

There is not enough money for all six to work next year, but the company thinks they can pay for four.

You'd rather have a manager make the call - a guy who doesn't really know what the team does or the individual contributions of each member - than the team? The team, who if push came to shove could probably agree on the four most qualified people to help everybody else keep their jobs in the coming year.

I don't follow this. I think the team is the most qualified, and I think if you care about the value you are creating then the team is also on the hook to make the tough decision. Now, how it's made and such? I have no idea. I just know that if you care about the work you're doing, the people closest to the work are going to be the most qualified to make business/management decisions like that. But maybe I missed it?


> I think if you care about the value you are creating then the team is also on the hook to make the tough decision.

I think if management isn't able to do it then what are they even for? (This is not just a rhetorical device - if I was working at one of those radically-less-management places like Valve then this might be one of the pieces I'd expect to have to pick up). But yes, I want the boss to make the call - he can ask me questions and I'll try to answer them, but I want it to be his decision and his responsibility. Not liking to make that kind of call is part of the reason I've chosen not to go into management (and paid the price financially), after all.


If you have no idea what the team does or who is contributing how much until you're starting to fire people then you aren't really managing anyone. Even if you are completely nontechnical figuring that out should be within your grasp.


I disagree, but my intent was not to argue. I've found it quite common even with developers working side-by-side for there to be a big gap between what they think is going on and what's actually going on. Thanks!


Of course, but if they continue operating that way for a long time it's failed communication. Part of what I'd expect from a decent manager is letting me know if everyone thinks I have an issue while I'm totally oblivious.


Yes. Of course.

Communication failure is the #1 cause of technology project failures.

Communication failure is insidious because there are no warning signs or alarms that it happens. In fact, in most cases, even after the death of a project there is no serious examination of the communication failures that caused it.

I love management. I love being a manager. I love being the guy who is responsible when things go wrong but has nothing to do with things going well.

But technology development has changed that game. It's basically flipped the idea of a manager upside down. It will take quite a while for the rest of us to adapt to the change.


The manager should use his subjective judgment, including (but not exclusively) asking other team members for their opinions about their colleagues, and everyone should be on the same page about how well or not they are doing. Any scenario where someone could be blindsided to learn their performance isn't up to snuff is just really poor management. Anybody watching that happen with their head screwed on straight is going to head for the exits as soon as possible (not to mention the potential for personal grudges in any system that relies entirely on peer input to decide to fire somebody).


If you turn up the heat a little, you can just get some or all team members to voluntarily quit. Boom, problem solved.


Agreed. Hard to define what is a good developer, but this would be terrible management.



