Programmer's Dilemma (medium.com/i-m-h-o)
134 points by CoryG89 on Aug 15, 2013 | 106 comments



There is a fundamental disconnect of reasoning here. What we are expected to believe is that these people at their "big, name-brand companies" are carrying on an elaborate hoax on their employers, and that their inability to do blank page coding exercises reveals their fundamental uselessness.

Possibly the truth is that the majority of serious work is done under conditions where you are used to your framework and, frankly, can easily forget exactly how to fire up an editor and write code to solve a toy problem.

One might say this reflects more poorly on the interview process than the interviewee.

The idea that there is some special magic "creativity" involved in this kind of blank sheet programming that is never required when programming within a well-established framework is pure hokum. We have a lot of well-established practices in our code base that allow us to bust out a new and sophisticated graph analysis or transformation in precious few lines of code - meanwhile, the blank sheet guys are self-congratulating because they know how to call glib's hash functions in a clean sheet environment. B... F... D... (and I don't mean the library)


I've done kernel implementation for 20 years, and this question is absolutely terrible -- unless the answer that one is looking for is an intelligent explanation of why this is such a terrible question. It's an awful question because the kernel may not be involved at all on a user-level call to malloc() (e.g., a caching allocator like libumem or a traditional allocator that needn't extend the break for a given allocation), may be involved in a minor capacity if the break must be extended (on most systems, a system call and some VM interaction), or could be involved substantially if the allocator is a mapping allocator and page faults are induced (text and data) or scheduling events or anything else that necessitates kernel involvement.

For whatever it's worth, when I have historically interviewed university hires for kernel positions, the question I ask is much simpler; namely, what does "* ((int *)NULL)" do? Sadly, most university students -- even ones who have done very well in their university computer science courses -- don't get this right. And it's sad because it's not really the question, but rather just the segue to the actual question: when a candidate correctly answers that NULL is dereferenced and the program crashes[1], I ask them to write the program (statement, really) in an assembly of their choosing (most graduates have MIPS or SPARC on the resume) and I then ask for everything that happens between the execution of that instruction and the resulting core dump and return of control to the shell. With a qualified candidate, this could easily be a multi-hour jaunt through microprocessor architecture and operating systems implementation.

All of that said: I don't really do that anymore, but rather look to a candidate's open source works. The last kernel engineer I hired had done open source work that was so impressive that my interview consisted only of me determining that he had done the work himself -- which took all of 30 seconds or so. (He had, I hired him and he's been an absolutely terrific engineer.) Easiest interview ever!

[1] Except on AIX and any other asinine system that maps NULL.


>what does "* ((int *)NULL)" do

Technically speaking, that is undefined.

"If an invalid value has been assigned to the pointer, the behavior of the unary * operator is undefined...Among the invalid values for dereferencing a pointer by the unary * operator are a null pointer,..."[1]

[1] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1124.pdf, page 79


Yes, yes. What I actually do is write about a five line program which -- absent fancy alias disambiguation -- can't be optimized out of crashing. You're hired, okay? ;)
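(For the curious, a rough sketch of the shape such a program might take -- a guess at the idea, not the actual program. Deriving the pointer from argc keeps the compiler from proving the dereference away at build time:

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        /* With no arguments, argc == 1 and p is NULL -- but the
         * compiler can't know that statically, so the load below
         * survives optimization and faults at runtime. */
        int *p = (int *)(long)(argc - 1);
        printf("%d\n", *p);
        return 0;
    }

)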


> [1] Except on AIX and any other asinine system that maps NULL.

It's pronounced "aches" for a reason.


Is that an almost correct piece of code? What sort of situation would something like that come up in?


Ever hear of segfaults? They are almost always the result of dereferencing NULL.


I think of segfaults as being caused by accessing an out-of-bounds index in an array, personally. Yes, you can do the same thing by dereferencing NULL, but in my (university only) experience, overrunning an array came up much more often.


In C it is quite common for functions to indicate failure by returning NULL. Not checking for these errors is another very common reason for segfaults.
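A minimal sketch of that failure mode (hypothetical path; fopen really does return NULL when the open fails):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/no/such/file", "r");  /* NULL on failure */
        char line[64];

        /* No NULL check: fgets dereferences the stream internally,
         * so on a typical libc this dies with SIGSEGV. */
        fgets(line, sizeof line, f);
        return 0;
    }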


In C, accessing an out-of-bounds array index does not necessarily cause a segmentation fault. It's called a buffer overrun and is regularly used in malicious exploits / attacks.

What often happens instead is that you corrupt your stack, or mess with other memory in your heap, causing segmentation faults for various other reasons.
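A minimal sketch of the point (undefined behavior; assuming a typical build without stack protectors or fortified string functions, the stray byte lands silently):

    #include <string.h>

    int main(void)
    {
        char buf[8];

        /* 9 bytes (eight 'A's plus the terminating NUL) written into
         * an 8-byte buffer: out of bounds, yet usually no segfault --
         * the write lands in valid stack memory owned by something
         * else. */
        strcpy(buf, "AAAAAAAA");
        return buf[0];
    }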


I applaud the author for bringing up this point; however, there are only so many hours in a day. The developers in question are probably not 30 years old anymore. They were cool once too, ya know. Instead of bashing them for not coding and drinking Mountain Dew till dawn, why not see if they are willing to learn again ... on the job?

Be careful, young bucks ... you're looking at yourself in 10 years. The fix to this is company development programs ... not doing fake interviews (for real?) and switching teams.


I can only speak to US-based IT jobs, but there's a really distinct problem with the industry here.

There is a decent supply of bright young things coming out of school who don't know things, but know that they don't know things, and study and bust ass on stimulants to make up for it -- coding all night on Dew.

By the time those bright young things are 30, they have probably given up the Dew because they have high blood pressure, and their evenings and nights are taken up with family duties.

It's an all-or-nothing approach; spend all your waking hours on your job, or have a life. One's healthy, one isn't. One will keep you employed in the IT industry past 40, the other won't.

And yet, we hear IT companies complaining that they don't have enough talent and trotting out the old "old programmers can't learn new tricks" schtick. Instead of developing the employees they have, they want more input into the front of the system while they shuffle the 90% of programmers who don't become "greybeards" and aren't "management material" off into suboptimal jobs doing 4-hour response hardware service.


[deleted]


To me the most valuable programmer is a full-stack developer

That's because of the type of problems you work with. I have seen an embedded programmer literally create over 10 million in value in less than 8 months by rewriting someone else's assembly code. His code did error correction and cooperative multitasking on a 32 kHz embedded system, which allowed the company to avoid ripping out a lot of deployed hardware by adding new capabilities and fixing a range of issues relating to cosmic radiation flipping bits. (You deploy enough sensors and it's an issue.)


Although the article as a whole makes very good suggestions about keeping your skills fresh, I think his standards for the candidates he interviewed are impossibly high. Expecting an understanding of what the kernel does when malloc is called is probably fair (actually I think this is a trick question, as I believe the kernel doesn't actually page in any memory until it is accessed for the first time). But I don't think anyone without superhuman coding chops would be able to implement an LRU cache framework in one hour using a C library they have no experience with.


A simple LRU cache is not very much code (certainly less than 100 lines even in C). Assuming the hash function in question has a sensible API, getting something that works but has a few bugs and performs suboptimally should not be at all difficult for an experienced programmer who knows what an LRU cache is.
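For a sense of scale, here's a rough sketch (mine, assuming glib's GHashTable is the provided hash API): the table gives O(1) lookup, and threading the entries on a doubly linked list makes the recency bookkeeping O(1) too. Single-threaded, assumes capacity >= 1, error handling kept minimal:

    #include <glib.h>

    typedef struct node {
        char *key, *val;
        struct node *prev, *next;
    } node_t;

    typedef struct {
        GHashTable *index;     /* key (char *) -> node_t* */
        node_t *head, *tail;   /* head = most recent, tail = next evicted */
        int count, capacity;
    } lru_t;

    static void unlink_node(lru_t *c, node_t *n)
    {
        if (n->prev) n->prev->next = n->next; else c->head = n->next;
        if (n->next) n->next->prev = n->prev; else c->tail = n->prev;
        n->prev = n->next = NULL;
    }

    static void push_front(lru_t *c, node_t *n)
    {
        n->next = c->head;
        if (c->head) c->head->prev = n; else c->tail = n;
        c->head = n;
    }

    lru_t *lru_new(int capacity)          /* assumes capacity >= 1 */
    {
        lru_t *c = g_new0(lru_t, 1);
        c->index = g_hash_table_new(g_str_hash, g_str_equal);
        c->capacity = capacity;
        return c;
    }

    const char *lru_get(lru_t *c, const char *key)
    {
        node_t *n = g_hash_table_lookup(c->index, key);
        if (n == NULL)
            return NULL;
        unlink_node(c, n);                /* a hit moves it to the front */
        push_front(c, n);
        return n->val;
    }

    void lru_put(lru_t *c, const char *key, const char *val)
    {
        node_t *n = g_hash_table_lookup(c->index, key);
        if (n != NULL) {                  /* update in place, mark fresh */
            g_free(n->val);
            n->val = g_strdup(val);
            unlink_node(c, n);
            push_front(c, n);
            return;
        }
        if (c->count == c->capacity) {    /* full: evict least recently used */
            node_t *lru = c->tail;
            unlink_node(c, lru);
            g_hash_table_remove(c->index, lru->key);
            g_free(lru->key);
            g_free(lru->val);
            g_free(lru);
            c->count--;
        }
        n = g_new0(node_t, 1);
        n->key = g_strdup(key);
        n->val = g_strdup(val);
        push_front(c, n);
        g_hash_table_insert(c->index, n->key, n);
        c->count++;
    }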


I totally agree. This should not be a hard problem to conquer since the hash table is provided by library. I just want to see if the candidate is a competent coder and a fast learner.


Okay, yeah, I guess I was overthinking it a bit. It does seem relatively straightforward now that I reflect on it some more.


The glib hash API in question wasn't linked in the article; here's the reference: https://developer.gnome.org/glib/stable/glib-Hash-Tables.htm....

I would say it's very easy to use, but I'm quite biased (through experience).


While I agree with you, writing a crappy LRU will be a poor indicator of skill for a senior developer or architect. It will be just a pointless and useless piece of code. My guess is that the author asked for production-ready high quality code. I know that I've been asked many times for production quality code in interviews.


If I were the interviewer I'd just check how they go about it - do they just start typing or do they read the docs? That's a big one right there. I wouldn't expect bug-free code, just a reasonable attempt, clean code, and something that generally works, because an LRU cache is really simple.

I disagree it's useless - if I have a senior developer, they better know how to do the basics. Otherwise what are they good for?


What's the right answer here? Should they start by typing or reading the docs?


There is "crappy" as in buggy / overcomplicated and there is "crappy" as in lean, simple minded LRU that just works. The latter is production-ready high quality, even if it may be not tuned or optimized. You can tweak later, first can you actually ship some code?


I think it takes more than an hour to make something production-ready. Setting up test cases and tracing down memory errors and leaks is going to take a while, even (and I would think especially) for an experienced C programmer.


What memory errors and leaks? A good coder should be able to produce error-free code on first try, provided the documentation is clear and correct, and the problem is sufficiently simple (as in this case).

Tests and debugging will only get you so far; theoretical methods can get you farther in terms of correctness (provided your compiler isn't broken, etc.).

Quoting Linus Torvalds: "Don't just make random changes. There really are only two acceptable models of development: "think and analyze" or "years and years of testing on thousands of machines". Those two really do work."


Okay, maybe I should rephrase it. It takes more than an hour to carefully think and analyze, and then implement the thing, and then make sure there aren't any bugs. Thinking things through beforehand makes the possibility of errors less likely, but even then you may have made a silly mistake at some point or another. That is why we have things like unit tests and code reviews, because even the best programmers make mistakes. I certainly wouldn't trust C code written by a single developer in one hour to be production-ready.


I can rephrase as well. I'm not going to rush your interview code to production, but the review cycle needs to start with a first version. We'll refine as needed, but this is a simple enough problem to expect well-structured code in one hour.


Fair enough.


[deleted]


That's about it, except you have to keep track of the total size of your cache so you know when to start evicting. I didn't mean that it couldn't be done, I just thought that one hour was a bit short. I tried it myself and, after setting up tests for the common use cases and tracing down all of the memory errors, it took closer to an hour and 45 minutes. I suppose if the author wasn't looking for perfection, but for something quick and dirty, it wouldn't be so unreasonable.


I think the difficulty of the malloc question is a bit hard to assess for non-kernel programmers. The best I could do is some educated guesses that would probably be wrong. But then, I am not a kernel/systems programmer.

The LRU cache on the other hand sounds trivial.


When I thought about that malloc question, I started asking myself: "So, I know the old way was that eventually libc calls sbrk and the kernel adds some more pages to the page table. But I can't remember if that's still considered the right thing to do; I've heard of people using mmap for malloc for some reason I can't remember." But that's probably sufficient for his purposes.
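For whatever it's worth, that matches my (possibly dated) understanding of glibc: small requests come out of an existing arena, with brk() extending it only when needed, while large ones go straight to mmap().

    #include <stdlib.h>

    int main(void)
    {
        /* Small request: glibc usually carves this out of an existing
         * heap arena without entering the kernel at all; only when the
         * arena is exhausted does the allocator extend the break via
         * brk()/sbrk(). */
        char *small = malloc(64);

        /* Large request (>= glibc's M_MMAP_THRESHOLD, 128 KiB by
         * default): the allocator switches to mmap().  Even then the
         * kernel only records the mapping; physical pages arrive
         * later, one page fault at a time, as memory is touched. */
        char *big = malloc(1 << 20);

        if (small) small[0] = 'x';  /* first touch may fault in a page */
        if (big)   big[0] = 'y';

        free(small);
        free(big);                  /* an mmap'd chunk goes back via munmap() */
        return 0;
    }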


He's abrasive, but I really think michaelochurch is on to something here. Why are programmers old hat at 40, but not doctors? Answer: because it's accepted that doctors have to spend time sharpening their skills: reading journals, attending conferences, etc. This is not true at all of programmers ("The book said you could learn it in seven days. How hard can it be?").

Programmers sell themselves short.


Although I partially agree, medicine and the things that doctors deal with do not move nearly as fast as computing/software engineering.

I think the most important factor is the "virtual" nature of computing/software. In a relatively short period of time you can wipe everything and start from scratch.


If you do it right, by the time you hit 40 you understand that programming doesn't actually move that fast. The superficialities are churning rapidly, but the underlying truths are quite stable. They are moving too, but much more slowly.

For instance, take JS frameworks. On the one hand, they're churning so fast that nobody could possibly keep up with them all. On the other hand... they're all just shuffling the same basic primitives around in various combinations, and exploring spaces hardly any different from those their desktop brethren explored 20 years in the past. There have been cases where I've successfully predicted the failure case of the latest JS wonderthingy just by listening to how it works. For instance, five minutes after I heard Node described I knew about Callback Hell, even though the term as such did not enter the lexicon for about another year and a half. I could also tell you exactly what the proposed solutions were going to be, what their advantages and disadvantages would be, and almost completely called what order they would be tried in, because it was all just a recapitulation of experiences had in other communities. From the surface it looks like Node has been in wild churn; with more understanding, it's merely a turn of the wheel of history.

As an example of the other side, DRY isn't going anywhere, and developing your DRY-vision is the work of a decade. (It goes way deeper than most people seem to realize.)


When people tell me that I should "do projects in my own free time to keep my skills sharp", it throws me into a fit of rage and makes me want to tear them a new one. Do you see any other professions having to do that in order to stay employable? Can you imagine lawyers having to practice law in their free time to "keep their skills sharp"? Can you imagine civil engineers having to draft plans for bridges and buildings in order not to "destroy their ability to make a living"? Or doctors or any other high prestige professions doing anything like that?

If the answer is no, then WHY THE FUCK should I have to spend my free time maintaining a bunch of projects on Github just so I can keep providing for myself? Is what I do in my day job not good enough anymore? Now I can't even apply for a new job listing "backbone.js" as a requirement unless I have it listed in my resume or have a Github project utilizing it? Whatever happened to the notion of smart people being able to pick up and learn new technologies on the job? You think it's okay to offload the risk and the time necessary to pick up a new technology onto an employee and their free time, vs. providing the time and the resources for learning on the job?

Tell you what OP, you can take that attitude elsewhere, I'm not buying it. I prefer spending my free time having great interactions with my friends and my girlfriend, enjoying traveling and getting new experiences and just making the most of this one life that I have. Now, if you believe that my system of values is incompatible with yours/your company's, then I don't want to work for you, because you're an arrogant exploitative asshole with no respect for other people and their time/life.

This is the way you should do hiring, if you happen to have any semblance of fairness and intelligence:

If a senior person with a somewhat outdated skill set applies for a job, then you're not supposed to test them on the latest technologies; you are supposed to take a look at their track record and try to figure out whether they are generally a smart person and whether they were able to get things done back in the day. If someone was a good programmer 20 years ago, then they will be just as good now, if not better, because it's ultimately a person's intelligence and attitude that determine how good a performer they are, not a list of buzzwords on their resume. If you don't get this, then I'm sorry to say, you probably aren't very bright.

Caveat: if you are moving extremely fast for some reason or need something done IMMEDIATELY, then it makes sense to hire someone who already has the skill set you're looking for. If it's a long-term assignment/project, what technology they know or don't know has no bearing on how well they will ultimately perform.

PS. You have inadvertently confirmed everything another blogger wrote in a post about why a career in programming ultimately has low prestige:

http://www.halfsigma.com/2007/03/why_a_career_in.html


"Do you see any other professions having to do that in order to stay employable? Can you imagine lawyers having to practice law in their free time to "keep their skills sharp"?"

This is a profoundly ignorant question to ask:

- Lawyers essentially do have to practice law in their free time. Many, many state bars require continuing legal education in order to remain licensed. Can you imagine having to pay money for classes or be legally prohibited from programming?

- This is also true for doctors and nurses. Continuing medical education is mandatory for both to be allowed to practice.

- Most professional academics spend a huge amount of their "free time" thinking about their field, reading journal articles to stay current, and taking classes to stay current on new statistical software, programming skills, etc.


I don't quite buy your argument - yes these professions may work long days, but generally the upkeep of skills falls into those allotted working hours. Programmers should do this too, when it is helpful to their work. But suggesting that someone should add a whole new project to their resume on the weekend is akin to saying a lawyer should take on a pro-bono case just to buff their CV. If they want to do that, more power to them, but it shouldn't be expected of everyone.

As to academia... don't get me started. That is a strange and unusual profession which lies somewhere between 'hobby as job' and 'slave until you're tenured'.


I don't know, most professionals I know spend at least some of their "free" time (or at least the time they spend at home) reading journals or publications.

The idea of a profession is that one is not generally paid by the hour for "work," even if one bills a client by the hour, so time spent on professional development, whenever it takes place, is part of the work and covered by a salary rather than a wage.

Of course it might be true that other professions do less non-billable practical work, though this is probably more due to the fact that programming doesn't require a significant budget.

The issue, I think, is that many companies want programmers to work more like wage laborers and take care of the professional side themselves.

For example a law firm is probably more likely to allocate budget for associates to use for professional development than an IT contracting company.


CEUs required for licensure are not nearly the same as what the OP or GP are talking about. For Texas:

- Medical professionals require 48 hours of continuing education per two years, of which 24 may be informal self-study or hospital lectures

- Legal professionals require 15 hours of continuing education per year

- Professional engineers require 17 hours of continuing education per year

CEUs are obtained either through classroom instruction or attendance at recognized conferences or seminars. This is not the same as doing your day job after hours for fun or because you feel you must in order to actually get a new job. It certainly does not keep your skills sharp.

Doctors, lawyers, and PEs also have professional societies and often employers to facilitate obtaining CEUs in the most efficient manner.


Almost every doctor or lawyer I know would likely gladly trade CE for "programmer-style" self study - and CEUs can keep your skills sharp if you do them right.


They can, and are supposed to, but they have morphed into a box-checking exercise used to keep politicians and professional guilds happy and management asses covered. Hence the reason most of these professionals see the requirement as a burden. I felt the same way back when I was actually working as an electrical engineer.

That said, self-study is a different animal from the side project nonsense that the thread root is referring to. I personally engage in a significant amount of self-study. Some of it I roll back into my day job, some I experiment with in my free time, and the rest is purely academic knowledge acquisition. I don't have anything tangible to show off, though, because most everything I have done off hours is the software equivalent of a carver or welder practicing with scraps.

Maybe that isn't the right attitude for a programmer, but in that case a programmer is a different animal than any of the professions listed previously.


I think plumbers would be a better example than lawyers. Do you expect them to unclog toilets in their free time?


Toilets in Europe are not the same as in North America. A North American plumber will say "screw Europe, I will never work there." What if you need to move to Europe?


Exactly


He was asking them to implement an LRU cache in C, hardly newfangled stuff. Plus, they were kernel engineers, so it was supposedly their domain of expertise. I don't think he was suggesting that programmers have to always keep up to date with the latest trends. I think the main issue he was pointing out was programmers finding their skills dulled because they were spending all their time maintaining old code instead of seeking new challenges. So it's as if a doctor were treating the same patient, and only that patient, for the same disease for years. But then again, you never know; they may not have been very good engineers to begin with. But also, people in high-prestige professions do pursue professional development every now and then by attending conferences and training courses. The only difference is that it's on company time and their companies pay them to do it.


>Do you see any other professions having to do that in order to stay employable?

Yes. My dad is a critical care doctor. He regularly spends some of his free time reading medical journals, or studying for the re-certification exam.


That is the difference between a good and a great professional, in any profession.


Doctors have to do that as a matter of routine, because those are the only unbiased sources for learning about a new medicine or a treatment procedure.

Not doing that may make them bad, but doing it will not necessarily make them good.


> Do you see any other professions having to do that in order to stay employable?

Yes, most professionals do. Licensed professionals--doctors, lawyers, veterinarians, civil engineers, teachers, etc.--are usually required to get a certain number of Continuing Education Units every year to retain their professional license. The specific requirements differ from state to state.


Many professions enjoy employer paid continuing education. A totally different ballgame. Staying up every night committing to Github in hopes of landing a job is a terrible terrible thing. I know because I used to do it when I was young and stupid.

The only reason why I was able to land my first dev job fresh out of college is because I had an open source project that I built in my own free time. If it hadn't been for that, it's unlikely I would've got the job, and that's why I loathe the practice so much.

16+ years of formal education down the drain (or 30+ years of work, if you're older), unless you adhere to some arbitrary standards by IT blogosphere keyboard jockeys.


>Many professions enjoy employer paid continuing education.

Lawyers who work for a firm? Maybe. If they work for themselves, they're stuck paying the bills; continuing education is required in many fields, and yet it's not always reimbursed.

>Staying up every night committing to Github in hopes of landing a job is a terrible terrible thing.

That's never the "right" thing to do. The right thing to do is to commit to Github because you want to.

>16+ years of formal education down the drain (or 30+ years of work, if you're older), unless you adhere to some arbitrary standards by IT blogosphere keyboard jockeys.

OP wasn't talking about arbitrary keyword standards. You're tearing down a strawman of your own creation. OP was just talking about whether a developer actually could code, not filtering based on exact keywords.

To your comment: some domains DO take enough work to learn that I wouldn't be happy hiring someone with no experience in them. I'm not talking about the fancy JavaScript framework du jour, or experience using a particular database, but actually different domains. Like kernel programming. I wouldn't touch a Java developer with 10 years of experience at enterprise Java for a project that involved kernel-level code. Or game programming. Or Android app programming, even; most Java developers would still take months to get up to speed on Android, if they ever managed to adapt to the Android way of thinking at all.


Err, what? Months? It's just an API. I was once given two weeks to write a prototype of a small Android app, w/ no prior Android experience (years of Java, though) - granted, just being a prototype, it didn't need to be production-quality code (and I'm sure it wasn't), but I got the job done. I would feel perfectly comfortable putting an experienced Java dev w/ no Android experience in an Android role, provided they were working under the mentorship of an experienced Android dev.

ETA - I do agree w/ your main point, though - I wouldn't put a Java dev w/ no kernel experience on a kernel project either.


A demo is very different from production code, especially on Android.

Android is NOT just an API. You need to really grok the Activity Lifecycle [1], or your app won't behave correctly. I don't mean just skim it; I mean really get that you need to save state at the right places, or you'll break the behaviors.

You need to understand how to support multiple devices correctly [2], including how to best design the app to scale to multiple screen sizes. Ideally you'll support both phone-sized and tablet-sized devices, and so you'll probably need to understand how the Fragments API works. [3]

You need to understand Intents and how they function. [4] You need to know whether you'll need a Service [5].

This is just the stuff off the top of my head that you need a clear understanding of before you write the first line of code. Most of these things are not easy to just "use when you need them;" you need to know how they all interact before you start, or you'll be throwing out a lot of your code.

Android is a really alien API, even to people who've done GUI work before: I think it's very poorly designed from a "Principle of Least Astonishment" point of view. They created a lot of new concepts, and I don't really think all of them are superior to the standard way of doing things in a GUI. But they are what they are.

I've TRIED to work with an experienced Android developer who didn't know all of these in advance, and he tried to charge me for 30+ hours of work for something that should have taken 2-3 hours -- just because he'd lied about KNOWING how to use Fragments. I can't imagine a Java developer with no experience diving in and getting all this right the first time.

[1] http://developer.android.com/training/basics/activity-lifecy...

[2] http://developer.android.com/training/basics/supporting-devi...

[3] http://developer.android.com/training/basics/fragments/index...

[4] http://developer.android.com/training/basics/intents/index.h...

[5] http://developer.android.com/reference/android/app/Service.h...


> Or Android app programming, even; most Java developers would still take months to get up to speed on Android, if they ever managed adapt to the Android way of thinking at all.

So you doubt they would be able to adapt to an environment that uses the exact same language they've been using for 10 years, simply because they don't know the API? Do you think that someone who is smart enough to do Java development for 10 years isn't smart enough to pick up a new Java-based environment in days/weeks?

Your attitude reeks of arrogance, elitism, and lack of empathy. I'd never want to work with people like you in any capacity.


>So you doubt they would be able to adapt to an environment that uses the exact same language they've been using for 10 years, simply because they don't know the API?

Not exactly. Probably 95% of "Java Developers" have never touched a GUI, or an event based OS, or graphics code, or an end-user app, for that matter. They'd have plenty of ways to fail besides not knowing how to read a manual.

>Do you think that someone who is smart enough to do Java development for 10 years isn't smart enough to pick up a new Java-based environment in days/weeks?

Weeks, maybe. Days: It depends on what you mean. Start tweaking Java code in an existing app? Sure, why not. Architect a non-trivial app from scratch? Extremely unlikely. Again, talking statistically.

95% of Java developers are NOT as sophisticated as probably half of C++ developers. There's a higher minimum threshold of skill to work on the problems that C++ developers work on than the ones that MOST Java developers work on.

You can be a 100% competent Java developer, and yet not have the right skills to do most of what I do on a regular basis.

And this isn't about YOU; I'm only talking statistically. Most Java developers work on server code, enterprise code, private company apps, and other things in domains very different from the ones I work in. These things are important, skilled work; and they're VERY DIFFERENT from Android development. Or game development. Or low-level C development.

>Your attitude reeks or arrogance, elitism and lack of empathy. I'd never want to work with people like you in any capacity.

I do believe I'm a better developer than most. I feel like that's based on objective facts, but it certainly can come across as arrogance or elitism. Truth is I'm happy to work with developers of any skill level, as long as they have the skills to do the tasks they're assigned to do. I spent years working as developer tech support for a popular API, and I helped hundreds of developers to use it correctly, and to find the bugs in their apps -- by the end I had a nontrivial fan club, and a lot of people contacted me later to see if they could work with me. But have it your way.

I don't know how you glean lack of empathy from my post. I empathize with people just fine; if I'm hiring someone, though, I have to make a business decision that makes sense, and I'd have to be stupid to hire someone who can't get the job done, as much as I'd love to give everyone work. Too often I've not been ruthless enough; someone with years of experience in Android AND Java and tons of great references must be good enough to get my project done, right? Wrong. See: Mythical Man Month, some programmers are 10-20x better than others.

If you tell me who you are, though, I'll be sure that we don't accidentally work with each other in any capacity. Though since your comment sparked this thread with hatred at people who hire based on (among other things) GitHub commits, I suspect that isn't a danger to begin with.


>That's never the "right" thing to do. The right thing to do is to commit to Github because you want to.

The article presents doing your own projects as a thing you should do.

>Or Android app programming, even; most Java developers would still take months to get up to speed on Android, if they ever managed adapt to the Android way of thinking at all.

Are you suggesting that it would take an experienced Java programmer several hundred working hours to get up to speed on Android? And that you expect new hires to have worked several hundred hours on Android in their free time instead?


>The article presents doing your own projects as a thing you should do.

Well, if you don't want to, and you don't have real experience elsewhere in a domain close enough to what I need, then you won't likely be a developer that I work with, no.

>Are you suggesting that it would take an experienced Java programmer several hundred working hours to get up to speed on Android? And that you expect new hires to have worked several hundred hours on Android in their free time instead?

Most Android developers work solo, at least the ones I know. If you can find a team to work on somewhere -- some big company that has 4+ people working on an Android project, then maybe they can hire a Java developer and train them on Android. I have no desire to pay a developer for more than 20 hours to learn to do something that shouldn't take more than 4 if he already knew how Android worked (this happened to me, when someone lied and told me they DID know how the Fragments API worked, for instance -- after 20 hours he had a mess of garbage that I threw out).

But a solo Android developer probably needs at least two to three months of Android coding experience to really "get" the relatively more obscure parts of Android, yes. Can a Java dev write code on day one? Sure. Can a Java dev start writing production code from day one? Day 14? Not anything I'd want to use.

Major exception: Java is Java, and so having someone contribute to an existing, already working app, but staying in the "data mangling" domain, could be done from day one. I'm really talking about knowing how an app should be designed, not whether someone can write code to talk to a database on Android after 10 years of experience writing Java that talked with databases on servers.


Doctors in the US get paid per procedure. I sincerely doubt there is an insurance billing code for time spent reading journal articles.


The world of software development moves faster than almost all other fields at the moment, particularly with regards to the tools that we use.

All other things being equal, the person with experience in the specific tools the organisation uses will get the job.

It's just an unfortunate consequence of the industry we are in.

You are of course free to not chase the bleeding-edge tools in your spare time, and you'll probably stay employed just fine. But there is a risk you can be left behind or miss opportunities.

The people who were hacking on early iOS, Rails, Scala, etc. are today's experts enjoying niche markets. The people who didn't are working on VB or PHP for a sausage factory somewhere.


Side note: The people who chose to hack on the less-sexy Android OS are also extremely employable right now. I've had to turn down enough work this year that I could have employed three other Android programmers full-time, if I'd had them on call.

I learned Android on my own time, because I liked the idea of a Linux-based phone OS. Most of the jobs I've gotten have been EXACTLY because I've been playing with the right technology on my own time. OP is right on, and VexXtreme is making noises like a mediocre developer who has a chip on his shoulder.


Agreed. It can be very frustrating to see technology change so, so rapidly. What we worked hard to learn is worth a bit less each year.

But the fundamental data structures/algorithms don't change, and the problem-solving skills don't change. For example, get an Oculus Rift dev kit and fool around with it for a few months; when the consumer version comes out, you'll probably have more work than you can imagine. It's the same pattern.


Yeah, that is rather annoying, isn't it? It's like, do we really need yet another way to build a web application? Most of these frameworks don't seem to offer anything novel but rather re-implement the same MVC pattern over and over again. But I'm not a web programmer, so I wouldn't know.

On the other hand, Linux kernel programming, which is what the author mentions, hasn't changed that much over the years. What the author seems to bemoan is that the developers are letting those core skills slide because they become dependent on the tools they use in their day-to-day jobs. But actually, I'm not convinced that the mediocrity of the candidates was due to decaying skills. They may not have been particularly skilled to begin with. Experience doesn't necessarily correlate to competence, after all.


I don't think that's true at all. According to this paper (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2909426/), the number of new medical journal publications per year significantly outpaces the number of new computer science journal articles.

Doctors are expected to keep up to date with the most recent research as part of their jobs, and they are paid to do so by going to conferences etc. I don't think the same can be said for software development.


Academic CS research has practically nothing to do with professional programming. Professional programmers constantly learn new tools and technologies... for the most part these aren't academically novel. (Much to the field's detriment)


Does your employer not send you to conferences and training? Any hack-days, "20% time" or whatever percentage or encouraging people to set up lunch & learn equivalent meetings?


That's because medicine is much more of a science than software. Software changes like fad diets, not like surgery.


> Do you see any other professions having to do that in order to stay employable? Can you imagine lawyers having to practice law in their free time to "keep their skills sharp"?

I see your point but I don't think it is specific only to IT. Lawyers for example (at least in Germany) have to read a lot to keep up. So much that most of them can't do that at work.


This was a badly chosen example indeed: in many countries lawyers are obliged to continue studying and completing tests throughout their careers. In fact, one of Holland's most prominent lawyers got suspended for not obtaining enough study points over an extended period of time.

Anyway, in a sense I agree with the author. To stay sharp, programmers need to get out of their comfort zone. There are multiple ways to do that; side projects are one of them. If you don't get out of your comfort zone, you run the risk of becoming like a pilot who has forgotten how to land or take off, because you have only been flying mid-air for such a long period of time.


It's called Continuing Professional Development (http://en.wikipedia.org/wiki/Continuing_professional_develop...) and is indeed a requirement in a great many professions.


You certainly don't have to do this. But let's face it: programmers have higher wages, in part, b/c the technology is constantly changing and the supply of those who keep up is small. Since the structural barriers to most developer jobs are almost nothing (if you're good, someone will give you a job, and you can even work remotely for an NYC company from Bangladesh), the shifting sand of technology is what drives our wages up.

Now here's where you can win. The hiring manager or business owner wants to minimize risk. If you have 20 years of experience working on mainframes in COBOL, chances are you're going to be a passable programmer with good fundamentals. Now say the hiring manager's company is using Rails or node.js. You know you can learn it quickly. But the manager might be nervous that you don't like to adjust, and that you won't be productive for 2-3 months while you learn. Maybe he/she thinks you're going to be complaining the whole time about how node.js sucks (while you forget how much COBOL sucks). You can alleviate this worry by having a few simple projects up in node.js or whatever the latest craze is, plus your good fundamental experience.

So enjoy your travels, but maybe put up a minimal side project every year or so. That's my plan, and I'd like to keep doing this developer thing for a while. I'd like to think, as someone who will eventually get old and have lots of experience on outdated stacks, that these are some of the tactics we can use to combat ageism in engineering.


Teachers / doctors / dentists all need to take courses to keep their skills up to date. Programming is no different. I don't think there is one profession where you aren't required to keep your skills up to date / put in extra time.


As I've mentioned before, in a lot of professions it is paid and considered part of the job. If not, then at least it's regulated. We enjoy none of those benefits.

Software is a career requiring a much higher level of personal commitment than many other careers, while often not necessarily being better compensated.


In which professions is it paid? Teachers / dentists I know have had to pay for their own education.

Isn't the level of personal commitment subjective though? What would your solution be?

I think it's hard for programming because there's such a diverse array of programmers (university-educated, grounded in data structures and algorithms, vs. self-taught programmers who might not know those things). My recent interviews with Amazon tested my core CS knowledge but nothing like frameworks or anything. I suspect those things would matter more at startups / web shops (where I used to work).


Software is a career requiring a much higher level of personal commitment

Except it really doesn't. Most of us chose to put in the personal commitment because we love it and then convince ourselves it's because we have to.

I know several people who code 9-5, Monday to Friday, on whatever their boss tells them to code, and that is it. Sure, none of them will ever be offered a top job at Google, Facebook, or the awesome SV startup of the week, and sure, most of what they do sounds frightfully dull, but they still have a successful if unassuming career in software.


>> http://www.halfsigma.com/2007/03/why_a_career_in.html

I read that article, and while I don't fully agree with it, there seems to be at least a grain of truth in there. He talks about how VB programmers are no longer wanted because VB.NET is hot stuff now (the article was written in 2007). He also talks about how one's professional knowledge counts for less and less as a programmer, and how a lack of any barrier to entry has resulted in a field flooded with cheap workers with no job security or general social prestige.

I consoled myself thinking this won't apply to the more sophisticated programmer, who is well trained in algorithmic and mathematical thinking. For example, writing machine learning systems, doing advanced statistical analysis, or kernel development DO have at least some barrier to entry, and these are not things that a random script kiddie can learn from an "x for dummies" book. Can a more senior engineer comment?


I don't think the guy meant it in that sense either.

The areas that you mention only have a high barrier to entry because there is no bulk job market in those areas, plus no one has yet decided to write an 'x for dummies' book on them, for the very same reason.

There are more websites written everyday than kernels.

Web development isn't particularly easy, but has been made easier because there is demand for it that way!!

Let's say you have a markup language. The language implements every kernel feature known to mankind. Now all you need to do to get your very own personalized kernel is specify the interplay of the features through the markup. Let's say such a language also keeps up to date with all the latest developments in the kernel development field. Over time you'd see even the most noob guy producing his own framework to do a range of kernel work.

Kernel development will look pretty much like producing a HTML page. And you will have a dummies book for it.

This is how kernel development would look if there were millions of jobs worldwide for it.

You could apply the same analogy for anything.


You make an interesting point about abstractions and the march of technology. As technology progresses, the difficult and complicated tasks of yesterday become abstracted away and more accessible to less trained people. C, followed by Java, allows MOST programmers not to bother with understanding assembly; the same goes for web frameworks, and I can see how it would be possible to implement a markup language that generates custom kernels. I can also see how, within the next few years, we will certainly have easy-to-use machine learning libraries (there are already many good ones today) that bring the power of ML systems to programmers not trained in statistics and linear algebra.

That makes me wonder what kinds of technical skills have long lifetimes, are valuable in the real world (as opposed to merely being technically interesting), and are difficult enough to acquire that one doesn't have to worry about competing against a low-paid worker army....


> I prefer spending my free time having great interactions with my friends and my girlfriend, enjoying traveling and getting new experiences and just making the most of this one life that I have.

For me, software development and picking up new technologies and abstractions IS "making the most of this one life that I have." If you truly love what you do, then there shouldn't be some dividing line between what you do for fun, and what you do for money.

You shouldn't program on your spare time in order to pick up new high-demand skills, indeed that's just a pleasant side effect. You should program because it is fucking radical.


Programming can be fun even if it's unfun after 8 hours of programming.


Interesting article. I actually have a talk that I give about this exact issue. I laugh at anyone [developers] who calls themselves an "expert" unless it's followed by a very specific, narrow topic.

I dislike the title "Senior" when discussing developers because it's usually just a title given to those who have 1) been there at least 3 years, or 2) have n years of development-related experience on their resume. The problem is, the title is associated with a level of assumed expertise. As the article says, these guys can barely answer a basic question.

However, that's not to say that they don't know how to be productive or produce quality results.

It wouldn't be so bad if developers just went out and read someone else's code or attended a code camp or user group, but they don't.


It's hard not to fall into this situation even if you work for startups or small companies.

Say you arrive on day 1 of a greenfield project. You will soon be fitting in with other developers' patterns and spending time digging into code that's not your own.

Even if you are the lucky one putting low-level frameworks and architecture into place, that eventually stabilises too, and you move on to more typical work at a higher level of abstraction.

As time goes by, your knowledge of the code, product, domain, organisation, market etc continues to grow. In a matter of weeks, you then have outsized value to that particular organisation vs starting off as a generic code monkey somewhere else.

This is not a trap or a dilemma, it's just the nature of team-based software development. Not to achieve this in each of your roles would be a bigger cause for concern!

I would say, if you are hiring someone to work on your big complex legacy system, you should probably attach some value to the fact that the candidate has proven expertise in exactly that scenario over the long haul in their previous role.

Succeeding in that environment takes a certain skillset, and it's disingenuous to reject people for not knowing certain low-level programming constructs that they won't even be using day to day. (Kernel work is one exception to this.)


I wish people would use spell check when they write headlines for blog posts.


Google seems to indicate the author is (was?) in Beijing, and I'm guessing that English is not his first language (reading the post seems to bear this out).

It's a valid point, but I'd give him a tiny bit of leeway in this case.


You don't need to Google his name, even. He says he's in Beijing at the end of the first section. Presumably wyclif was so turned off by the spelling mistake in the title, he decided not to read any of the article itself.


I read the article, and I knew it was Chinese in origin.


Haha, sorry, I was just having a joke.


Sorry, I missed that. Chrome didn't seem to point that typo out, so I neglected it. Seems I'm too dependent on tools. :(


Cut him a break, English is clearly not his first language.


I'd be inclined to cut him a break, except that he doesn't seem to cut a break for developers parachuted into his milieu.

The situation seems quite analogous.


I disagree. He cuts them a break: he gives them the opportunity to interview. It's up to them to make something of it by proving they have practical experience.


I don't see what that has to do with not using spell check. Wyclif is not criticizing the author for not knowing English spelling--he's criticizing him for not using spell check.


There are more than just a few typos. There are myriad grammatical errors. I don't know how much grammar checkers have improved in the past 5-10 years, but I don't recall MS Office being all too good at catching grammatical errors or being able to recommend correct solutions.


Slacks are cut, breaks are given.


Well, he's fixed it now. We all make mistakes. Why not point it out politely instead of being snarky about it?


I did point it out politely. Of course we all make mistakes. Whatever are you talking about?


I'm sorry, but "I wish people would use spell check when writing blog post titles" comes across as rather sarcastic and rude. Perhaps you didn't mean it that way, but the internet removes all tone and context.


This is part of why I just occasionally reimplement chunks of existing libraries. It's fun, gives me something to write about, and keeps me sharp.

If the only goal is to push the current project out the door, then reimplementing a hash table or a chunk of printf is pointless. But that sort of thing from time to time keeps you from treating all these things as opaque black boxes.


It is kind of interesting to see the difference between the approaches that tech interviewers use - the following was one that I saw today on LinkedIn:

http://firstround.com/article/The-anatomy-of-the-perfect-tec...

I think that it needs to be a combination of 1) what is on your resume, since you become an expert in a fairly specialised piece of technology, and 2) general knowledge.

I feel a bit sorry for the guy who had to write the LRU cache. I have been in unfamiliar surroundings before and come across as a blathering idiot, and I didn't feel it was 100% my fault. The interviewer has some responsibility to try to understand the background of the person, and is not there to show how smart he is.


Maybe programmers rotating every 2 years is what drives companies to have more and more stable frameworks.


Completely agree. The other problem I've seen with getting stuck on the same project for X years is that you'll have a very limited portfolio to show for it, unless you're working on side projects as well.


If you are on a project for a few years (and aren't doing just maintenance, in which case I'd GTFO) you should be able to show a roadmap of where you took the project, its success etc. which might not be so bad.


Two observations:

- The worst programmers work for big corporations. Because there the overall level of competence is so low they can scrape by barely doing anything.

- Very true about keeping skills sharp and developing new skills. It's hard to do and hard to keep up but always worth it.


MORE programmers work for big companies, so you get a full normal distribution, and I would imagine there's a bias toward interviewing more of the ones who got fired or poor performance reviews. You only see the tail.


The worst programmers also work for small companies, because it's easy to be a big fish in a small pond, etc.

Or maybe there are varied workers at various levels in various organizations. No simple rule satisfies.


This is also the source for some of the fizzbuzz issues. It is astonishing how hard some people find it to make a computer print one through one hundred, never mind adding the flow control correctly.
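For reference, the canonical exercise in its entirety -- print 1 through 100, substituting for multiples of 3, 5, and both:

    #include <stdio.h>

    int main(void)
    {
        for (int i = 1; i <= 100; i++) {
            if (i % 15 == 0)
                puts("FizzBuzz");   /* multiple of both 3 and 5 */
            else if (i % 3 == 0)
                puts("Fizz");
            else if (i % 5 == 0)
                puts("Buzz");
            else
                printf("%d\n", i);
        }
        return 0;
    }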


Most good developers can barely remember a line of code; they usually have a general idea as to how you could implement aspects of a concept and build on it as they go. Incremental changes.


"We are captives of our own identities, living in prisons of our own creation" - a quote by T-Bag in Prison Break




