> because apparently 'cat error.log | grep 'ERROR:' | wc -l' is not how real programmers find error count
I've been on the other end of that, where we were explicit that the goal of the exercise was to solve problem X with technology Y so that we could gauge the applicant's skill with Y. And yet, we'd still get submissions that used Z because it was way better at solving X than Y was. Well, yeah, we know that. That's not what we were trying to assess.
I'm not saying you did that. It just gave me flashbacks to those situations.
If you know that Z is the best way to solve X then why would you ask them to solve X with Y? Why not give them a problem to solve that Y is best suited for?
It sounds like you got multiple solutions that used the best tool Z, yet you reward people for being fine with using an inferior tool. That's not exactly the pool of applicants I'd want to bias my search toward.
Edit: I just want to add that I don't mean to sound harsh. And maybe it would help to understand what X, Y, and Z are in your case. Details do matter!
Well, in the parent case X might have been "count the unique lines in a file", Y might have been Python because they were interviewing for a Python job, and Z might have been "cat | sort | uniq".
What the interviewer wants to know in this example is whether the author understands Python and file IO and data structures. Sure, you'd use the Unix pipeline in a production setting because it's better tested and faster. But in an interview context, that doesn't tell you whether the candidate is more likely to store item counts in a list or a dict, and that's what you're really trying to dig into.
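For concreteness, the dict-style answer the interviewer is hoping to see might look something like this (a minimal sketch; the filename is hypothetical):

```python
from collections import Counter

def count_unique_lines(path):
    # Tally each line with a Counter (a dict subclass) instead of
    # scanning a list for every lookup, which would be O(n) per check.
    counts = Counter()
    with open(path) as f:
        for line in f:
            counts[line.rstrip("\n")] += 1
    return counts

# Roughly the Python equivalent of: sort file | uniq -c
```

A candidate reaching for a dict here, rather than repeatedly calling `list.count()`, is exactly the kind of signal the exercise is meant to surface.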
Seems like you have an XY problem. You want people who know Python's standard library, but you think having them implement grep is the way to go about it.
If someone says write a grep-like thing in Python so we can see how you write something like that in Python and someone else submits anything in Perl that's not an XY problem. It's a basic failure to meet clear requirements.
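For what it's worth, the "grep-like thing in Python" being asked for can be quite small. A bare-bones sketch (my own illustration, not anything specified in the thread):

```python
import re

def grep(pattern, lines):
    """Yield lines whose text matches the regex pattern, grep-style."""
    rx = re.compile(pattern)
    return (line for line in lines if rx.search(line))

# Usage, roughly: for line in grep("ERROR:", open("error.log")): ...
```

The point of the exercise is precisely that there are several reasonable ways to write this in Python, and each one tells the reviewer something.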
If someone wrote grep in Perl and is then considered incapable of writing the same in Python, I now have doubts about the evaluation capabilities of the hiring committee.
The software world is huge. You will have to learn, forget, relearn, and learn new things all the time. You don't change programmers every time a requirement for a new technology comes along. Which is why, if someone can write grep in Perl, they can also, with a little yak shaving, write grep in Python. Then in Golang, Java, etc. And if someone wrote grep in Scheme, they can also write it in Clojure or Common Lisp.
Programming languages of a particular style (C-like, Lisp-based, ML family, etc.) don't change much in their semantic features within their categories.
The point of a take home project is to see if a person can get things done.
If a base requirement is the thing is implemented with X, like wanting a table made of teak wood, and you give me a table that doesn't have any teak in it you've ignored half of my requirements. I don't care if you assessed my needs as being better met by oak and you made a fine table, I need to put a teak table somewhere and you completely failed at that task.
The point of a take home test that says build foo in Python so we can see it in Python is not "to see if a person can get things done," it's to see how a person gets things done in Python.
I mean, we're probably not going to ask someone to solve a brand new problem as part of a take-home project. We're intentionally not trying to be tricky, but to cover well-known territory so there's plenty of prior art and examples to fall back on. As another case, if you were applying as a frontend dev, we might ask you to write a simple interactive web page using vanilla JavaScript, not jQuery or such. It's awesome that you know React! Great, you'll fit right in! But first, can you use the stuff built into the browser without loading an extra library? Or put another way, are you capable of extending web frameworks if the need arises, and not just using them?
You're kind of right about it being the XY problem, except it's where the misdirected candidate is trying to solve "how do I do thing X" when the real problem is "write a simple program in Y so we can know if you know Y". We aim to be very explicit about the project's requirements so that none of this is a surprise afterward.
Well, I don't know what is being tested here. grep is a fairly old utility, and it might have bugs, I don't deny that. But these utilities have been debugged to near-complete stability by now.
Besides grep itself was built on work done by other people before.
> GNU grep uses the well-known Boyer-Moore algorithm, which looks first for the final letter of the target string, and uses a lookup table to tell it how far ahead it can skip in the input whenever it finds a non-matching character.
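The skip-table idea that quote describes can be sketched in a few lines of Python. This is the Horspool simplification of Boyer-Moore (bad-character heuristic only), not GNU grep's actual implementation:

```python
def horspool_find(text, pat):
    """Return the index of the first occurrence of pat in text, or -1."""
    m = len(pat)
    if m == 0:
        return 0
    # Bad-character table: for each char in pat (except the last),
    # how far the window can shift so that char lines up with the end.
    skip = {c: m - i - 1 for i, c in enumerate(pat[:-1])}
    i = 0
    while i + m <= len(text):
        if text[i:i + m] == pat:
            return i
        # Shift by the table entry for the last char in the window,
        # or by the full pattern length if that char never occurs in pat.
        i += skip.get(text[i + m - 1], m)
    return -1
```

The lookup table is exactly the "how far ahead it can skip" part: when the window's last character doesn't occur in the pattern at all, the whole pattern length is skipped in one step, which is why these searchers often run in sublinear time on real text.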
If the idea here is to test whether the candidate can come up with the Boyer-Moore algorithm in an hour, then I doubt something like that is even possible. Things like that are not invented by someone in an hour; the original inventors likely spent a lifetime studying and working on problems to produce anything novel.
On the other hand, memorizing other people's work, which itself took years of discovery, proves literally nothing in the current age of Google look-up-able knowledge. It's the exact opposite of a person's capability to produce things.
The whole idea of a take-home test is to demonstrate capability. If submitting a working project under a tough deadline doesn't pass that test, then I'm not sure what hiring panels are even trying to do here.