
> To me, this feels similar to finding the correct granularity of unit tests or tests in general.

I recently had an interview with what struck me as a pretty bizarre question about testing.

The setup was that you, the interviewee, are given a toy project where a recent commit has broken unrelated functionality. The database has a "videos" table which includes a column for an affiliated "user email"; there's also a "users" table with an "email" column. There's an API where you can ask for an enhanced video record that includes all the user data from the user with the email address noted in the "videos" entry, as opposed to just the email.

This API broke with the recent commit, because the new functionality fetches video data from somewhere external and adds it to the database without checking whether the email address in the external data belongs to any existing user. And as it happens, it doesn't.
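A minimal sketch of that failure mode, using hypothetical table and function names (not the interview project's actual code), might look like this:

```python
import sqlite3

# Toy schema: videos reference users by email, with no constraint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE videos (id INTEGER PRIMARY KEY, user_email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com', 'Alice')")

def ingest_external_video(conn, video):
    # The buggy commit: insert whatever email the external source
    # provides, without checking it against the users table.
    conn.execute("INSERT INTO videos (user_email) VALUES (?)",
                 (video["user_email"],))

ingest_external_video(conn, {"user_email": "nobody@example.com"})

# The "enhanced" lookup now silently finds no matching user.
row = conn.execute("""
    SELECT v.id, u.name FROM videos v
    LEFT JOIN users u ON u.email = v.user_email
""").fetchone()
print(row)  # (1, None) -- the join comes back empty-handed
```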

With the problem established, the interviewer pointed out that there was a unit test associated with the bad commit, and it was passing, which seemed like a problem. How could we ensure that this problem didn't reoccur in some later commit?

I said "we should normalize the database so that the video record contains a user ID rather than directly containing the user's email address."
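As a sketch of that fix (hypothetical schema names): with the reference normalized to a user ID and backed by a foreign key, the bad write from the broken commit becomes impossible at the database level, no test required.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires this opt-in
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
conn.execute("""CREATE TABLE videos (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id))""")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

# A video tied to a real user is fine...
conn.execute("INSERT INTO videos (user_id) VALUES (1)")

# ...but external data with no matching user is rejected at write time.
try:
    conn.execute("INSERT INTO videos (user_id) VALUES (999)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```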

"OK, that's one way. But how could we write a test to make sure this doesn't happen?"

---

I still find this weird. The problem is that the database is in an inconsistent state. That could be caused by anything. If we attempt to restore from backup (for whatever reason), and our botched restore puts the database in an inconsistent state, why would we want that to show up as a failing unit test in the frontend test suite? In that scenario, what did the frontend do wrong? How many different database inconsistencies do we want the frontend test suite to check for?




That makes no sense to me either. In my book, tests in a software project are largely there to check that desired functionality exists, most often to stop later changes from breaking that functionality. For example, if you're in the process of moving the "user_email" from the video entity to an embedded user entity, a couple of useful tests could ensure that the email appears in the UI regardless of whether it's in `video.user_email` or in `video.user.email`.
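A sketch of that kind of migration-tolerant test, with a hypothetical accessor standing in for the real rendering code:

```python
def displayed_email(video: dict) -> str:
    # Prefer the embedded user entity, fall back to the legacy field.
    user = video.get("user") or {}
    return user.get("email") or video.get("user_email", "")

# The same email must surface from either shape of the record.
legacy = {"user_email": "alice@example.com"}
migrated = {"user": {"email": "alice@example.com"}}

for video in (legacy, migrated):
    assert displayed_email(video) == "alice@example.com"
print("ok")
```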

Though, interestingly enough, I have built a test that could have caught similar problems back when we switched databases from MySQL to PostgreSQL. It would fire up a MySQL database with an integration test dump, extract and transform the data with an internal tool similar to pgloader, and push it into a PostgreSQL instance running in a container. After all of that, it would run our app's integration tests against both databases and flag any tests that failed differently on the two. And we have similar tests for our automated backup restores.
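The comparison step of that pipeline could be sketched like this (`run_suite` and the result format are hypothetical stand-ins for the real runner); note that a test failing the same way on both databases is not flagged, only divergence is:

```python
def diff_results(mysql_results: dict, postgres_results: dict) -> list:
    """Return the names of tests whose outcomes differ between databases."""
    diverged = []
    for test_name in mysql_results.keys() | postgres_results.keys():
        if mysql_results.get(test_name) != postgres_results.get(test_name):
            diverged.append(test_name)
    return sorted(diverged)

# Illustrative outcomes: test_export fails identically on both, so only
# test_search counts as a migration-induced difference.
mysql = {"test_login": "pass", "test_search": "pass", "test_export": "fail"}
postgres = {"test_login": "pass", "test_search": "fail", "test_export": "fail"}

print(diff_results(mysql, postgres))  # ['test_search']
```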

But that's quite far away from a unit test of a frontend application. At least I think so.


> With the problem established, the interviewer pointed out that there was a unit test associated with the bad commit, and it was passing, which seemed like a problem. How could we ensure that this problem didn't reoccur in some later commit?

It would seem that the unit test itself should be replaced with something else, or removed altogether, in addition to whatever structural changes you put in place. If you changed db constraints, I could see, maybe, a test that verifies the constraints work to prevent the previous data flow from being accepted at all, failing with an expected exception or similar. But that may not be what they were wanting to hear?
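A sketch of that constraint-level test, assuming the schema was changed to a `user_id` foreign key (all names hypothetical): it replays the old bad data flow and passes only if the database rejects it with the expected exception.

```python
import sqlite3

def make_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
    conn.execute("""CREATE TABLE videos (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id))""")
    return conn

def test_orphan_video_is_rejected():
    conn = make_db()
    try:
        # The previous data flow: a video pointing at a nonexistent user.
        conn.execute("INSERT INTO videos (user_id) VALUES (42)")
    except sqlite3.IntegrityError:
        return  # expected: the constraint did its job
    raise AssertionError("orphan video was accepted")

test_orphan_video_is_rejected()
print("ok")
```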



