
> This whole notion of "documentation can get out of sync with the code, so it's better not to write it at all" is so nonsensical.

To me, this feels similar to finding the correct granularity of unit tests or tests in general. Too many tests coupled too tightly to the implementation are a real pain. You end up making the same change several times in such a situation: once to the actual code, and then 2-3 more times to tests looking at the code way too closely.

And comments start to feel similar. Comments can have a scope that's way too close to the code, rendering them very volatile and oftentimes neglected. You know, the kind of comments that eventually escalate into "player.level += 3 // refund 1 player level after error". These are bad comments.

But on the other hand, some comments cover more stable ground, or rather more stable truths. For example, even if we split up our ansible task files a lot, you still easily end up with several pages of tasks because it's just verbose. By now, I very much enjoy having a few three-to-five-line comment boxes just stating "Service Installation", "Config Facts Generation", "Config Deployment", each marking the following 3-5 tasks as part of a section (roughly sketched below). And that's fairly stable: the config deployment isn't suddenly going to turn into something different.
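
To make that concrete, such a section box in a task file looks something like this (module choices and variable names here are made up for illustration):

    # ----------------------------------------------------------------
    # Service Installation
    # ----------------------------------------------------------------
    - name: Install service packages
      ansible.builtin.package:
        name: "{{ my_service_packages }}"
        state: present

    - name: Create service user
      ansible.builtin.user:
        name: my_service
        system: true

    # ----------------------------------------------------------------
    # Config Deployment
    # ----------------------------------------------------------------
    - name: Deploy service configuration
      ansible.builtin.template:
        src: my_service.conf.j2
        dest: /etc/my_service/my_service.conf
        mode: "0640"
      notify: Restart my_service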

Or, similarly, we tend to have headers at the top of these task files explaining the idiosyncratic behaviors of a service that ansible has to work around to get things running. Again, these are pretty stable: the service has been weird for years, so without a major rework, it will most likely stay weird. These comments largely get extended over time as we learn more about the system, instead of growing out of date.




> Comments can have a scope that's way too close to the code, rendering them very volatile and oftentimes neglected.

I think this is a well-put and nuanced insight. Thank you.

This is really what the dev community should be discussing: the "type" of comments and docs to add, and the shape thereof. Not a poorly informed, endless debate about whether they should be there in the first place.


> To me, this feels similar to finding the correct granularity of unit tests or tests in general.

I recently had an interview with what struck me as a pretty bizarre question about testing.

The setup was that you, the interviewee, are given a toy project where a recent commit has broken unrelated functionality. The database has a "videos" table which includes a column for an affiliated "user email"; there's also a "users" table with an "email" column. There's an API where you can ask for an enhanced video record that includes all the user data from the user with the email address noted in the "videos" entry, as opposed to just the email.

This API broke with the recent commit, because the new functionality fetches video data from somewhere external and adds it to the database without checking whether the email address in the external data belongs to any existing user. And as it happens, it doesn't.

With the problem established, the interviewer pointed out that there was a unit test associated with the bad commit, and it was passing, which seemed like a problem. How could we ensure that this problem didn't reoccur in some later commit?

I said "we should normalize the database so that the video record contains a user ID rather than directly containing the user's email address."

"OK, that's one way. But how could we write a test to make sure this doesn't happen?"

---

I still find this weird. The problem is that the database is in an inconsistent state. That could be caused by anything. If we attempt to restore from backup (for whatever reason), and our botched restore puts the database in an inconsistent state, why would we want that to show up as a failing unit test in the frontend test suite? In that scenario, what did the frontend do wrong? How many different database inconsistencies do we want the frontend test suite to check for?


That makes no sense to me either. In my book, tests in a software project are largely responsible for checking that desired functionality exists, most often to stop later changes from breaking that functionality. For example, if you're in the process of moving the "user_email" from the video entity to an embedded user entity, a couple of useful tests could ensure that the email appears in the UI regardless of whether it's in `video.user_email` or in `video.user.email`.
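
Roughly like this, as a sketch of the idea (the helper and the field layout are hypothetical, and the real thing would sit at the UI layer):

    # Hypothetical helper under test: resolve the displayed email from either
    # the old flat field or the new embedded user entity.
    def display_email(video: dict) -> str | None:
        user = video.get("user") or {}
        return user.get("email") or video.get("user_email")

    def test_email_from_flat_field():
        video = {"id": 1, "user_email": "alice@example.com"}
        assert display_email(video) == "alice@example.com"

    def test_email_from_embedded_user():
        video = {"id": 1, "user": {"email": "alice@example.com"}}
        assert display_email(video) == "alice@example.com"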

Though, interestingly enough, I have built a test that could have caught similar problems, back when we switched databases from mysql to postgresql. It would fire up a mysql database with an integration test dump, extract and transform the data with an internal tool similar to pgloader, and push it into a postgres instance in a container. After all of that, it would run the integration tests of our app against both databases and flag cases where the tests failed differently on the two databases (roughly sketched below). And we have similar tests for our automated backup restores.
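
Stripped way down (the container setup and the actual migration are elided, and the URLs and query here are made-up stand-ins), the shape of it was a parametrized suite along these lines:

    import pytest
    import sqlalchemy

    # Assumed to point at the original mysql data and the migrated postgres
    # copy; in reality both came from containers spun up by the test harness.
    DATABASE_URLS = {
        "mysql": "mysql+pymysql://app@localhost:3306/app",
        "postgres": "postgresql+psycopg2://app@localhost:5432/app",
    }

    @pytest.fixture(params=sorted(DATABASE_URLS))
    def engine(request):
        return sqlalchemy.create_engine(DATABASE_URLS[request.param])

    def test_every_video_has_a_known_user(engine):
        # Each integration test runs once per backend; a divergence shows up
        # as the test failing for one database but not the other.
        with engine.connect() as conn:
            orphans = conn.execute(sqlalchemy.text(
                "SELECT v.id FROM videos v "
                "LEFT JOIN users u ON u.email = v.user_email "
                "WHERE u.id IS NULL"
            )).fetchall()
        assert orphans == []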

But that's quite far away from a unit test of a frontend application. At least I think so.


> With the problem established, the interviewer pointed out that there was a unit test associated with the bad commit, and it was passing, which seemed like a problem. How could we ensure that this problem didn't reoccur in some later commit?

It would seem that the unit test itself should be replaced with something else, or removed altogether, in addition to whatever structural changes you put in place. If you changed db constraints, I could see, maybe, a test that verifies the constraint works and prevents the previous data flow from being accepted at all, failing with an expected exception or similar (roughly sketched below). But that may not be what they were wanting to hear?
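
For instance, with the videos table normalized to reference users by id, such a test could just assert that inserting the bad row is rejected (sqlite in memory here purely to keep the sketch self-contained):

    import sqlite3
    import pytest

    def test_video_with_unknown_user_is_rejected():
        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only on request
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
        conn.execute(
            "CREATE TABLE videos ("
            " id INTEGER PRIMARY KEY,"
            " user_id INTEGER NOT NULL REFERENCES users(id))"
        )
        # The previous data flow: a video pointing at a user that does not exist.
        with pytest.raises(sqlite3.IntegrityError):
            conn.execute("INSERT INTO videos (id, user_id) VALUES (1, 42)")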



