Commenting on the purpose of a file is reasonable, as is explaining why a piece of code works the way it does, e.g. if an external constraint is in play. But explaining how in code comments is redundant if the code is well written and the functions and variables are well named.
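For instance, a contrived sketch of the difference (the function and constant here are invented for illustration):

```javascript
// Redundant "how" comment: it just restates what well-named code already says.
function addOne(n) {
  return n + 1; // add one to n  <-- noise
}

// Useful "why" comment: it records an external constraint the code itself
// cannot express. (The upstream API here is hypothetical.)
// The vendor API rejects requests arriving faster than one per 100 ms.
const RATE_LIMIT_MS = 100;

console.log(addOne(41)); // 42
```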
I'd mention one exception: when you're doing something very complicated in a single function. For instance, I recently had to write an algorithm that involved partitioning integers into intervals, and the solution was a very non-intuitive dynamic programming algorithm. I devoted several paragraphs of documentation to the hows and whys, along with additional comments every couple of lines. I found it really useful, especially when I had to go back and fix an odd off-by-one error that only occurred in certain cases. In those situations, variable names can only help so much.
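Not their actual algorithm, but to illustrate that comment density: here is a classic non-intuitive partitioning DP (split an array into k contiguous groups so the largest group sum is minimized), commented every couple of lines:

```javascript
// Minimize the largest group sum when splitting arr into k contiguous groups.
function minMaxPartition(arr, k) {
  const n = arr.length;
  // prefix[i] = sum of arr[0..i-1], so any range sum is a subtraction
  const prefix = [0];
  for (let i = 0; i < n; i++) prefix.push(prefix[i] + arr[i]);
  // dp[j][i] = smallest achievable max-sum using j groups over the first i items
  const dp = Array.from({ length: k + 1 }, () => new Array(n + 1).fill(Infinity));
  dp[0][0] = 0; // zero groups cover zero items with max-sum 0
  for (let j = 1; j <= k; j++) {
    for (let i = 1; i <= n; i++) {
      // Try every split point p: the first j-1 groups cover arr[0..p-1],
      // and the j-th group covers arr[p..i-1].
      for (let p = j - 1; p < i; p++) {
        const candidate = Math.max(dp[j - 1][p], prefix[i] - prefix[p]);
        if (candidate < dp[j][i]) dp[j][i] = candidate;
      }
    }
  }
  return dp[k][n];
}

// Best 2-way split of [1,2,3,4,5] is [1,2,3] | [4,5], with max sum 9.
console.log(minMaxPartition([1, 2, 3, 4, 5], 2)); // 9
```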
The thing here is, it can replace any blog post or book about RSS parsing with Node.js.
And in such a case, line-by-line documented code is a better solution for those who want to learn by reading code :-)
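In that spirit, a minimal line-by-line documented sketch of pulling titles out of an RSS feed string (real code should use a proper XML parser; the regex here is only for illustration):

```javascript
// Extract the text of every <title> element from an RSS document string.
function extractTitles(rssXml) {
  // Collect every <title>...</title> element in the document.
  const matches = rssXml.match(/<title>([^<]*)<\/title>/g) || [];
  // Strip the surrounding tags, leaving just the text content.
  return matches.map((m) => m.replace(/<\/?title>/g, ""));
}

// A tiny made-up feed to run the function against.
const sampleFeed = `
<rss><channel>
  <title>Example Feed</title>
  <item><title>First post</title></item>
  <item><title>Second post</title></item>
</channel></rss>`;

console.log(extractTitles(sampleFeed)); // [ 'Example Feed', 'First post', 'Second post' ]
```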
Admittedly, it's worse in big projects. But there I often miss detailed documentation about the implementation.
Look, for example, at the standard strlen() implementation for C/C++ (that was the one with the magic hex constants and so on, right?).
"The advantage of unit testing modular code bases is that you don't have to test every program state, you just have to test every module state."
I don't think that's true with a capital T. Modules interact with each other. You can modularize, micro-ize, or nano-ize your application, but the minimum set of test-worthy states stays the same.
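A hypothetical illustration of the point (both modules and their behavior are invented): each module can pass its own unit tests, yet a state that only arises from their interaction is tested by neither suite.

```javascript
// Module 1: returns null for empty input (unit-tested in isolation).
const parser = {
  parse: (s) => (s.length ? s.split(",") : null),
};

// Module 2: assumes a non-null array (unit-tested with arrays only).
const formatter = {
  format: (arr) => arr.join(" | "),
};

// The composed state "parse of an empty string fed to format"
// belongs to neither module's test suite:
function pipeline(s) {
  return formatter.format(parser.parse(s)); // throws when s === ""
}

console.log(pipeline("a,b")); // "a | b"
```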
That is why I'd rather focus on a tool that shows the lack of coverage, and certainly wouldn't even report a coverage percentage, since a percentage tricks people into driving that number to 100% and stopping there.
Except, as pointed out, there is no lack of coverage in my example. All lines are 'covered', and no line is left unexecuted. So by definition, there is no 'lack of coverage' from the perspective of lines. The key is to not think about 'coverage' as a metric on lines, and instead think of 'coverage' as a metric on program states. However, that's generally not what people think of when 'code coverage' is mentioned, unless something more specific is said, like 'program space coverage' or 'code state coverage'.
Keep in mind that this can be really hard though since even very simple code can have a large state space.
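To make that concrete, a hypothetical function this small already has 4 input states, and real code multiplies states across every branch and variable:

```javascript
// Two boolean inputs give 2 * 2 = 4 program states.
function flags(a, b) {
  return `${a ? "A" : "-"}${b ? "B" : "-"}`;
}

// One call executes every line, yet exercises only 1 of the 4 states:
console.log(flags(true, false)); // "A-"
```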
Also, if you have a favorite tool that does program space coverage for node.js projects, it might be helpful to mention it here so that others can benefit. Most of the tools for node that I know of only do line coverage.
My reasoning is as follows. Since evaluating coverage in terms of the state space of the program is hard, and no tool that I am aware of actually calculates that coverage, tools that measure coverage in terms of lines should not mislead by showing a percentage, and should only focus on showing places with no coverage at all.
While in technical terms it seems like the same metric, I think it would encourage users not to celebrate hitting an arbitrary 100% coverage mark, and to be aware of what the tools can and cannot do.
At least with tools I'm familiar with, your coverage will be 100% if you have one test with 'foo = true'.
That means you could have odd untested behavior if not executing 'setSomeState()' leaves something important unset... but you'd still have 100% coverage.
Of course in practice, 100% state coverage is really impossible to achieve.
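A sketch of the 'foo = true' situation above ('setSomeState' is borrowed from the comment; the rest is invented):

```javascript
let someState = null;
function setSomeState() { someState = "ready"; }

function run(foo) {
  if (foo) {
    setSomeState();
  }
  // If foo is false, someState stays null. No line here is "uncovered"
  // by a line metric once the foo = true test runs, yet the null case
  // is never exercised.
  return someState;
}

// The single test: executes every line, so line coverage reports 100%.
console.log(run(true)); // "ready"
```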
Anyway, that amount of commenting is alright for learning purposes, but in an actual lengthy code base it would be a nightmare to keep it all updated. Comments are supposed to be used just for complicated/critical code; otherwise it's just noise to be maintained.
Hey I'm actually working on a web app to allow people to keep up with a ton of RSS feeds remotely.
Since it seems like something you might be interested in, it would be great if you checked it out and possibly gave me some feedback.