The article mentions object orientation several times as a hindrance to unit testing. Perhaps that should be its biggest takeaway?
Having written an accounting system for the last three years, including the web/user interface and the asynchronous/deferred coordination that HTTP/browser programming demands, I can say that functional programming is increasingly helping my team stay sane.
We always do TDD, sometimes in the form of unit tests, sometimes in the form of integration tests, and we write as many of our tests as possible as random/generative tests, to avoid having to write and maintain a large body of example-based test code. I've spent the last two days making a piece of our domain monoidal, having defined the three monoid laws as property/generative tests; the rest of the time is interactive play with the generator, to see if it can come up with counter-examples to the code I just wrote.
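To make that concrete, here is a rough sketch of the three laws as generative tests, in TypeScript with the fast-check library (not necessarily our stack; the Money type, zero, and add are made-up stand-ins for the actual domain piece):

```typescript
import * as fc from "fast-check";

// Hypothetical domain value: an amount tagged with a currency.
type Money = { amount: number; currency: string };

const zero: Money = { amount: 0, currency: "EUR" };

// The monoidal operation under test.
const add = (a: Money, b: Money): Money => ({
  amount: a.amount + b.amount,
  currency: a.currency,
});

// Generator for random Money values.
const money = fc.record({
  amount: fc.integer(),
  currency: fc.constant("EUR"),
});

const eq = (a: Money, b: Money) =>
  a.amount === b.amount && a.currency === b.currency;

// Law 1: associativity, (a + b) + c equals a + (b + c).
fc.assert(
  fc.property(money, money, money, (a, b, c) =>
    eq(add(add(a, b), c), add(a, add(b, c)))),
);

// Law 2: left identity, zero + a equals a.
fc.assert(fc.property(money, (a) => eq(add(zero, a), a)));

// Law 3: right identity, a + zero equals a.
fc.assert(fc.property(money, (a) => eq(add(a, zero), a)));
```

If add ever violates a law for some generated triple, fast-check reports a shrunk counter-example, which is exactly the interactive play with the generator described above.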
I normally go about coding by writing a very high-level integration test (at the top-most layer where I still have code, in the service/frontend); then I write a huge chunk of code until I think it's correct, pristine, and easy to maintain. Now I run the test. If it fails and the test is correct (it tests the right thing and is easy to read), then I start at the top (highest) level of the stack and write down, as unit tests, the assumptions I made while designing the code. I keep going until one passes, at some level n; at the higher level n+1 an assumption/unit test is now broken, and I can divide and conquer until I find the line of code that doesn't work.
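As an invented miniature of that bracketing (assuming a Jest-style runner where test and expect are globals; invoiceHandler and sumLines are hypothetical layers):

```typescript
// Two hypothetical layers, top to bottom: invoiceHandler -> sumLines.
const sumLines = (lines: number[]): number =>
  lines.reduce((acc, n) => acc + n, 0);

const invoiceHandler = (req: { lines: number[] }): { total: number } => ({
  total: sumLines(req.lines),
});

// Level n+1: an assumption written down after the high-level
// integration test failed.
test("handler computes the invoice total", () => {
  expect(invoiceHandler({ lines: [10, 20] }).total).toBe(30);
});

// Level n: the first assumption that passes. If this holds while the
// level n+1 test breaks, the bug is bracketed inside the handler layer.
test("sumLines adds plain numbers", () => {
  expect(sumLines([10, 20])).toBe(30);
});
```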
This, together with purification, the methodical extraction of pure functions (things that only ever have one output for a particular input), makes it possible to avoid testing any side-effects/async things; those parts simply flow non-async data between pure functions.
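A tiny made-up illustration of that extraction: applyVat carries all the logic and is the only part that needs tests, while the async shell just moves data (the effectful helpers are passed in as parameters here to keep the sketch self-contained):

```typescript
// Pure core: one output, ever, for a particular input.
const applyVat = (net: number, rate: number): number =>
  Math.round(net * (1 + rate) * 100) / 100;

// Impure shell: does the async I/O but holds no logic of its own; it
// only flows plain data between pure functions.
async function priceInvoice(
  id: string,
  fetchNetAmount: (id: string) => Promise<number>,
  saveGross: (id: string, gross: number) => Promise<void>,
): Promise<void> {
  const net = await fetchNetAmount(id); // side effect in
  const gross = applyVat(net, 0.25);    // pure computation
  await saveGross(id, gross);           // side effect out
}
```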
This, together with first-class values for control flow (i.e. not using exceptions) and generative/random testing, makes it so that ALL input has valid output; all functions are total functions. And this in turn makes the code uncrashable and bug-free (for the domains of bugs the above methodology removes).
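In practice that means returning something like a Result value (sketched here with invented names; many languages ship a library type for this):

```typescript
// Failure is ordinary data, not a thrown exception.
type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E };

// Total function: every string input, however malformed, maps to a
// valid Result. Nothing in here can throw or crash.
const parseAmount = (raw: string): Result<number, string> => {
  const n = Number(raw);
  return Number.isFinite(n)
    ? { ok: true, value: n }
    : { ok: false, error: `not a number: ${raw}` };
};

// Callers branch on data instead of catching exceptions.
const r = parseAmount("12.50");
if (r.ok) {
  console.log("parsed", r.value);
} else {
  console.log("rejected", r.error);
}
```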
The domains of bugs that the above doesn't eradicate are primarily cross-browser bugs on the GUI side, or UX bugs where a feature is hard to use/understand. On the server side we sometimes crash when our logging storage in Elasticsearch goes down and never comes back up: the intermediate buffer (Logstash) fills up, then the app buffer fills up, and then the app livelocks waiting for the logging to drain (i.e. an operations problem). The second most frequent source of exceptions/errors/bugs is DNS not working.
The first year of writing the software, we still invoked libraries that threw exceptions, but now we've rewritten them all to do control flow with first-class values, so that is not an issue any longer.
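The rewrite boils down to one wrapper per throwing call at the library boundary, something like this generic sketch (Result as above; attempt is an invented name):

```typescript
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

// Convert a throwing API into one that returns failure as a value,
// once, at the library boundary.
const attempt = <T>(f: () => T): Result<T, Error> => {
  try {
    return { ok: true, value: f() };
  } catch (e) {
    return { ok: false, error: e instanceof Error ? e : new Error(String(e)) };
  }
};

// E.g. JSON.parse throws on bad input; wrapped, it never does.
const parsed = attempt(() => JSON.parse('{"total": 30}'));
```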
I just wanted to share how we do stuff at qvitoo :), in case it helps anybody.