Software Testing Anti-patterns (codepipes.com)
465 points by kkapelon on April 22, 2018 | 166 comments



> Writing tests before the implementation code implies that you are certain about your final API, which may or may not be the case.

How does this myth continue to persist?

Writing the test first has nothing to do with knowing the final API. When I write tests first I am looking for an API. The tests help guide me towards a nice API.

I personally find TDD in this manner works best when taking a bottom-up approach with your units. I start with the lowest-level, simple operations. I then build in layers on top of the prior layers. Eventually you end up with a high-level API that starts to feel right and is already verified by the lower-level unit tests.
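To make that concrete, here is a minimal sketch of the bottom-up flow in Python/pytest; the module and function names are invented for illustration:

    # test_discount.py -- written before discount.py exists
    import pytest
    from discount import apply_discount  # hypothetical low-level unit

    def test_apply_discount_reduces_price():
        # The test is where I discover the signature I actually want.
        assert apply_discount(100.0, percent=10) == pytest.approx(90.0)

    def test_negative_percent_is_rejected():
        with pytest.raises(ValueError):
            apply_discount(100.0, percent=-5)

Once a few low-level units like this feel right, the next layer of tests gets written against them in the same way.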

This style of development has become less prevalent in my work with Haskell, though, as I can rely on the type system to guarantee assertions I used to have to test for. This tends to make a top-down approach more workable, where I can start with a higher-level API and fill in the lower-level details as I go.


> I start with the lowest-level, simple operations. I then build in layers on top of the prior layers.

I was discussing this with a colleague this week: starting with the lower-level details vs starting from the more abstract/whole API. I prefer to start by writing a final API candidate and its integration tests, and only write/derive specific lower-level components/unit tests as they become required to advance the integration test. My criticism of starting bottom-up is that you may end up leaking implementation details into your tests and API because you have already defined how the low-level components work. I have even seen cases where the developer ends up making the public API more complex than necessary due to the nature of the low-level components they have written. Food for thought!
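As a rough sketch of the order I mean (Python/pytest, assuming a FastAPI-style test client fixture named client; all names are hypothetical):

    # Step 1: the API I want, captured as an integration test before it exists.
    def test_monthly_report_returns_totals(client):
        response = client.post("/reports/monthly", json={"account_id": 42})
        assert response.status_code == 200
        assert response.json()["total"] >= 0

    # Step 2, only when the test above demands it: the lower-level unit
    # and its own test, derived from the integration test's needs.
    def test_sum_line_items():
        from reports import sum_line_items  # hypothetical helper
        assert sum_line_items([{"amount": 10}, {"amount": 5}]) == 15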


This is the approach suggested in the book "Growing Object-Oriented Software Guided By Tests". You start with an end-to-end integration test and then write code to make it pass, testing each component with unit tests along the way. I find it useful myself, although I don't always adhere to its recommendation to start off with a true end-to-end test (sometimes I can identify two or more layers that can/should be worked on separately).


Your point of view resonates with me.

Over the years, I've learned to test "what a thing is supposed to do, not how it does it". Usually this would mean to write high level tests or at least to draft up what using the API might end up looking like (be it a REST API or just a library).

This approach comes with the benefit that you can focus on designing a nice to use and modular API without worrying on how to solve it from the start. And it tends to produce designs with less leaky abstractions.

Of course YMMV.


I just updated our “best practices” documentation to include the recommendation that tests be written against a public API/class methods and the various expected outcomes rather than testing each individual method.

I think the latter gives an inflated sense of coverage (“But we’re 95% covered!”) but makes the tests far more brittle. What if you update a method and the tests pass but now a chunk of the API that references that method is broken, but you only happened to run a test for the method you changed?
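To make the recommendation concrete, the shape I put in the doc is roughly this (Python/pytest, invented names):

    import pytest
    from accounts import register_user, UserExists  # hypothetical public API

    # Instead of one test per private helper (brittle, coupled to internals),
    # exercise the public entry point and the outcomes it promises.
    def test_register_user_returns_active_account():
        user = register_user("ada@example.com")
        assert user.is_active

    def test_registering_same_email_twice_fails():
        register_user("ada@example.com")
        with pytest.raises(UserExists):
            register_user("ada@example.com")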

I like to think I’m taking a more holistic view but I could also be deluding myself. =)


Why doesn't your best practices document include a requirement to run the whole test suite?


Ah, I guess that wasn’t clear. Typically, yes, all tests would be run, though as someone else mentioned sometimes during development a smaller subset may be run for the sake of time (but all tests would still be run prior to QA and final release).


In some contexts there may be valuable tests that take long enough to run they should be run out-of-band. That said, I don't see where the parent says they don't run the whole test suite.


> now a chunk of the API that references that method is broken, but you only happened to run a test for the method you changed


Ah, yeah, not sure how I missed that. Maybe they meant the test for the broken method isn't run because it doesn't exist, but I very much agree that that's not the most natural interpretation.


Sorry, this wasn’t totally clear and I thought “run all tests” was implied. What I was trying to get at was the difference between “we have a suite that includes individual tests for every single separate method so we have great coverage” vs “we have a suite of tests that run against public APIs that still manage to touch the methods involved”. The former may test “everything” but not in the right way, if that makes sense.

I should have said “you’ve only written a test” rather than only having run a test.


I agree with this approach. It seems that one of the consequences of "Agile" is a general reluctance to invest a fair bit of time upfront thinking about interfaces. It's as if the interface design process somehow became associated with documentation, regardless of whether any non-code documentation is ever created.


I'm not convinced this is a consequence of agile so much as a consequence of bad pragmatism or expectation management. In terms of setting expectations, you are not doing anybody any favours by rejecting some up-front thinking in favour of rolling with the punches. And in the same way, you are not being pragmatic by cutting every corner you possibly can - pragmatism is just as often about deciding when to spend time up front to save time later on, not just about sacrificing everything for the short term (which I think is a common and valid criticism laid against poorly implemented agile workflows).

That said, I don't think TDD is a good way to figure out your API. It has its value for sure, but not if you take a purist or dogmatic attitude towards it. In any case, it seems to assume that more tests are always better (hence you start every new piece of code with a corresponding test), when I'd argue that you want a smaller suite of focussed and precise tests.


> That said, I don't think TDD is a good way to figure out your API.

There are plenty of ways to do it and one way that has worked well for me is in practicing TDD. So it is indeed one good way of doing it.

There are also people who are inexperienced with TDD or who misunderstand it and implement it poorly. There are people who are just not ready to design APIs yet who are forced to learn on the job. None of those things invalidate what I said.

If TDD doesn't work for you then you'll have to find what does and I hope that you test your software in a sufficient manner before deploying it.

> In any case, it seems to assume that more tests are always better

I think you're projecting your own opinions into the matter. There's nothing about TDD that prescribes what kind of tests one should write or how many. You could write acceptance tests and work from there if that suits you better. It's test driven development. The operative word is what we should focus on: testing is a form of specification and we should be clear about what we're building and verify that it works as intended.

It comes from an opposition to other forms of verification that used to be popular: black box/white box testing where the specifications were written long before the software development phase began and when the testing happened long after the software was developed.

That's where it's most useful: as a light-weight, programmer-oriented, executable specification language.


> There are plenty of ways to do it and one way that has worked well for me is in practicing TDD

Unless you're talking about the default state of failing tests when no implementation exists, I'm a bit confused as to how TDD helps with interface design.


Best of both worlds is to design top-down and build bottom-up. By design I don't mean something abstract, it should be translated into code sooner rather than later.


> Writing the test first has nothing to do with knowing the final API. When I write tests first I am looking for an API.

The greatest and most poorly communicated benefit of TDD right there.


No, it is an anti-pattern. TDD will get an API, and it won't be the worst possible API. However TDD does not do anything to ensure the API is good.

People who think TDD creates a good API probably have good instincts and a problem where an acceptable API is easy to create. When you have a complex problem that will have hundreds of users (thus it demands not just an okay API but the best) you need to think about the API first. Otherwise you can design yourself into a corner where to fix the API you have to change all your tests.

Once you have the API design in place TDD is a great tool for refining it. You need real working code using an API to shake out all the details, and TDD is a great way to figure out what you missed. You need a direction in mind first.


> TDD does not do anything to ensure the API is good.

Well, what is good? You suggest such an API has to solve a complex problem with hundreds of users. I should think that I've written a few modules with just such an API using TDD methods. So what are we really saying here with this definition?

Your elaboration goes back to the myth I would like to see dispelled as my experience, and that of many others, has demonstrated that it's a bit of a red herring.

When I start a module I don't know what the API should be or what invariant is important or anything. All that test driven development asks is that I first make an assertion about my problem and try to prove that it's true (or not as the case may be).

This leads me to good API designs because the tests invariably contain the assertions about my invariant, they document the API at a high level, and they demonstrate that my implementation is correct as to those assertions I made. If I am thoughtful in this process I know that some assertions contain a quantification over a universe of inputs so I use property based tests to drive my specification. Other times a handful of examples is good enough to demonstrate correctness and so I simply test the units with them. And the API is entirely under my control this whole time. The tests are guiding me to a good API that does exactly what I say it will do by way of executable specifications that cannot be wrong about my implementation (or else the test would fail).
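For the quantified case I mean something like the following (Python with the hypothesis library; the encode/decode pair is just a stand-in for whatever the module does):

    from hypothesis import given, strategies as st
    from codec import encode, decode  # hypothetical unit under test

    # A property quantified over a universe of inputs: round-tripping is identity.
    @given(st.text())
    def test_decode_inverts_encode(s):
        assert decode(encode(s)) == s

    # And when a handful of examples is enough, plain example-based tests.
    def test_encode_known_value():
        assert encode("abc") == b"abc"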

There's nothing in the above that says I cannot take a top-down approach and design my module's interfaces first. Nothing.

Personally I prefer the bottom-up approach for many of the reasons I explained in my OP. I like to define things in small units that I can compose over with abstractions that let me achieve more interesting results. Testing keeps me honest as to the true requirements so that I don't go off into the woods.


Good is in the eye of the beholder. And I was intentionally implying hundreds of users. If - this is the majority of APIs - there are only a couple users it isn't worth the effort to make the API good. When there are hundreds or thousands of users it becomes worth the time to start at a whiteboard and design the API, a few hours up front now can save minutes in the future and those minutes add up.

What you are missing is that TDD gets you to AN answer, but it gives you no information about whether you got the best answer. TDD does make some bad design choices hard or obvious; but there are other times where the existence of a better choice isn't obvious.

Bottom up and top down are very different design considerations. You can get bad APIs in either one. You can do TDD with either.

I'm a fan of TDD and use it. However I'm not blind to the limits.


If what you're saying is that TDD can guide you to a good design but it doesn't guarantee it then I think we're in agreement there.

When working on protocol specifications and proving the correctness of a certain invariant or property of the system I find TDD to be lacking and tend to turn to formal methods for results.

However at the module/interface level of an individual software library or component TDD is a very good tool that helps guide me towards a good design. As do other tools like a sound type system. When they work in concert the system almost writes itself -- I just have to come up with the queries, invariants, and assertions.


There are methodologies in TDD that let you explore to find the ideal API by working with it, then go back, remove that code (like you would a prototype), and TDD it afterwards with the destination in mind. I personally think much more easily with this approach. I have to "play with it" and know where I'm going before I can test towards that goal. I think they call it spiking, but I might be misremembering the term.


A crucial part of rapid development is getting a prototype.

The problem is that in many cases management tries to push prototypes into production as quickly as possible and you might get stuck with the mistakes. Forever in the worst case.


Cheers for bringing me back memories of “Spike solution” from the early 2000s :)

http://wiki.c2.com/?SpikeSolution


>Writing the test first has nothing to do with knowing the final API. When I write tests first I am looking for an API. The tests help guide me towards a nice API.

Yeah, and it means rewriting the code and the tests multiple times, which is a problem for people who don't have the time for that.


> > Writing tests before the implementation code implies that you are certain about your final API, which may or may not be the case.

> How does this myth continue to persist?

Yes indeed. TDD helps you find a nice API.


TDD encourages you to implement an overly complex API with too many possible configurations. It encourages delivery of a box of parts, rather than sub-assemblies.


IMO TDD has almost exactly zero to do with the design of your API.

You can TDD top down by writing an end to end test, and recursing TDD down each level until you get to a single unit, repeat until all inception layers of the tests pass.

You can TDD bottom up, by starting with components you'll know you'll need, working out some APIs at their level, then doing the same to piece together larger components.

Ultimately you're in control of how your API looks, and this has nothing to do with TDD itself - if you get caught up exposing a bunch of inner workings and configurations, then you're simply not designing well. It has nothing to do with TDD.

Likewise, if you get caught short with an API that's too simple, well, that has little to do with TDD either.


To my respondents, who may have read more into my comment than I intended,

I merely wished to support agentultra's claim that

> When I write tests first I am looking for an API. The tests help guide me towards a nice API.

and to counter the article author's claim that

> Writing tests before the implementation code implies that you are certain about your final API, which may or may not be the case.

I did not mean my comment to be interpreted as "using TDD will guarantee you find a nice API". If you prefer, please interpret my comment as "TDD (under certain circumstances and when used in an appropriate way) can help you find a nice API".


I'm the exact opposite. I start with the highest level integration tests to test the API out. I iterate on this process quite a bit until the API is more or less settled down, and then I start implementing the API with appropriate unit tests.

As much as I'd love a very cleanly implemented API, in the end many more devs will be using the API, so the use of it has priority over an API that is easier to implement.


Your comment resonates a lot with me. The bottom-up approach is my favorite, but I think most people are more top-down. When a project has unit tests, its architecture is more pleasant for me. There is also less coupling between classes, which makes the code more reusable. TDD clearly has an influence on the style of the code. I think it forces the code to be better, with a better API.


I find TDD results in tighter coupling. The problem is we found dependency injection and a mock framework early and never looked back. As a result we inject everything everywhere, and all my classes know about all its collaborators and are tightly coupled. Sure we have a mock and so technically we are just coupled to the interface, but the result is the same: change one change the other.


In small companies where there is no time to "waste" on tests, my view is that 80% of the problems can be caught with 20% of the work by writing integration tests that cover large areas of the application. Writing unit tests would be ideal, but time-consuming. For a web project, that would involve testing all pages for HTTP 200 (< 1 hour bash script that will catch most major bugs), automatically testing most interfaces to see if filling data and clicking "save" works. Of course, for very important/dangerous/complex algorithms in the code, unit tests are useful, but generally, that represents a very low fraction of a web application's code.
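Something along those lines, sketched in Python with requests instead of bash (the URL list is illustrative):

    # Cheap smoke test: every listed page should answer HTTP 200.
    import requests

    BASE = "http://localhost:8000"  # assumption: app running locally
    PAGES = ["/", "/login", "/dashboard", "/reports"]  # illustrative list

    def test_all_pages_return_200():
        for path in PAGES:
            response = requests.get(BASE + path, timeout=5)
            assert response.status_code == 200, f"{path} returned {response.status_code}"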


Yep. There is a kind of goldilocks zone:

- Company starts out, lots of change and RFCs, testing at a shallow depth (high-level tests wherever possible) helps a ton

- [Goldilocks zone] Company and team settles down, predictable workflow, "not changing the world" anymore, unit tests help

- Project goes into legacy, needs to be split/merged/refactored/rearchitected, again unit tests don't help. Back to shallow tests and stranglers

Only a very small part of programming today is hardcore data science and algorithms. The rest is all just plumbing from A to B to C. I always argued in teams to focus on higher-level tests, but then it's easy to get shouted down because of "TDD". Sadly, very few people seem to focus on the objective rather than the process.


Yeah, exactly. It's often painful to see when all you need is an API-level test, which will cover all the use cases anyway, instead of low-level unit tests written for the sake of TDD. The leaner the code, the more maintainable it is.


I think the definition of a 'unit' here is possibly causing some strife.

If your product consists of somewhat reusable libraries that you compose together, and extend when needed, having strong unit tests for the low level functionality helps immensely when it comes time to refactor. You know, stuff like "when given int(2) and int(3) as arguments, add(int, int) returns int(5)".

Likewise, those high-level API tests that ensure the system-as-a-whole is still doing approximately what it's supposed to, regardless of what mess of wires and small aliens is inside the black box, are super useful for moving forward as both proof of functionality and a guard against regression.

It's the stuff in the middle that's icky IMO. Integration tests that are testing your glue that, especially these days with clouds, containers, microservices etc, can be quite far removed from any codebases you might maintain. I don't have a good solution for it, but I'm searchin'...


> I always argued in teams to focus on higher level tests...

In my experience your inclinations are right, but it actually takes more discipline and buy-in from leadership for it to work. If broken tests are lower priority than feature work, your test suite ends up rotting, getting ignored, then abandoned.

At least with unit tests, individual contributors can maintain pockets and layers of verified-working code.

I'm not arguing against any kind of test. But you do need the people handing out raises, planning sprints, scheduling deadlines, etc. to be on board for end-to-end testing to work. It also helps a lot to introduce those early in the system's life, since the team culture develops with the tests already being there.


I am confused: how do unit tests not help refactoring? Isn't that one of the most often cited benefits of unit tests?


Agreed, and not just small companies - large companies don't have unlimited resources either, and they work on a lot of different projects in parallel, which can sometimes make the scope of work no less daunting.

This is a great essay but I'd say it's the ideal and I would challenge Anti-Pattern 2 in many cases. The time, money and lack of resource crunch is always present. In the video game industry where I've been responsible for automation infrastructure around testing I would say there's the added difficulty where projects can have a short lifespan or be cancelled entirely quite quickly.

Integration tests are highly valuable as a catch-all for issues. Having only integration tests without unit tests is dramatically less work to both set up and maintain. They are the most 'bang for buck' because they can catch so many issues. The points mentioned - that they are complex, slow, and harder to debug than unit tests - are all true. However there are some ways to counter these problems. In the section marked 'integration test results' he writes that it's hard to know where to start debugging and that a unit test would have isolated the issue to a smaller scope. While that's true, I don't believe that's a good reason to write unit tests, because this issue can be resolved by having better logging and reporting. The integration tests shouldn't just report a fail and nothing else; report the specific reason for failure, or at least save off the log so that this information is easily extractable. Also useful is to save screen captures, if there's a GUI, and call stack and crash dump data, if it's a crash.

With integration tests, few or no unit tests, and great reporting, you can get enormous value from automated testing. Most real-world environments will demand this sort of high ROI for less effort, unless it's projected to be a very long-running project or one with massive funding, like a government program or grant.

Automated testing saves companies so much cost and helps deliver higher quality software faster, especially with larger numbers of devs checking in constantly with continuous integration. You're not doing it wrong if you break some of the rules in this essay; you may be required to be more pragmatic. There are some great points in this, though.


I'm reminded of this article:

https://blog.kentcdodds.com/write-tests-not-too-many-mostly-...

This is advice I generally follow - most of the time, I find I get the most value from integration tests.


I am a C++ programmer, so maybe it is different, but from my experience projects with only integration tests are a nightmare.

1. Integration tests let you know that something is wrong but they are mostly useless for locating bug in larger code base. They do not help you much when you write code.

2. Unit tests force you to think about code structure. Code not covered by unit tests is usually much worse in terms of readability and maintainability. (It is hard to write unit tests for huge functions or for spaghetti code but integration tests are not affected by this.)


> 2. Unit tests force you to think about code structure. Code not covered by unit tests is usually much worse in terms of readability and maintainability. (It is hard to write unit tests for huge functions or for spaghetti code but integration tests are not affected by this.)

This is my biggest issue with TDD proponents. Like... you don't need TDD to learn to code properly!


My last company made us stop writing integration tests because "they don't actually find any issues". We then switched to blackbox testing, which seemed to basically just be integration tests anyway (at least within a single component/module), so that's... good?


While I agree, integration tests can be pretty costly in terms of time to set up.

In the small places I have worked, it has been acceptable to have a few bugs, and the applications were completely or partially used by internal users, so maybe 40-50 people who can put up with a bug here or there and report issues personally.


I find that integration tests tend to have a much higher set up cost (especially if you are serious about test isolation), but that cost is amortized over the lifetime of the project because the code is much more reusable.

Unit tests are less expensive to set up on a per test basis but the code is less reusable and the ongoing maintenance tends to be higher.


Probably, provided that your system is evolving too quickly - that can often be a problem.


I disagree. Typing isn't the bottleneck, so the amount of additional time taken is small.


I don't understand why people are so enamored of the distinction between "unit" tests and "integration" tests. That dichotomy has never seemed useful to me. In my mind the relevant distinction is speed. There's one fast test suite that you run before every code commit, and another slower suite that you run every night on a cloud instance or testing machine, and review the results in the morning. The fast/slow distinction is roughly comparable to the unit/integration distinction, but I don't have any problem with testing a DB interaction in the fast suite - and in fact consider it to be a good practice - if the interaction is fast enough. In general the fast suite should check as much as possible within the constraint of needing to be fast.
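Concretely, I just tag tests by speed rather than by kind; in pytest that is a marker and two commands (the helpers here are hypothetical):

    import pytest

    def test_parse_amount():
        # Fast: runs before every commit (pytest -m "not slow").
        assert parse_amount("1,200.50") == 1200.50

    @pytest.mark.slow
    def test_import_full_feed(db):
        # Slow: runs nightly on the test machine (pytest -m slow).
        assert import_feed(db, "feed-2018-04-21.csv") > 0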


>The fast/slow distinction is roughly comparable to the unit/integration distinction

I think you answered the question yourself. Speed is also shown in the table that defines the types of tests.

You can name the categories foo tests and bar tests if you wish.

Anti-pattern 1 is companies that have only the "fast suite" you are talking about, and anti-pattern 2 is companies that have only the "slower suite".


I'd prefer that all testing be done with live data in a real database ... maybe not at the very beginning but as soon as practicable. I came into a project recently with so many issues around differences between real data and assumed data, and unrealistic assumptions about real data, that it's been hugely painful, with lots of back and forth between Dev and QA and Ops. With Docker, there's now no excuse for not setting up, early in a project's lifecycle, a realistic environment (infrastructure topology-wise) with real data on development and QA laptops.

With this you still have fast and slow perhaps, but everybody can run all but the most extreme scaling tests.


That's an E2E test. What's interesting with unit tests is that you can run them locally, offline. They are fast enough that you can run them and see if you broke anything. If your tests take too much time to start or rely on a live env (which may be down or broken), you'll spend more time for maybe nothing.


With Docker I can run my E2E "offline".


If you're touching real data, your test isn't offline.


Not true. You might be touching a local sample of real or realistic data. It does not need to be static either.

The performance of such a test is still a problem.


    it('sends an email to the user', ...)


An automated test causes the tested product to send an email to the user? So ... my docker local deployment has a DNS server with the proper MX records, an SMTP server to receive the emails, an IMAP server to expose and serve up the received emails, and the test running on my macOS can check the entire flow. No need to even have my laptop connected to any network for this test to run in its fullest.


It's more than this. The difference is important. Employed in the right areas, like the article suggests, they can be an effective weapon against bugs creeping into data access, messaging, or web client components.

The speed at which they run is a side effect. It's possible to write slow-running unit tests.


It's very simple: unit tests test a specific "unit" (i.e. in an OO language, that's a class) and integration tests test the integration of multiple units.

In OO languages, I personally prefer unit tests with all dependencies stubbed or mocked out to verify the interface of that specific unit or component.
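In Python terms that looks roughly like this (unittest.mock; OrderService and its collaborators are invented):

    from unittest.mock import Mock

    def test_order_service_charges_the_gateway():
        # Collaborators stubbed out, so only OrderService's behaviour is under test.
        gateway = Mock()
        gateway.charge.return_value = "receipt-1"
        repo = Mock()

        service = OrderService(payment_gateway=gateway, order_repo=repo)
        service.place_order(order_id=7, amount=100)

        gateway.charge.assert_called_once_with(100)
        repo.save.assert_called_once()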


Why can't a unit test work on the unit "package"? Or an integration test work on the integration of multiple methods?


I'm with you. I find my tests using a real database or a real file system run in .001 seconds or less, so who cares. Of course I'm in an embedded system, our production database is sqlite: we don't have to worry about spinning a database connection up.


Got to quibble with Antipattern #9 about test code - there's certainly a place for extracting common patterns from test code (factories, complex setup) but too much DRY in tests can make for hard-to-modify spaghetti. Sometimes it's easier to just repeat a few lines of code in the name of clarity.


Similarly, overly elaborate test code can itself be buggy. A specific instance of this anti-pattern is when the test code duplicates the same logic as the code under test, logic which turns out to be incorrect.

In test code, I am a lot more tolerant of straight-line repetition in the name of simplicity, in a manner that would be worth abstracting were it in non-test-code.


That's a sign that you need to write tests to test your tests. It's part of treating your test code as a first-class citizen. I always write a test suite for my test suite.

Just kidding.


I know this is a joke, but I do have a few "don't panic" tests that do extremely simple stuff, like just run a test and return True immediately, connect to db, connect to mock db, etc.

Sometimes I do something stupid like put an extra comma somewhere without noticing, or update my db driver library without also updating my mock library. I think we've all had that moment where tests fail or it doesn't compile because you thought you copy-pasted some text into an email, but it got pasted into your editor window.

So when I see that literally every test has failed, I know it was some really dumb but likely simple mistake on my part. If 180/195 tests pass.... Shit.
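Mine are literally this trivial (Python; the db fixture is a placeholder):

    def test_sanity():
        # If even this fails, the problem is the environment, not the code.
        assert True

    def test_can_connect_to_db(db_connection):
        assert db_connection.execute("SELECT 1").fetchone()[0] == 1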


This is actually the single biggest problem I've seen in test suites. I'd much prefer to see ugly un-DRY test code than overly engineered test code. The number of times I've had test suites that are passing tests by coincidence is depressingly large and is always due to this sort of behavior. Yes, we're all software devs and we like writing good code, but in tests it's much more important to be able to isolate and verify that the tests are correct than it is to be pretty.


> but too much DRY in tests can make for hard-to-modify spaghetti

This applies to all code, not just tests. DRY can turn some stupidly simple, if somewhat repetitive, code into a complicated mess.

For unit tests though, as a general rule I don't share any setup between tests, but I use the test fixture setup method for common mocks to return default answers. Even that can get too hairy at times, and it's better off creating a second test fixture.


Exactly. I hate it when I have to look through 5 different “fixture” classes to find out what’s being set up by a test.


Sure, the point is to treat test code as "normal" production code. The things you mentioned here apply to that code as well.


True, but too much of anything might be bad for you.

I have seen, however, more projects out there that have code duplication in tests than projects that go overboard with DRY.


"I have seen projects which have well designed feature code, but suffer from tests with huge code duplication, hardcoded variables, copy-paste segments and several other inefficiencies that would be considered inexcusable if found on the main code."

I agree with "code duplication" and "copy-paste segments" (although I fail to see how they're two different things. It looks like example duplication to me?)

I don't agree with hardcoded variables. In tests, they're okay. Tests are not production code. In this case I side with Misko Hevery - see https://youtu.be/jVxmk-tVo7M?t=2m54s

Tests should hardcode values, and (ideally) not contain any logic, not even of the simplest sort. That's the point; that's how we can reliably confront them with production code without risking bugs that fall off our radar because the same faulty logic leaked from production into tests. Tests should be kept all dumb, all naive, all hardcoded.

That's the thing which proves to be the most difficult to convince fellow developers about, as they're typically used to mechanically transferring over all the usual best practice from production code into tests.

"Ooh, this is so hardcoded" - "good". "This method name is so verbose" - "good; will we ever call this method from code?". "This could be private" - "it's a class with unit tests, how will this code possibly be called from some other class while it executes? Visibility modifiers are waste of space here". And so on


I don't agree with Anti-Pattern 10 - Not converting production bugs to tests

I used to think this was a good idea too, until I saw the real statistics on a project on this.

This project (50+ developer team) tracked all bugs, and also whether they were regressions or not. Almost zero regressions occurred on bugs that had been fixed before.

All testing needs to consider return on investment. The reality, at least for that project, was that testing time was best spent elsewhere.


Doesn't that just mean that the tests were effective? The goal of doing a test for each bug is to prevent them from happening again. Unless you meant that this project didn't apply this particular advice but was getting away with it just fine?

I personally believe that this advice is one of the most important ones, actually, for a very specific reason: it leaves a very bad impression on the client when a bug keeps happening again and again after being fixed. Unfortunately I lived this experience when I was just starting out in my career, on a similar project (around 45-50 developers) with basically no tests at all. It wasn't fun explaining to the client, even if they were internal to the company, that the bug we fixed last month had to be fixed again.


>Doesn't that just mean that the tests were effective?

Only in the sense that:

- What's this?

- A device that keeps tigers away.

- But there are no tigers in Los Angeles!

- See how effective it is?

>The goal of doing a test for each bug is to prevent them from happening again. Unless you meant that this project didn't apply this particular advice but was getting away with it just fine?

No, he means that they had code to test for the presence of fixed bugs, but nobody reintroduced said bugs and triggered that bug catching test code ever. So even if they didn't have the code, the end result would have been the same.


How would you know? If code is failing tests, it doesn’t leave my laptop.


We tracked all bugs in a custom-made bug tracker:

- Unit/Integration tests

- Failures reported by QA

- Bugs in production

Integration tests were really heavy, and ran multiprocess on a server. So locally we would run the unit tests and the relevant integration tests. The rest was tested on the integration server, which automatically created bug reports when something failed.


Because parent said they tracked the test suites.

Perhaps the tests don't run on the devs laptop, but on an integration system (we have such a setup).

(And of course you can make test suites report failures centrally, whether the tests run on a laptop or not.)


They said they tracked bugs. A test failure during development doesn’t generally go in the bug tracker.


> - But there are no tigers in Los Angeles!

Except that there are, and are kept from terrorizing the population by very effective devices: iron cages. (iirc there is at least one Sumatran tiger in the LA zoo)


> Doesn't that just mean that the tests were effective?

Very effective. Especially if you consider that there weren't any ;). You can't get a better ROI than zero effort! ;)

Each project is different of course, and if you saw this in your project, then it would probably make sense to implement them.

But all I'm saying is that this shouldn't be a hard rule. Look at the project, look at the current issues, track your bugs, and draw your own conclusions.


What seems to be lacking from these discussions is the notion that forethought and consideration can win the day.

If there is some brain fart, and it's tracked down and fixed, the value of encoding that in a unit test to be run until heat death is, as you say, exactly zero.

But if there is a problem area that keeps throwing bugs like a metastatic tumor - say race conditions, or memory footprint, or Unicode handling. In that case the return on writing tests to try to shake out those areas is pretty high.

I think the real failure here is trying to substitute informed judgement calls and meaningful discussion with simple checklists of dos and don'ts.


> In that case the return on writing tests to try to shake out those areas is pretty high.

I agree, but in that case you shouldn't test for just that one specific case that failed; you should try to make a test that handles similar issues, or tries to catch bugs of the same kind.

Of course it depends how far you can push this.


The dog that didn't bark. Perhaps that's because people aren't committing code which regresses these cases, due to the presence of pre-checkin unit tests?


There were very few unit tests on this project, and they were only written for new code, never for bugfixes.


Bugs seek each other’s company. Bad code has many bugs. Therefore the best place to add tests is the code where bugs are found.

It’s not quite the same though as adding a test to verify a bugfix.


Exactly and often times when I write tests for a bug fix, I'm exposed to thoughts of other cases that were also missing, and write tests for those too. Like everything to do with software testing, for it to be effective you have to use your brain and not blindly follow some prescription you read on the internet.


In addition to others' comments: writing a specific unit test helps you verify that the bug exists, and after it is fixed it helps you verify that the bug was fixed. Something you have to do anyway.
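And it can be as small as this (Python/pytest; the ticket number and function are hypothetical):

    def test_issue_1234_empty_basket_total_is_zero():
        # Written to reproduce the reported bug (it failed before the fix),
        # kept afterwards to document that the fix holds.
        assert basket_total([]) == 0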


Most things in life are a trade-off. If there is no trade-off, everyone would pick the winner.

Manually testing your bugs is still cheaper in time and effort than writing unit tests. If it was not, everyone would write unit tests.

Unit tests need to fit into your codebase, sometimes contain bugs too, need to be maintained, slow down your release cycle, etc. I'm not claiming they are bad, I'm claiming it's a trade-off that you make in favor of quality, stability, etc.

Unit tests also need to be produced by expensive, probably scarce programmers. Sometimes it's better to hire 'cheap' testers that can script tests into your framework.

That's why I like to talk about return on investment. If you put in the extra effort of writing a test, you expect this to pay off in the long run.

Nothing is cheaper than not testing your bug after you fixed it. But the chance that a lot of time will be wasted afterwards because it wasn't actually fixed is huge. So basically you have to find the most optimal way to spend your effort.

And like I said previously, our statistics showed that writing unit tests on found bugs wasn't the optimal way to spend our programmers effort.


While I can agree that the stated reason isn't perfect, production bugs converted to tests do help validate that the bug is fixed for CI/CD and aid understanding for the necessary RCA.


A small quibble with #8 (no manual testing) - I can't recall the number of times a program has passed all the tests the creator thought to write, only to fail when someone actually looked at the output.

Automated testing should not be the end of the testing. It should be the beginning. Manual testing should be the last step, even if every manual test is immediately automated - there's always more to test.

Also, we should be sure to include combinatorial and fuzz testing in that pyramid as well, since skipping them leads to someone coming in with AFL and exploiting the hell out of your app.
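Even a property-based harness is a cheap first step toward that (Python with hypothesis; parse_config is an invented target):

    from hypothesis import given, strategies as st

    @given(st.binary())
    def test_parser_never_crashes_on_garbage(blob):
        # No claim that the result is meaningful, only that arbitrary input
        # cannot blow the parser up -- the class of bug AFL digs out.
        try:
            parse_config(blob)
        except ValueError:
            pass  # rejecting bad input is fine; crashing is not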


Agreed. The number of times that manual tests have unearthed completely unthought of situations is huge. A good QA tester will do weird and wonderful things that no developer ever thinks of doing, never mind testing for. Saying you shouldn't manually test is like saying you shouldn't shower. Sure, you can get away with it for a time, and it saves time! But eventually someone's going to point out you smell bad. Having a customer have to tell you your product is broken (smells bad), because you didn't know, because you didn't bother testing, is seriously not what you want.


>> Anti-Pattern 2 - Having integration tests without unit tests

I strongly disagree with this one. I think that unit tests only make sense when the project's code has really settled down (not likely to change in the future) and you want to lock it down to prevent new developers on the team from accidentally breaking things.

Unit tests severely slow down development. I've worked on projects where it takes 2 to 3 days to update a single property on a JSON object on a REST API endpoint because changing a single property means that you have to update a ton of unit tests. The cons of unit testing are:

- It locks down your code, so if your code is not structurally perfect (which it definitely is not for most of the project's life-cycle) then you will have to keep updating the tests as you write more code and move functions around.

- It encourages you to use certain patterns like dependency injection which might make sense for some (e.g. statically typed) programming languages but are unsuitable for other (e.g. dynamically typed) languages because they make it difficult to track down dependencies.

- It only makes sense for parts of the project that have strict reliability requirements and where any downtime/failure in that part of the code would result in some loss of business. It's important not to underestimate the maintenance cost of unit tests. More unit tests means much slower development (cuts productivity to half or sometimes even a quarter of what it was without tests for small teams), which means that you need to hire many times more developers to get the same productivity that you could get from a single developer. Sometimes it's OK if a part of the code breaks in non-critical parts of the system; especially if you have some kind of user-feedback system in place.


> I've worked on projects where it takes 2 to 3 days to update a single property on a JSON object on a REST API endpoint because changing a single property means that you have to update a ton of unit tests

You just described anti-pattern 5. Did you read the full article?


Anti-pattern 5 is contradictory to the rest of the article...

>> Tests that need to be refactored all the time suffer from tight coupling with the main code.

What is the author proposing? To write unit tests that are only 'loosely coupled' to the code that they are testing? In my entire career, I've never seen a single unit test case that matches this description.

If it's loosely coupled with the internal code then by definition, it's called an integration test.

Anti-pattern number 5 is basically the author admitting that internal unit testing is a problem in terms of productivity but then they fail to offer an actual solution which doesn't contradict the rest of the article.

Sometimes your code needs refactoring, you need to change the fundamental structure of how some objects interact with each other and when that's the case, unit tests actually discourage you from making the necessary changes of pulling the whole class definition apart (thereby invalidating all the unit test cases for that class) and moving the code to smaller or more specialized classes.


>In my entire career, I've never seen a single unit test case that matches this description.

That is not an argument. The fact that you have been doing something your entire career does not make it correct

>If it's loosely coupled with the internal code then by definition, it's called an integration test.

That is your own definition. The article defines an integration test right at the start. It is ok if you have your own definition but that does not mean that everybody has to agree with you.

>Anti-pattern number 5 is basically the author admitting that internal unit testing is a problem in terms of productivity but then they fail to offer an actual solution which doesn't contradict the rest of the article.

The article has an example and shows both the problem and the solution. The solution is to make your tests not look at internal implementation. What more could I do there?

>unit tests actually discourage you from making the necessary changes

you are just describing again what anti-pattern 5 says.

>What is the author proposing?

I am the author, so I know what I am proposing, that is for sure.


To avoid anti-pattern 5 in unit tests you should only test the public API of a "unit". The problem is the definition of a unit, and at which boundary in your code you consider an API to be frozen. Too many unit tests will block you from doing cross-unit refactoring.

How do you balance this?

One technique I use: instead of splitting the tests into unit/integration, I try to find the most stable APIs in the codebase. So you don't make a complete end-to-end A->B->C->D, nor individual A, B, C, D. Instead you divide it into smaller integrations, such as one test for A->B->C and one for C->D, assuming the C interface is stable.


>>In my entire career, I've never seen a single unit test case that matches this description.

> That is not an argument. The fact that you have been doing something your entire career does not make it correct

I've worked for many different tech companies in my career (both startups and corporations) and the vast majority of these unit tests were not written by me.

Also I've worked on many open source projects. Same story.


>Also I've worked on many open source projects. Same story.

Let me put this way. I am writing an article on how to keep your body healthy and provide a list of common mistakes.

Anti-pattern 5 is "you should stop smoking".

And your argument is "I have been into too many companies (startups and corporations) where people have been smoking all the time. So anti-pattern 5 is wrong."

Is this more clear?


I had hoped that this would be a catalog of concrete code examples of common or easy-to-make mistakes, along with concrete prescriptions for what to do instead. Instead it turns out that it is mostly a long-winded collection of opinions passed off as insights, along with a few general principles that basically amount to "good tests are good and bad tests are bad". Who reads this and is any wiser as to how to write valuable tests? Writing tests that add value is hard; it is a skill that must be acquired. In my experience, most developers who have given up on testing have done so because they'd been sold the false notion that testing is trivial and always adds value. What they've found instead is that testing is difficult (because they lack the skill), and that the tests they did write were of questionable value. No wonder they gave up on it. What these people need is not another article talking in loose terms about the wonders and virtues of testing.


>of concrete code examples

Examples of what? Python? Ruby? Java? C++? You cannot please everybody. The article is written in a way that touches all developers. And judging by the feedback I got it has succeeded in this way.

>that basically amount to good tests are good and bad tests are bad

And the article helps people to understand which tests are good and which are bad in the first place. If you are already an expert on the subject then maybe you are the wrong audience.

>Who reads this and is any wiser as to how to write valuable tests?

If you think you can do better then by all means I am expecting your article on the subject

>What these people need is not another article talking in loose terms about the wonders and virtues of testing.

Please write the correct article then.


I thought we got past the idea that a unit is a method or class.

The original interpretation means a unit of behaviour. When you start thinking like that, integration tests become less important. But still important.

The idea that a unit in unit testing was a class was a misinterpretation.


Totally agree, and this misinterpretation got crystallized into our tools, which often recommend/generate tests named TestSubjectClassNameTest->eachMethodTest. We have to pay attention to that ;)


"Anti-Pattern 2 - Having integration tests without unit tests"

The first project I worked on after graduating had this problem when I came onboard: all the tests were slow as hell, and 80% of each test did the same thing (clearing the entire database and seeding new test data, for every test). I felt like I had made a mistake becoming a programmer because of how gruesome this anti-pattern is when you work with big Java applications. Then we hired a senior developer three months later who promptly started breaking up the tests, after checking with the team that that was OK. The productivity of the entire team increased by many orders of magnitude!
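The "breaking up" mostly meant pulling the logic out from behind the database so it could be tested directly - a Python sketch of the idea, not the actual Java code:

    # Before: every test cleared and reseeded the database just to check a rule.
    # After: the rule lives in a plain function and the test needs no database.
    def is_eligible_for_discount(customer):
        return customer["orders"] > 10 and not customer["blacklisted"]

    def test_loyal_customer_is_eligible():
        assert is_eligible_for_discount({"orders": 12, "blacklisted": False})

    def test_blacklisted_customer_is_not():
        assert not is_eligible_for_discount({"orders": 50, "blacklisted": True})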


Yes this is exactly what I was talking about. The anti-patterns are mentioned in order of appearance, so this is very common.

Don't let problems like these disappoint you. There are companies that don't suffer from any of these anti-patterns


Really enjoyed reading this article as a Sunday afternoon long-read. Well structured and covers a lot of things I have experienced but haven't formed into such a clear description. Also really appreciate the realistic examples!


Thank you! Yes it took me a long time to write and thinking about good examples is always hard...


Superb article (some small nitpicks but they are ignorable). I would like to add just one more anti-pattern:

Focusing too much on fast feedback

Nope. One should get priorities straight. The #1 purpose of tests is safety. The "sleep well at night" test. The "deploy with eyes closed" test.

Performance of the test suite, while "nice to have", is not the core objective. If push comes to shove, if it comes down to safety vs performance, safety should just win hands down as a principle.

Doing a bunch of mocks to speed up 80% of unit tests? Great, but it's a borrowed debt, and must be balanced out with 20% of higher-level tests.


I find this a bit of a strange statement.

I started with Extreme Programming in '99 and grew and evolved through all the refinements to the process around testing. It's always been about providing safety. Fast has always been about not running silly tests that take too long.

Fast Feedback is a core objective. It is not in competition with safety, safety is an integral part. Safety is the feedback you are expecting. We want to know about that safety fast.

A popular term that came around in the early 2000s was "Brain Engaged", meaning you needed always to be aware of why you were doing things and not following blind rules. Meaning you need to know the purpose of going fast.

The whole point is to go as fast as possible safely.

Some of the biggest challenges are about how to get things quickly while maintaining safety. It kind of makes no sense to have fast tests with no safety.

Now you mention mocks, and I have seen people mock in very strange ways that devalue tests.

I like the general guidance from Kent Beck "I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence"


Can't agree enough. Degrading the quality of the tests to make them quicker is particularly frustrating, especially when it's easy enough to split them out into something that runs at a lower frequency.

Just because you can run all the tests and all the optimisations on every keypress doesn't mean you shouldn't have them.


"Linux command line utility ... In this contrived example you would need:

    Lots and lots of unit tests for the mathematical equations.
    Some integration tests for the CSV reading and JSON writing
    No UI tests because there is no UI."

As with all command line apps, the command line is the UI and requires testing (input parameters, error messages, etc.).


You are right. I will change the wording to say GUI instead of just UI


In cases where manual testing is the only option, is it an anti-pattern to let the testing be done by the person who developed the code?

As a developer, I feel that others should test my code. Is that reasonable?


Manually testing your own code is liable to tunnel vision. So yes, having others do it is safer. There are even production standards that make this mandatory.


Why would manual testing be the only option?

As an example, we wanted to automate the test for browser-based OAuth connections to GitHub and authorizing an app there.

Here is the test that does so: https://github.com/coreinfrastructure/best-practices-badge/b...


I have been working with teams that heavily use automated tests for over 16 years now and, at least in my experience, making others test your code leads to untestable code and delegation of responsibility.

Even for manual tests, the test spec should be part of the change request and be reviewed appropriately.

Cheers!


Why should others test your code?


Same reason you don't review your own code, or pair program by talking to yourself.


This seems a bit of a false generalization to me. For a long time I've been the sole developer on multiple large-ish applications. So of course I review my own code, and I write my own tests. And yes as another commenter mentions: if you don't pay attention this can lead to tunnel vision, and yes it can be hard to be consistent about it.

I discovered those drawbacks fairly early, so quickly made it a habit to be super-strict and critical about pretty much every single line of code I write (both with regards to style, functionality, how it adheres to the standard practices etc).

What I also do consistently is review older code: if I have to add a function to existing code, it happens more often than not that I'll re-read and review the surrounding bits, often coming up with better code. If time permits, that is; otherwise I take note and go back to it later.

In practice it turns out that if I haven't touched code for months, reviewing it is almost the same as reviewing someone else's code, i.e. starting with a clean slate. Which is also why I'll sometimes write something, commit it, but not yet merge it to production. Then a week or so later I'll come back and review every bit again.


Oh, I completely agree, though I wouldn't call writing code carefully and methodically a "code review" if that makes sense.

I personally find that me a couple months down the track is pretty much another person anyway. In a sense you're not reviewing your own code, you're letting it be reviewed by future you. But if you're in a team with multiple developers, having someone else review your code would be more efficient.


I tend to review my code with a reviewer hat, not with the author hat, before I open the PR. Find all the places where I'd call myself out for lazy names, poor abstractions, missing docs, etc. then go in and fix those. Repeat until satisfied, then open PR.


> In practice it turns out that if I haven't touched code for months, reviewing it is almost the same as reviewing someone else's code,

I really like this idea, haven't heard it before but it makes complete sense, thank you.


> ... or pair program by talking to yourself.

Who says I don’t?

https://en.m.wikipedia.org/wiki/Rubber_duck_debugging


It's not pair programming, though, even if it can have some of its advantages.


You can solve this with a test plan: you write into the test plan what you have tested, and others in code review can suggest/require testing other things you missed.

In my previous job this worked quite well.


That's fair enough; I think when people are talking about manual testing, they're thinking about the decision of what should be tested, not the actual button pressing. But it's a distinction worth making.


Yellow rubber duck makes pair programming way more socially awkward ;)


These are exactly the posts I need to level up my game: more abstract, top-level views without going into detail on setting up system-x to make it work - I can figure that out myself! Any more of these types of articles you guys and girls would recommend?


Yes, I wrote it because this is my main complaint. The internet is full of articles that deal with a specific topic that is very narrow and never give the big picture away.

Frankly the only good source of such material is books (especially the initial chapters where they set the stage for everything)

If somebody else knows where these types of articles exist, I would also like to know where they are.


Great article. I don't agree with 100% of your points but testing is a field where everyone has their own opinion shaped by their own experience, and I think you've covered every approach (along with its pros and cons) exceptionally well.

OOC, how long did this take you to write?


I started in November. You can see the full history here https://github.com/kkapelon/kkapelon.github.io/commits/maste...

Most of my time was spent thinking about the examples and trying to structure the content. The actual writing was very easy once I knew in my mind what I wanted to say (as is always the case with technical writing).


After getting into some comp sci textbooks these last few months I definitely think these books are underrated for the information they provide and how they provide it.


Unit tests are a tool to help you write the implementation and verify that it works the way you intended.

You could do the same thing by "F5"-ing it, that is, running the full app, but the problem with that is that it is slower and, above all, NOT repeatable later without significant setup time.

Writing the test, setting breakpoints, and debugging the implementation by running the unit test is by far the fastest way to develop, while also having at least some assurance that the code you are writing is "correct".


Reminding me of too many headaches at work.

We have all integration tests, and no unit tests. I and several others have pushed for unit tests, but with little success. Our full test suite takes 10 hours to run. We have split it up so we can test what we think is the portion we are modifying, but we're never 100% sure something in the rest of the suite isn't dependent on our changes.

I disagree a little with his complexity-multiplying argument in the anti-pattern about having no unit tests. In theory, yes, you would multiply them. In practice, it is rarely the case that you need to test all combinations - real code rarely looks like his diagram. In our experience with our project, the final number of branches we need to test is much closer to adding than to multiplying.

I very much agree with the "don't test internal implementation". If the primary reason your tests fail is because you refactored or made API changes, your test suite is not robust.

Am having to live with flaky tests right now. Horrible. Our team doesn't want to prioritize fixing them.

One anti-pattern he left out: making unit tests a 1:1 match with code, and insisting that a unit test should not test more than one function. I know the community is split, but I am very much in the camp that "unit" should not be tied to a function. Don't make it that granular.


So we have a REST API created using Node, and most of the work happens in Postgres and SQL files.

Basically, 95% of the methods depend on the database [and its current state].

How can I unit test this?

I've given up on unit tests; they wouldn't make any sense here. E2E tests are helping us a lot, but despite several attempts at unit tests, they just don't make any sense at all for this code.


Yes, if you look at the article, specifically Anti-Pattern 3, you'll see that different types of applications require different types of tests.

If your units of work are "retrieve data for a list of IDs" and "store this data in the DB" then there probably isn't a lot of unit testing to be done. You probably have some data cleansing or validation functions that you can unit test, and probably some domain-specific data transformation that you can unit test. But most of your testing should be integration testing because the important thing is that the piece that's reading the data and the piece that's writing the data work the same way. If you only have unit tests that set themselves up and tear themselves down then you'll never find a bug where, e.g., the columns get switched around on a write.


Those methods are still data in, data out tho right?

So test that with mocked data.

Unit tests should be able to run with no network connection, no other service, just your test file + data.

Otherwise at that point you’re doing integration testing, not unit tests.


Let's say I have a `SomeEntity.get` function.

Basically it gets an array of IDs, calls the database, and fetches those entities.

~10 lines of Javascript code.

40 lines of SQL, which also depends on the state of the database.

So what do I gain by mocking the database? Making sure those 10 lines of JavaScript work fine? In a vacuum? They are the 10 easy lines anyway, and they're already covered by my E2E test.

What matters is the SQL and how it behaves in different states of the database.


And that’s why you test the DB with an integration test (and likely a functional test too).

For your unit test you'd just mock the DB call and make sure the function called the "db" and returned the faux data. That's it.

No one said unit tests had to be complex or test the whole stack. Quite the opposite actually.
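
A minimal sketch of that kind of unit test, assuming a Jest-style runner and a hypothetical getEntities function with the db handle injected (all names are made up):

    // Hypothetical module under test: the ~10 lines that delegate to the database.
    interface Db {
      query(sql: string, params: unknown[]): Promise<{ rows: unknown[] }>;
    }

    export async function getEntities(db: Db, ids: number[]): Promise<unknown[]> {
      const result = await db.query(
        'SELECT * FROM some_entity WHERE id = ANY($1)',
        [ids]
      );
      return result.rows;
    }

    // Unit test: the database is a stub; we only check that the function
    // forwards the ids and returns whatever the db hands back.
    test('getEntities queries the db and returns its rows', async () => {
      const fakeRows = [{ id: 1 }, { id: 2 }];
      const db = { query: jest.fn().mockResolvedValue({ rows: fakeRows }) };

      const result = await getEntities(db, [1, 2]);

      expect(db.query).toHaveBeenCalledWith(expect.any(String), [[1, 2]]);
      expect(result).toBe(fakeRows);
    });

Whether that's worth even those two minutes is the judgement call above; the SQL itself still needs an integration test against a real (test) database.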


At that point, that's just testing for the sake of testing, which is silly - nobody needs you to prove that the compiler/interpreter/library/framework works the way its maintainers have already tested it to work.


Yeah, the most you should be testing there is “it’s a function and it calls the (mock) db and returns data of type whatever”.

That’s like... 2 minutes of test writing, tops?

I think that we really need to discuss unit testing as a group more. Specifically: what a unit test isn’t. It’s amazing how popular some misguided and ill-informed ideas about them are...


IMHO this is mostly integration tests. If you have methods with calculations or other complex logic, then write unit tests for those. Integration tests and unit tests are complementary; the mix depends on the type of code you are testing.


Most of this article I agree with, such as the need for BOTH unit and integration testing, the need to focus on automation, and the need to turn regressions (bugs) into new tests.

I also agree that "Paying excessive attention to test coverage" is not good. However, I completely disagree with much of its supporting text. If your test code coverage is only 20%, then by DEFINITION your tests are awful. That would mean that 80% of your code is completely untested. I agree that for many programs 100% code coverage is not worth the effort, because those last few percentages cost more than their benefit, but that doesn't mean that such low coverage makes sense.

Most organizations I've worked with recommend at least 80% statement coverage, as a rule of thumb. I haven't seen any studies justifying this, but this essay doesn't cite anything to justify its claims either :-). You'd want much higher statement coverage, and also measure branch coverage, if software errors are serious (e.g., if someone could be physically harmed by an error).

You should focus on creating good bang-for-buck tests first; code coverage is then a useful tool to help identify "code that isn't getting well-tested at all." It's also useful as a warning: 100% coverage may still be poorly tested, but low coverage (say less than 70%) means the program definitely has a terrible test suite.

This statement is misleading: "You can have a project with 100% code coverage that still has bugs and problems." That's true for ANY program, regardless of its testing regime, because any testing regime can only test an astronomically small fraction of the possible input space. A program that just adds 2 64-bit numbers has 2^128 possible inputs; real programs have more.
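
For what it's worth, when a team does adopt a floor like that, most coverage tools can enforce it in CI; a rough sketch with Jest (the numbers are just the rule of thumb above, not a recommendation from the article):

    // jest.config.ts - fail the run when coverage drops below the agreed floor.
    import type { Config } from 'jest';

    const config: Config = {
      collectCoverage: true,
      coverageThreshold: {
        global: {
          statements: 80, // rule-of-thumb floor; raise it for safety-critical code
          branches: 70,
        },
      },
    };

    export default config;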


>That would mean that 80% of your code is completely untested

That is not always a bad thing. Depending on the application, maybe that 80% is trivial and never breaks. It is explained in anti-pattern 4 that you should start with the critical code first.

>Most organizations I've worked with recommend at least 80% statement coverage, as a rule of thumb

This number is making MANY assumptions.

I would demand different code coverage from an application that runs on a nuclear reactor and from an application that is used as point of sale in a small pizza restaurant.


Refreshing article. We hear so many religious arguments for one side or the other these days; we really need more of these.

I think there's a deep issue that causes all the misunderstanding, the elephant in the room: the definition of a "unit". Words have a meaning in a certain context: if people don't mean the same thing when using the same word, they're doomed to misunderstand each other forever. Just ask 5 different people what a unit is and you'll get at least 3 different definitions. The most common one is: in OOP, a unit is a class.

From my experience, a unit should be defined at a much higher abstraction level than that. A better definition would be: "a set of use cases that belong to the same module". In other words, unit tests should be written in a language as close as possible to your domain language. Or: "test your use cases, not your classes". When you do that, you usually write tests for a few major classes that use all your other classes, which are just implementation details. This leads to tests that are far more reliable and easy to maintain, because they have very low coupling to the rest of your application. Typically, this means testing classes at the very edge of your app, classes that directly communicate with the end user, usually services or something like that.

Let's say you "unit test" a simple car with a steering module. It has all sorts of complex internal mechanisms that IMO you generally don't need to unit test directly. What you need to know is whether the business value is correctly delivered to the driver, i.e.:

- when he turns the steering wheel left, does the car turn left

- when he brakes, does the car stop

- when he presses the gas pedal, does the car accelerate

- etc

Under the hood there are dozens of other classes that perform actions you don't really need to care about when you test. They will be tested indirectly anyway, because they're used by the high-level classes you do test. I think many people blindly try to unit test almost every class they write, and it leads to code duplication all over the place and all sorts of other problems that make projects fail, make people angry, and make them think unit tests are bad.
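
A hedged sketch of what that looks like as test code (the class and method names are invented for the example):

    // The Car is the unit; whatever gearbox/steering classes live behind these
    // methods are implementation details, exercised only through the public API.
    class Car {
      private speed = 0;
      private heading = 0; // degrees, 0 = straight ahead

      turnSteeringWheel(degrees: number): void {
        this.heading += degrees;
      }

      pressGasPedal(): void {
        this.speed += 10;
      }

      brake(): void {
        this.speed = 0;
      }

      currentHeading(): number {
        return this.heading;
      }

      currentSpeed(): number {
        return this.speed;
      }
    }

    // The tests read like the use cases, not like the class diagram.
    test('turning the wheel left turns the car left', () => {
      const car = new Car();
      car.turnSteeringWheel(-30);
      expect(car.currentHeading()).toBeLessThan(0);
    });

    test('braking stops the car', () => {
      const car = new Car();
      car.pressGasPedal();
      car.brake();
      expect(car.currentSpeed()).toBe(0);
    });

Refactoring whatever is behind Car doesn't touch these tests, which is the low coupling I'm describing.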


I have observed the same thing. I have seen way too many tests written with the class = unit philosophy that look more like tests of the VM that is running them than like tests for the code that was supposed to be tested.


> Anti-Pattern 5 - Testing internal implementation

I run up against this the most. The internal state often does not matter; what is important is the behavior. Refactoring internal state should not break a bunch of tests.


Then you will probably like this: https://www.youtube.com/watch?v=URSWYvyc42M


Blast from the past; I had forgotten about that one. Thanks


One thing that seems to go largely unmentioned is "play testing / exploratory testing" (and, semi-related, "dogfooding"). This is the manual test process where someone actually uses the software and looks for problems. Problems are then captured as new automated test cases. This is about finding undesirable unforeseen consequences (sometimes you get desirable ones! though if a consequence is desirable, you will want to capture it in a test as well).


I have also noticed that many of the frameworks for writing unit/integration tests are so flexible that developers will easily build a mini-framework that lets their specific tests be expressed in a declarative manner. The downside is that their setup and machinery aren't really usable for other tests.

I think devs need to keep it simple, even if it leads to more code, so that a wider range of tests can be written with the same abstractions.


I personally don't like the word 'antipattern' because it sounds like a TDD religion term.

Nonetheless, the article does a good job of covering many of the ways writing tests can result in a bad outcome. Much like the reasons developers dislike certain architecture patterns, it's not the pattern that's at fault but its application by individuals and teams.


I thought about using "common pitfalls". Do you think that would be better?


Great read, and many points I'll consider, particularly the first anti-pattern. I think that writing too much code that interfaces with other services (which in turn leads to more integration testing) can come from NIH syndrome, since that code is likely just repeating what the API/library of that service already provides.


I understand that if the test hits the database, then it's an integration test.

But what if a whole load of the logic is in stored procedures? Surely those need tests too.

Of course, one can argue that depending too much on stored procedures is an anti-pattern....


The stored procedure should be tested within the domain of the stored procedure. The consumer of its result just tests its ability to use the result it should be given (fulfilling its end of the contract).
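
A rough sketch of the stored-routine side, assuming node-postgres, a disposable test database seeded per run, and an invented Postgres function calculate_order_total:

    // Integration test: runs against a real (throwaway) database, no mocks,
    // so the SQL logic itself is what gets exercised.
    import { Pool } from 'pg';

    const pool = new Pool({ connectionString: process.env.TEST_DATABASE_URL });

    afterAll(() => pool.end());

    test('calculate_order_total sums line items', async () => {
      // Arrange: seed the state the routine depends on.
      await pool.query('INSERT INTO orders (id) VALUES (42)');
      await pool.query(
        'INSERT INTO order_lines (order_id, price, quantity) VALUES (42, 10.00, 2)'
      );

      // Act: call the routine directly.
      const result = await pool.query(
        'SELECT calculate_order_total($1) AS total',
        [42]
      );

      // Assert on the contract the consumer relies on.
      expect(Number(result.rows[0].total)).toBeCloseTo(20.0);
    });

The consumer's own tests can then just stub a result of that shape, as in the mocking example upthread.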


This website is unreadable on an iPad; the font size changes every time I scroll.


Same problem. It's a shame, because it looks like a good article, but it's too distracting with the layout jumping around. Not sure of a way to take a screen recording on an iPad, though.


I have an iPad 1 and it shows just fine. Can you post a screenshot/video somewhere with the problem?


Here it is on a 5th gen iPad with iOS 10.3.3: https://youtu.be/5gqbL_fhy0U

Works fine with Reader View, though.


Anti-pattern 14 - generalizing all testing in the industry to e-shops and websites


I am open to other suggestions. I have really tried to spend a lot of time finding another example and cannot seem to come up with one.

The other alternative would be a loan approval application.

Remember that the definition of a developer is very broad nowadays. You have hackers working with C/Assembly on firmware, all the way to AI/ML with high level languages.

Do you have another suggestion for a good example?


I can't say I agree with this. Partly because it's too generic. I'm of the opinion that you should focus on testing what you deliver. If you're building a framework or library, then unit tests are probably the way to go. If you're developing a micro service that's primarily going to be called by other services, built by other teams for example, you should probably focus on integration tests. If you have an external API, you should focus on e2e tests, as your clients won't know or care that you have several individually well tested microservices behind your API or if it's a monolith. They only care that a given API call has the expected result. The pyramid thing is pretty, but also meaningless IMHO.


You've mostly repeated what he said in section #3

http://blog.codepipes.com/testing/software-testing-antipatte...


> I can't say I agree with this. Partly because it's too generic.

Did you actually read the entire article? It specifically mentions this as an anti-pattern - having tests that test the wrong thing - and says you should figure out what the most important thing to test is and how. Read the section "Anti-Pattern 3 - Having the wrong kind of tests" and you'll see you actually agree with the article.


You are just repeating exactly what I am saying as anti-pattern 3


Upon re-reading, I think I may have misread large parts of your article... Sorry about that; these are actually great tips unless you misunderstand them ;-)

At first I took your comments about "the shape is not a pyramid" as meaning there was something wrong with that and that the shape should be a pyramid. Reading a bit more carefully, I now see that's the opposite of what you're saying... I blame everything on English being my second language O:-)


It is ok. English is my second language as well.



