API end to end testing with Docker (fire.ci)
95 points by jpdel on Nov 11, 2019 | 27 comments



This is one of the best use cases for Docker -- spinning your system up as it would run in production, with throwaway versions of its dependencies (the more realistic you can get them, the better), is a fantastic way to test the overall system.

I'm surprised anyone is still not doing this. The most useful tests are the ones that exercise a customer's flows -- who cares if some function in your backend code does weird things when it takes a malformed string; the business's first concern is usually whether a customer can complete some flow.


> who cares if some function in your backend code does weird things when it takes a malformed string

I would argue you need both. Without that unit test you might find it difficult to account for or track down that particular edge case. Plus a test like that takes about 5 minutes to write.


Agreed you definitely need both, but one is much more important to your company's continued existence.

To be fair, current best practice is not to write those tests by hand at all, but to generate them and embrace the property-based testing paradigm (à la QuickCheck [0]). Put simply -- let the computer make random inputs and make sure your program maintains the proper invariants.

[0]: https://hackage.haskell.org/package/QuickCheck
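
For instance, a rough sketch in JavaScript with fast-check (a QuickCheck-style library; the slugify function under test is made up):

  // fast-check generates the random inputs; we only state the invariant.
  const fc = require('fast-check');
  const assert = require('assert');

  // Made-up function under test.
  const slugify = (s) => s.trim().toLowerCase().replace(/[^a-z0-9]+/g, '-');

  fc.assert(
    fc.property(fc.string(), (input) => {
      // Invariant: the output only ever contains [a-z0-9-], whatever the input.
      assert.ok(/^[a-z0-9-]*$/.test(slugify(input)));
    })
  );

The library also shrinks a failing input down to a minimal counterexample, which is usually more informative than the one malformed string you thought of yourself.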


This is not end2end testing but, rather, component testing (or, possibly, contract testing if you took it a few steps further). In our stack, we refer to this as a post-build component test.

https://martinfowler.com/articles/microservice-testing/#test...


Which tests are which kind depends on your use case, and it's an infinite debate. I don't really care what you call them :)


I don't understand this distinction. Is the reason that they are not testing the frontend itself? Would it be if they were only responsible for the API? Can't you end-to-end test an API?


I would simply say that testing APIs with pacts is a better approach than e2e testing. Overall, e2e testing is something you want to get rid of, because the development loop with it is too long and not modular.

All you should care about in API testing is specification correctness. What's below should be tested by component tests or even unit tests.

This approach more or less already has consensus in the API world. I'm surprised there are still people doing e2e tests.
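
To make it concrete, a consumer-side pact test looks roughly like this (a sketch assuming pact-js; service and route names are made up):

  const path = require('path');
  const { Pact, Matchers } = require('@pact-foundation/pact');

  const provider = new Pact({
    consumer: 'WebApp',            // illustrative names
    provider: 'UserService',
    port: 8989,
    dir: path.resolve(process.cwd(), 'pacts'),
  });

  describe('users API contract', () => {
    before(() => provider.setup());
    after(() => provider.finalize());

    it('serves a user by id', async () => {
      await provider.addInteraction({
        state: 'user 42 exists',
        uponReceiving: 'a request for user 42',
        withRequest: { method: 'GET', path: '/users/42' },
        willRespondWith: {
          status: 200,
          headers: { 'Content-Type': 'application/json' },
          body: { id: Matchers.like(42), name: Matchers.like('Ada') },
        },
      });

      // Point the consumer's HTTP client at localhost:8989, assert on the response,
      // then check that the expected interaction actually happened.
      await provider.verify();
    });
  });

The generated pact file is then replayed against the real provider in its own pipeline, which is where the short feedback loop comes from.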


> I'm surprised there are still people doing e2e tests.

You're surprised that there are people actually testing that things work after integration, interacting with the system as a user would?

I'm surprised there are still people who don't.


No, I'm surprised people still use e2e tests, as they are not the best way to test what you described.

There are better ways, like the ones I've described. We are no longer in the 90s.

P.S. e2e tests are not the same as manual testing, so they do not even fully cover your case. They are expensive and take a long time to run, while a simple pact build step provides pretty much the same value, is pretty much instant, and makes it easier for all developer teams to track API changes.


You seem to be very focused on the API... but that's only half the product :-)


> It creates “containerized” versions of all the external parts we use. It is mocking but on the outside of our code. Our API thinks it is in a real physical environment.

While I am in favor of a Dockerized solution, and have used it extensively, the reasoning above is not entirely correct. You can mock without the application noticing in different ways, the simplest being a separate process running the mock logic.

The benefit of Docker is the same as in other contexts: you get a packaged, versioned solution that is easy to deploy and manage.


I've been doing this for a couple of projects for the past 18 months. I find it one of the easiest ways to test the API stack, so when the front-end integrates with it there are very few issues.

The application has been written so you can plug in a local file system instead of S3 which again, is fantastic for throwaway tests. It can also catch emails that would be sent and verify the HTML content.

My rule is not to rely on any third-party service for the tests.
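
Roughly the shape of it (a simplified sketch, not the real code):

  // The API only ever sees save/read; tests inject the filesystem version instead of S3.
  const fs = require('fs').promises;
  const path = require('path');

  const localStorage = (rootDir) => ({
    save: (key, body) => fs.writeFile(path.join(rootDir, key), body),
    read: (key) => fs.readFile(path.join(rootDir, key), 'utf8'),
  });

  // In production, an adapter with the same two methods is backed by S3:
  // const storage = isTest ? localStorage('/tmp/uploads') : s3Storage(bucket);

The email case works the same way: the test configuration swaps the real mailer for one that stores outgoing messages so their HTML can be asserted on.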


I think tying this to snapshots of application states (https://dotmesh.com/) would be incredible for development throughput. When you have tons of data you can't be recreating it every time you run through your test suite. Anyone on HN currently doing this?


I didn't know about dotmesh. I'll check it out. Thanks for the tip.


I would go with `nock` rather than spinning up a mock server: https://github.com/nock/nock


nock will intercept http requests in the same node process it is used in. Here the test (and the mock) are in a different container and thus process. It won't catch them. Unless it is possible to actually spin up a server using nock and I've missed it? In which case I agree, custom code is not needed.
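
For the in-process case it looks like this (host and route are illustrative):

  // nock patches Node's http module, so it only sees requests made from this process.
  const nock = require('nock');

  nock('https://payments.example.com')
    .post('/charges')
    .reply(201, { id: 'ch_123', status: 'succeeded' });

  // Any code in this process that POSTs to https://payments.example.com/charges
  // now gets the canned reply above; nothing leaves the machine.

Since the article's API runs in its own container, its outgoing requests never pass through the test process, which is why a standalone mock server is used there instead.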


Can someone explain the proposed Dockerfile? I understand that they use multi-stage builds, but I don't really get the point of doing something like

FROM dependencies AS runtime

COPY . .
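
I imagine the full file is shaped roughly like this (my reconstruction, not the exact Dockerfile from the post; the node base image is a guess):

  FROM node:12-alpine AS dependencies
  WORKDIR /app
  COPY package*.json ./
  RUN npm ci

  FROM dependencies AS runtime
  COPY . .
  CMD ["npm", "start"]

Why make runtime a separate stage built FROM dependencies rather than just continuing in the same stage?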


What's the benefit of this over tools that you're likely already using to build an API, such as Postman, Insomnia, or Milkman?


Testing an HTTP API involves running the server, and running a client against that.

Postman, insomnia, and Milkman are REST clients.

I haven't tried it, but I think it'd be quite awkward to get a server set up from Postman. I think it would at least involve calls to external programs to start a server, or running against an already running server.

The more apples-to-apples comparison would be Postman vs the chai tests. I'd argue that writing tests in JavaScript with something like Chai, as described in the post, is better than Postman, because the code is readable plain text rather than JSON.
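
Something like this, i.e. the chai-http flavour (a rough sketch; routes and payloads are made up, and API_URL would point at the containerized API):

  const chai = require('chai');
  const chaiHttp = require('chai-http');
  chai.use(chaiHttp);
  const { expect } = chai;

  const api = process.env.API_URL || 'http://localhost:3000';

  describe('POST /users', () => {
    it('creates a user and returns its id', async () => {
      const res = await chai.request(api).post('/users').send({ name: 'Ada' });
      expect(res).to.have.status(201);
      expect(res.body).to.have.property('id');
    });
  });

That lives in version control next to the code, diffs cleanly, and can call any helper you like between requests, which is where it pulls ahead of a Postman collection.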


>Testing an HTTP API involves running the server, and running a client against that.

That's exactly what tools like Postman are used for. How are you doing any testing during the development process if you're not hosting your web API even locally?

>I haven't tried it, but I think it'd be quite awkward to get a server setup from Postman

There's no need to as I mentioned that you're going to be hosting it somewhere during development anyway.


> There's no need to as I mentioned that you're going to be hosting it somewhere during development anyway.

You will probably host one version, which is infrastructure-heavy and not very flexible. Take 10 developers working on the API: they all need to test their changes, automatically if possible. Hosting external elements like the database and others is a pain that Docker eases.
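
With Compose, each developer (and each CI job) gets a throwaway copy of the whole stack from one file, roughly like this (an illustrative docker-compose.yml, not the article's):

  version: "3"
  services:
    api:
      build: .
      ports:
        - "3000:3000"
      environment:
        DATABASE_URL: postgres://test:test@db:5432/app
        PAYMENT_API_URL: http://payment-mock:8080   # points at the mock, not the real third party
      depends_on:
        - db
        - payment-mock
    db:
      image: postgres:11
      environment:
        POSTGRES_USER: test
        POSTGRES_PASSWORD: test
        POSTGRES_DB: app
    payment-mock:
      build: ./payment-mock   # a small stub server, or any off-the-shelf mock server image

`docker-compose up` builds and starts the lot; tearing it down leaves nothing behind on a shared staging environment.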


Hmm, seems like the workflow you've described is a bottleneck leading to this. Wouldn't you have some kind of test/staging environment that is accessible for this purpose?


For this reason I created https://www.apilope.com - you can trigger hosted API tests from your CI workflow as well.


Getting a warning on the latest Firefox; seems like the certificate is expired.


Cool post, and pretty spot on. I've been doing this with Ruby apps, many of which interact with a browser, and I have some additional thoughts:

- If you're running macOS Catalina, make sure that you are using the Python version of docker-compose, not the compiled version that Homebrew installs. There's a bug with PyInstaller where it needs to fetch resources from the Internet after running any Compose command. This can add significant startup latency.

  You can run `pip3 install docker-compose` to install the Python version over the Homebrew
  version.
- If you want to do quick contract testing against your API without running a ton of code, Dredd is your answer. It runs tests against your OpenAPI YAML docs, which is super nice (there's a one-line example at the end of this list).

- Compose services start asynchronously, meaning that your database might not become available by the time your tests run. While I would recommend mocking database responses in unit tests and spinning up local database instances for contract or integration tests, if you do need a real database here, make sure your test setup waits for it to become available before the tests start.

- Speaking of databases, data within containers is ephemeral, meaning you _will_ lose it after every test run! Remember to use the 'volumes:' block to specify where you want your PG data volumes stored, and if you're using your Git repository for storing this data, ensure that you .gitignore it (unless you need to have test data pre-populated for your tests to work)

- If you're going to do any browser automation in Compose, the easiest path will be to use Selenium Hub and have separate Compose services for every browser under test. I have an example of that here: https://github.com/carlosonunez/bdd-demo

- If you're doing any unit testing for functions that will eventually run within AWS Lambda (or any cloud FaaS, really), and your functions rely on a headless browser, I would _really_ recommend finding the most lightweight WebKit-based browser you can instead of Chrome, and then using the `lambci/lambda` Docker image for EVERYTHING. I had a REALLY hard time getting Chrome to work within Lambda even though my units were passing, despite disabling shared memory, GPU usage and all that other jazz. I eventually ended up using PhantomJS (deprecated) to do what I needed to do. Node has better support for Chrome on Lambda, but only marginally so.

- `docker-compose run` does not expose ports to your host; `docker-compose up` does!

- As you get more familiar with Compose, you will be tempted to use Docker within Docker. Avoid it if you can. Making host volumes accessible to nested Docker containers is a MAJOR pain in the ass and adds a lot of complexity.

- As a general unit testing tip, you shouldn't need any network access for your unit tests. If your unit tests need internet access for anything other than fetching dependencies, then you might want to consider refactoring your tests.
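
The Dredd run mentioned above really is a one-liner once the API is listening (file name and port are illustrative):

  dredd api-description.yml http://localhost:3000

It reads each documented endpoint, fires the request, and checks that the live response matches the description.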


Just wanted to mention (because I ran into this problem just last weekend) that `docker-compose run --service-ports ...` will expose ports.

From docs: "Run command with the service's ports enabled and mapped to the host." https://docs.docker.com/compose/reference/run/

Also, for anybody wondering about "synchronization" solutions, I have found dockerize to be very useful. https://github.com/jwilder/dockerize
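
In a Compose file that typically means wrapping the test command so it waits for its dependencies (host/port illustrative):

  command: dockerize -wait tcp://db:5432 -timeout 30s npm test

dockerize blocks until the database accepts connections, then runs the real command, which neatly fixes the "service started but not ready yet" race mentioned above.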


That's an awesome tip; thank you!



