Personally, I use a triplet of tests.

1) The first test I write is a functional end-to-end test that assumes the application is deployed and running with all configuration in place. That means there is a database and all the other parts necessary to run the application. The test follows the BDD format (Given ... When ... Then). Its purpose is to cross the bridge from a high-level functional requirement ("I want X") to an executable spec for that requirement.
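Roughly what that looks like, as a minimal sketch (Python with requests here purely for illustration; the URL, endpoints and payloads are made up):

    # End-to-end test against a deployed environment. Nothing is mocked;
    # each step drives the real application through its API.
    import requests

    BASE_URL = "https://staging.example.com"  # hypothetical deployed environment

    def test_customer_can_place_an_order():
        # Given a logged-in customer with an item in their basket
        session = requests.Session()
        session.post(f"{BASE_URL}/login", json={"user": "test", "password": "test"})
        session.post(f"{BASE_URL}/basket", json={"sku": "ABC-123", "qty": 1})

        # When they check out
        response = session.post(f"{BASE_URL}/checkout")

        # Then an order is created and confirmed
        assert response.status_code == 201
        assert response.json()["status"] == "CONFIRMED"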

2) I then develop a series of unit tests. These don't require a database or even the file system; they are purely in the language of the application. I use a mocking framework to isolate the units and TDD out all the aspects.
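A sketch of that kind of test, assuming Python's unittest.mock as the mocking framework (OrderService and its repository are hypothetical stand-ins, not real code):

    # Unit test: the repository collaborator is mocked, so the test never
    # touches a database or the file system.
    from unittest.mock import Mock

    class OrderService:
        def __init__(self, repository):
            self.repository = repository

        def total_for(self, items):
            # items: list of (sku, quantity, unit_price) tuples
            gross = sum(qty * price for _, qty, price in items)
            return gross * (1 - self.repository.get_discount())

    def test_total_applies_discount_from_repository():
        repo = Mock()
        repo.get_discount.return_value = 0.25  # stub the collaborator

        service = OrderService(repository=repo)

        assert service.total_for([("ABC-123", 2, 50.00)]) == 75.00
        repo.get_discount.assert_called_once()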

3) Finally I write a performance test. A preloaded set of known data is inserted into an environment that looks very similar to production. Any additional specialist data for the individual test is then loaded on top, and the test is run and asserted against an expected maximum time.
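And a sketch of that one, again with made-up endpoints and a made-up 2-second budget; in practice the budget comes from the non-functional requirements:

    # Performance test against a production-like environment that already has
    # the standard data set preloaded. Extra test-specific data is added on
    # top, then the timed call is asserted against a maximum response time.
    import time
    import requests

    BASE_URL = "https://perf.example.com"  # hypothetical prod-like environment

    def test_order_search_stays_under_two_seconds():
        # Load the specialist data this test needs on top of the preload
        requests.post(f"{BASE_URL}/test-data/orders", json={"count": 10000})

        # Time the call under test
        started = time.perf_counter()
        response = requests.get(f"{BASE_URL}/orders/search", params={"q": "ABC"})
        elapsed = time.perf_counter() - started

        # Assert correctness and the agreed maximum time
        assert response.status_code == 200
        assert elapsed < 2.0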

The combination of the three works OK for me, but there are still gaps. It's hard to get automated performance testing right, as there are so many kinds of performance tests you actually want to run that are very hard to verify automatically.

That is how I do it IRL.



