Yes, I've known about the headless proposition for a while.
Splitting front-end and back-end tests is desirable.
> Worst of all, headless browsers still can't truly test that "the user experience is correct."
This is the claim from the old Joel Spolsky article about automated tests, but it should not be the ultimate dealbreaker.
Nobody claims you should rely on automated tests 100%. Automated tests verify the functionality of your software, not the look-and-feel or the user experience. You have separate tests for that.
Problems between JS and CSS shouldn't be that numerous either (they shouldn't be a factor that, again, becomes a dealbreaker). If you have tons of them, then perhaps what's broken is the tools we use? Or perhaps how we use them?
I don't test my configuration (in-code configuration, not infrastructure configuration) because configuration is written once. You test it manually and forget about it.
> Splitting front-end and back-end tests is desirable.
I don't feel confident without integration tests. An integration test should test as much of the system together as is practical. If I test the client and server sides separately, I can't know whether the client and server will work together properly.
For example, let's say I assert that the server returns a certain JSON object in response to a certain request. Then I assert that the JS does the correct thing upon receiving that JSON object.
But then, a month later, a coworker decides to change the structure of the JSON object. He updates the JS and the tests for the JS, but he forgets to update the server. (Or maybe it's a version-control mistake and he loses those changes.) Anyone running the tests will still see all tests passing, yet the app is broken.
Scenarios like that worry me, which is why integration tests are my favorite kind of test.
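To make the drift concrete, here is a minimal sketch (the payload shapes and the `contract_check` helper are hypothetical, just to illustrate the failure mode): each side's isolated test pins its own copy of the JSON shape, so both keep passing after the shapes diverge, and only a check that feeds the server's actual payload into the client's expectation catches it.

```python
# Hypothetical sketch of contract drift between isolated client and server tests.

SERVER_RESPONSE = {"user": {"id": 1, "name": "Ada"}}  # what the server still returns
CLIENT_FIXTURE = {"id": 1, "full_name": "Ada"}        # what the updated JS test now expects

def test_server():
    # Passes: the server test asserts the old structure, which the server still produces.
    assert "user" in SERVER_RESPONSE and "name" in SERVER_RESPONSE["user"]

def test_client():
    # Passes: the client test was updated together with the client code.
    assert "full_name" in CLIENT_FIXTURE

def contract_check(server_payload, client_expectation):
    # Integration-style check: are all the keys the client expects
    # actually present in the server's real payload?
    return set(client_expectation) <= set(server_payload.get("user", {}))

test_server()   # green
test_client()   # green
print(contract_check(SERVER_RESPONSE, CLIENT_FIXTURE))  # False: the contract drifted
```

Both isolated tests stay green; only the contract check, which crosses the client/server boundary, reports the breakage.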
> Automated tests verify the functionality of your software, not the look-and-feel or the user experience.
It's not about the difference between a drop shadow and no drop shadow. We're not talking about cosmetic stuff. We're talking about elements disappearing, elements positioned so they cover other important elements, etc. Stuff that breaks the UI.
> Problems between JS and CSS shouldn't be that numerous either
Maybe it shouldn't be, but it is. I'm not saying I encounter twelve JS-CSS bugs a day, but they do happen. And when they make it into production, clients get upset. There are strong business reasons to catch these bugs before they ship.
> If you have tons of them, then perhaps what's broken is the tools we use? Or perhaps how we use them?
> I don't feel confident without integration tests.
Nobody does. That said, I have plenty of unit tests, and they test things in isolation.
My integration tests are limited to the back end's interaction with the database; they deliberately stop short of end-to-end coverage to avoid overlap with my unit tests.
I have another set of functional tests that use Selenium, but with a minimal number of test cases, written only to cover the happy path (can I create a user? can I delete a user?). There are no corner-case tests there unless we find they're a must, because full-blown functional tests are expensive to maintain.
Corner cases are covered at the unit-test or integration-test level.
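The scope split above can be sketched like this (the `UserService` class is a hypothetical in-memory stand-in; the real functional suite would drive a browser through Selenium instead):

```python
# Happy-path functional tests against a hypothetical in-memory user service.
# The real tests would go through the UI via Selenium; the point here is the
# scope: only "can I create?" and "can I delete?", no corner cases.

class UserService:
    def __init__(self):
        self.users = {}
        self.next_id = 1

    def create_user(self, name):
        uid = self.next_id
        self.users[uid] = name
        self.next_id += 1
        return uid

    def delete_user(self, uid):
        # Returns True if the user existed and was removed.
        return self.users.pop(uid, None) is not None

svc = UserService()

# Happy path only: create a user, then delete that user.
uid = svc.create_user("Ada")
print(uid in svc.users)      # True
print(svc.delete_user(uid))  # True
# Corner cases (deleting twice, bad input, etc.) belong in unit/integration tests.
```

Keeping the functional suite this small is what makes it cheap enough to maintain alongside the unit and integration layers.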