I used to have trouble with UI tests too, but then I discovered Cypress (https://www.cypress.io/). You can still run into an issue once in a while, but it's way better than anything else and it's getting even better with each release.
After reading through the website and watching their video, I still don't understand how it works or what its value proposition is. Is there a demo where I can see and understand how Cypress improves on the state of the art?
Cypress aims to be as deterministic as possible. You can write tests that wait for a specific XHR request, or even mock it. The combination of that and the promise-based API makes it much more reliable than Selenium.
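A minimal sketch of what that looks like (the route, fixture, and `data-cy` selector are made up; it's factored as a plain function of `cy` instead of inline `describe`/`it` blocks purely so the flow reads on its own):

```javascript
// Sketch of a deterministic Cypress-style test: stub the XHR, then wait
// on it by alias instead of sleeping a fixed amount of time.
// Route, fixture, and selector names are invented for illustration.
function plansPageTest(cy) {
  // Stub the network call so the test never depends on a live backend.
  cy.intercept('GET', '/api/plans', { fixture: 'plans.json' }).as('getPlans');
  cy.visit('/plans');
  // Resume only once the stubbed request has actually fired -- no sleeps.
  cy.wait('@getPlans');
  cy.get('[data-cy=plan-row]').should('have.length.at.least', 1);
}
```

Because the test only proceeds after `cy.wait('@getPlans')`, it doesn't matter how slow or fast the request is, which is where most Selenium flakiness comes from.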
In practice, I have had much less flakiness in my Cypress tests than with Selenium or other WebDriver APIs.
I think this is one of the "dirty secrets" of current software development ... one of those things that pretty much nobody does very well (at least, anywhere I have seen).
My view is that we won't make real progress on proper UI testing until there is a paradigm shift that moves responsibility for creating fundamentally testable applications onto developers. This is not dissimilar to how the idea of "devops" finally recognised that managing deployments and infrastructure is actually a first-class development activity.
The problem is that testing a system that has not been designed for it is almost intractable. I have seen toolkits that auto-generate random ids for every element on the page, making it almost impossible to hook onto anything as a reliable anchor for identifying testable points. Even when such anchors do exist, they are usually only incidental, not an agreed contract between developer and test automation engineer, so they will always break unexpectedly.
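That contract can be as simple as an agreed attribute that only tests are allowed to select on. The attribute name below is a common convention, not something from any particular toolkit:

```javascript
// Sketch of a developer/tester contract: markup carries stable test ids,
//   <button data-testid="create-plan">Create plan</button>
// and tests build selectors only from those ids, never from generated
// classes or random element ids. "data-testid" is a common convention,
// not a standard.
function byTestId(id) {
  return `[data-testid="${id}"]`;
}
// e.g. page.click(byTestId('create-plan')) in Puppeteer,
// or cy.get(byTestId('create-plan')) in Cypress.
```

The point is that the attribute is part of the interface: a developer who renames or removes one knows they are breaking a consumer, the same as changing a public API.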
It’s not just developers who need to be more involved in UI testing but scrum masters, stakeholders, management, etc. Automated tests should be viewed as shippable code, and their creation and maintenance should be factored into every story.
Without those “above” developers on board, you end up with a hodgepodge suite of tests because developers/QA are forced to focus on delivering features instead of a testable product and reliable tests.
I solved my UI testing by scripting a ton of screenshots for my documentation using Puppeteer, walking through each thing the user can do and proving they can do it at multiple device resolutions. It was a lot of work, but now it is quite simple -
await owner.click('Create plan')
await owner.screenshot('Fill out form')
One thing I did wrong that I have to revisit: I don't confirm the screenshots hold any data I'm expecting. It would have been much easier to add the asserts while I was writing the scripts, but I was manually inspecting everything very closely to address responsive issues, so it didn't seem important until those were sorted.
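Retrofitting that could look like a small wrapper around the screenshot call. The `owner` helper is from the snippet above; `textOf` and the error wording are assumptions standing in for however the helper reads an element's text (e.g. `page.$eval` in Puppeteer):

```javascript
// Sketch of the missing check: assert the page actually shows the
// expected data before capturing the screenshot. `textOf` is an assumed
// helper that returns an element's text content.
async function screenshotWithCheck(owner, label, selector, expectedText) {
  const text = await owner.textOf(selector);
  if (!text || !text.includes(expectedText)) {
    throw new Error(`expected "${expectedText}" in ${selector}, got "${text}"`);
  }
  await owner.screenshot(label);
}
```

That way a screenshot of a blank or error page fails the run instead of silently ending up in the docs.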
I’d love to hear more about this! Do you have automation that runs as part of your CI builds, and dynamically captures the screenshots for your documentation using Puppeteer? If so, that is so cool :)
Have you had much trouble with the automation scripts breaking as a result of app churn?
So far I still manually trigger a regeneration but it's only a matter of time till it just runs automatically on any update.
The main problem has been with Puppeteer: there is a chance it will just randomly crash at any point, and there are a million little race conditions where you need to wait or keep retrying to access an element after it says the page has loaded. So I have a lot of ugly code in my puppeteer-helper like:
while (true) {
  try {
    element = await active.$('#element')
  } catch (error) {
    // swallow transient errors (detached frames, navigation races) and retry
  }
  if (!element) {
    await sleep(100) // sleep(ms) resolves after ms milliseconds
    continue
  }
  break
}
Another issue only pertained to the simulated device screenshots: switching the viewport configuration reloads the page, which for me turns any state like "yes, you just deleted x" into an "x does not exist" error. At the moment I get around that by doing each device independently, which takes a fair bit longer where slow APIs are involved.
In the final incarnation people using my software will be able to point the screenshot generator at their own website and regenerate all the screenshots so they can reuse the documentation.
One approach that's been appealing to me WRT page models is to generate them from the page/component templates. It doesn't work with JSX, but it does work with other stuff like Vue or Angular templates: https://samsieber.tech/posts/2019/06/type-safe-e2e-testing-d...
My other comment is: you absolutely need data management methods if you're going to be writing UI tests, and maybe the ability to spin a testing environment up and down on demand.
The one thing I haven't figured out (personally) is testing Google integrations.
Not intending to drop yet another test tool in the mix, but if you are worried about flakiness due to timeouts or dynamic classes/id you can take a look at boozang.com.
It has many similarities to Cypress, but element selectors are based on natural language, so automating on top of changing ids/classes is trivial. It also supports Chrome, Firefox, Safari, Opera and Chromium Edge.
The thing about writing tests, whether unit tests or functional UI tests, is repeatability. Back then I had this problem too, and I worked on a script to drop the database, recreate it, and repopulate it with consistent test data.
Then docker-compose came along. It is so easy right now to have an environment up and running consistently. And once the test is done, the whole container set of the docker-compose can be removed and created again on a whim.
For most database containers, you can just mount your SQL files into the volumes that the database engine reads on startup, and you will have consistent test sets over time.
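A minimal sketch of that setup, assuming Postgres (image, service, and file names are examples; the official Postgres image runs any `*.sql` mounted into `/docker-entrypoint-initdb.d` on first startup):

```yaml
# docker-compose.yml -- disposable, consistently-seeded test database.
# Service and file names are examples; adjust for your engine of choice.
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: test
      POSTGRES_DB: app_test
    volumes:
      # Runs on first startup only, so `down -v` then `up` is a clean slate.
      - ./seed.sql:/docker-entrypoint-initdb.d/seed.sql:ro
    ports:
      - "5432:5432"
```

`docker-compose up -d` before the suite and `docker-compose down -v` after it guarantees the same starting data on every run.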
This will solve:
1. Running the tests again and again no longer pushes elements off the page because of pagination.
2. Creating new data with the same values to test uniqueness works, because it is a clean slate every time.
3. Previous data no longer causes inconsistent state in the current test.
This does not solve every issue with functional UI testing, but in my experience, having consistent data every time is huge! No more second-guessing when you write the logic in your tests for fear that existing data or leftover manual-testing data will break them.
At the project I’m currently working on, we evaluated WinAppDriver (for WPF) and found it to be pretty buggy, slow and light on features. In the end we decided on a commercial product at around the same level, but if you’re on Windows you can always use the UI Automation framework.
Article should really be called “our team’s troubles writing automated UI tests for native Windows apps”.
The article mentions “If you follow the Page Object Model pattern, then for each page and control in your application, you create models so your tests can find and interact with the elements on that page or control.“.
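Outside Windows, the quoted pattern looks roughly like this (class name, selectors, and the `driver` API are made up for illustration):

```javascript
// Sketch of the Page Object Model the article describes: tests talk to a
// model of the page, and only the model knows the selectors. `driver` and
// its methods stand in for whatever automation API is actually in use.
class LoginPage {
  constructor(driver) {
    this.driver = driver;
  }
  async open() {
    await this.driver.goto('/login');
  }
  async login(user, password) {
    await this.driver.type('#username', user);
    await this.driver.type('#password', password);
    await this.driver.click('#submit');
  }
}
```

If the login form's markup changes, only `LoginPage` changes, not every test that logs in.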
Not being a Windows programmer, I am not familiar with this model in detail, but a major issue seems to be that the UI toolkit doesn’t have any built-in support for UI automation.
For example, on iOS and macOS the accessibility system (which is what e.g. the screen reader for blind users uses to drive the application) is repurposed for UI tests.
This means that if your app is accessible, which it pretty much is by default when you use standard UI controls, then with minor code changes, such as assigning unique identifiers to UI elements where needed, it is also UI-testable.
The Page Object pattern is common in Ruby (and some JS frameworks, like ember.js) too. I agree that the issues mentioned in this article seem mostly windows app centric. For JS apps running in the browser, the testing story is pretty good these days.