Automated Testing to Identify Root Causes for Flaky UI Tests
As part of my research at Perfecto, I keep track of activities that customers use to improve the speed at which they deliver web and mobile apps. My current role grants me visibility into aspects of the broader process of software delivery beyond that which a day job of coding, estimation, and standup meetings typically provides.
I'm glad to share three key areas that help to improve velocity by driving quality into every build via automated testing:
- Eliminating UI testing flakiness
- Increasing the effectiveness of testing in continuous integration
- Maintaining fast and complete test suites
You can download the full write-ups on Perfecto's site, available in three separate resources so you can digest them at your convenience.
Identifying the Root Causes for Flaky Tests
Testing after each build gives us early feedback on our code changes beyond our local workstation or environment, and UI testing especially so. But before we can expect to move any automated tests into the build cycle, they have to work properly every time; otherwise, they slow us down and reduce confidence in our pipeline.
There are a few things that often cause test flakiness. Fortunately, you can address them incrementally, making measurable progress despite your usual workload.
In terms of advice when you're writing tests, a few insights:
- Avoid "sleep() creep"...use your framework's built-in UI synchronization and waiting methods. Don't stick "Thread.sleep(...)" in your code, ever!
- Use an element identifier strategy that's shared between your app and your test code, like a compiled dictionary or object repository (e.g. "R.id" references in Espresso)
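To make the first point concrete, here's a rough sketch in plain Java of the polling loop that built-in waits (Selenium's WebDriverWait, Espresso's idling resources) implement for you. The names here, UiWait and until, are illustrative, not a real framework API:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

// Illustrative sketch of the polling loop behind framework wait utilities.
// Not a real API; real frameworks also hook into the UI toolkit's idle state.
public class UiWait {
    // Polls the condition until it returns true or the timeout elapses.
    public static boolean until(BooleanSupplier condition,
                                Duration timeout,
                                Duration pollInterval) {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if (condition.getAsBoolean()) {
                return true; // condition met: resume immediately, no wasted sleep
            }
            try {
                Thread.sleep(pollInterval.toMillis()); // one bounded poll, not a blind sleep
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return condition.getAsBoolean();
            }
        }
        return condition.getAsBoolean(); // final check at the deadline
    }
}
```

The difference from a bare Thread.sleep() is that the test resumes the moment the condition holds and fails fast on a bounded timeout, instead of baking a worst-case delay into every run.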
There are a few other elements on the test execution and environment side that I detail in the first resource, free for download here.
Build Verification Testing For Early Feedback
Many teams running continuous integration trigger unit (and some integration) testing after each build on the server. As you improve the stability of your UI tests, these can provide critical early feedback that unit and service integration tests cannot. A few examples of feedback that UI tests provide are:
- Font and element rendering differences across multiple platforms
- Validation that new features don't override global design standards (menus, headers)
- Device-specific hardware dependencies in features (camera, GPS, bluetooth)
- Cause-effect UI interactions that rely on visual style (e.g. tap...disable...wait...progress)
Many teams also have so many UI tests that it's impossible to run all of them on every build, but with the right test segmentation and scheduling strategy, optimal test and platform coverage is entirely attainable. The bad/good news is that many of these decisions require a conversation across development, testing, and operations team members...in other words, DevOps teams have a leg up on finding the right fit here.
There may need to be a small test code investment...for instance, using code annotations to inform your build verification process why and when to execute certain groups of UI tests. Your branching strategy will also play a role here; specifically when merges occur, you'll likely need to execute additional tests beyond what covers the feature or issue that required the branch to begin with. Annotating tests by module, feature, complexity, and/or persona (which are not mutually exclusive) helps you execute the right tests at the right times in automated build/test processes.
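As a sketch of that annotation idea (JUnit 5's @Tag and TestNG's groups give you this out of the box; the names below, TestGroup and testsInGroup, are purely illustrative), tagging tests by group and letting the build select them might look like:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a custom annotation for tagging UI tests by group,
// plus a selector a build-verification step could use to pick matching tests.
public class TestSelection {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface TestGroup {
        String[] value(); // e.g. {"checkout", "smoke"}: groups are not mutually exclusive
    }

    // Hypothetical suite with tests tagged by feature and test tier.
    public static class CheckoutTests {
        @TestGroup({"checkout", "smoke"})
        public void cartBadgeUpdates() { /* ...UI assertions... */ }

        @TestGroup({"checkout"})
        public void fullPurchaseFlow() { /* ...UI assertions... */ }
    }

    // Returns the names of test methods tagged with the requested group.
    public static List<String> testsInGroup(Class<?> suite, String group) {
        List<String> names = new ArrayList<>();
        for (Method m : suite.getDeclaredMethods()) {
            TestGroup tag = m.getAnnotation(TestGroup.class);
            if (tag == null) continue;
            for (String g : tag.value()) {
                if (g.equals(group)) names.add(m.getName());
            }
        }
        return names;
    }
}
```

A post-merge build might run the "checkout" group in full, while every commit runs only "smoke".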
You'll also need to draw up a conceptual test execution schedule and then iteratively adapt it to the way development cycles really play out with your project(s). For instance, as a conversation starter with development teams, I like to use a basic construct like this and have them fill it in to figure out where they want to be in terms of testing coverage and device usage.
Again, more in-depth thoughts on this are in the second chunk of my research available for download from the Perfecto site.
Continuous Integration Requires Fresh and Snappy Test Suites
As code-first approaches to testing become more common, tests should be treated with the same hygiene and retirement discipline as features, screens, and navigation schemes. As you write code (app, test, pipeline management or otherwise), considering the lifecycle of that asset helps to minimize technical debt up front and maintain a steady state of velocity as new code is introduced.
A key thing to consider is your test retirement strategy. It's one thing to remove tests that are breaking because you removed functionality, but what about tests that no longer represent realistic workflows or use cases?
The time to ask yourself these types of questions is...well...always. But if you're like me and can't do all the things all the time, picking the right moments in the design/development lifecycle to raise the question of test retirement is a progressive step toward paying down test debt, rather than losing days to a mountain of it all at once. You have a few options for when to trigger a test retirement/refresh audit session:
- When new features invalidate old features
- When user stories or BDD scenarios change
- When CI execution speed of UI testing reaches a threshold
- When target devices or platforms are cycled in or out of test plans
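The CI speed threshold in particular lends itself to automation. Here's a minimal sketch, assuming you can export per-test wall-clock timings from your last CI run; the names SuiteAudit, overBudget, and slowest are hypothetical, as are the test names and timings:

```java
import java.time.Duration;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch of an "execution speed threshold" trigger: given
// per-test timings from the last CI run, decide whether the suite has
// outgrown its build-verification budget and surface the slowest tests
// as the first candidates for a retirement/refresh audit.
public class SuiteAudit {
    // True when the total suite time exceeds the budget.
    public static boolean overBudget(Map<String, Duration> timings, Duration budget) {
        Duration total = timings.values().stream()
                .reduce(Duration.ZERO, Duration::plus);
        return total.compareTo(budget) > 0;
    }

    // The n slowest tests: the first ones to review, split, or reschedule.
    public static List<String> slowest(Map<String, Duration> timings, int n) {
        return timings.entrySet().stream()
                .sorted(Map.Entry.<String, Duration>comparingByValue(Comparator.reverseOrder()))
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```

Wiring a check like this into the pipeline turns "our UI suite feels slow" into a concrete, scheduled conversation.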
Responsible Developers Count on UI Tests, Too
Testing is an activity that everyone contributes to; quality isn't confined to a single role. For web and mobile app developers who are increasingly responsible for the user experience, continuous UI testing that integrates early and complete feedback into every build helps to keep rework in check.
Final thought: apps that are worth your time...building, fixing, using...are worth testing too. What's the point in shipping something if you can't prove that it does what you built it to do?
For the complete set of these three downloadable research pieces, visit the Perfecto website here.