Organizations today face two major challenges when building and sustaining a mobile testing lab that improves app quality and speeds up time to market.
The first challenge is selecting the right device/OS combinations on which to test applications in a constantly changing market. Teams can tackle this challenge with analytics and other tools, such as Perfecto’s Digital Test Coverage Index reports and its Digital Test Coverage Optimizer, which help identify the most relevant devices and operating systems for mobile testing.
Once organizations know which devices/OSes they should be using, the second problem they usually face is matching the device list to their organization’s requirements and sizing the lab accordingly. If the lab is not sized correctly, improving test cycle velocity and overall quality will be difficult.
In this post, we’ll highlight one approach to sizing a mobile test lab based on specific requirements. To size the lab, we’ll work from the following inputs:
- Number of automated test cases
- Average duration of a single test automation
- Test cycle time constraints (e.g., the complete automated regression must finish within XX hours)
- Number of unique devices for the lab
- Test automation suite classification/prioritization (How many tests are critical? How many are low priority?)
Let’s add some numbers to these metrics.
Assume a customer has defined 16 unique devices (iOS/Android) for their mobile project, and that the time constraint for the test automation cycle is 18 hours. Each test is estimated to take 10 minutes end to end (including test/device setup, execution, cleanup, and moving to the next case), and the test suite consists of 400 different tests.
Some simple math for the above looks like this: running the full 400-test suite takes 400 × 10 minutes = 4,000 minutes, or roughly 66.7 hours per unique device. Since 66.7 ÷ 18 ≈ 3.7, a single copy of each device cannot fit the suite inside the 18-hour window; four copies of each device bring the cycle down to 4,000 ÷ 4 = 1,000 minutes, or about 16.7 hours.
As seen above, the initial set of 16 devices cannot meet the required QA cycle duration of 18 hours. To meet this cadence, a team needs to multiply the device set by four and size the lab at up to 64 devices.
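The sizing math above can be sketched as a short calculation. This is an illustrative snippet, not a Perfecto tool; the variable names and the assumption that every test runs on every unique device model are ours:

```python
import math

# Inputs from the scenario above (assumed, for illustration).
unique_devices = 16      # unique device/OS models in the lab
total_tests = 400        # automated test cases in the suite
minutes_per_test = 10    # end-to-end time per test (setup, run, cleanup)
cycle_limit_hours = 18   # required QA cycle duration

# With one device per model, the cycle equals the full suite runtime,
# since every test runs on every unique model.
suite_hours_per_device = total_tests * minutes_per_test / 60  # ~66.7 hours

# Duplicate each model enough times to split the suite across copies
# and fit inside the cycle window.
copies_per_model = math.ceil(suite_hours_per_device / cycle_limit_hours)  # 4
lab_size = unique_devices * copies_per_model                              # 64
actual_cycle_hours = suite_hours_per_device / copies_per_model            # ~16.7

print(f"{lab_size} devices, ~{actual_cycle_hours:.1f}-hour cycle")
```

Running it confirms the figures above: 64 devices yield a cycle of roughly 16.7 hours, comfortably inside the 18-hour constraint.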
Organizations that face this lab sizing challenge can optimize their mobile testing efforts by categorizing both the devices/OS list and the tests themselves into different buckets.
As recommended in the image above, teams can create two buckets within the list of 16 unique devices/OSes and call them Primary devices and Secondary devices. Deciding which devices go in which bucket relies on analytics such as web traffic trends, or on industry reports about test and device coverage.
In addition, DevTest teams should divide their test suite into different buckets based on the value of the tests themselves. Teams that categorize their suite into critical-path, high-priority, and low-priority test cases will be able to reduce the test cycle length while still meeting their quality requirements. Teams can figure out which category a test falls into by reviewing test execution history to see what value each test case added (i.e., defects found per test case, duplicated test cases, niche test cases).
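To see how bucketing shrinks the lab, here is a hedged sketch. The split itself is hypothetical (8 primary models running the full suite, 8 secondary models running only the 250 critical/high-priority tests); the actual savings depend entirely on how a team divides its devices and tests:

```python
import math

minutes_per_test = 10
cycle_limit_hours = 18

# Hypothetical split for illustration: primary models run all 400 tests,
# secondary models run only the critical/high-priority subset.
device_buckets = {
    "primary":   {"models": 8, "tests": 400},
    "secondary": {"models": 8, "tests": 250},
}

lab_size = 0
for name, bucket in device_buckets.items():
    suite_hours = bucket["tests"] * minutes_per_test / 60
    copies = math.ceil(suite_hours / cycle_limit_hours)
    lab_size += bucket["models"] * copies
    print(f"{name}: {copies} copies per model")

print(f"lab size: {lab_size} devices")
```

Under this particular split the lab drops from 64 to 56 devices while every bucket still finishes inside the 18-hour window; a more aggressive split (e.g., secondary models running only critical-path tests) would shrink it further.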
When an organization is able to reduce the required number of devices from 64 to 50 and prioritize certain tests over others, it’s in a position to save money and spend less time troubleshooting and more time delivering great digital experiences to customers.