As mobile applications become increasingly complex, so do the challenges for your testing teams. In fact, according to recent statistics published by Gartner, tablet shipments are expected to jump by 68% this year, while PC shipments are expected to slip by 10% (Figure 1 below).
Figure 1: Gartner predictions around Tablets vs. PC’s
In a highly dynamic and unpredictable mobile market, short development cycles and continuous QA are needed to ensure that applications remain true to market requirements. This means that testing teams must implement agile processes and methodologies that enable them to continuously test the application deliverable from the development team and accelerate time to market.
Mobile apps are already leveraging many of the newly introduced technologies, such as NFC, voice related applications (e.g., checking your bank account balance using voice commands), location-based mobile services and more.
While automating your functional testing to handle this level of complexity is far from straightforward, it is imperative for staying in sync with the market. To achieve this, organizations should employ strong automation methodologies, tools and techniques.
This blog post focuses on the need for a hybrid approach to visual objects and native (application level) objects as part of the test automation and scripting solution. Using a combination of these methods can solve many of the challenges facing developers and testers of mobile applications.
A Word about Visual Objects and Native Objects
The application objects are the main building blocks of the applications under test. They consist of edit boxes, lists, links, text objects, etc.
If the test automation team cannot identify the objects on the mobile device screen, they cannot interact with them or perform the actions (e.g., press a button, input text into an edit box, etc.) required in the test execution.
In the mobile space there are three main object types:
- Visual Objects – Usually recognized by OCR (Optical Character Recognition) engines. An OCR engine converts images of handwritten, typewritten or printed text into machine-encoded text.
- Native Objects – These are OS-level application objects. Gaining insight into these objects requires a small library compiled into the application under test, a technique known as ‘instrumentation’.
- DOM Objects – Used in web-based mobile applications.
Each of the above objects has its pros and cons. The key to successful test automation is to enable testers to use the respective object types in the appropriate testing scenarios.
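To make this concrete, here is a minimal, self-contained Python sketch of the hybrid lookup idea: prefer the faster, more reliable native (OS-level) lookup, and fall back to OCR-based visual search when it fails. All class and function names here (`NativeRepository`, `VisualRepository`, `locate`) are hypothetical illustrations for this post, not any vendor's actual API.

```python
# Hybrid object lookup sketch. The repositories below are stand-ins for
# real instrumentation and OCR engines; they are hypothetical, not a
# specific tool's interface.

class NativeRepository:
    """Stands in for OS-level (instrumented) object access."""
    def __init__(self, objects):
        self._objects = objects  # object id -> object properties

    def find(self, object_id):
        return self._objects.get(object_id)


class VisualRepository:
    """Stands in for an OCR engine scanning the current screen."""
    def __init__(self, visible_texts):
        self._visible_texts = visible_texts  # visible text -> coordinates

    def find(self, text):
        return self._visible_texts.get(text)


def locate(native_repo, visual_repo, object_id=None, text=None):
    """Try the native lookup first; fall back to OCR-based visual search."""
    if object_id is not None:
        found = native_repo.find(object_id)
        if found is not None:
            return ("native", found)
    if text is not None:
        found = visual_repo.find(text)
        if found is not None:
            return ("visual", found)
    return (None, None)


# Usage: the native repository knows the button by id; the visual
# repository only knows what text is visible on screen.
native = NativeRepository({"btnGo": {"type": "button", "enabled": True}})
visual = VisualRepository({"GO": (120, 40)})

print(locate(native, visual, object_id="btnGo"))               # native hit
print(locate(native, visual, object_id="missing", text="GO"))  # OCR fallback
```

The ordering encodes the trade-off discussed below: native lookups are quicker and more reliable, so the visual path is reserved for cases the native path cannot handle.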
Let’s look at an example of a mobile webpage on the Travelocity website, as accessed from a real iPad 3 device (Figure 2).
Figure 2: Travelocity visual defects – can only be identified via OCR analysis
In this example, we want to develop an automation script that selects the Save Flight + Hotel radio button (circled in red above), using a native object to perform the Select operation. The script would succeed in pressing the button; however, the test would not be effective from a visual perspective, since it would not report the serious text truncation to the end user.
The same applies to the GO button (at the top of the screen) and the list of options in the left pane (both circled in red).
Conversely, if you were only to use visual analysis, you would identify these GUI defects and other cross-device GUI compatibility defects. However, you would not be able to locate a complex object (for example, an object with several instances at various locations on the screen is easier to find by its unique properties than by scanning the screen), or an object whose text is in a language your OCR engine does not support. For these cases, you need OS-level objects.
It is also important to note that visual analysis usually takes longer than working with OS-level objects, and visual objects are inherently less reliable than OS-level objects.
The graphic below (Figure 3) presents a consolidated comparison between these two approaches:
Figure 3: Visual Analysis vs. OS Level objects comparison
Bottom Line: Leverage both Visual and Native (OS Level) objects for your automation testing
Last week’s iOS 7 announcement triggered many discussions about the pace of the technology market and the sustainability of automated testing. Many practitioners realize that validating the quality of frequent builds requires a commitment to continuous testing and a high degree of automation – this can only be achieved by combining visual analysis of the application under test with testing of the real OS-level objects.
As demonstrated above, there is a clear need for a hybrid mobile object approach as part of your mobile test automation strategy. This requires the proper tools to develop robust scripts that can validate mobile objects at both the visual and the application level. This is the best way to ensure that your application both functions as it should and appears correctly on the GUI of your target devices, delivering an outstanding end-user experience.
The screenshots below show this hybrid approach in action – identifying the same object both visually (Figure 4) and using native object analysis (Figure 5).
Figure 4: Visual Analysis using OCR engine – retrieves only the text in the textbox
Figure 5: Native object analysis – retrieves the full text in the textbox
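The gap between what OCR sees (Figure 4) and what the native object holds (Figure 5) suggests a simple automated truncation check: retrieve the full text from the native object, retrieve the visible text via OCR, and flag any mismatch. The sketch below is a hypothetical illustration of that comparison; the function name and heuristic are this post's own, not a real tool's API.

```python
# Truncation check sketch: compare the full text retrieved from the
# native object with the text the OCR engine can actually see on screen.
# If OCR sees only a prefix of the native value, the textbox is likely
# clipping it. (Heuristic for illustration only.)

def is_truncated(native_text, ocr_text):
    native_text, ocr_text = native_text.strip(), ocr_text.strip()
    # OCR typically reads a leading fragment of the real value when
    # the control clips its contents.
    return ocr_text != native_text and native_text.startswith(ocr_text)


# Usage: a clipped label is flagged, an identical one is not.
print(is_truncated("Save Flight + Hotel", "Save Flig"))  # clipped -> True
print(is_truncated("GO", "GO"))                          # intact  -> False
```

A check like this is only possible with the hybrid approach: the native side supplies the expected full text, while the visual side supplies what the user actually sees.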
For more information on mobile automation best practices, feel free to contact me (firstname.lastname@example.org).
To learn more about best practices in mobile automation using a hybrid approach, please refer to Perfecto Mobile’s white paper on this subject.