Insurance companies are making their way into the digital space after a slow uptake compared to the fintech and retail verticals, which responded earlier to rapid digital disruption. There’s no denying that QA, test, and dev teams in this industry face many hurdles in rolling out quality mobile and web apps and providing optimal UX across the many complex digital touchpoints involved in insurance transactions.
The big question now is how insurance companies can provide the flawless digital experience their end users expect, and what that means in terms of digital goals, testing frameworks, and environments that ensure app quality and velocity.
Fierce competition from the rise of insurtechs means traditional insurance companies need to innovate – and fast. In terms of process, insurers need to establish ‘continuity’ across the SDLC: continuous integration, continuous deployment, and continuous quality. Tools need to be set up, and teams need to adopt a culture of developing and testing in very short cycles, as the cost of a bug grows exponentially over the life of a project.
In terms of quality testing, here are some guidelines and advice from our experience working with insurers striving to ensure an ongoing quality experience for their end users in an agile environment:
• Your Lab
– A lab in the cloud is key for continuous integration, deployment, and agility:
– Composition and relevance: your lab needs to stay current with the devices and browsers your end users are actually using. It’s best to leverage your analytics to learn what those are; if you also want to see which devices are relevant for other insurers in the countries you serve, visit Perfecto’s index to get an idea.
– Coverage and scale: your lab needs to enable you to automate as many test cases as possible across devices and browsers. For example, if you need to authenticate with a fingerprint, simulate a moving vehicle at different speeds, or feed data from a heartbeat device into an application, your lab should be capable of automating these test cases. The alternative is risky: test cases left to anecdotal, manual coverage tend to surface as costly bugs in production. Your lab should also scale across geographical locations and across devices. As your digital offering matures, so will your need for coverage and test cases, and you will need a variety of devices at a capacity that meets those needs.
– “Bionic eyes”: a fundamental requirement for UI testing is to validate the rendered content and measure the responsiveness of the app. Whether it’s OCR or image-based, you will need an automated way to test checkpoints and gate the result of your testing on whether the user can indeed complete a transaction and how long it took them to do so.
– Developer tools: developers who need to access devices after a test failure should be given a developer-oriented set of tools to isolate and correct the bug. That requires the ability to remote-debug on the target device, as well as the ability to execute scripts in tools familiar to developers, such as Espresso and XCTest.
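The “bionic eyes” requirement above can be sketched in plain Python: poll whatever text source the lab exposes (OCR output, an accessibility tree, etc.) until the expected content renders, and report both pass/fail and elapsed time. This is a minimal sketch; the `read_screen_text` hook, the UX budget, and the fake screen are illustrative assumptions, not any vendor’s API:

```python
import time

# Hypothetical UX budget: the checkpoint fails if the expected content
# does not render within this many seconds.
UX_BUDGET_SECONDS = 3.0

def visual_checkpoint(read_screen_text, expected_text,
                      budget=UX_BUDGET_SECONDS, poll=0.1):
    """Poll a text source (e.g. OCR of the device screen) until
    `expected_text` appears. Returns (passed, elapsed_seconds) so a test
    can assert both that the user could complete the step and how long
    it took."""
    start = time.monotonic()
    while time.monotonic() - start < budget:
        if expected_text in read_screen_text():
            return True, time.monotonic() - start
        time.sleep(poll)
    return False, time.monotonic() - start

# Stand-in for a real device screen: the quote confirmation "renders"
# about 0.3 seconds after the app is opened.
_t0 = time.monotonic()
def fake_screen():
    return "Your quote: $420/yr" if time.monotonic() - _t0 > 0.3 else "Loading..."

passed, elapsed = visual_checkpoint(fake_screen, "Your quote")
print(passed, round(elapsed, 1))
```

In a real lab, `fake_screen` would be replaced by the platform’s screen-text or image-matching call, and `elapsed` would feed the responsiveness report.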
• Your test cases
– Consider your end user flows across devices and screens, and prioritize: you’re likely to build a matrix of user flows across devices – one that grows rapidly. You will need to prioritize which tests run on which devices and browsers. Also keep in mind users’ expectation of a continuous journey across screens (research an insurance quote on a desktop, continue the interaction with the vendor through the app, etc.). Mature organizations test journeys, not just singular test cases.
– Sustainability and scale: to achieve maximum efficiency, drive towards sustainable, reusable scripts. Script-once technology, based on reliable object identifiers (object IDs or smart XPath), is a good way to consolidate code into a sustainable object repository.
– Consider real user conditions: real user experiences differ significantly from ideal lab conditions, and they are the ones that matter to the end user. Typically, conditions such as network quality, changing location, device orientation, and competing background apps are beyond the app’s control and predictability. Ensure you have a way to identify the characteristics of your end users by persona, and embed those key personas in your automated testing. Consider providing persona-oriented reporting to decision makers, as it translates immediately to their business reality.
– Leverage your workforce: a typical organization employs a mix of skilled testers. Some are experienced coders; others are manual testers. Selecting a good framework allows collaboration and use of all resources: the coders can build the object repository and advanced test cases, while the manual testers can begin contributing via BDD (e.g., Cucumber), using essentially plain-English scripts.
– Reporting: if you write many scripts and execute them, perhaps nightly, you will need a way to quickly identify trends across failures and drill into root causes. A strong reporting suite will let you group and customize reports to achieve that, as well as view logs, screenshots, and other assets. For the product owner or Scrum Master, ongoing visibility into the health of today’s build versus yesterday’s helps set the day’s priority (develop features or fix bugs) and informs build go/no-go decisions.
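Several of the points above – prioritizing a rapidly growing device/flow matrix and embedding end-user personas – boil down to ranking test runs by business impact. Here is a minimal sketch of that ranking; the personas, usage shares, flows, and criticality scores are all made-up illustrations, not data from any real insurer:

```python
# Hypothetical personas distilled from analytics: each pairs a device
# with the real-user conditions (network) and its share of real traffic.
PERSONAS = [
    {"name": "commuter", "device": "iPhone 14",  "network": "3G",   "share": 0.45},
    {"name": "home",     "device": "Galaxy S23", "network": "WiFi", "share": 0.35},
    {"name": "traveler", "device": "Pixel 7",    "network": "LTE",  "share": 0.20},
]

# Criticality of each end-user flow (1 = nice to have, 3 = business critical).
FLOWS = {"get_quote": 3, "file_claim": 3, "update_profile": 1}

def prioritize(personas, flows, budget):
    """Rank (flow, persona) pairs by criticality x usage share and keep
    only the top `budget` runs for the available execution window."""
    runs = [
        (flows[f] * p["share"], f, p["name"], p["device"], p["network"])
        for f in flows for p in personas
    ]
    runs.sort(reverse=True)  # highest business impact first
    return [(f, name, dev, net) for _, f, name, dev, net in runs[:budget]]

plan = prioritize(PERSONAS, FLOWS, budget=4)
for run in plan:
    print(run)
```

With these numbers, the critical flows on the highest-share personas win the limited slots, which is exactly the trade-off a growing device matrix forces.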
• SDLC and DevOps: Lastly, one needs to ensure the term ‘continuous quality’ is applied indeed across the app life cycle:
– Developers can set a small suite of automated test cases to be triggered on code commit (commit, build, test) so they receive an indication within 2-3 minutes of whether the code base is broken.
– Testers inside the app team can set a nightly regression test across more devices to ensure the build is in good shape.
– Outside the cycle, most of the non-functional testing happens: security, accessibility, performance. However, there is a growing trend to fold one or two tests of each type into the regression or smoke suite for sanity and efficiency.
– Production insight: in a traditional SDLC, this is where the team hands information about the new build’s features over to the operations team and waits until the dedicated monitoring solution is ready to monitor them. And if something breaks in production, it is difficult to get a timely response from developers because ops are using a different tool. Our recommendation for bridging this gap is to use the same solutions (lab, process, test cases) in production that were used for continuous testing pre-production: run devices and browsers with the same scheduler, test scripts, reporting, etc., against the production application. This approach eliminates barriers between teams, removes time-to-launch as a bottleneck, and lets developers in the app team fix production bugs directly.
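The tiered cadence described above – a 2-3 minute smoke suite on commit, a broader nightly regression, plus a small sample of non-functional checks folded in – can be sketched as simple test tagging and selection. The test names, tags, and timings below are illustrative, not tied to any specific CI tool:

```python
# Each test declares which stages it belongs to and a rough duration,
# so the pipeline can pick the right suite for the triggering event.
TESTS = [
    {"name": "login_smoke",      "stages": {"commit", "nightly"}, "minutes": 1},
    {"name": "quote_happy_path", "stages": {"commit", "nightly"}, "minutes": 2},
    {"name": "claim_regression", "stages": {"nightly"},           "minutes": 25},
    {"name": "a11y_sample",      "stages": {"nightly"},           "minutes": 5},
]

def suite_for(event, tests, max_minutes=None):
    """Select tests tagged for `event`; optionally enforce a time budget,
    e.g. the 2-3 minute feedback loop on code commit."""
    picked, total = [], 0
    for t in tests:
        if event in t["stages"] and (
            max_minutes is None or total + t["minutes"] <= max_minutes
        ):
            picked.append(t["name"])
            total += t["minutes"]
    return picked

print(suite_for("commit", TESTS, max_minutes=3))  # fast feedback on commit
print(suite_for("nightly", TESTS))                # full regression run
```

The commit-time budget keeps only the fast smoke tests, while the nightly event picks up everything – including the sampled accessibility check – without any per-test rewiring.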
For further information on the next steps you should take on your journey to digital transformation, we recommend taking a look at the Digital Test Coverage Index report – a guide to help you decide how to build your test lab and which devices to test on in order to deliver a flawless UX for your customers.