How to evaluate a new technology from a QA perspective

Experiences in the pursuit of an efficient cross-platform solution.

08.28.2023
QA team, Choco

In today's rapidly evolving technological landscape, choosing the right platform or technology for digital products is crucial. The decision-making process requires a thorough evaluation from various perspectives, including quality assurance (QA). What are the important evaluation criteria from a QA perspective? How do you prioritise your efforts when evaluating three different platforms? This blog shares experiences and lessons learned from a Task Force in pursuit of an efficient cross-platform solution, and concludes with tips on QA evaluation criteria that can kickstart any significant transition.

QA evaluation process

When evaluating any new technology or platform from a QA perspective, there is a generic pattern that can help guide the process:

  1. Define criteria: Begin by examining the context of the company and the proposed change. Identify the stakeholders involved in this transition and determine what is important to them. For example, when choosing a new test automation tool, the requirements for testers may differ from those of developers. Testers may prioritise ease of use and debugging capabilities, while developers may prefer tools that align with the language and syntax of the application code they are familiar with.

Apart from these requirements, it's crucial to consider the broader ecosystem and support surrounding the technology or platform. Assess the availability of documentation, community support, forums, and resources that can assist with troubleshooting and maintenance. A robust ecosystem and reliable support can significantly impact the overall quality assurance process and the long-term success of the chosen solution.

  2. Prioritise criteria: Collaborate with the stakeholders to finalise and prioritise the evaluation criteria. Assign points or weights to each criterion to distinguish between more and less important factors (a minimal weighted-scoring sketch follows this list). This step helps ensure that the evaluation process focuses on the most critical aspects.

  3. Evaluate criteria: Investigate and implement as efficiently as the available time and resources allow. Let the prioritised criteria determine the order of evaluation: start with the most important criteria and gather the information needed to assess them. For instance, if a tool does not support a specific requirement, it can be eliminated from consideration immediately. Keep in mind that there can be discrepancies between expectations and reality, so reflect on the evaluation process afterwards and extract valuable learnings.
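
To make steps 2 and 3 concrete, here is a minimal sketch of weighted criteria scoring, written in Dart (the language of the platform ultimately chosen). The criteria, weights, and scores below are hypothetical placeholders, not the Task Force's actual data:

```dart
void main() {
  // Weight per criterion (higher = more important to stakeholders).
  const weights = {
    'E2E tooling maturity': 5,
    'Ease of debugging': 3,
    'Community support': 2,
  };

  // Raw scores (1-5) per platform, per criterion. Invented for illustration.
  const scores = {
    'Platform A': {
      'E2E tooling maturity': 4,
      'Ease of debugging': 3,
      'Community support': 5,
    },
    'Platform B': {
      'E2E tooling maturity': 2,
      'Ease of debugging': 5,
      'Community support': 3,
    },
  };

  // Weighted total per platform: sum of (score x weight) over all criteria.
  scores.forEach((platform, perCriterion) {
    var total = 0;
    perCriterion.forEach((criterion, score) {
      total += score * (weights[criterion] ?? 0);
    });
    print('$platform: weighted score $total');
  });
}
```

Keeping the scoring this explicit makes it easy to show stakeholders how a change in weights would change the outcome.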

Evaluation of cross-platform solutions

When evaluating the move from a native to a cross-platform solution, most criteria centred on mobile end-to-end (E2E) test automation tooling. This was seen as the biggest challenge of the change due to the multiple operating systems (e.g., iOS and Android), the multitude of device models, screen sizes and resolutions, and the use of native features.

Since React Native and Ionic operate in a JavaScript ecosystem, there were several tools to choose from. Jest was chosen as the de facto standard for unit, snapshot, and integration testing. For E2E test solutions, the evaluation compared WebdriverIO for Ionic with Detox, a tool developed specifically for React Native applications. Flutter is a different ballgame, or rather, a Dart game: Dart is the language used for Flutter applications, and the Flutter test toolkit provides a similar syntax for test creation on all layers. Despite the promising capabilities of the Flutter test toolkit, React Native and Ionic received the highest QA scores, largely due to the maturity, reuse of tooling, and familiar environment offered by both platforms. The other scores can be found here.
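
To illustrate what "a similar syntax for test creation on all layers" looks like in practice, here is a small, self-contained sketch with a plain Dart unit test next to a Flutter widget test. `formatPrice` and `PriceLabel` are invented for this example:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';

// Hypothetical domain logic and widget under test.
String formatPrice(int cents) => '€${(cents / 100).toStringAsFixed(2)}';

class PriceLabel extends StatelessWidget {
  const PriceLabel({super.key, required this.cents});
  final int cents;

  @override
  Widget build(BuildContext context) => Text(formatPrice(cents));
}

void main() {
  // Unit test: plain Dart, no widgets involved.
  test('formats cents as euros', () {
    expect(formatPrice(1250), '€12.50');
  });

  // Widget test: same test/expect structure, plus a tester to pump widgets.
  testWidgets('renders the formatted price', (tester) async {
    await tester.pumpWidget(const MaterialApp(home: PriceLabel(cents: 1250)));
    expect(find.text('€12.50'), findsOneWidget);
  });
}
```

The two tests read almost identically, which is exactly the consistency across layers that the toolkit promises.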

[Graphic: How to Flutter]

Learnings

  • Look beyond E2E test automation solutions!

Flutter emerged as the chosen platform. As the migration to Flutter progressed, the team discovered that a comprehensive Flutter test toolkit, providing a similar approach for all layers of test automation, outweighed the need for mature E2E test automation solutions.

Establish evaluation criteria that cover the entire test automation stack together with the developer experience. Consider not only familiar environments and the reuse of existing tooling, but also the ability to gain confidence through lower-level automated tests (unit/component/snapshot) and the similarity of the different levels to each other.
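
The same structure extends to the integration level. The sketch below assumes the `integration_test` package and a hypothetical app entry point (`package:my_app/main.dart`); note how the finder and matcher vocabulary is identical to the widget test shown earlier:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:integration_test/integration_test.dart';

import 'package:my_app/main.dart' as app; // hypothetical app entry point

void main() {
  // The main structural difference from a widget test: this binding runs
  // the test on a real device or emulator instead of a test environment.
  IntegrationTestWidgetsFlutterBinding.ensureInitialized();

  testWidgets('logs in and shows the home screen', (tester) async {
    app.main();
    await tester.pumpAndSettle();

    // Same finders and matchers as in unit and widget tests. The keys
    // 'email' and 'login' are assumed to exist in the hypothetical app.
    await tester.enterText(find.byKey(const Key('email')), 'qa@example.com');
    await tester.tap(find.byKey(const Key('login')));
    await tester.pumpAndSettle();

    expect(find.text('Home'), findsOneWidget);
  });
}
```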

  • Incorporate the complexity and risk of a changing technology

In addition to the QA criteria, an analysis was conducted of the cross-functional complexity and risk of the three platforms. Aside from specific issues like build crashes and native modules that break easily, there are two risks associated with every cross-platform solution in comparison to a native solution:

  • Platform compatibility
  • Performance issues

In hindsight, these two generic risk topics apply to every migration from a native to a cross-platform solution. Yet these cross-functional aspects are only a small part of the complexity and risk of changing technology. What about developer experience with the new technology, understanding which parts of the business domain to migrate, the migration strategy itself, and setting up workflows and processes?

It is very useful to prioritise and take these factors into account early on. Looking back, they could also have helped to define the QA evaluation criteria, with a focus on verifying and mitigating these risks. Analyse the complexity and risk of a significant change, identify common factors, and feed them into your evaluation criteria.

  • Implementing automated tests for a prototype? Pick your battles!

Within a month, the Task Force evaluated three platforms by implementing a set of essential features and integrating them with Choco's existing native application. At one point, one of the prototypes contained domain logic and UI behaviour tightly coupled with dependencies. This was yet another confirmation that implementing test automation in a rapidly changing environment is very difficult and not recommended. It also revealed that architecture and implementation have the greatest impact on testability, regardless of the platform or technology used.

So pick your battles when there is limited time for QA evaluation. A small example project can already go a long way towards giving people an idea of the different types of test automation.
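
As a hypothetical illustration of how architecture drives testability: when domain logic lives in a plain Dart class instead of being coupled to widgets, it can be tested without pumping any UI. `OrderTotal` and its delivery rule are invented for this sketch:

```dart
import 'package:flutter_test/flutter_test.dart';

// Hypothetical domain class, deliberately free of any widget dependencies.
class OrderTotal {
  const OrderTotal({required this.itemCents, required this.deliveryCents});
  final List<int> itemCents;
  final int deliveryCents;

  // Pure domain rule: delivery is free at or above €50 of items.
  int get totalCents {
    final items = itemCents.fold<int>(0, (sum, c) => sum + c);
    return items >= 5000 ? items : items + deliveryCents;
  }
}

void main() {
  test('charges delivery below the threshold', () {
    const order = OrderTotal(itemCents: [1000, 2000], deliveryCents: 500);
    expect(order.totalCents, 3500);
  });

  test('waives delivery at or above the threshold', () {
    const order = OrderTotal(itemCents: [3000, 2000], deliveryCents: 500);
    expect(order.totalCents, 5000);
  });
}
```

Tests like these survive UI rewrites, which is exactly what makes them valuable during a rapid prototyping phase.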

Conclusion

When evaluating cross-platform solutions, it is important to consider quality assurance (QA) in order to foster a culture of integrating quality into the development process, or 'built-in quality'. This approach enables the team to focus on and prioritise QA from the start, and to account for common risk factors during the early implementation stages.

During the migration to Flutter, it became apparent that a comprehensive Flutter test toolkit was more beneficial than relying on mature E2E test automation solutions. Consider the complexity and risk of the change, the entire test automation stack, and the developer experience during the QA evaluation, and kickstart your significant change!