There are many different ways to configure and run tests in a plan, and what works for one team may not work for yours. This article offers general guidelines on how to organize tests into plans so that you can improve collaboration, maximize efficiency, and reduce maintenance:
Identify testing categories
Work with your team to identify testing categories based on how you maintain your app. Use the following questions to guide your discussion:
- How is your team structured?
- How is ownership of features or test areas assigned?
While no two teams are alike, here are some common ways to group tests into plans:
- User role: Plans can contain tests that verify features for a specific user role, such as administrators or users.
- Feature sets: If your team is working on a specific feature, such as search, filtering, navigation, or settings, you can create a plan that contains the tests that cover all the scenarios for one specific feature.
After your team decides how to organize plans, establish plan naming conventions. When applied consistently, naming conventions can help everyone in the workspace understand the purpose of different plans and collaborate more effectively.
Manage plan execution
To control how tests execute within a plan, use plan stages and concurrency settings.
Plan stages
By default, all plans are created with one stage that executes all tests in parallel. You can add more stages to run tests in groups in a set order. Stages are particularly useful for structuring plans that involve different types of tests or require specific setup and teardown procedures.
Examples of organizing different test types into plan stages:
Setup - test - teardown: populate test data before running the main tests and clean up data at the end:
- Stage 1 (Setup): Run API tests to set up data.
- Stage 2 (Test): Run the main mobile or browser tests using that data.
- Stage 3 (Teardown): Run API tests to clean up data.
API pre-check: validate an application’s underlying APIs first and proceed to the main UI tests only if the API tests pass. Because API tests consume fewer credits than UI tests, this approach can help reduce credit consumption.
- Stage 1: Run API tests to validate expected responses.
- Stage 2: Run the main browser or mobile tests.
Cross-platform validation: conduct a sequential validation across different platforms:
- Stage 1: Run mobile tests to validate core functionality.
- Stage 2: Run browser tests to verify changes on an admin site or other web-based components.
Concurrency settings
Within plan stages, tests can execute in parallel or sequentially.
Parallel execution: significantly reduces test run time and speeds up the feedback cycle. Use parallel execution if the tests in the stage can run independently or if you need to simulate specific conditions by running different test types simultaneously.
Example of combining different test types in one stage
If you combine performance tests with browser or mobile tests in a single stage that runs everything in parallel, you can validate how your application behaves under load.
Sequential execution: if the tests in a plan stage must run in a specific order to succeed, configure the stage to run tests sequentially.
If you use plan stages or sequential execution, you can also use shared variables to pass values from one test to another.
Optimize plan size
While it is possible to run a plan containing 100+ tests, creating smaller plans makes it easier to troubleshoot when something goes wrong. For example, if a large, complex plan in your workspace starts to fail intermittently, you can break up the plan into several smaller plans to reduce dependencies and isolate the issue.
Smaller plans also lend themselves to CI/CD setups. If you integrate mabl into your CI/CD pipeline, you can leverage plan labels to trigger a specific set of plans on deployment with mabl deployment events.
With plan labels, you can define test suites, such as regression, targeted regression, or smoke, or group plans by the team that owns them or by a specific functionality or feature area.
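As a minimal sketch, a CI/CD step could trigger labeled plans by sending a deployment event to mabl's REST API. The endpoint, the basic-auth scheme, and the `environment_id` and `plan_labels` field names below are assumptions based on mabl's public API and should be confirmed against the current mabl API documentation; the environment id and label values are placeholders.

```python
import json

# Assumed endpoint for mabl deployment events; verify against the mabl API docs.
MABL_DEPLOYMENT_EVENTS_URL = "https://api.mabl.com/events/deployment"

def build_deployment_event(environment_id, plan_labels):
    """Build the JSON body for a deployment event that triggers
    only the plans carrying the given labels."""
    return {
        "environment_id": environment_id,
        "plan_labels": list(plan_labels),
    }

payload = build_deployment_event("env-placeholder-id", ["smoke", "checkout-team"])
print(json.dumps(payload))

# Sending it would look roughly like this (requires the `requests`
# package and a mabl API key used as the basic-auth password):
#   requests.post(MABL_DEPLOYMENT_EVENTS_URL,
#                 auth=("key", MABL_API_KEY),
#                 json=payload)
```

Keeping the payload construction in a small helper like this makes it easy to reuse the same CI/CD step for different label sets, such as a fast smoke suite on every deploy and a full regression suite on a nightly schedule.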