The amount of time it takes for a test to run depends on a range of factors, from the latency and capacity of the test environment to the strategy and intention of individual steps. By understanding duration trends, you can identify long-running tests and regressions in the performance and run time of your test suites.
This guide outlines some steps you can take to reduce the overall run time of tests and flows. Optimizing test performance is especially beneficial for the following situations:
- Reducing release cycle times for your app: If mabl tests run as part of your CI/CD pipeline, shorter run times mean a faster workflow.
- Quicker validation of individual tests: If you are running tests ad hoc to verify an update or a troubleshooting step, optimizing performance means you spend less time waiting to confirm whether the update works as expected.
Understand the baseline
Start by monitoring trends. Each test details and flow details page includes a Performance tab with the following information:
- Daily average app load time: This chart measures the total time your app takes to load between every step of this test, which is what an end user would experience as the performance of your app when completing this user journey manually. This is separate from and will always be shorter than the test run time. Only Chrome and Edge runs are included in this metric.
- Test/flow run time: This section shows the fastest, average, and slowest times it took to run the test/flow on Chrome and Edge, averaged by day.
- Daily average run time: This chart displays the trend of test and flow run time on Chrome and Edge runs over a 90-day period.
In this guide, we will focus on steps you can take to optimize run times.
The Performance tab shows the daily average for app load time and test/flow run time.
Here are some questions you can ask yourself to guide your analysis:
- Are the results consistent, or do they vary a lot?
- Are run times trending better or worse over time?
Whether there's an overall increase in run time or occasional spikes, identifying and addressing the causes can lead to faster, more predictable runs.
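If it helps to make "consistent" more concrete, one rough check is to compare the spread of daily run times against their average. The sketch below is a generic illustration, not mabl functionality, and the sample values are made up; a large spread relative to the mean usually points to occasional spikes rather than a steadily slow baseline.

```ts
// Hypothetical daily average run times in seconds -- substitute values noted from
// your own Performance tab. One clear outlier (310s) represents an occasional spike.
const dailyRunTimesSeconds = [182, 175, 190, 310, 178, 185, 181];

const mean =
  dailyRunTimesSeconds.reduce((sum, t) => sum + t, 0) / dailyRunTimesSeconds.length;
const variance =
  dailyRunTimesSeconds.reduce((sum, t) => sum + (t - mean) ** 2, 0) /
  dailyRunTimesSeconds.length;
const stdDev = Math.sqrt(variance);

// A large coefficient of variation (std dev relative to the mean) suggests spikes,
// which often point to environment or capacity issues rather than the test itself.
console.log(
  `mean: ${mean.toFixed(1)}s, std dev: ${stdDev.toFixed(1)}s, ` +
    `coefficient of variation: ${(stdDev / mean).toFixed(2)}`
);
```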
Identify opportunities for improvement
If you've identified a test or flow that runs slower than you'd like, whether occasionally or consistently, you can use the following suggestions to optimize performance. The ideas in this section are organized from the top down: to reap the greatest benefit, we recommend implementing the more holistic measures before tackling individual steps.
Check app performance
Review the Daily average app load time chart in the Performance tab to understand your app's baseline performance. An increase in app load time could indicate a performance regression in the app itself. If app load times consistently spike at certain times, check whether the test environment has enough capacity to run mabl tests at those times.
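To cross-check what the chart reports, you can sample load time directly in the app under test with the browser's standard Navigation Timing API. This is a generic browser snippet, not part of mabl, and which thresholds count as "slow" is up to your team.

```ts
// Run in the browser's developer console against the app under test.
// PerformanceNavigationTiming is a standard browser API and is independent of mabl.
const [nav] = performance.getEntriesByType(
  'navigation'
) as PerformanceNavigationTiming[];

if (nav) {
  // Total time from the start of navigation until the load event completed.
  console.log(`page load: ${(nav.loadEventEnd - nav.startTime).toFixed(0)} ms`);
  // Time spent waiting on the server for the first byte of the response.
  console.log(
    `time to first byte: ${(nav.responseStart - nav.requestStart).toFixed(0)} ms`
  );
}
```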
Understand the strategy of the test or flow
In some cases, you might want to re-examine the overall testing strategy. Some examples of optimizing measures include the following:
- Using an API test in a plan to speed up the login flow, instead of logging in through the UI in every browser test (see the sketch after this list)
- Eliminating test steps that don't verify the end goal: Is every click and assertion necessary? Are there quicker ways to navigate to the page you want to test?
- Creating more customized flows: If many tests are using the same long-running flow, optimizing that flow can have a big impact. Options include creating several smaller flows that are more targeted, or identifying and optimizing slow steps in the flow.
- Identifying the slowest test in a parallel plan run: If a plan runs all tests in parallel and one test takes much longer than the others, optimizing the run time of that slowest test can speed up the overall run time of the plan.
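As an illustration of the first idea, logging in through an API call and reusing the resulting session is usually much faster than filling in the login form in every test. The sketch below assumes a hypothetical `/api/login` endpoint and token response; your app's actual authentication API will differ, and in mabl you would model this as an API test or API step rather than hand-written code.

```ts
// Hypothetical endpoint and response shape -- replace with your app's real
// authentication API. The idea is that the browser test reuses the result
// (for example, a token or cookie) instead of driving the login form.
const API_BASE = 'https://app.example.com';

async function loginViaApi(username: string, password: string): Promise<string> {
  const response = await fetch(`${API_BASE}/api/login`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, password }),
  });
  if (!response.ok) {
    throw new Error(`Login failed with status ${response.status}`);
  }
  // Assumes the API returns a session token that can be injected into the
  // browser session so the UI login steps are skipped entirely.
  const { token } = (await response.json()) as { token: string };
  return token;
}
```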
Examine individual steps
Use the Test/Flow run time section to identify the slowest run. By clicking on the Slowest time, you can open the test output page for that specific run and use the step timeline to find steps that take a long time to execute. Here are a few changes you can try to reduce run times:
- Add context to slow steps with Configure Find: Some steps take a long time because mabl has little information with which to locate the target element. If the mabl Activity logs indicate that mabl consistently takes a long time to search for a given element, consider adding additional context with Configure Find. For more information on adding context to test steps, check out our guide on finding the correct element.
- Update Configure Find values: If a step takes a long time and already uses Configure Find, check whether the Configure Find value is up to date. For example, if the step is configured to search for an element with specific text, but that text has since changed, mabl may wait a specified amount of time before auto-healing to the up-to-date version of the element. Updating the Configure Find value to the current expected value can help reduce the time the step takes to complete.
- Modify or remove wait steps: Wait steps are useful if you want to make sure an application is in a proper, actionable state. If your test includes many wait steps, understand why they exist. Depending on the reason, it may be possible to remove the wait steps and rely on intelligent wait, replace a fixed wait with a wait until step (see the sketch after this list), or reduce the total wait time.
- Remove unnecessary hovers: When you record hovers, mabl collects a large amount of hidden data and steps that may slow down the execution of your plans. You can speed up your test run by removing any unnecessary hover actions (see deleting hovers).
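To illustrate why swapping fixed waits for conditional waits helps, here is a generic sketch in plain TypeScript; it is not mabl configuration, just the underlying idea. A fixed wait always pays the full delay, while a condition-based wait returns as soon as the app is ready.

```ts
// A fixed wait always costs the full delay, even if the app is ready much sooner.
function fixedWait(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// A conditional wait polls for readiness and returns as soon as the condition
// holds, which is the general idea behind a "wait until" step: the test only
// pays for the time the app actually needs.
async function waitUntil(
  condition: () => boolean,
  timeoutMs = 10_000,
  pollIntervalMs = 250
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (condition()) {
      return;
    }
    await fixedWait(pollIntervalMs);
  }
  throw new Error('Condition was not met before the timeout');
}
```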
Optimizing tests and flows is a collaborative and retrospective exercise. During the process, you may identify other areas of your team's workspace that could use improvement. For a list of general testing practices, check out our guide on working as a team in mabl.