When a performance test runs, mabl captures your application's performance in several charts and tables. This article explains how to filter performance test output to understand and identify issues with your app's performance under load, including:
- Performance test output
- Customizing the performance test summary chart
- Filtering browser test metrics
- Filtering API test metrics
- Comparing to other runs
Performance test output
Keep in mind the following points when reviewing performance test status:
- Performance tests fail if the threshold for at least one failure criterion is met.
- If a performance test has no failure criteria, it will always pass. For example, a performance test with no failure criteria and a 100% functional test failure rate will still be marked as passing (see the sketch after this list).
- If you stop a performance test, it takes up to one minute for all tests to stop running. A stopped performance test consumes billable virtual user hours (VUH) up to the point at which it stopped.
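The pass/fail rule in the first two points can be expressed in a few lines. The following is a minimal sketch assuming a hypothetical failure-criterion shape; it is illustrative only, not mabl's implementation:

```typescript
// Hypothetical failure criterion shape (not mabl's data model).
interface FailureCriterion {
  metric: string;     // e.g. "functional test failure rate"
  threshold: number;  // e.g. 0.05 for 5%
  observed: number;   // value measured during the run
}

// A run fails if at least one criterion's threshold is met;
// with no criteria at all, it always passes.
function performanceTestPasses(criteria: FailureCriterion[]): boolean {
  return !criteria.some((c) => c.observed >= c.threshold);
}
```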
Customizing the performance test summary chart
The performance test summary chart tracks performance test metrics across the duration of the test.
Not sure what different performance metrics represent? Check out the article on measuring system performance for an overview.
Displaying different metrics
To customize the metrics displayed on the chart, click on View more metrics and select up to four different metrics to display.
Filtering by test
To show data from a specific functional test or browser test step, select it from the Select functional test dropdown above the chart.
Filtering browser test metrics
The Browser test metrics tab aggregates performance metrics across all individual browser test runs and shows the 75th percentile (p75) result for each metric.
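To make that aggregation concrete, here is a minimal sketch of a nearest-rank percentile calculation; the sample values are hypothetical, and mabl performs this aggregation for you:

```typescript
// Nearest-rank percentile over a sample of measurements.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Hypothetical LCP samples (ms) for one step across five browser test runs:
const lcpSamples = [1200, 1350, 1500, 2100, 4800];
console.log(percentile(lcpSamples, 75)); // 2100 -> the p75 value shown
```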
Use the columns to filter steps with performance issues. Here are some recommendations for reviewing browser performance:
- Review LCP: click on the LCP column to surface steps with the highest Largest Contentful Paint (LCP) times. LCP is often considered the best approximation of a page’s perceived performance; see the sketch below for how this metric is observed in the browser.
- Review step duration: click on the Step duration column to surface slow steps. For single page applications (SPAs), Chrome cannot collect Core Web Vitals, and step duration is a good alternative for measuring performance.
- Investigate network calls: click on the step itself to view the network requests triggered during the step categorized by domain and resource type. Sometimes a network request is the reason why a step takes longer than expected.
If you find a slow step, expand the step to view network activity for each test aggregated by step.
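For context on what the LCP column measures, here is a minimal sketch that watches LCP candidates with the browser's standard PerformanceObserver API. It illustrates the metric itself, not how mabl collects it:

```typescript
// Minimal sketch: observing Largest Contentful Paint candidates.
// Run in a browser context; the last candidate reported before
// user input is the final LCP value for the page.
const observer = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log(`LCP candidate at ${entry.startTime.toFixed(0)} ms`);
  }
});
observer.observe({ type: "largest-contentful-paint", buffered: true });
```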
API steps in browser tests collect data on error rate and response time. These are the same metrics that mabl collects in the API test metrics tab.
Filtering API test metrics
The API test metrics tab aggregates error rates and API response times across all individual API test runs.
Suggested methods for reviewing API performance include the following (see the sketch after this list for how these columns are derived):
- Click the Error rate column to find endpoints with the highest error rates. To review a breakdown of the most common response errors, click on the request.
- Click on the response time percentiles to find slow endpoints. For example, to identify the slowest endpoints at the 95th percentile, click on the 95th column.
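To make the Error rate and percentile columns concrete, here is a minimal sketch of how both could be derived from raw request results; the RequestResult shape and the summarize helper are hypothetical, not mabl's API:

```typescript
// Hypothetical request record (not mabl's data model).
interface RequestResult {
  endpoint: string;
  status: number;         // HTTP status code
  responseTimeMs: number;
}

function summarize(results: RequestResult[]) {
  const times = results.map((r) => r.responseTimeMs).sort((a, b) => a - b);
  const errorCount = results.filter((r) => r.status >= 400).length;
  // Nearest-rank 95th percentile over the sorted response times.
  const p95 = times[Math.max(0, Math.ceil(0.95 * times.length) - 1)];
  return {
    errorRate: errorCount / results.length, // 0.02 -> 2% in the Error rate column
    p95ResponseTimeMs: p95,                 // value behind the "95th" column
  };
}
```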
Comparing to other runs
View results for the current run alongside results for other runs in the Test run history tab. This table can be a good starting point for identifying changes in performance:
- Scale up load: monitor your app's performance as you increase concurrency.
- Check for regressions: monitor performance as you introduce new changes to your app.