The output for a performance test includes a performance test summary and details on specific metrics.
Performance test output
Performance test summary
The Performance test summary chart shows the overall pass/fail rate and performance test results across the duration of the test. The chart shows up to four metrics at a time. Click the View more metrics button to select the metrics that you want the chart to display.
Performance tests fail if the threshold for at least one failure criterion is met.
If a performance test has no failure criteria, it will always pass. For example, a performance test with no failure criteria and a 100% functional test failure rate will still be marked as passing.
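The pass/fail logic above can be sketched in a few lines. This is an illustrative model, not mabl's actual implementation; the `FailureCriterion` class and `run_passes` function are hypothetical names:

```python
# Hypothetical sketch of pass/fail evaluation against failure criteria.
# FailureCriterion and run_passes are illustrative, not mabl's API.
from dataclasses import dataclass

@dataclass
class FailureCriterion:
    metric: str       # e.g. "functional_failure_rate"
    threshold: float  # run fails when the observed value meets or exceeds this

def run_passes(criteria: list[FailureCriterion], observed: dict[str, float]) -> bool:
    """A run fails if the threshold for at least one criterion is met;
    with no criteria defined, the run always passes."""
    return not any(observed.get(c.metric, 0.0) >= c.threshold for c in criteria)

# No failure criteria: passes even with a 100% functional test failure rate.
print(run_passes([], {"functional_failure_rate": 1.0}))  # True

# One criterion's threshold met: the run fails.
crit = [FailureCriterion("p95_response_time_ms", 500)]
print(run_passes(crit, {"p95_response_time_ms": 620}))  # False
```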
You can also filter the chart to show data from a specific functional test or browser test step using the dropdowns above the chart.
Customizing metrics to view in the performance test summary
Performance test run details
The table below the Performance test summary provides more granular data on specific metrics, organized into four tabs:
- Failure criteria
- Browser test metrics
- API test metrics
- Test run history
Not sure what different performance metrics represent? Check out the article on measuring system performance for an overview.
The Failure criteria tab shows results for failure criteria across all tests.
Browser test metrics
The Browser test metrics tab aggregates performance metrics across all individual browser test runs and shows the 75th percentile results for each metric.
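To make the "75th percentile" aggregation concrete, here is a small sketch of collecting one metric (LCP, in milliseconds) across individual runs and reporting its p75. The helper function and sample values are invented for illustration:

```python
# Illustrative sketch: nearest-rank percentile over per-run metric values.
# The LCP sample data below is made up.
def percentile(values: list[float], pct: float) -> float:
    """Nearest-rank percentile: the smallest value such that at least
    pct% of observations are less than or equal to it."""
    ordered = sorted(values)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceil(n * pct / 100)
    return ordered[int(rank) - 1]

# LCP (ms) observed in eight individual browser test runs of one step.
lcp_ms = [1200, 1350, 980, 2100, 1500, 1750, 1100, 1300]
print(percentile(lcp_ms, 75))  # 1500
```

Reporting a high percentile rather than the mean keeps one fast outlier run from masking the experience of slower runs.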
Use the columns to filter steps with performance issues. Here are some recommendations for reviewing browser performance:
- Review LCP: click on the LCP column to surface steps with the highest time for LCP. LCP is often considered the best approximation of a page’s perceived performance.
- Review step duration: click on the Step duration column to surface slow steps. For single page applications (SPAs), Chrome cannot collect Core Web Vitals, and step duration is a good alternative for measuring performance.
- Investigate network calls: click on the step itself to view the network requests triggered during the step categorized by domain and resource type. Sometimes a network request is the reason why a step takes longer than expected.
If you find a slow step, expand it to view network activity for that step aggregated across individual test runs.
API steps in browser tests collect data on error rate and response time. These are the same metrics that mabl collects in the API test metrics tab.
API test metrics
The API test metrics tab aggregates error rates and API response times across all individual API test runs.
Suggested methods for reviewing API performance include:
- Click the Error rate column to find endpoints with the highest error rates. To review a breakdown of the most common response errors, click on the request.
- Click on the response time percentiles to find slow endpoints. For example, to identify the slowest endpoints at the 95th percentile, click on the 95th column.
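The per-endpoint error rate and response time percentiles described above can be sketched as follows. The field names and sample results are invented for illustration, not mabl's data format:

```python
# Hedged sketch: aggregating error rate and p95 response time per endpoint
# from raw API test results. Endpoints and numbers below are made up.
from collections import defaultdict

results = [
    # (endpoint, status_code, response_time_ms)
    ("/api/login", 200, 120), ("/api/login", 200, 140), ("/api/login", 500, 900),
    ("/api/items", 200, 60),  ("/api/items", 200, 75),  ("/api/items", 200, 64),
]

by_endpoint = defaultdict(list)
for endpoint, status, ms in results:
    by_endpoint[endpoint].append((status, ms))

for endpoint, calls in sorted(by_endpoint.items()):
    # Error rate: share of responses with a 4xx/5xx status code.
    error_rate = sum(1 for s, _ in calls if s >= 400) / len(calls)
    times = sorted(ms for _, ms in calls)
    # Nearest-rank 95th percentile of response time.
    p95 = times[max(0, -(-len(times) * 95 // 100) - 1)]
    print(f"{endpoint}: error_rate={error_rate:.0%}, p95={p95}ms")
```

Sorting the output of such an aggregation by error rate or by a percentile column is exactly the triage the tab supports.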
Test run history
View results for the current run alongside results for other runs in the Test run history tab. This table can be a good starting point for identifying changes in performance:
- Scale up load: monitor your app's performance as you increase concurrency.
- Check for regressions: monitor performance as you introduce new changes to your app.