Reviewing performance tests

The output for a performance test includes a performance test summary and details on specific metrics.

Performance test output

Performance test summary

The Performance Test Summary chart shows the overall pass/fail rate and performance test results over the duration of the test. The chart shows up to four metrics. Click on the "View more metrics" button to select the metrics that you want the chart to display.

📘

Performance test status

Performance tests fail if the threshold for at least one failure criterion is met.

If a performance test has no failure criteria, it will always pass. For example, a performance test with no failure criteria and a 100% functional test failure rate will still be marked as passing.
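
As a rough mental model of this rule (not mabl's actual implementation), the pass/fail logic can be sketched in TypeScript. The types, field names, and comparison direction below are illustrative assumptions:

```typescript
// Hypothetical shape for a failure criterion; mabl's internal model may differ.
interface FailureCriterion {
  metric: string;    // e.g. "error rate" or "p95 response time"
  threshold: number; // value at which the criterion counts as met
  observed: number;  // value measured during the performance test
}

// A performance test fails if the threshold for at least one criterion is met;
// with no failure criteria defined, it always passes.
function performanceTestPasses(criteria: FailureCriterion[]): boolean {
  return !criteria.some((c) => c.observed >= c.threshold);
}

console.log(performanceTestPasses([])); // true: no criteria, so the run passes
```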

You can also filter the chart to show data from a specific functional test or browser test step using the dropdowns above the chart.

Customizing metrics to view in the performance test summary

Performance test run details

The table below the Performance Test Summary gives more granular data on specific metrics:

  • Failure criteria
  • Browser test metrics
  • API test metrics
  • Test run history

📘

Not sure what different performance metrics represent? Check out the article on measuring system performance for an overview.

Failure criteria

The Failure criteria tab shows results for failure criteria across all tests.

Browser test metrics

The Browser test metrics tab aggregates performance metrics across all individual browser test runs and shows the 75th percentile results for each metric. Click on the "Columns" button to select which metrics you want to include in the table.
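
To make the aggregation concrete, here is a minimal sketch of a nearest-rank 75th percentile over per-run samples. The function, metric, and sample values are assumptions for illustration, not mabl's exact computation:

```typescript
// Nearest-rank percentile: sort ascending and take the value at ceil(p% * n).
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// e.g. LCP in milliseconds for one step across five browser test runs
const lcpSamples = [1800, 2100, 1950, 3200, 2050];
console.log(percentile(lcpSamples, 75)); // 2100
```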

Sort by these columns to surface steps with performance issues. Here are some recommendations for reviewing browser performance:

  • Click on the LCP column to surface steps with the highest Largest Contentful Paint (LCP) times. LCP is often considered the best approximation of a page's perceived performance.
  • Click on the Step duration column to surface slow steps. For single page applications (SPAs), Chrome cannot collect Core Web Vitals, and step duration is a good alternative for measuring performance.

If you find a slow step, expand it to view network activity for each test run, aggregated by step.

📘

API steps

API steps in browser tests collect data on error rate and response time. These are the same metrics that mabl collects in the API test metrics tab.

API test metrics

The API test metrics tab aggregates error rates and API response times across all individual API test runs.

Here are some suggestions for reviewing API performance:

  • Click the Error rate column to find endpoints with the highest error rates. To review a breakdown of the most common response errors, click on the request. (A sketch of how these rates can be derived follows this list.)
  • Click on the response time percentiles to find slow endpoints. For example, to identify the slowest endpoints at the 95th percentile, click on the 95th column.
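
As a rough illustration of how per-endpoint error rates could be derived and ranked, here is a minimal TypeScript sketch; the types, endpoint names, and counts are hypothetical, not mabl data:

```typescript
// Hypothetical shape for aggregated per-endpoint results.
interface EndpointResult {
  endpoint: string;
  requests: number;
  errors: number; // e.g. responses counted as failures
}

// Error rate is errors / requests, expressed as a percentage.
function errorRate(r: EndpointResult): number {
  return r.requests === 0 ? 0 : (r.errors / r.requests) * 100;
}

const results: EndpointResult[] = [
  { endpoint: "POST /login", requests: 500, errors: 25 },
  { endpoint: "GET /items", requests: 1200, errors: 6 },
];

// Sort descending to surface the highest error rates first, mirroring what
// clicking the Error rate column does in the table.
const ranked = [...results].sort((a, b) => errorRate(b) - errorRate(a));
ranked.forEach((r) => console.log(r.endpoint, errorRate(r).toFixed(1) + "%"));
// POST /login 5.0%
// GET /items 0.5%
```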

Test run history

View results for the current run alongside results for other runs in the Test run history tab. This table can be a good starting point for identifying changes in performance:

  • Scale up load: monitor your app's performance as you increase concurrency.
  • Check for regressions: monitor performance as you introduce new changes to your app (see the sketch below).
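
As one way to reason about a regression between runs, here is a small sketch that flags the current run when a metric's 95th percentile grows beyond a chosen tolerance over a baseline run. The 10% tolerance and the values are assumptions for illustration, not a mabl default:

```typescript
// Flag a regression when the current p95 exceeds the baseline p95 by more
// than the given tolerance (10% here, chosen arbitrarily for illustration).
function hasRegression(
  baselineP95: number,
  currentP95: number,
  tolerance = 0.1
): boolean {
  return currentP95 > baselineP95 * (1 + tolerance);
}

// e.g. p95 response time in milliseconds for one endpoint across two runs
console.log(hasRegression(420, 610)); // true: 610 > 462, worth investigating
console.log(hasRegression(420, 440)); // false: within 10% of the baseline
```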