Get a quick understanding of your test suite at a glance on the coverage overview dashboard: Coverage > Overview. This article explains how mabl calculates metrics on this dashboard.
- Latest pass rate
- Cumulative tests run
- Top tests with increased app load time
- Quality metrics
- Test status
Before you start
Before analyzing metrics on the coverage overview dashboard, update the filters to define the tests that you would like to evaluate, such as tests with a specific label or all the tests in the dev environment from the past two weeks.
Note that the “application” and “plan labels” filters only include tests that are associated with a plan.
Latest pass rate
The Latest pass rate tells you the most recent status of the active tests in your defined test suite. This metric can help you understand the health of a release.
For example, if you ran a suite of 100 tests for an upcoming release and the latest pass rate is 40%, then 40 out of 100 tests passed on their most recent run. The remaining 60 tests failed on their most recent run.
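The calculation above can be sketched in a few lines of Python. The data model here is illustrative only, not mabl's actual schema: each test's latest run status decides whether it counts toward the pass rate.

```python
# Sketch of the latest-pass-rate calculation (illustrative data model,
# not mabl's actual schema): take each active test's most recent run
# and compute the fraction that passed.

def latest_pass_rate(tests):
    """tests: list of dicts with a 'runs' list ordered oldest -> newest,
    where each run has a 'status' of 'passed' or 'failed'."""
    latest = [t["runs"][-1]["status"] for t in tests if t["runs"]]
    if not latest:
        return 0.0
    return 100 * latest.count("passed") / len(latest)

suite = [
    {"runs": [{"status": "failed"}, {"status": "passed"}]},  # latest: passed
    {"runs": [{"status": "passed"}, {"status": "failed"}]},  # latest: failed
    {"runs": [{"status": "passed"}]},                        # latest: passed
]
print(latest_pass_rate(suite))  # 2 of 3 latest runs passed
```

Note that only each test's most recent run matters; earlier passes or failures do not affect the metric.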
Cumulative tests run
The Cumulative tests run chart shows how many unique tests ran out of all the active tests that match the filters you set within the specific date range. This metric gives a quick idea of what percentage of your test suite you’re running for a release, as defined by your filter.
For example, if the cumulative tests run is 149/177, it means that 149 unique tests ran out of 177 total tests that match your filters.
Applying filters changes the cumulative tests run metric. For example, if a workspace has 100 active tests, and 20 of those tests have the test label “checkout”, the number of tests counted depends on the following filters:
- No filters set = 100 total tests counted
- Test label filter set to “checkout” = 20 total tests counted
- Just start/end time set = 100 total tests counted
- Just environment set = 100 total tests counted
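The filter behavior in the list above can be sketched as follows. The field names are assumptions for illustration, not mabl's schema; the point is that only the test label filter narrows the denominator in this example.

```python
# Sketch of how filters narrow the cumulative-tests-run denominator
# (illustrative; field names are assumptions, not mabl's schema).

def count_matching(tests, label=None):
    """Count active tests, optionally restricted to a test label."""
    return sum(1 for t in tests
               if label is None or label in t["labels"])

# A workspace with 100 active tests, 20 of them labeled "checkout".
workspace = (
    [{"labels": ["checkout"]} for _ in range(20)] +
    [{"labels": []} for _ in range(80)]
)

print(count_matching(workspace))                    # no filters: 100
print(count_matching(workspace, label="checkout"))  # label filter: 20
```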
Top tests with increased app load time
The Top tests with increased app load time chart identifies potential performance regressions for the tests that match your filters. This chart shows:
- The browser and API tests that had the greatest slowdown in performance in the underlying pages and endpoints accessed by those tests. Click on the test or percent change to view more details on that specific test’s performance.
- The overall increase or decrease in average app load time and average API response time, averaged across all tests run for the release, as defined by your filters.
App load time and API response time
mabl uses app load time and API response time to calculate test performance:
- Cumulative app load time is the sum of the speed index across every step of the test. Use cumulative app load time to measure the total time an end user would spend waiting on the app to load, excluding the time that mabl takes to execute the test.
- Cumulative API response time is the sum of API response time across every request within the test, excluding other mabl time, such as the time between requests.
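Both metrics are plain sums over the test, as sketched below. The step and request structures are illustrative assumptions, not mabl's schema; the key point is that mabl's own execution time is simply not included in either sum.

```python
# Sketch of the two cumulative timing metrics (illustrative step/request
# data, not mabl's schema): each is a plain sum over the test, excluding
# mabl's own execution overhead.

def cumulative_app_load_time(steps):
    """Sum the speed index (ms) across every step of a browser test."""
    return sum(s["speed_index_ms"] for s in steps)

def cumulative_api_response_time(requests):
    """Sum response time (ms) across every request in an API test;
    time between requests (mabl time) is not counted."""
    return sum(r["response_time_ms"] for r in requests)

browser_test = [{"speed_index_ms": 1200}, {"speed_index_ms": 800}]
api_test = [{"response_time_ms": 150}, {"response_time_ms": 95}]
print(cumulative_app_load_time(browser_test))   # 2000
print(cumulative_api_response_time(api_test))   # 245
```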
Only completed test runs that are part of a plan run are included in performance data. To appear in the data, tests must meet these criteria:
- Run in a plan: Data from ad-hoc runs are not included in performance metrics.
- Minimum number of runs: The test must have run at least three times in the first half of the date range and at least three times in the second half of the date range.
- Matching filters: A test must match the filters you set on the coverage overview page to be included in the performance card.
- Worsening performance trend: Only tests with an increase in app load time (browser tests) or API response time (API tests) between the first half and the second half of the date range are included in the performance card.
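Two of the criteria above, the minimum run count and the worsening trend, can be sketched as a simple check. This is an illustrative sketch of the rules as described, not mabl's implementation; the data model is assumed.

```python
# Sketch of the minimum-runs and worsening-trend criteria for the
# performance card (illustrative; follows the rules described above,
# but the data model is an assumption, not mabl's implementation).

def qualifies_for_performance_card(first_half_times, second_half_times):
    """first_half_times / second_half_times: load times (ms) from
    completed plan runs in each half of the date range."""
    # Minimum number of runs: at least three in each half.
    if len(first_half_times) < 3 or len(second_half_times) < 3:
        return False
    # Worsening trend: the average must increase between the halves.
    first_avg = sum(first_half_times) / len(first_half_times)
    second_avg = sum(second_half_times) / len(second_half_times)
    return second_avg > first_avg

print(qualifies_for_performance_card([900, 950, 1000], [1100, 1200, 1150]))  # slowdown: True
print(qualifies_for_performance_card([900, 950, 1000], [800, 820]))          # too few runs: False
```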
Quality metrics
The Quality metrics tab gives more detail on test activity, test run history, average app load time, and categorized failures for the filtered group of tests.
Test activity
The cards at the top show test activity for the filtered range of tests:
- Test runs: the total number of browser and API test runs during the filtered range. This number includes runs from the default “Visit home page” and “Link crawler” tests for the filtered range of tests.
- New tests: the number of new tests created during the filtered range.
- Updated tests: the number of updated tests during the filtered range.
Note
The coverage overview dashboard measures test runs differently from the usage page: Settings > Usage. The usage page does not count the default “Visit home page” and “Link crawler” tests towards the total number of test runs.
Test run history
The Test run history chart shows the run history across the date range that you set in the filters.
Average app load time
View average app load time for tests that match your filters. Average app load time is automatically collected from Google Lighthouse’s “Speed Index” metric for every mabl test that runs in a plan on Chrome and Edge.
Use this chart to identify larger performance trends, both improvements and regressions, across all of your testing.
Categorized failures
If you add failure reasons to failed test runs, this chart shows categorized failures for the tests that match your filters.
Test status
The Test status table summarizes the execution status for all tests, as defined by your filters. Click on column headers, such as Pass rate or Avg test run time, to surface underperforming tests. This table is especially useful for conducting regular reviews of your automated testing.
Reporting API
Use the batch results endpoint to get information on test runs programmatically.