Whether you want to gauge the readiness of a new release or a new version of your app, or you're just checking in on a key set of tests, the release coverage dashboard in mabl lets you quickly view the current status of your testing in one centralized place.
Visit the release coverage dashboard now to explore.
The release coverage dashboard
This guide provides an overview of how to filter and interpret metrics on the release coverage dashboard.
Filtering data
The filters are the starting point for using the release coverage dashboard. As you set and remove filters, the data in the dashboard updates to count only the tests that match them.
Release coverage filters
By setting the filters to your definition of a release, such as tests with a specific feature label or all tests in the dev environment in the past two weeks, you can use the metrics on the release coverage dashboard to monitor your key tests and user flows.
The “application” and “plan labels” filters only include tests that are associated with a plan.
Metrics at a glance
Use the high-level metrics cards to get a quick understanding of your test suite, as defined by the filters.
Latest pass rate
The Latest pass rate tells you the most recent status of the active tests in your defined test suite. This metric can help you understand the health of a release.
For example, if you ran a suite of 100 tests for an upcoming release and the latest pass rate is 40%, then 40 out of 100 tests passed on their most recent run. The remaining 60 tests failed on their most recent run.
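If you export raw results, you can reproduce this calculation yourself. The following is a minimal sketch, assuming a simple run-record shape; the field names here are placeholders, not mabl's actual schema:

```python
# Minimal sketch: compute a "latest pass rate" from run records.
# The record shape ("test_id", "status", "ended_at") is an assumption.
from datetime import datetime

runs = [
    {"test_id": "checkout", "status": "passed", "ended_at": datetime(2024, 5, 1)},
    {"test_id": "checkout", "status": "failed", "ended_at": datetime(2024, 5, 2)},
    {"test_id": "login",    "status": "passed", "ended_at": datetime(2024, 5, 2)},
]

# Keep only the most recent status per test, then count passes.
latest = {}
for run in sorted(runs, key=lambda r: r["ended_at"]):
    latest[run["test_id"]] = run["status"]

pass_rate = sum(s == "passed" for s in latest.values()) / len(latest)
print(f"Latest pass rate: {pass_rate:.0%}")  # checkout failed last -> 50%
```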
Cumulative tests run
The Cumulative tests run chart shows how many unique tests ran, out of all the active tests that match the filters you set, within the specified date range. This metric gives you a quick idea of what percentage of your test suite you're running for a release, as defined by your filters. To find out which specific tests haven't run, use the Test status table.
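Conceptually, the metric reduces to a set comparison: how many active tests ran at least once in the range. A small sketch, with placeholder test names:

```python
# Sketch: "cumulative tests run" as the share of active tests with >= 1 run.
active_tests = {"login", "checkout", "search", "profile"}
tests_with_runs = {"login", "checkout"}  # unique tests that ran in the range

coverage = len(tests_with_runs & active_tests) / len(active_tests)
print(f"Cumulative tests run: {coverage:.0%} of the suite")  # -> 50%
```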
Performance
Use the performance card to identify potential performance regressions at the release level. The performance card shows:
- The browser and API tests that had the greatest slowdown in performance in the underlying pages and endpoints accessed by those tests. Click on the test or percent change to view more details on that specific test's performance.
- The overall increase or decrease in average app load time and average API response time, averaged across all tests run for the release, as defined by your filters. The sketch after this list illustrates the underlying percent-change calculation.
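A rough sketch of that percent-change idea, using invented numbers and placeholder test names:

```python
# Sketch: compare each test's average load time for this release against a
# prior baseline and report the greatest slowdown. All values are made up.
baseline_ms = {"checkout": 1200, "login": 800}   # prior average load times
current_ms  = {"checkout": 1500, "login": 780}   # averages for this release

changes = {
    test: (current_ms[test] - baseline_ms[test]) / baseline_ms[test]
    for test in baseline_ms
}
worst = max(changes, key=changes.get)
print(f"Greatest slowdown: {worst} ({changes[worst]:+.0%})")  # checkout (+25%)
```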
Performance data is calculated from tests run as part of a plan. Data from ad hoc runs is not included in performance metrics.
To learn more about which test runs provide performance data, click here.
For more information on understanding performance, see our guide on monitoring app performance over time.
Quality metrics
The Quality metrics tab gives more detail on test activity, test run history, average app load time, and categorized failures for the filtered group of tests.
Test activity
The cards at the top show test activity for the filtered range of tests:
- Test runs: the total number of browser and API tests run during the filtered range. This number includes test runs from the default "Visit home page" and "Link crawler" tests for the filtered range of tests.
- New tests: the number of new tests created during the filtered range.
- Updated tests: the number of updated tests during the filtered range.
Test activity cards
The release coverage dashboard measures test runs differently from the usage page (Settings > Usage): the usage page does not count the default "Visit home page" and "Link crawler" tests towards the total number of test runs.
Test run history
The Test run history chart shows the run history across the date range that you set in the filters.
Average app load time
You can also view average app load time for your application, as filtered on the release coverage dashboard. This data is collected automatically from Google Lighthouse's "Speed Index" metric for every mabl test that runs on Chrome or Edge in the cloud, as long as it isn't run ad hoc.
This chart helps you identify larger performance trends, both improvements and regressions, across all of your testing.
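If you pull run-level data out of mabl, you can approximate this trend by averaging Speed Index per day. A minimal sketch, assuming a simple record shape with placeholder field names:

```python
# Sketch: average Lighthouse Speed Index per day to spot trends.
from collections import defaultdict
from statistics import mean

runs = [
    {"day": "2024-05-01", "speed_index_ms": 2100},
    {"day": "2024-05-01", "speed_index_ms": 2300},
    {"day": "2024-05-02", "speed_index_ms": 2900},
]

by_day = defaultdict(list)
for run in runs:
    by_day[run["day"]].append(run["speed_index_ms"])

for day in sorted(by_day):
    print(day, f"avg Speed Index: {mean(by_day[day]):.0f} ms")
```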
Categorized failures
If you add failure reasons to failed test runs, you can use this chart to categorize test failures.
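Conceptually, the chart is a tally of failed runs grouped by their assigned reason. A quick sketch, with example reasons (failure reasons are whatever labels your team defines, not built-in categories):

```python
# Sketch: tally failed runs by their assigned failure reason.
from collections import Counter

failed_runs = [
    {"test": "checkout", "failure_reason": "Broken test"},
    {"test": "login",    "failure_reason": "App bug"},
    {"test": "search",   "failure_reason": "App bug"},
]

for reason, count in Counter(r["failure_reason"] for r in failed_runs).most_common():
    print(f"{reason}: {count}")
```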
Test status
The Test status table displays results for specific tests with a handy set of filters.
The test status table
Using the filters in the Test status table, you can:
- Find out which tests haven't run
- Uncover broken or unused tests
- Identify long-running tests
- Identify performance change for individual tests
Find out which tests haven't run
Click on the Latest status column header to find tests with the Not run indicator. You can either visit these tests directly to kick off an ad hoc run, or add them to an existing plan to close that gap in test coverage.
Uncover broken or unused tests
Click on the Pass rate column header to identify tests that have run but have a pass rate of 0%. Additionally, click on the # of runs column header to find tests with 0 runs. Click through to these tests to fix them, or turn them off to exclude them from the page entirely.
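The same checks are easy to script against exported summary rows. A sketch, assuming hypothetical field names rather than mabl's schema:

```python
# Sketch: flag broken (0% pass rate) and unused (0 runs) tests from
# summarized rows like those in the Test status table.
rows = [
    {"test": "checkout", "runs": 12, "pass_rate": 0.0},
    {"test": "login",    "runs": 0,  "pass_rate": None},
    {"test": "search",   "runs": 8,  "pass_rate": 0.75},
]

broken = [r["test"] for r in rows if r["runs"] > 0 and r["pass_rate"] == 0.0]
unused = [r["test"] for r in rows if r["runs"] == 0]
print("Broken:", broken)  # ['checkout']
print("Unused:", unused)  # ['login']
```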
Identify long-running tests
If you want to optimize your test suite, click on the Avg test run time column header to identify long-running tests. See our guide on optimizing test performance for suggestions on how to reduce the overall run time of your tests.
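Outside the dashboard, the same ranking is a simple sort over average durations. A tiny sketch with invented numbers:

```python
# Sketch: rank tests by average run time to find optimization candidates.
avg_run_time_s = {"checkout": 412, "login": 38, "search": 95}

for test, seconds in sorted(avg_run_time_s.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{test}: {seconds} s")
```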
Identify performance change for individual tests
If you want to view performance change data beyond the top three tests that appear in the performance card, click on the Performance change column header. Click on the percent change to open the performance page for that particular test.
For more information on retrieving test results programmatically, check out our resources on the mabl Reporting API.
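As a starting point, a results fetch over HTTP might look like the sketch below. The endpoint path, query parameter, auth scheme, and response shape shown here are placeholders, not the Reporting API's actual contract; consult the mabl Reporting API documentation for the real routes and authentication:

```python
# Hedged sketch of pulling test results over HTTP. Every route, parameter,
# and field below is a placeholder assumption, not mabl's documented API.
import requests

API_KEY = "your-api-key"            # assumption: key-based auth
BASE_URL = "https://api.mabl.com"   # assumption: example base URL

resp = requests.get(
    f"{BASE_URL}/results",                      # hypothetical endpoint
    params={"environment_id": "your-env-id"},   # hypothetical filter
    auth=("key", API_KEY),                      # assumption: basic auth
    timeout=30,
)
resp.raise_for_status()
for result in resp.json().get("results", []):   # assumed response shape
    print(result)
```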