Whether you're assessing the readiness of a new release or version of your app, or just checking in on a key set of tests, the Release Coverage page in mabl lets you quickly view the current status of your testing in one centralized place.
This guide provides an overview of how to filter and interpret metrics on the Release Coverage page.
The filters are the starting point for using the Release Coverage page. When you set and remove filters, the data in the Release Coverage page updates to count only the tests that match your filters.
By setting the filters to your definition of a release, such as tests with a specific feature label or all tests in the dev environment in the past two weeks, you can use the metrics on the Release Coverage page to monitor your key tests and user flows.
The “application” and “plan labels” filters only include tests that are associated with a plan.
Use the high-level metrics cards to get a quick understanding of your test suite, as defined by the filters.
The "Latest pass rate" tells you the most recent status of the active tests in your defined test suite. This metric can help you understand the health of a release.
For example, if you ran a suite of 100 tests for an upcoming release and the latest pass rate is 40%, then 40 out of 100 tests passed on their most recent run. The remaining 60 tests failed on their most recent run.
The "Cumulative tests run" chart shows how many unique tests ran out of all the active tests that match the filters you set within the specific date range. This metric gives a quick idea of what percentage of your test suite you’re running for a release, as defined by your filter. To find out which specific tests haven't run, use the Test Status table.
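To make these two metrics concrete, here is a minimal sketch of how a latest pass rate and cumulative run coverage could be computed from a set of run records. The field names and sample data are illustrative assumptions, not mabl's actual data model:

```python
from datetime import date

# Hypothetical run records for a filtered suite; the schema is
# illustrative only, not mabl's actual data model.
runs = [
    {"test": "login",    "date": date(2024, 5, 1), "status": "passed"},
    {"test": "login",    "date": date(2024, 5, 3), "status": "failed"},
    {"test": "checkout", "date": date(2024, 5, 2), "status": "passed"},
]
active_tests = {"login", "checkout", "search"}  # "search" never ran

# Latest pass rate: keep only the status of each test's most recent run.
latest = {}
for run in sorted(runs, key=lambda r: r["date"]):
    latest[run["test"]] = run["status"]
pass_rate = sum(1 for s in latest.values() if s == "passed") / len(latest)

# Cumulative tests run: unique tests that ran, out of all active tests.
coverage = len(latest) / len(active_tests)

print(f"Latest pass rate: {pass_rate:.0%}")     # 50%: checkout passed, login failed
print(f"Cumulative tests run: {coverage:.0%}")  # 67%: 2 of 3 active tests ran
```

Note that the pass rate only counts tests that have run at least once; the "search" test above lowers coverage but does not affect the pass rate.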
If you're on an Enterprise plan, you can use the performance card to identify potential performance regressions at the release level. The performance card shows:
- The browser and API tests that had the greatest slowdown in performance in the underlying pages and endpoints accessed by those tests. Click on the test or percent change to view more details on that specific test's performance.
- The overall increase or decrease in average app load time and average API response time, averaged across all tests run for the release, as defined by your filters.
Performance data is calculated from tests run as part of a plan. Data from ad hoc runs is not included in performance metrics.
To learn more about which test runs provide performance data, click here.
For more information on understanding performance, see our guide on monitoring app performance over time.
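The calculations behind the performance card can be sketched roughly as follows. The run-time figures and test names here are invented for illustration; mabl derives its numbers from the pages and endpoints accessed by each test:

```python
# Illustrative only: average run times (seconds) for the same tests
# across two releases; not mabl's actual data model.
previous = {"login": 4.0, "checkout": 10.0, "search": 6.0}
current  = {"login": 5.0, "checkout": 9.0,  "search": 9.0}

# Percent change per test, as the performance card reports it.
change = {
    name: (current[name] - previous[name]) / previous[name] * 100
    for name in previous
}

# Tests with the greatest slowdown surface first on the card.
slowest = sorted(change.items(), key=lambda kv: kv[1], reverse=True)
print(slowest[0])  # ('search', 50.0)

# Overall change, averaged across all tests run for the release.
overall = (sum(current.values()) - sum(previous.values())) / sum(previous.values()) * 100
print(f"{overall:+.0f}%")  # +15%
```

A positive percent change indicates a slowdown (a potential regression); a negative value indicates an improvement.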
The Quality Metrics tab gives more detail on test activity, test run history, average app load time, and categorized failures for the filtered group of tests.
The cards at the top show test activity for the filtered range of tests:
- Test runs: the total number of browser and API tests run during the filtered range. This number includes test runs from the default "Visit home page" and "Link crawler" tests for the filtered range of tests.
- New tests: the number of new tests created during the filtered range.
- Updated tests: the number of updated tests during the filtered range.
The release coverage dashboard measures test runs differently from the Usage page (Settings > Usage):
- The Usage page tracks API usage as it is billed, measuring the total number of API steps rather than the total number of API test runs.
- The Usage page does not count the default "Visit home page" and "Link crawler" tests towards the total number of test runs.
The "Test run history" chart shows the run history across the date range that you set in the filters.
If you're on an Enterprise plan, you can also view the average app load time for your application, as filtered on the Release Coverage page. This data is collected automatically from Google Lighthouse's "Speed Index" metric for every mabl test that runs on Chrome or Edge in the cloud, as long as it is not run ad hoc.
This chart helps you identify larger performance trends, both improvements and regressions, across all of your testing.
If you add failure reasons to failed test runs, you can use this chart to categorize test failures.
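The categorization behind this chart amounts to grouping failed runs by their annotated reason. As a rough sketch, with hypothetical run records:

```python
from collections import Counter

# Hypothetical failed runs annotated with failure reasons;
# the reason labels are examples, not mabl defaults.
failed_runs = [
    {"test": "login",    "reason": "Broken test"},
    {"test": "checkout", "reason": "App bug"},
    {"test": "search",   "reason": "App bug"},
]

# Tally failures by reason, most frequent first.
by_reason = Counter(run["reason"] for run in failed_runs)
print(by_reason.most_common())  # [('App bug', 2), ('Broken test', 1)]
```

Runs without an annotated reason would fall into an uncategorized bucket, which is why adding failure reasons consistently makes this chart more useful.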
The Test Status table displays results for specific tests with a handy set of filters.
The Avg. performance and Performance change column headers only appear in workspaces on an Enterprise plan.
Using the filters in the Test Status table, you can:
- Find out which tests haven't run
- Uncover broken tests
- Identify long-running tests
- Identify performance changes for individual tests
Click on the Latest status column header to find out which tests have the "Not run" indicator. You can either visit these tests directly to kick off an ad hoc run, or add them to an existing plan to close that gap in testing coverage.
Click on the Pass rate column header to identify tests that have run but have a pass rate of 0%. Additionally, click on the # of runs column header to find tests with 0 runs. Click through to these tests to fix them, or turn them off to exclude them from the page entirely.
If you want to optimize your test suite, click on the Avg test run time column header to identify long-running tests. See our guide on optimizing test performance for suggestions on how to reduce the overall run time of your tests.
If you want to view performance change data beyond the top three tests that appear in the performance card, click on the Performance change column header. This column header only appears in Enterprise workspaces. When you click on the percent change, you can view the performance page for that particular test.
mabl Reporting API
For more information on retrieving test results programmatically, check out our resources on the mabl Reporting API.
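As a rough illustration of programmatic access, the sketch below builds an authenticated request with Python's standard library. The endpoint path, query parameter, and Bearer auth scheme shown here are assumptions for illustration only; consult the mabl Reporting API documentation for the real routes, parameters, and authentication:

```python
import json
import os
import urllib.request

# Read the API key from the environment; never hard-code credentials.
API_KEY = os.environ.get("MABL_API_KEY", "")

# Hypothetical endpoint and parameter -- check the mabl API docs
# for the actual routes and fields.
url = "https://api.mabl.com/results?workspace_id=example-workspace"

request = urllib.request.Request(
    url,
    headers={
        "Authorization": f"Bearer {API_KEY}",  # auth scheme is an assumption
        "Accept": "application/json",
    },
)

# Only send the request when a real key is configured.
if API_KEY:
    with urllib.request.urlopen(request) as response:
        results = json.load(response)
        print(results)
```

From there, the returned results could feed custom dashboards or release gates alongside the metrics shown on the Release Coverage page.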
Updated 6 months ago