Release coverage

Whether you're looking to understand the readiness of a new release or new version of your app, or you're just curious to check in on a key set of tests, the release coverage page in mabl allows you to quickly view the current status of your testing in one centralized place. Visit the coverage page now to explore.

The main release coverage dashboard with information on high-level quality metrics.

Use cases

Identifying risk

The release coverage page helps you understand two key aspects of the risks related to your testing.

First, understanding whether your application is thoroughly tested, whether that scope is defined by a version, release, branch, or time period (such as a two-week sprint). The main sections at the top of the page help you narrow down what you care about most, such as tests with a specific feature label or all tests on your dev environment during a given two-week period. The charts at the top show the most recent status of the tests you've run, as well as whether you've run all of the tests that are available.

Second, understanding the additional risk associated with your release, even when all of your tests are passing. This includes information such as the last status and result of each test run in the current release.

Monitoring your key tests and critical user flows

Use the test labels feature to limit the scope of tests that are included within the dashboard view. Quickly switch between test labels to review the level of testing against different features or product areas.

Monitoring performance across your app

Enterprise users have access to the average app load time for your application as measured during your release. This data is collected automatically from Google Lighthouse's "Speed Index" metric for every mabl test running on Chrome in the cloud, so long as it's not run ad hoc. It's then filtered to the tests matching the labels above, such as "checkout experience", and averaged by day. This helps you identify larger performance trends, both improvements and regressions, across all of your testing without the need for a separate tool.
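
To make the daily averaging concrete, here is a minimal sketch of how an average Speed Index per day could be computed from a set of cloud test runs. The run records and field names below are hypothetical illustrations, not mabl's actual data model or API.

```python
# Illustrative sketch only -- the run records and field names here are
# hypothetical assumptions, not mabl's actual data model or API.
from collections import defaultdict
from statistics import mean

def average_speed_index_by_day(runs, label):
    """Group cloud test runs carrying `label` by day and average their
    Lighthouse Speed Index values (milliseconds)."""
    by_day = defaultdict(list)
    for run in runs:
        # Only non ad hoc Chrome runs in the cloud report Speed Index.
        if run["ad_hoc"] or run["browser"] != "chrome":
            continue
        if label not in run["labels"]:
            continue
        by_day[run["date"]].append(run["speed_index_ms"])
    return {day: mean(values) for day, values in by_day.items()}

runs = [
    {"date": "2024-05-01", "browser": "chrome", "ad_hoc": False,
     "labels": ["checkout experience"], "speed_index_ms": 2400},
    {"date": "2024-05-01", "browser": "chrome", "ad_hoc": False,
     "labels": ["checkout experience"], "speed_index_ms": 2600},
    {"date": "2024-05-02", "browser": "chrome", "ad_hoc": True,  # excluded: ad hoc
     "labels": ["checkout experience"], "speed_index_ms": 3100},
]
print(average_speed_index_by_day(runs, "checkout experience"))
# {'2024-05-01': 2500}
```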

Identifying flaky tests

Visit the "Test status" page to view the pass rate of your tests during this period, in addition to info on their latest status. Sort by tests with the lowest pass rate to quickly identify broken tests or even those that are flaky.

The test status table is where one can view the total pass rate as well as the latest status and result for each test.

Finding which tests haven't been run

Visit the "Test status" page and sort by status to view tests marked as "Not Run". You can either visit these tests directly to kick off an ad hoc run, or view them to add them to an existing plan to close up that gap in testing coverage.

Uncovering broken tests

Visit the "Test status" page and pass rate to view tests that have been run but never passed, marked with a pass rate of 0%. You can click the link to access these tests directly to fix them or simply to turn them off and exclude them from the page entirely.

🚧 Filtering by application

Please note that the workspace-level app filter does not currently filter this dashboard. We recommend using test labels to identify key groups of tests, such as those testing a specific feature. Please reach out via our in-app chat with any questions.

FAQs

How is the denominator (total # of tests) calculated?

By default, this is all active tests (those not set to "OFF") in your workspace. Adding a test label to your filters changes the denominator to count only the tests that match that label, so you can look at coverage for the specific feature of your app you care most about. No other filters currently alter the denominator.

Examples

For the examples below, assume the workspace has 100 tests, 20 of which have the test label checkout.

  • No filters set = 100 total tests counted
  • Test label filter set to checkout = 20 total tests counted
  • Just start/end time set = 100 total tests counted
  • Just environment set = 100 total tests counted
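
As a rough illustration of this logic, the following sketch counts a denominator from a set of tests. The test records and field names are hypothetical, not mabl's actual data model; the point is that only a label filter narrows the count.

```python
# Illustrative sketch only -- test records and field names are hypothetical
# assumptions, not mabl's actual data model.
def coverage_denominator(tests, label_filter=None):
    """Count active tests; a label filter narrows the set, other filters do not."""
    active = [t for t in tests if t["status"] != "OFF"]
    if label_filter:
        active = [t for t in active if label_filter in t["labels"]]
    return len(active)

tests = (
    [{"status": "ON", "labels": ["checkout"]} for _ in range(20)]
    + [{"status": "ON", "labels": []} for _ in range(80)]
)
print(coverage_denominator(tests))              # 100 (no filters)
print(coverage_denominator(tests, "checkout"))  # 20
```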

How is the numerator (total # of tests run) calculated?

This metric is the cumulative number of unique tests run. As long as a test is counted in the "total # of tests" (the denominator), any cloud run of that test that matches the filters you have set counts it toward the total number of tests run.

There are a few exceptions, based on the filters you select:

  1. The environment filter will only count runs against the environment you specify
  2. The start and end time filters will only count runs within that period (max of 60 days)
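
A similar sketch for the numerator, again with hypothetical run records and field names, shows how a test is counted once no matter how many matching runs it has, and how the environment and time-window filters exclude runs.

```python
# Illustrative sketch only -- run records and field names are hypothetical
# assumptions, not mabl's actual data model.
from datetime import date

def coverage_numerator(runs, denominator_test_ids,
                       environment=None, start=None, end=None):
    """Count unique tests (among those in the denominator) with at least one
    cloud run matching the environment and time-window filters."""
    ran = set()
    for run in runs:
        if run["test_id"] not in denominator_test_ids:
            continue
        if environment and run["environment"] != environment:
            continue
        if start and run["date"] < start:
            continue
        if end and run["date"] > end:
            continue
        ran.add(run["test_id"])
    return len(ran)

runs = [
    {"test_id": "t1", "environment": "dev", "date": date(2024, 5, 1)},
    {"test_id": "t1", "environment": "dev", "date": date(2024, 5, 2)},  # same test, still counted once
    {"test_id": "t2", "environment": "prod", "date": date(2024, 5, 3)},
]
print(coverage_numerator(runs, {"t1", "t2"}, environment="dev"))  # 1
```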

Can I filter by part of a test name, plan name, plan label, or application?

Not at this time. Please reach out to the mabl team if this is critical to your use case.

