Release coverage FAQs
Commonly asked questions about the Release Coverage page
Cumulative tests run
How is the denominator (total # of tests) calculated?
The denominator is all active tests (those not set to "OFF") in your workspace by default. Adding filters narrows the denominator to count only the tests that match those filters, so you can measure coverage for the specific features of your app that matter most. The examples below, and the sketch that follows them, illustrate this behavior.
For the following examples, assume that the workspace has 100 active tests. Twenty of those tests have the test label "checkout."
- No filters set = 100 total tests counted
- Test label filter set to "checkout" = 20 total tests counted
- Just start/end time set = 100 total tests counted
- Just environment set = 100 total tests counted
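To make the arithmetic concrete, here is a minimal Python sketch of the denominator logic under these rules; the `Test` model and its field names are hypothetical illustrations, not mabl's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Test:
    name: str
    active: bool = True              # False would correspond to a test set to "OFF"
    labels: set[str] = field(default_factory=set)

def count_denominator(tests: list[Test], label_filter: str | None = None) -> int:
    """Count active tests, optionally narrowed by a test-label filter.

    Start/end time and environment filters deliberately do not appear
    here: they affect which runs count, not the denominator.
    """
    active = [t for t in tests if t.active]
    if label_filter is not None:
        active = [t for t in active if label_filter in t.labels]
    return len(active)

# 100 active tests, 20 of which carry the "checkout" label
tests = [Test(f"test-{i}", labels={"checkout"} if i < 20 else set())
         for i in range(100)]
assert count_denominator(tests) == 100                         # no filters
assert count_denominator(tests, label_filter="checkout") == 20
```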
How is the numerator (total # of tests run) calculated?
This metric is the cumulative number of unique tests run. If a test is counted as part of the "total # of tests" (the denominator), any cloud run of that test counts it as run, as long as the run matches the filters you have set.
There are a few exceptions, based on the filters you select (see the sketch after this list):
- The environment filter only counts runs against the environment you specify
- The date range filter only counts runs within that period (max of 60 days)
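Continuing with a hypothetical data model, a sketch of the numerator: it counts unique tests rather than total runs, and the environment and date-range filters apply to the runs themselves:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Run:
    test_name: str
    environment: str
    started_at: datetime

def count_numerator(denominator_tests: set[str], runs: list[Run],
                    environment: str | None = None,
                    start: datetime | None = None,
                    end: datetime | None = None) -> int:
    """Count unique tests with at least one cloud run matching the filters."""
    matched: set[str] = set()
    for run in runs:
        if run.test_name not in denominator_tests:
            continue  # only tests in the denominator can contribute
        if environment is not None and run.environment != environment:
            continue  # environment filter: runs in other environments are ignored
        if start is not None and run.started_at < start:
            continue  # date-range filter (max of 60 days): run is too old
        if end is not None and run.started_at > end:
            continue
        matched.add(run.test_name)  # a test counts once, however many runs match
    return len(matched)
```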
Performance data
Which tests are included in the performance card?
Only completed test runs that are part of a plan run are included in performance data. To appear in the performance card, a test must also meet all of the following criteria, sketched in code after the list:
- Minimum number of runs: The test must have run at least three times in the first half of the date range and at least three times in the second half of the date range.
- Matching filters: A test needs to match the filters you set on the Release Coverage page in order to be included in the performance card.
- Worsening performance trend: Only tests with an increase in app load time (browser tests) or API response time (API tests) between the first half and the second half of the date range are included in the performance card.
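A sketch of these inclusion rules, assuming the test already matches the page filters; how mabl aggregates the timings within each half is not specified here, so the per-half mean below is an assumption:

```python
def include_in_performance_card(first_half: list[float],
                                second_half: list[float]) -> bool:
    """Apply the performance-card criteria to one test's per-run timings.

    Each list holds one cumulative timing per completed plan run
    (app load time for browser tests, API response time for API tests),
    split by which half of the date range the run fell in.
    """
    # Minimum number of runs: at least three in each half of the range.
    if len(first_half) < 3 or len(second_half) < 3:
        return False
    # Worsening trend: timings in the second half increased over the first.
    first_avg = sum(first_half) / len(first_half)
    second_avg = sum(second_half) / len(second_half)
    return second_avg > first_avg
```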
How is the performance metric measured?
The percent change in the performance card and the Test Status table represents the change between runs in the second half of the selected time period and runs in the first half.
- In browser tests, the performance metric uses the cumulative app load time for browser test runs.
- In API tests, the performance metric uses the cumulative API response time for API test runs.
For example, if the date range is the last 60 days, the performance card shows the percent change in cumulative app load time between runs in the second half (days 31-60) and runs in the first half (days 1-30).
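The underlying arithmetic is an ordinary percent change; per-half averages are again an assumption in this sketch:

```python
def percent_change(first_half_avg: float, second_half_avg: float) -> float:
    """Percent change from the first half of the date range to the second."""
    return (second_half_avg - first_half_avg) / first_half_avg * 100

# A 60-day range where cumulative app load time averaged 12.0s over
# days 1-30 and 15.0s over days 31-60 shows as a 25% increase:
assert percent_change(12.0, 15.0) == 25.0
```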
What is cumulative app load time?
Cumulative app load time is the sum of the speed index across every step of the test. Use it to measure the total time an end user would spend waiting for the app to load, excluding the time that mabl takes to execute the test.
What is cumulative API response time?
Cumulative API response time is the sum of the API response time across every request within the test, excluding other mabl time, such as the time between requests.
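Both metrics are plain sums over the parts of a run. A sketch with hypothetical per-step and per-request values in milliseconds:

```python
def cumulative_app_load_time(step_speed_indexes_ms: list[int]) -> int:
    """Sum of the speed index (ms) across every step of a browser test run;
    the time mabl spends executing the test is not part of these values."""
    return sum(step_speed_indexes_ms)

def cumulative_api_response_time(response_times_ms: list[int]) -> int:
    """Sum of response times (ms) across every request in an API test run;
    time between requests (mabl overhead) is excluded by construction."""
    return sum(response_times_ms)

# A three-step browser test and a two-request API test:
print(cumulative_app_load_time([1800, 2400, 900]))   # 5100 ms
print(cumulative_api_response_time([350, 420]))      # 770 ms
```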
Why don't I see performance data on the Release Coverage page?
Only trial users and customers on an Enterprise plan have access to performance data. Contact your CSM to upgrade.