When testing complex applications with many parts, it's important to see the big picture. In mabl, you can add a failure reason to failed test runs to help your team understand why your tests are failing and the state of your overall development.
Adding a failure reason to a failed test run
mabl aggregates failure reasons on the home page and the coverage overview dashboard. The more consistently you and your team mark the reason for failed tests, the clearer a picture you'll get of your application's overall quality. Use cases for adding failure reasons include:
- Filtering failed runs: on the results page, filter recent runs by failure category and export these results via the Download CSV button above the filters.
- Saving time on review: when you add a failure reason to a failed run, you help your teammates spend less time trying to understand why things went wrong.
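As one illustration of the filtering workflow above, the results exported via the Download CSV button can be aggregated by failure category with a short script. This is a minimal sketch: the column names (`test_name`, `status`, `failure_reason`) are assumptions for illustration, so check them against the headers in your actual export.

```python
import csv
import io
from collections import Counter

# Sample rows standing in for a downloaded results CSV.
# The column names are assumptions, not mabl's exact export headers.
sample_csv = """test_name,status,failure_reason
Login flow,failed,Regression
Checkout,failed,Timing issue
Search,passed,
Signup,failed,Timing issue
"""

def count_failure_reasons(csv_text: str) -> Counter:
    """Tally failed runs by their assigned failure reason."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(
        row["failure_reason"]
        for row in reader
        if row["status"] == "failed" and row["failure_reason"]
    )

print(count_failure_reasons(sample_csv))
# e.g. Counter({'Timing issue': 2, 'Regression': 1})
```

A tally like this makes it easy to spot which category (timing, regression, environment, and so on) dominates your recent failures.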
This article describes the different types of failure reasons you can add to a test run.
View historic test data
The best way to access historic test data is with the BigQuery export integration, which lets you report on all of your test runs from one place. Once set up, the integration also captures all of your categorized failures going forward.
Regression
This failure reason means that mabl has caught a bug that caused your test to fail, such as a button disappearing after a recent release or a popup appearing where it previously didn't.
Environment issue
Environment issues are any failures caused by something local to your testing, development, or other environments, such as dev credentials that are no longer valid or an environment suddenly becoming private.
Network issue
Network issues are failures caused by mabl being unable to connect to your application.
Test implementation issue
Test implementation issues are failures related to how the test was originally trained. For example, a failed test with a test implementation issue could be caused by recording steps in the wrong order or accidentally deleting an important step.
Timing issue
Timing issues are failures related to the performance of your application, such as an application failing to load an element on the page in time for mabl to interact with it. We recommend using a wait until step in that situation to make your tests more robust.
Accessibility issue
Accessibility issues indicate that the test did not meet the criteria of an accessibility check.
Performance issue
Performance issues cover tests that failed because of slow performance in the testing environment, as well as performance tests that failed because they met the failure criteria defined in the test.
Other issue
This failure reason is for anything that might not fit into the categories above.
mabl issue
This failure reason is for any failures that you believe are related to how mabl is executing your tests. If you believe a test failed due to a mabl issue, we recommend also reaching out to the mabl support team.