Adding a failure reason to test runs

Every failed test in mabl can be given a failure reason to help you understand why your tests are failing and gauge the overall health of your development.

Every classified failure is reported right on the main dashboard and can be exported from the Results page for use in your team's own reporting tools. While on the Results page, you can also filter recent runs by failure category, or by the lack of one. Additionally, clicking any failure reason in the dashboard chart takes you to the Results page with that filter already applied.

Use cases and workflow

Why are my tests failing?

It's important to see the big picture when testing complex applications with many parts. The more consistently you and your team set reasons for failed tests, the clearer your picture of overall application quality becomes. Right on the dashboard, you'll see your top failure reasons for the last 30 days, filtered by whichever environment is selected directly above. If most of your tests fail because of test implementation issues, consider trying our branching feature to make sure your regression or smoke tests aren't affected by new features added to your app.

You can also export these results from the Results page via the Download CSV button in the filter bar. The report respects the application you have set as well as any other filters you've selected. These filters persist between sessions, so be sure to clear any you don't want applied on your next visit.

Has anyone else debugged this test?

Whether you're logging into the mabl app and landing on the dashboard or you're clicking on a direct link to a test failure, the failure reasons filter will help you spend less time trying to understand why things went wrong.

Viewing the output page

In the top right corner of the output page of a failed test, you'll see the Select failure reason dropdown. If you don't, it's probably because someone has already set a failure reason. Hover over the element to see who last changed it and when.

Selecting a failure reason

If no one has set a failure reason yet, set one here to keep track of it once you've debugged the failure. You and your teammates can then view the failure reason both here and on the Results page. To see what other tests failed for the same reason, click the failure reason in the "Top failure reasons" dashboard chart or use the "Failure reason" filter on the Results page. Note that the Results page shows the most recent runs, not every run that ever happened.

Finding uncategorized failures

To find failures that haven't yet been categorized by your team, go straight to the Results page. Find the Failure reason filter at the top and select Uncategorized. This filters recent test results to exclude those with existing failure reasons. As mentioned above, you can export this list of tests with the Download CSV button in the same filter bar. Use the workspace application filter in the header to limit the results to a single application.
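If you prefer to scan the exported CSV programmatically, a short script can surface uncategorized failures. This is a minimal sketch, not mabl's actual export format: the column names (`test_name`, `status`, `failure_reason`) are assumptions, so adjust them to match the headers in your download.

```python
import csv
import io

# Hypothetical sample of a Results CSV export; real column names may differ.
sample_csv = """test_name,status,failure_reason
Login test,failed,Regression
Checkout test,failed,
Search test,passed,
Profile test,failed,Environment issue
"""

def uncategorized_failures(csv_text):
    """Return names of failed runs that have no failure reason set."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        row["test_name"]
        for row in reader
        if row["status"] == "failed" and not row["failure_reason"].strip()
    ]

print(uncategorized_failures(sample_csv))  # -> ['Checkout test']
```

The same filtering is available in the UI via the Uncategorized option; a script like this is mainly useful if you feed the export into your own reporting tools.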

Where's my historic data?

The best way to access historic test data is with our BigQuery export integration, which lets you report on all your test runs from one place. With this integration, you can also view all of your categorized failures from the time you set it up.

How to select a failure reason

Setting a failure reason

First, navigate to any failed test. Next to Edit Steps in the top right, you'll see the Select failure reason dropdown. Click it to set a failure reason.

Viewing your failure report

If this is the first failure you've categorized, the dashboard chart should show 100% for whichever reason you chose. Each subsequently categorized test updates this chart, as long as the test ran within the last 30 days.
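The chart percentages behave like simple shares of the categorized failures in the 30-day window. As a rough illustration (the reason names below are just example data, not pulled from any real workspace):

```python
from collections import Counter

# Hypothetical failure reasons set on runs within the last 30 days.
reasons = ["Regression", "Regression", "Timing issue", "Environment issue"]

counts = Counter(reasons)
total = sum(counts.values())

# Each reason's share of all categorized failures, as a whole percentage.
shares = {reason: round(100 * n / total) for reason, n in counts.items()}
print(shares)  # -> {'Regression': 50, 'Timing issue': 25, 'Environment issue': 25}
```

With a single categorized failure, `total` is 1 and that reason's share is 100%, matching what you'd see on the dashboard after your first classification.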

Ad hoc runs

Ad hoc test runs, which are test runs without a plan, can be classified with a failure reason. However, they are excluded from the "Top failure reasons" dashboard chart as well as the "Test run history" chart.

Removing failure reason

To remove a failure reason, click the X on the dropdown to clear the existing failure category. This still counts as an edit, so hovering over the dropdown will show the last person to remove the reason.

Failure reasons

Regression

This failure reason means that mabl caught a bug that caused your test to fail, such as a button disappearing after a recent release or a popup appearing where it previously didn't.

Environment issue

This covers any failure caused by something local to your testing, development, or other environments, such as dev credentials that are no longer valid or an environment suddenly becoming private.

Network issue

Network issues cover any failures related to mabl being unable to connect to your app.

Test implementation issue

These failures relate to how the test was originally trained. For example, a branch wasn't merged for a new feature, steps were recorded in the wrong order, or an important step was accidentally deleted.

Timing issue

Timing issues are failures related to the performance of your application, such as when an element fails to load in time for mabl to interact with it. We recommend using the Wait Until step in that situation to make your tests more robust.

Other issue

This issue type is for anything that doesn't fit into the categories above. It's up to you what you'd like to classify here.

mabl issue

This issue type is for failures believed to be related to how mabl is executing your tests. If this affects you, we recommend also reaching out to the mabl support team in-app.
