Troubleshooting failed plan runs can be a time-consuming process, and clicking into individual test runs becomes especially tedious when testing at scale. To ease this burden, we’re happy to introduce Auto Test Failure Analysis (Auto TFA) for plans!
Use Auto TFA to get a quick overview of why a plan failed and spend less time troubleshooting overall.
Auto TFA is available as part of the Advanced AI add-on. Reach out to your customer success manager to learn more.
How it works
When you open the output page for a failed plan run, mabl sends the run data to a large language model. The model returns an analysis of which test(s) failed and the likely reason why.
Expand the analysis to view more details about why the plan failed, including follow-up actions:
- Regenerate the analysis - click Regenerate to have the model reanalyze the failure and generate a fresh analysis.
- Give feedback - use the thumbs up and thumbs down icons to give us feedback on the quality of the failure analysis.
To generate a failure analysis, mabl sends the following plan run output data to the model:
- Plan name and description, including the last updated time and the last user to update
- Plan run information, including status, trigger method, and run time
- Contents of the plan, including an ordered list of stages and tests
- AI-generated analyses and failure reasons for failed tests in the plan run
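To make the list above concrete, the data sent to the model might be assembled into a structure along these lines. This is a hypothetical sketch only: the field names and layout are illustrative assumptions, not mabl's actual API or schema.

```python
# Hypothetical sketch of the plan run data sent to the model.
# All field names and values are illustrative assumptions,
# not mabl's actual schema.
payload = {
    # Plan name and description, with last update info
    "plan": {
        "name": "Checkout regression plan",
        "description": "Covers the end-to-end checkout flow",
        "last_updated": "2025-06-20T14:05:00Z",
        "last_updated_by": "qa@example.com",
    },
    # Plan run information: status, trigger method, and run time
    "run": {
        "status": "failed",
        "trigger": "ci",  # e.g. CI integration, schedule, or manual
        "run_time_seconds": 412,
    },
    # Contents of the plan: an ordered list of stages and tests
    "stages": [
        {"name": "Stage 1", "tests": ["Login test", "Add to cart test"]},
        {"name": "Stage 2", "tests": ["Payment test"]},
    ],
    # AI-generated analyses and failure reasons for failed tests
    "failed_test_analyses": [
        {
            "test": "Payment test",
            "failure_reason": "'Place order' button was not found on the page",
        },
    ],
}

# The four top-level groups mirror the four bullets above.
print(sorted(payload.keys()))
# → ['failed_test_analyses', 'plan', 'run', 'stages']
```

The model combines these pieces into a single plan-level summary, which is why per-test analyses (the last group) feed directly into the plan run analysis.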
Test failure analyses were not automatically generated for tests that ran before June 16, 2025. When you view a plan run from before this date, mabl analyzes up to 10 failed tests from that run.
To learn more about Auto TFA, check out the docs.