We're happy to share that results analysis is now generally available for browser, mobile, and API tests!
Troubleshooting failed tests takes time. With results analysis, you can quickly understand failures, identify patterns and trends, and reduce the overall time spent identifying and resolving bugs.
How it works
When you open the test output for a failed test run, mabl sends the output to a large language model for analysis and returns a succinct explanation of why the test failed.
The failure analysis appears in the purple box.
Expand the summary to view more details about why the test failed, including follow-up actions:
- Assign a failure reason - when applied consistently and accurately to failed test runs, failure reasons can help your team understand overall application quality and save time on review.
- Regenerate the failure analysis - click Regenerate to have the model reanalyze the failure and provide a new summary.
- Give feedback - use the thumbs up and thumbs down icons to give us feedback on the quality of the failure analysis.
An expanded failure analysis
What's changed since EA
Since introducing results analysis in August, we've made the following enhancements:
- Expanded support to include API tests
- Added support for failure analyses in Japanese
- Improved the accuracy of the suggested failure category
- Included network logs for network issues to improve the depth of the summary
- Updated the failure analysis to appear inline with the test output instead of as a popover element
Learn more
To learn more about the test output data that mabl uses to generate test failure analyses, check out the docs.