When a test fails, figuring out why often means manually checking screenshots, scanning logs, comparing to previous runs, and trying to determine if the failure is a real regression or just flaky. With results analysis in mabl, you can turn that investigation into a conversation.
- Review the agent analysis - this static, high-level summary is the starting point for your investigation
- Ask follow-up questions about a specific failure - the agent can pull in diagnostics, such as screenshots and logs, along with details from previous runs to dig deeper into root causes, patterns, and trends
- Export the full analysis as a custom PDF report - share the complete results of your investigation with the wider team
Results analysis is available for failed test runs, plan runs, and deployment events:
- Test run analysis: Investigate a single test failure with access to all run artifacts and historical comparisons
- Plan run analysis: Analyze failures across multiple tests in a plan run to investigate issues that need further attention
- Deployment event analysis: Review results from a suite of plan runs to identify common patterns, including go/no-go evaluations based on the results
Try it out
Open any failed test, plan, or deployment run and review the Agent analysis. In the chat interface, type a question about the failure to initiate an investigation with the mabl agent, such as:
- "When did this test start failing?"
- "Are other tests in this plan seeing similar issues?"
- "What changed in the network requests between this run and the last passing run?"
To share the analysis with your team, click the download icon to export the supporting evidence as a custom PDF report.
Learn more
To learn more about the data mabl uses to perform a results analysis, check out the docs.