With conversational results analysis, you can chat with the mabl agent to investigate failed tests, plans, and deployment events. Instead of manually reviewing logs, screenshots, and artifacts, ask the agent targeted questions about a failure and export the results of your investigation as a custom PDF report to share with your larger team.
Agent analysis
When you open the output for a failed run, the Results summary tab displays an automatically generated agent analysis of the failure. This static, high-level summary represents the starting point for your investigation.
Prompting guidelines
The chat window provides some open-ended questions to get you started, such as “Why did this fail?” For deeper insights, ask targeted questions about specific aspects of the failure, for example:
- Step 10 failed to find the ‘Add to Cart’ button. Did the element’s ID or CSS class change? Compare the DOM snapshot from this failed run to the last passing run.
- Three tests in this plan failed on the same login step. Do they share the same failure reason or error?
- Why did this plan fail? Show me the failing tests and step-level errors with screenshots.
- Are any of the failures in this plan run new (not seen in the last 5 runs of this plan) versus recurring?
- Compare this deployment to the last 3 deployments — is the failure rate a regression?
- Should we roll back this deployment based on the test results? Give me a go/no-go with reasoning.
As you investigate the failure, the agent compiles supporting evidence into a complete analysis. If you need to share key findings with your team, you can export the full analysis as a custom PDF report.
Available data
The mabl agent has access to a wide array of test diagnostics and execution details. The following sections list the available data by failure type.
Test failures
When investigating test failures, the mabl agent has access to the following:
- Test name, description, and steps
- Test output logs and screenshots from the failing step
- Logs and screenshots from the same step in a recent passing run, if available
- Chrome traces, HAR files, and console output
- Variables and their values
- Request/response details for up to 5 failing steps (API tests)
- Pre-request scripts, post-request scripts, and assertions (API tests)
The agent also suggests a failure reason to help your team categorize failures faster.
Plan failures
When investigating plan failures, the agent has access to the following:
- Plan name, description, and configuration
- Plan run information, including status, trigger method, and run time
- An ordered list of stages and test runs
- Individual test failure analyses and failure reasons for failed tests in the plan
Deployment event failures
When investigating deployment event data, the agent has access to the following:
- Deployment event metadata
- Status
- All failure analyses for failed plan and test runs in the deployment event
- Details on related deployment events for comparison
Limitations
Results analysis is not available for:
- Local runs and Playwright runs (cloud runs only)
- Performance test failures
- The default “Visit Home Page” and link crawler plans