Mabl not only allows you to combine your accessibility testing and functional testing in one place but also leverages our existing reporting capabilities to provide visibility across all accessibility checks running against your app in a simple dashboard with intelligence and noise reduction built-in.
For information about configuring and running the checks that power this dashboard, visit our accessibility testing overview documentation.
Accessibility testing is currently available to all new Growth and Enterprise customers whose contracts close after June 1, 2022, as well as all active trials. Customers who signed before June 1 will need to upgrade to gain access.
Mabl's reporting is broken down not only by the count of issues, but also by the severity. This enables you to track your app's overall accessibility trends across multiple severities at once.
Use the configurable target to set a threshold for your team to stay under. Tracking your progress relative to this target makes it easier to decide when to sound the alarm.
At a glance, mabl highlights the most severe, widespread issues first.
Mabl automatically groups similar issues that may be detected across multiple test runs so that you can spend less time chasing down false positives and more time focusing on the issues that matter.
Accessibility is more than just a checkbox. It's also good design, helps include those who are differently abled, and makes our products more usable and testable.
Mabl is not a replacement for manual testing and audits, which remain incredibly valuable and oftentimes legally required for compliance. Our goal is to help empower software teams to keep up with accessibility for the issues that can be automatically checked so that audits can focus on more nuanced issues.
The accessibility dashboard aggregates data from across your testing. There are three core pieces of the dashboard today:
- filters (including the app filter)
- rule violation history trends
- a list of active rule violations
With your filters set, the list of active rule violations will reflect the last day shown in the rule violation history chart.
All of the cards in the list are expandable and are sorted by severity, then by the number of instances, and then alphabetically by name. As a result, Critical issues always appear first and Minor issues always appear last.
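The ordering described above can be sketched as a simple sort key. This is an illustrative sketch only; the card fields and severity names are assumptions, not mabl's actual data model.

```python
# Hypothetical severity ranking; axe-core uses impact levels along these lines.
SEVERITY_ORDER = {"Critical": 0, "Serious": 1, "Moderate": 2, "Minor": 3}

def card_sort_key(card):
    """Sort by severity first, then by instance count (descending), then by name."""
    return (SEVERITY_ORDER[card["severity"]], -card["instances"], card["name"])

cards = [
    {"severity": "Minor", "instances": 9, "name": "region"},
    {"severity": "Critical", "instances": 2, "name": "image-alt"},
    {"severity": "Critical", "instances": 5, "name": "button-name"},
]
cards.sort(key=card_sort_key)
# Critical cards come first, ordered by instance count; Minor cards come last.
```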
The number of instances is calculated not from the number of tests that hit a certain violation, but from the number of unique pages on which the violation was detected. The more widely the violation appears throughout your app, the higher the instance count, regardless of how much testing you're conducting. This makes the instance count a helpful indicator of how widespread an issue is, and it also helps you gauge the issue's impact by quickly showing which user experiences are most affected.
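The counting rule above can be illustrated with a short sketch: instances are unique pages per rule, so repeated hits on the same page by different test runs are not re-counted. The data shapes here are assumptions for illustration only.

```python
from collections import defaultdict

def count_instances(detections):
    """detections: iterable of (rule_id, page_url) pairs gathered from test runs.

    Returns the instance count per rule, counting each page at most once.
    """
    pages_by_rule = defaultdict(set)
    for rule_id, page_url in detections:
        pages_by_rule[rule_id].add(page_url)  # sets deduplicate repeated pages
    return {rule: len(pages) for rule, pages in pages_by_rule.items()}

detections = [
    ("image-alt", "/home"),
    ("image-alt", "/home"),      # same page hit by another test: not re-counted
    ("image-alt", "/checkout"),
    ("color-contrast", "/home"),
]
# count_instances(detections) -> {"image-alt": 2, "color-contrast": 1}
```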
The cards themselves include quick links to the documentation on the rule itself, found via the vertical
... menu at the top right of the card, as well as links to the test runs in which the issue was last detected. Click these links to be taken to the test output page, filtered for accessibility violations.
Mabl automatically aggregates all your accessibility testing results from Chrome runs in the cloud (using the Unified Runner) into the accessibility dashboard, providing a single view of all issues that may impact your users and key experiences.
Related to aggregation, the data gathered from your accessibility testing is trended over time. You can also set a target on the main chart to define a number of issues for your team to stay under. Together, these help you quantify your accessibility improvements and quickly understand when further action is needed.
Mabl intelligently and automatically groups accessibility violations that may stem from a single underlying issue. Rather than reporting a new violation every time multiple tests hit the same page, it identifies them as a common issue and highlights that within the specific rule violations on the lower portion of the page.
Even if you use a single-page app (SPA) with dynamic URLs, mabl will leverage its existing path grouping functionality to identify and group these dynamic portions of the URL path without user input. This means that even if your app is testing with a new user, project, or workspace every time, mabl will still be able to group these issues together automatically without overreporting.
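Grouping in the spirit described above can be sketched by normalizing dynamic-looking URL segments to a placeholder, so that paths differing only by an ID collapse into one group. The regex and placeholder here are hypothetical, not mabl's actual path grouping implementation.

```python
import re

# Treat purely numeric segments and UUID-shaped segments as dynamic.
DYNAMIC_SEGMENT = re.compile(
    r"^(\d+|[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})$",
    re.IGNORECASE,
)

def normalize_path(path):
    """Collapse dynamic segments so equivalent pages group together."""
    parts = [
        "{id}" if DYNAMIC_SEGMENT.match(p) else p
        for p in path.strip("/").split("/")
    ]
    return "/" + "/".join(parts)

normalize_path("/users/42/settings")   # -> "/users/{id}/settings"
normalize_path("/users/99/settings")   # -> "/users/{id}/settings" (same group)
```

With this normalization, violations seen on `/users/42/settings` and `/users/99/settings` would count as one group rather than two, avoiding overreporting.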
A rule violation is an instance, or multiple instances, of a given accessibility rule as defined by axe-core. These violations all have a severity that represents their impact. You can review the details here, as well as the related information on tags, severity, and descriptions.
The dashboard only populates with data from cloud runs; local runs will not appear here. Similarly, accessibility check steps are only supported with the Unified Runner for Chrome. If you are on the legacy runner or running on browsers other than Chrome, your accessibility steps will neither run nor populate this dashboard.
Mabl keeps track of the page and test in which each violation occurred. A violation is removed once that same test no longer identifies it on that page. All tests that detected a specific rule violation must be rerun before it is fully marked as resolved, but individual test runs will still mark their own instances of the rule violation as resolved.
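The resolution rule just described can be sketched as follows: a violation is fully resolved only when every test that previously detected it has re-run without flagging it. The function and data shapes are illustrative assumptions, not mabl's API.

```python
def fully_resolved(previous_detections, latest_runs):
    """previous_detections: set of test names that ever flagged this violation.
    latest_runs: dict mapping test name -> True if its latest run still flags it.

    Returns True only if every detecting test has re-run and come back clean.
    """
    return all(
        test in latest_runs and not latest_runs[test]
        for test in previous_detections
    )

# Both detecting tests re-ran clean: fully resolved.
fully_resolved({"login", "checkout"}, {"login": False, "checkout": False})

# "checkout" has not re-run yet, so the violation stays open.
fully_resolved({"login", "checkout"}, {"login": False})
```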
First, we recommend verifying that your filters are still set to the application in which the rule violation was first identified. Second, it is likely that the violation was automatically detected as resolved by mabl (see the criteria above), or that the violation was last seen outside of the rolling period selected by the
Date range filter. Mabl does not report on every historic violation, only those seen within the selected date range. Running your tests on a regular schedule, e.g. at least every two weeks, helps keep the data fully up to date.
Within the rule violation cards shown at the bottom of the dashboard, mabl links to the specific test runs where it last saw the violation. Click the name of the test for a specific URL path to quickly access the latest test run.
The team is working on the ability to include these violations in our dashboard. In the meantime, they will always be accessible from the test output page for the failed test run.