Looking to monitor for performance regressions in your app? Or maybe just want to understand your app's baseline performance for your users?
Visit the "Performance" tab of any user-created mabl browser test to view the cumulative speed index, the total time your application spent loading across an entire test run, charted over the previous 90 days. As shown in the image below, you can identify when the key user journey you're testing starts taking longer to complete, or confirm when your app performance improvements begin to positively impact those same user journeys.
Enterprise plan feature
This feature is currently available only on Enterprise plans and mabl trials. To upgrade, please contact your mabl CSM.
This chart is a great way to monitor for performance regressions, especially if you run your tests on a regular cadence. If the average time jumps up and stays elevated, check for deployments or major changes made to your app around that time; they may have introduced larger performance regressions.
Dive into individual test outputs to review the "Performance" chart for each step, which uses the same Speed Index metric. In the example below, an individual step both contributes to a worsening performance trend and stands out as a performance anomaly that may be worth investigating further.
This chart measures the performance of your app without the "mabl" parts of your tests, such as time spent finding elements, waiting, and asserting. As a result, this metric and its changes over time represent the real experience of your application for your end users. If the trend worsens for your login flow, for example, it may be a sign that users are spending more time waiting for your app to load for the first time.
Mabl reports performance for the last 90 days, helping you understand the baseline performance of your app over time. Since you can filter your results by the environment they ran against, you can also compare performance between your environments and consider how those differences may influence your testing strategies.
Resolving performance regressions
Here are some recommendations from web.dev on how to improve Speed Index scores in your application.
Test duration, as shown in the run history table, captures the entire test from startup to completion, including time spent auto-healing or starting up the browser. The cumulative speed index, as charted below, captures the total time your app took to load for each test run. This reflects how users are likely to perceive the load speed of your application's user interface (UI) as they complete their journey through your app, and it is generally a much smaller figure than the actual duration of the test.
Test duration includes things such as:
- Browser startup time
- Wait times
- The time mabl took to find an element or complete a click/hover/drag-and-drop
Cumulative speed index only includes:
- The time your app took to load for each step - even for steps that do not fully re-render the app
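The distinction can be illustrated with a short sketch. The per-step timings and field names below are hypothetical, purely for illustration; they are not mabl's actual data model:

```python
# Hypothetical per-step timings (ms) for one test run; the field names
# are illustrative assumptions, not mabl's actual data model.
steps = [
    {"app_load": 300, "mabl_overhead": 900},   # e.g. find + click
    {"app_load": 1200, "mabl_overhead": 400},  # full page navigation
    {"app_load": 50, "mabl_overhead": 700},    # assertion on existing DOM
]

browser_startup = 3000  # ms, counted in test duration only

# Cumulative speed index: only the time the app spent loading, per step.
cumulative_speed_index = sum(s["app_load"] for s in steps)

# Test duration: everything, including startup and mabl's own work.
duration = browser_startup + sum(
    s["app_load"] + s["mabl_overhead"] for s in steps
)

print(cumulative_speed_index)  # 1550
print(duration)                # 6550
```

Even in this toy run, the cumulative speed index (1550 ms) is a fraction of the total duration (6550 ms), which is why the two figures should not be compared directly.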
In basic terms, Speed Index tracks how quickly your page visually loads: the time it takes before mabl can take an action against it. Please note that mabl uses the Speed Index metric, not the Speed Index score.
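As a rough sketch of the underlying idea: Speed Index is the area above a page's visual-completeness curve over time, so a page that paints its content sooner scores lower (better). The function below is a simplified approximation for illustration, not mabl's implementation:

```python
def speed_index(frames):
    """Approximate Speed Index from (time_ms, visual_completeness) samples.

    visual_completeness is a fraction in [0, 1]. Speed Index integrates the
    remaining *incompleteness* over time, so lower values are better.
    This is an illustrative sketch, not mabl's actual computation.
    """
    si = 0.0
    for (t0, vc0), (t1, _) in zip(frames, frames[1:]):
        # Rectangle rule: hold the incompleteness at vc0 across the interval.
        si += (1.0 - vc0) * (t1 - t0)
    return si

# A page that is 0% complete until 500 ms, 80% until 1000 ms, then done:
print(speed_index([(0, 0.0), (500, 0.8), (1000, 1.0)]))  # 600.0
```

Two pages can finish loading at the same moment yet have very different Speed Index values if one shows meaningful content earlier, which is why the metric tracks perceived load speed rather than raw load time.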
The chart and data on this page are filterable by environment via the dropdown selector, as well as your selected application filter. When using either, the chart will only display runs that occurred on the app and environment you've selected.
The environment filter can help you isolate environments that may be experiencing degraded performance, as well as focus on stable environments whose performance better mirrors your production experience.
The application filter can be helpful to pinpoint, especially in larger workspaces, which app may be experiencing issues if your tests or flows are reused across multiple applications.
Performance information populates automatically; you don't need to do anything, as long as your tests run in a way that lets mabl collect the relevant performance data.
If your performance chart is empty or doesn't have as many runs as you'd expect, there are a few things to check to make sure results start populating:
- Does this test run on Chrome? Only Chrome runs populate the step traces necessary to load performance data
- Is your test passing? Only passing tests will populate the chart
- Are you running tests ad hoc or locally? Only tests run as part of a plan in the cloud populate results today, although this is changing soon