Looking to monitor for performance regressions in your app? Or maybe just want to understand your app's baseline performance for your users?
You can view performance metrics for your browser and API tests by clicking on the Performance tab of any user-created mabl test:
- Cumulative app load time (browser tests only): this chart measures the total time your app takes to load across every step of the test (measured using Speed Index), which is what an end user would experience as your app's performance when completing this user journey manually. App load time is separate from, and will always be shorter than, the test run time. Only Chrome and Edge runs are included in this metric.
- API response time (API tests only): this chart measures the total API response time across every request within the test. It excludes other factors, such as the time in between requests. API response time is separate from, and will always be shorter than, the test run time.
- Test run time: this chart measures the total time your test takes to run, averaged by day. For browser tests, only Chrome and Edge runs are included in this metric.
Calculating performance metrics
Metrics in the Performance tab are calculated using passing test runs from the past 90 days.
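To make that calculation concrete, here is a minimal TypeScript sketch of how these daily figures could be derived. mabl doesn't publish its internal aggregation logic, so the `TestRun` shape and field names below are hypothetical, invented purely for illustration.

```typescript
// Hypothetical run record; field names are illustrative, not mabl's API.
interface TestRun {
  completedAt: Date;
  passed: boolean;
  runTimeMs: number;            // wall-clock duration of the whole run
  stepAppLoadTimesMs: number[]; // per-step app load time (Speed Index based)
}

const NINETY_DAYS_MS = 90 * 24 * 60 * 60 * 1000;

// Only passing runs from the past 90 days feed the Performance tab charts.
function eligibleRuns(runs: TestRun[], now = new Date()): TestRun[] {
  return runs.filter(
    (r) => r.passed && now.getTime() - r.completedAt.getTime() <= NINETY_DAYS_MS
  );
}

// Cumulative app load time for one run: the per-step load times summed.
// (API response time would be summed the same way from per-request times.)
const cumulativeAppLoadMs = (run: TestRun): number =>
  run.stepAppLoadTimesMs.reduce((total, t) => total + t, 0);

// Test run time is averaged by day for the chart.
function dailyAverageRunTime(runs: TestRun[]): Map<string, number> {
  const byDay = new Map<string, number[]>();
  for (const run of eligibleRuns(runs)) {
    const day = run.completedAt.toISOString().slice(0, 10); // YYYY-MM-DD
    byDay.set(day, [...(byDay.get(day) ?? []), run.runTimeMs]);
  }
  const averages = new Map<string, number>();
  for (const [day, times] of byDay) {
    averages.set(day, times.reduce((a, b) => a + b, 0) / times.length);
  }
  return averages;
}
```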
You can use these metrics to identify when the key user journey you're testing starts taking longer to complete, or simply confirm when your app performance improvements have started to positively impact those same user journeys.
The dots on the Performance tab are clickable and take you to the specific test run.
The performance tab in a browser test
Use cases
Monitoring for performance regressions
A great use of the app load time and API response time charts is to monitor for performance regressions, especially if you're running your tests on a regular cadence. Did the average time shoot up and stay up? Check for deployments and major changes to your app around that time, as a sustained jump may indicate that a performance regression has been introduced.
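If you'd rather automate that check than eyeball the chart, one simple heuristic is to compare a recent window of daily averages against the longer baseline. This isn't a mabl feature, just a minimal sketch; the 20% threshold and seven-day window are arbitrary assumptions you'd tune to your app's normal variance.

```typescript
// Flag a sustained regression: recent average time noticeably above baseline.
// Threshold and window size are arbitrary; tune them to your app's variance.
function hasRegressed(
  dailyAveragesMs: number[], // one entry per day, oldest first
  recentDays = 7,
  thresholdRatio = 1.2       // 20% slower than baseline counts as a regression
): boolean {
  if (dailyAveragesMs.length <= recentDays) return false;
  const baseline = dailyAveragesMs.slice(0, -recentDays);
  const recent = dailyAveragesMs.slice(-recentDays);
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  return mean(recent) > mean(baseline) * thresholdRatio;
}
```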
You can also dive into the output from individual test runs. The test output page includes a Performance tab that shows the performance of individual steps over time. In browser tests, the Performance tab for individual steps shows the Speed Index metric over time. For example, the following Speed Index chart shows an individual step that contributed to a worsening performance trend and also surfaced an anomaly that may be worth investigating further.
Speed index for a test step
In API tests, the Performance tab for individual steps is located under the Response tab. It shows the API response time for the individual step over time.
API response time for a test step
Understanding your app's baseline performance
Mabl reports on performance for the last 90 days, helping you understand the baseline performance of your app over time. Since you can filter your results by the environment they ran against, you can also compare performance between environments and consider how those differences may influence your testing strategy.
Understanding your users' experience
In browser tests, the cumulative app load time chart measures the performance of your app without the "mabl" parts of your tests, such as time spent on finds, waits, and assertions. As a result, this metric, and how it changes over time, represents your end users' real experience of your application. If the trend worsens for your login flow, for example, it may be a sign that users are starting to spend more time waiting for your app's initial load.
Frequently Asked Questions (FAQs)
Why do I see different numbers than in my test's run history?
Test run time, as shown in the run history table, captures the entire test from startup to completion. In a browser test, this includes time spent auto-healing or starting up the browser. For API tests, test run time includes the time spent between steps.
Test run time is not the same as the cumulative app load time or the API response time. App load time, for example, captures the total time your app took to load for each run. This reflects how users are likely to perceive the load speed of your application user interface (UI) as they complete their journey through your app and is generally a much smaller figure than the actual duration of the test.
Test run time for a browser test includes things such as:
- Browser startup time
- Wait times
- The time mabl took to find an element or complete a click/hover/drag-and-drop
Cumulative app load time only includes the following (illustrated in the sketch below):
- The time your app took to load for each step, even for steps that do not fully re-render the app
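To make the difference concrete, here is a hypothetical breakdown of a single browser test run. All of the numbers are invented for illustration; the point is that only the app-load portion of each step counts toward cumulative app load time.

```typescript
// Hypothetical timing breakdown for a single browser test run (milliseconds).
const run = {
  browserStartupMs: 4_000,
  stepTimings: [
    { findAndActMs: 1_200, waitMs: 500,   appLoadMs: 900 },
    { findAndActMs: 800,   waitMs: 0,     appLoadMs: 300 },
    { findAndActMs: 1_500, waitMs: 2_000, appLoadMs: 1_100 },
  ],
};

// Test run time counts everything: startup, waits, finds, and app loads.
const testRunTimeMs =
  run.browserStartupMs +
  run.stepTimings.reduce(
    (total, s) => total + s.findAndActMs + s.waitMs + s.appLoadMs, 0);

// Cumulative app load time counts only the app-load portion of each step.
const cumulativeAppLoadTimeMs = run.stepTimings.reduce(
  (total, s) => total + s.appLoadMs, 0);

console.log(testRunTimeMs);          // 12300: what the run history shows
console.log(cumulativeAppLoadTimeMs); // 2300: what the user actually waits on
```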
What's Speed Index?
In basic terms, Speed Index tracks page load performance: how quickly the visible content of a page finishes loading, and therefore how soon mabl can take an action against it. Note that mabl uses the Speed Index value, not the Speed Index score.
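For intuition, Speed Index is conventionally defined (in tools like WebPageTest and Lighthouse) as the integral of visual incompleteness over the load: a page that paints most of its content early scores lower (better) than one that stays blank and then fills in at the end. The sketch below computes it from hypothetical visual-completeness samples; mabl's exact capture pipeline isn't documented here, so treat this as the general idea rather than mabl's implementation.

```typescript
// Speed Index: integrate visual incompleteness over time.
// samples: (timestampMs, completeness in [0, 1]), ordered by time,
// e.g. derived from video frames of the loading page.
function speedIndex(samples: Array<[number, number]>): number {
  let index = 0;
  for (let i = 1; i < samples.length; i++) {
    const [prevT, prevCompleteness] = samples[i - 1];
    const [t] = samples[i];
    // Area of the "still incomplete" region over this interval.
    index += (1 - prevCompleteness) * (t - prevT);
  }
  return index; // milliseconds; lower is better
}

// A page that is 80% painted at 500ms and done at 1500ms:
speedIndex([[0, 0], [500, 0.8], [1500, 1]]); // 500*1 + 1000*0.2 = 700
```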
What is API response time?
API response time is the time between sending the API request and receiving its response.
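As a rough sketch of that measurement using the standard fetch API; the URL is a placeholder, and whether body download time counts is an assumption here, not a statement of how mabl instruments requests.

```typescript
// Measure one request's response time: send-to-response, nothing else.
async function timedRequest(url: string): Promise<number> {
  const start = performance.now();
  const response = await fetch(url); // resolves when response headers arrive
  await response.arrayBuffer();      // assumption: include body download time
  return performance.now() - start;  // elapsed milliseconds
}

// Total API response time for a test would be the per-request times summed,
// excluding any think time between requests:
// const totalMs = times.reduce((a, b) => a + b, 0);
```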
How can I filter the data?
The chart and data on this page are filterable by environment using the environment dropdown selector and by application using the application filter in the top right corner of the app. When using either filter, the chart will only display runs that occurred on the app or environment that you've selected.
- The environment filter can be helpful to isolate environments that may be experiencing degraded performance, as well as to focus on stable environments whose performance better mirrors your production experience.
- The application filter can be helpful to pinpoint, especially in larger workspaces, which app may be experiencing issues if your tests or flows are reused across multiple applications.
How can I populate more results? Or, why is there no data?
If your performance chart is empty or has fewer runs than you'd expect, there are a few things to check to make sure results start populating (summarized in the sketch after this list):
- Chrome and Edge: For browser tests, only Chrome and Edge runs generate the step traces needed to populate performance data.
- Passing tests: Only passing tests will populate the chart.
- Plan runs: Only tests run as part of a plan in the cloud will populate results. Ad hoc cloud runs and local test runs do not contribute to performance metrics.
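Taken together, those conditions amount to a simple eligibility check, restated below as code. The `RunRecord` shape and field names are hypothetical, chosen only to summarize the rules above.

```typescript
// A run contributes to the Performance tab only if all of these hold.
interface RunRecord {
  browser?: 'chrome' | 'edge' | 'firefox' | 'safari'; // unset for API tests
  passed: boolean;
  trigger: 'plan' | 'ad-hoc' | 'local';
}

function contributesToPerformanceData(run: RunRecord): boolean {
  const browserOk =
    run.browser === undefined ||  // API tests: no browser requirement
    run.browser === 'chrome' ||
    run.browser === 'edge';
  // Must pass, and must run as part of a plan in the cloud.
  return browserOk && run.passed && run.trigger === 'plan';
}
```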