While performance tests use the same mabl cloud infrastructure as functional tests, the way they execute is slightly different. This article explains how performance test execution works, from startup to completion.
Performance tests are supported for execution in the mabl cloud. They cannot be executed locally.
Test startup
When a performance test is triggered, mabl starts up runners in parallel according to the test’s defined load configuration. Each runner functions as a virtual user that executes a functional test.
Connecting to private networks
If you are testing in private environments, the way mabl runners access your application and API depends on how you configured private network access in mabl:
- mabl static IP addresses - the mabl IP range for performance testing, 34.31.138.224/27, was added to your company's allow list so that mabl performance test traffic can access your testing environment.
- mabl Link - mabl runners access your network through a Link Agent running on a host in your private network. Running performance tests over mabl Link adds steps to the path that test traffic takes, which may increase latency as measured by the performance test. For this reason, we recommend the mabl Link approach primarily when the static IP address approach isn't feasible for your team.
The default concurrency limit for performance runs over Link is 100 runners per performance test execution. If your performance test specifies more than 100 runners, mabl automatically scales it down proportionately at execution time so that it does not exceed 100 when running over Link.
For example, if your performance test has 2 workloads, one with a concurrency of 50 and another with a concurrency of 100 (total 150), mabl scales those down to 33 and 67, respectively, when the test executes over Link. Running the same test in an environment that does not use Link will use the originally configured concurrency.
If you need higher concurrency, reach out to the mabl Support Team.
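The proportional scale-down described above can be sketched as follows. This is a minimal illustration based on the example in this article, not mabl's actual implementation; the exact rounding behavior is an assumption.

```python
# Illustrative sketch of proportional concurrency scaling over mabl Link.
# Not mabl's actual implementation; the rounding behavior is an assumption.

LINK_CONCURRENCY_LIMIT = 100  # default runner limit per execution over Link

def scale_workloads(concurrencies, limit=LINK_CONCURRENCY_LIMIT):
    """Scale workload concurrencies down proportionally if the total exceeds the limit."""
    total = sum(concurrencies)
    if total <= limit:
        return list(concurrencies)
    return [round(c * limit / total) for c in concurrencies]

# Example from this article: workloads of 50 and 100 (total 150) over Link
print(scale_workloads([50, 100]))  # [33, 67]
```

A test below the 100-runner total, such as `scale_workloads([40, 30])`, is left unchanged.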
Execution
After the runners start up, each virtual user begins running its respective functional test. mabl continues executing the functional tests in a loop until the end of the configured duration. For example, if a performance test is configured to run one API test with 10 virtual users for 15 minutes, each virtual user runs the API test repeatedly for the full 15 minutes.
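The looping behavior of a single virtual user can be sketched as below. The function and parameter names are hypothetical; mabl's runner internals are not public.

```python
import time

# Hypothetical sketch of a virtual user's execution loop: the virtual user
# repeats its functional test until the configured duration elapses.
def run_virtual_user(run_functional_test, duration_seconds):
    deadline = time.monotonic() + duration_seconds
    iterations = 0
    while time.monotonic() < deadline:
        run_functional_test()  # one full execution of the functional test
        iterations += 1
    return iterations

# Example: a stand-in no-op "test" run for a very short duration
count = run_virtual_user(lambda: None, duration_seconds=0.01)
```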
Handling DataTables
If the performance test is associated with a DataTable, mabl assigns a specific scenario to each virtual user, and the virtual user cycles through that scenario for the duration of the test.
For example, if you configure the functional test to use a DataTable with 100 scenarios and set the concurrency to 50 virtual users, then the performance test uses the first 50 scenarios for the duration of the test.
On the other hand, if you configure the functional test to use a DataTable with 50 scenarios and set the concurrency to 100 virtual users, the performance test uses each scenario for two virtual users for the duration of the test.
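The scenario assignment in both examples above is consistent with cycling through the scenario list, which can be sketched as below. The modulo-based assignment order is an assumption for illustration, not mabl's documented behavior.

```python
# Illustrative sketch (the assignment order is an assumption): each virtual
# user is assigned one DataTable scenario, cycling through the scenario list
# when there are more virtual users than scenarios.
def assign_scenarios(num_scenarios, num_virtual_users):
    """Map each virtual user index to a scenario index."""
    return {vu: vu % num_scenarios for vu in range(num_virtual_users)}

# 100 scenarios, 50 virtual users: only the first 50 scenarios are used
first_case = assign_scenarios(100, 50)
assert set(first_case.values()) == set(range(50))

# 50 scenarios, 100 virtual users: each scenario serves two virtual users
second_case = assign_scenarios(50, 100)
assert all(list(second_case.values()).count(s) == 2 for s in range(50))
```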
DataTables settings at the plan level don’t apply
When performance tests run in plans, they don't use plan-level DataTable settings. DataTable settings for performance tests are always configured at the test level.
Browser test specifics
To ensure tests run consistently and with minimal overhead, browser tests that run in a performance test don’t include the following:
- Auto-heal attempts
- GenAI Assertions
- Collection of detailed diagnostics, like screenshots or step trace
All browser tests execute on Chrome.
To make sure your browser tests use up-to-date information to find elements on the page, we recommend also running them regularly as functional tests in a plan. For example, you could create a plan that runs those browser tests once a week. That way, if the application changes, mabl can keep its find values for your app up to date.
Functional test failures
Performance tests don't stop when functional tests fail. When a functional test fails, the virtual user simply starts the test again, as long as time remains in the configured duration.
Test completion
At the end of the configured duration, mabl shuts down the runners, pushes the final metrics to the cloud, and evaluates the failure criteria. Based on the failure criteria, mabl assigns a passed or failed status to the performance test.
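The final pass/fail evaluation can be sketched as below. The metric names and thresholds are hypothetical placeholders, not mabl's actual failure-criteria schema.

```python
# Hypothetical sketch of failure-criteria evaluation at test completion.
# Metric names and thresholds are illustrative, not mabl's schema.
def evaluate_failure_criteria(metrics, criteria):
    """Return 'failed' if any final metric exceeds its configured threshold."""
    for name, threshold in criteria.items():
        if metrics.get(name, 0) > threshold:
            return "failed"
    return "passed"

final_metrics = {"error_rate_pct": 2.5, "p95_latency_ms": 800}
criteria = {"error_rate_pct": 5.0, "p95_latency_ms": 1000}
print(evaluate_failure_criteria(final_metrics, criteria))  # passed
```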
For more information on performance test results, check out the following articles:
- Understanding performance test output - key concepts on the performance test output page
- Reviewing performance tests - how to filter performance test output to identify issues