This article describes the settings you can configure when creating a new performance test: New test > Performance test.
Test details
At a minimum, you must give your test a name. You may also give the test a description and labels.
We recommend defining naming conventions for tests with your team so that you can easily locate tests and understand what they do. Use descriptions and labels to add clarity and structure to your workspace.
Add functional tests
Select one or more functional tests to run in the performance test.
Credentials
If the functional test uses mabl credentials, click on the Credentials button and select the appropriate credentials to associate with the test. For functional browser tests, you may also configure HTTP basic auth credentials if needed.
DataTables
If you add a functional test that is associated with one or more DataTables, mabl runs those DataTable scenarios in the performance test by default. To override default DataTable settings for a test, click on the DataTables button and select a different DataTable.
When running DataTable scenarios in performance tests, each scenario is associated with a different virtual user. If there are more virtual users than scenarios in the DataTable, then some scenarios will be reused by multiple virtual users. To learn more, refer to the article on how performance test execution works.
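mabl handles this assignment for you; the minimal Python sketch below, with hypothetical function and scenario names, illustrates the reuse behavior described above, assuming a simple round-robin assignment:

```python
# Illustrative sketch only, not mabl's implementation: shows how scenarios
# can be reused when there are more virtual users than DataTable scenarios.
def assign_scenarios(scenarios: list[str], virtual_users: int) -> dict[int, str]:
    """Map each virtual user to a DataTable scenario, cycling when needed."""
    return {user: scenarios[user % len(scenarios)] for user in range(virtual_users)}

# Example: 3 scenarios shared by 5 virtual users.
assignments = assign_scenarios(["scenario_a", "scenario_b", "scenario_c"], 5)
for user, scenario in assignments.items():
    print(f"virtual user {user + 1} -> {scenario}")
# Virtual users 4 and 5 reuse scenario_a and scenario_b, respectively.
```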
You can use DataTables that contain credentials to simulate scenarios that involve concurrent users. Providing multiple sets of credentials helps avoid limitations on concurrent users from the application under test, the server operating system, or the database of the application under test.
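For example, a DataTable used this way might hold one credential pair per scenario. The structure below is a hypothetical illustration in Python; the column names ("username", "password") are assumptions, so use whatever variables your functional test actually reads:

```python
# Hypothetical DataTable contents for simulating distinct concurrent users:
# one scenario (row) per credential pair. Column names are assumptions.
credential_scenarios = [
    {"username": "load_user_1", "password": "<secret>"},
    {"username": "load_user_2", "password": "<secret>"},
    {"username": "load_user_3", "password": "<secret>"},
]
# With three scenarios and a concurrency of 3, each virtual user signs in
# with its own account, avoiding per-account session or login limits.
```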
Load configuration
Set the concurrency. In performance tests, concurrency represents the number of virtual users repeatedly cycling through the test at the same time. Each virtual user runs the test as many times as it can until the test time limit is reached.
The total number of concurrent users across all functional tests cannot exceed 1000.
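Conceptually, each virtual user behaves like the loop in the sketch below. mabl manages this execution for you; the function names and threading model here are assumptions made for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch, not mabl's implementation: each virtual user repeats
# the functional test until the performance test's time limit is reached.
def virtual_user(run_functional_test, time_limit_s: float) -> int:
    deadline = time.monotonic() + time_limit_s
    runs = 0
    while time.monotonic() < deadline:
        run_functional_test()  # hypothetical stand-in for one test execution
        runs += 1
    return runs

def run_performance_test(run_functional_test, concurrency: int, time_limit_s: float):
    # All virtual users cycle through the test at the same time.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(virtual_user, run_functional_test, time_limit_s)
                   for _ in range(concurrency)]
        return [future.result() for future in futures]
```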
Failure criteria
You can set tests to pass or fail based on the functional test failure rate and/or on specific browser and API performance metrics.
If you are still figuring out your application's baseline performance, you can run the test with no failure criteria until your team defines expectations. See Getting started with performance tests for more details.
Failure criteria depend on your team's performance requirements. Consider the following examples; the sketch after this list shows how such criteria might be evaluated:
- If you want to ensure that 95% of tests pass at the configured concurrency, you could set the Functional Test Failure Rate to fail if more than 5% of tests fail.
- If you want to monitor the perceived load speed of your page, you could set a criterion that fails the test if the largest contentful paint (LCP) for your application exceeds the "Poor" threshold.
- If your team expects API response time to stay below 300 ms at a given concurrency, you could set a criterion that fails the test if the 95th percentile response time is greater than 300 ms.
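mabl evaluates your configured criteria automatically; the Python sketch below, with assumed metric names and the standard 4,000 ms "Poor" threshold for LCP, shows how the three example criteria could be checked against collected results:

```python
# Illustrative check of the three example criteria above; not mabl's logic.
def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    rank = round(pct / 100 * len(ordered)) - 1
    return ordered[max(0, min(len(ordered) - 1, rank))]

def evaluate_criteria(failed_runs: int, total_runs: int,
                      lcp_ms: float, response_times_ms: list[float]) -> list[str]:
    reasons = []
    if failed_runs / total_runs > 0.05:            # failure rate above 5%
        reasons.append("functional test failure rate exceeds 5%")
    if lcp_ms > 4000:                              # LCP "Poor" threshold (4 s)
        reasons.append("LCP is in the Poor range")
    if percentile(response_times_ms, 95) > 300:    # p95 response time
        reasons.append("95th percentile response time exceeds 300 ms")
    return reasons  # an empty list means the test passes these criteria
```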
Click here to learn more about the performance metrics you can use to set failure criteria.
Test settings
Additional test settings include the test duration and ramp-up time:
- Duration of test: set a duration for the performance test up to a maximum of 60 minutes.
- Ramp-up time: set the period of time over which the performance test linearly ramps up from 0 to the configured concurrency of virtual users. After ramp-up, the test continues at the configured concurrency for the remaining duration of the performance test, as illustrated in the sketch after this list.
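As a worked example of the linear ramp-up, the sketch below computes the target number of virtual users at a given elapsed time; the numbers are assumptions, not mabl defaults:

```python
# Sketch of a linear ramp-up: target concurrency after `elapsed_s` seconds.
def target_concurrency(elapsed_s: float, ramp_up_s: float, concurrency: int) -> int:
    if elapsed_s >= ramp_up_s:
        return concurrency  # ramp-up complete; hold full concurrency
    return int(concurrency * elapsed_s / ramp_up_s)

# Example: 100 virtual users with a 5-minute (300 s) ramp-up.
for t in (0, 150, 300, 600):
    print(f"t={t:>3}s -> {target_concurrency(t, 300, 100)} virtual users")
# t=0 -> 0, t=150 -> 50, t=300 and beyond -> 100
```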
Create your test
Click on the Save button to create your performance test!