To get started, click on the New test button in the left-hand navigation and select Performance test.
Test details
At a minimum, you must give your test a name. You may also give the test a description and labels.
As a best practice, we recommend defining naming conventions for tests with other members of your workspace so that you can easily locate tests and understand what they do.
While descriptions and labels are optional, they are another great way to add clarity and structure to your workspace and are strongly encouraged.
Add functional tests
Select one or more functional tests to run in the performance test.
DataTables
Indicate which DataTable(s) you want to use when running the functional test in a performance test.
If you add a test that is associated with one or more DataTables, mabl runs those DataTable scenarios in the performance test by default. To override the default DataTable settings for a test, click on the DataTables field and select a different DataTable.
When using DataTable(s) in performance tests, each row is assigned to a different virtual user. If there are more virtual users than rows in the table, rows are reused across multiple virtual users, as in the sketch below.
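mabl's exact assignment logic isn't spelled out here, but the reuse behavior can be pictured as round-robin (modulo) assignment. The following is a minimal illustrative sketch under that assumption; the function and data are hypothetical, not part of mabl:

```python
# Hypothetical sketch of how DataTable rows could map to virtual users.
# mabl's actual assignment may differ; this only illustrates that rows
# are reused once virtual users outnumber rows.

def assign_rows(num_virtual_users: int, rows: list[dict]) -> dict[int, dict]:
    """Map each virtual user to a DataTable row, reusing rows round-robin."""
    return {user: rows[user % len(rows)] for user in range(num_virtual_users)}

rows = [{"username": "user1"}, {"username": "user2"}, {"username": "user3"}]
print(assign_rows(5, rows))
# Users 0-2 get rows 1-3; users 3 and 4 reuse rows 1 and 2.
```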
Associating a performance test with a DataTable that contains a large number of credentials is useful for simulating scenarios that involve concurrent users.
Providing multiple sets of credentials helps avoid limitations on concurrent users from the application under test, the server operating system, or the database of the application under test.
Load configuration
Set the concurrency. In performance tests, concurrency represents the number of virtual users repeatedly cycling through the test at the same time. Each user runs the test as many times as they can until the test time limit is reached.
The total number of concurrent users across all functional tests cannot exceed 1,000.
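Conceptually, each virtual user is a loop that repeats the functional test until the time limit is reached. This minimal sketch models a single virtual user under that assumption; `run_functional_test` and the timing values are hypothetical placeholders, not mabl APIs:

```python
import time

TEST_DURATION_SECONDS = 5  # hypothetical demo value; mabl tests can run up to 60 minutes

def run_functional_test() -> None:
    """Placeholder for one pass through a functional test."""
    time.sleep(0.1)  # simulate the work of one test run

def virtual_user(deadline: float) -> int:
    """Repeat the functional test until the time limit; return the number of runs."""
    runs = 0
    while time.monotonic() < deadline:
        run_functional_test()
        runs += 1
    return runs

# A real performance test would run many of these loops concurrently,
# up to the configured concurrency (1,000 virtual users maximum in total).
print(f"completed {virtual_user(time.monotonic() + TEST_DURATION_SECONDS)} runs")
```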
Failure criteria
You can set tests to pass or fail based on the functional test failure rate and/or based on specific performance metrics for browser and API.
If you are still figuring out your application's baseline performance, you can run the test with no failure criteria until your team defines expectations. See Getting started with performance tests for more details.
Failure criteria depend on your team's performance requirements. Consider the following examples:
- If you want to ensure that 95% of tests pass at the configured concurrency, you could set the Functional Test Failure Rate to fail if more than 5% of tests fail.
- If you want to monitor the perceived load speed of your page, you could set a criterion that the test fails if the largest contentful paint (LCP) for your application exceeds the "Poor" threshold.
- If your team expects API response time to stay below 300 ms at a given concurrency, you could set a criterion that the test fails if the 95th percentile response time is greater than 300 ms (see the sketch after this list).
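To make the percentile example concrete, here is a small sketch of how a 95th-percentile response time could be computed and checked against a 300 ms threshold. It uses a generic nearest-rank percentile and made-up sample data; it is not mabl's exact calculation:

```python
import math

def percentile(values: list[float], pct: float) -> float:
    """Nearest-rank percentile: the smallest value covering pct percent of samples."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical response times (ms) collected during a performance test.
response_times_ms = [120, 150, 180, 210, 250, 280, 310, 290, 200, 170]

p95 = percentile(response_times_ms, 95)
print(f"p95 = {p95} ms -> {'FAIL' if p95 > 300 else 'PASS'} (threshold: 300 ms)")
```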
Click here to learn more about the performance metrics you can use to set failure criteria.
Test settings
Additional test settings include duration of test and ramp-up time:
- Duration of test: set a duration for the performance test up to a maximum of 60 minutes.
- Ramp-up time: set the period of time over which the performance test will linearly ramp up from 0 to the configured concurrency of virtual users. After ramp-up, the test continues at the configured concurrency for the remaining duration of the performance test (see the sketch after this list).
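A linear ramp-up can be expressed as a simple formula: at elapsed time t, roughly concurrency × t ÷ ramp-up time virtual users are active, capped at the configured concurrency. This illustrative sketch computes that curve; the exact stepping mabl uses may differ:

```python
def active_users(elapsed_s: float, ramp_up_s: float, concurrency: int) -> int:
    """Approximate active virtual users during a linear ramp-up."""
    if ramp_up_s <= 0:
        return concurrency
    return min(concurrency, int(concurrency * elapsed_s / ramp_up_s))

# Example: 100 virtual users with a 60-second ramp-up.
for t in (0, 15, 30, 45, 60, 90):
    print(f"t={t:>2}s -> {active_users(t, 60, 100)} active users")
# Output climbs 0, 25, 50, 75, 100, then holds at 100 for the rest of the test.
```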
Create your test
Click on the Create Test button to create your performance test!