Performance testing setup
How to create a new performance test
To get started, click on the New test button in the left-hand navigation and select "Performance test."
At a minimum, you must give your test a name. You may also give the test a description and labels.
Naming conventions, descriptions, and labels
As a best practice, we recommend defining naming conventions for tests with other members of your workspace so that you can easily locate tests and understand what they do.
While descriptions and labels are optional, they are another great way to add clarity and structure to your workspace and are strongly encouraged.
Add functional tests
Select one or more functional tests to run in the performance test.
Indicate which DataTable(s) you want to use when running the functional test in a performance test.
When using DataTable(s) in performance tests, each row will be assigned to a different virtual user. If there are more virtual users than rows in the table, rows will be reused by multiple virtual users.
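The row-to-user assignment described above can be sketched as a simple round-robin mapping. This is an illustrative sketch only; the product's actual assignment logic is internal and may differ:

```python
def assign_rows(num_virtual_users, rows):
    """Map each virtual user to a DataTable row, reusing rows
    round-robin once there are more users than rows."""
    return {user: rows[user % len(rows)] for user in range(num_virtual_users)}

# 5 virtual users, 3 rows: after all rows are assigned, they are reused
print(assign_rows(5, ["alice", "bob", "carol"]))
```

With 5 virtual users and 3 rows, users 0-2 each get a distinct row, and users 3 and 4 reuse the first two rows.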
DataTables for performance tests
Associating a performance test with a DataTable that contains a large number of credentials is useful for simulating scenarios that involve concurrent users.
Providing multiple sets of credentials helps avoid limitations on concurrent users from the application under test, the server operating system, or the database of the application under test.
A performance test supports up to 1000 concurrent users in total across all functional tests. For example, if a performance test contains three functional tests, the combined concurrency of the three functional tests cannot exceed 1000.
Set failure criteria
Failure criteria are optional, and they include API response time, HTTP error rate, and functional test failure rate.
- API response time: the time it takes for the API to process the request and return a response.
Measuring API response time with percentiles
In addition to averages, you can set failure criteria for API response time based on percentiles. A percentile tells you the value below which a given percentage of response times fall. For example, if the 95th percentile API response time is 500 ms, then 95% of API response times were lower than 500 ms.
Percentiles are less sensitive to outliers than averages. Using percentiles to measure API response time can help you determine whether most of your users are getting good performance and give a more complete understanding of your system’s response to load.
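To make this concrete, here is a minimal nearest-rank percentile computation (an illustrative sketch; the product may use a different interpolation method). Note how a single slow outlier inflates the average while the median stays close to the typical experience:

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: the value below which roughly p% of
    the observations fall."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

# Ten response times (ms) with one slow outlier
response_times_ms = [120, 150, 180, 200, 210, 230, 260, 300, 480, 900]

print(sum(response_times_ms) / len(response_times_ms))  # average: 303.0, pulled up by the outlier
print(percentile(response_times_ms, 50))                # median: 210, the typical experience
print(percentile(response_times_ms, 95))                # p95: 900, the slow tail
```

Here the average (303 ms) suggests worse performance than most users actually saw (the median is 210 ms), while the 95th percentile surfaces the slow tail.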
- HTTP error rate: the percentage of API requests with error responses.
- Functional test failure rate: the percentage of functional tests with a "failed" status, meaning that assertions in the underlying functional test were violated.
There can be some overlap between HTTP error rate and functional test failure rate, since API tests have default assertions that requests return a 200 response.
Failure criteria for a performance test depend on your team's requirements:
- For example, if your team expects API response time to stay below 300 ms at a given concurrency, you could set a criterion that the test fails if the 95th percentile response time is greater than 300 ms.
- If you wanted to ensure that 95% of tests passed at the configured concurrency, you could set the Functional Test Failure Rate to fail if more than 5% of tests fail.
- If you do not set any failure criteria, the performance test has a "passed" status by default.
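The pass/fail logic described above can be sketched as follows. The threshold names and values here are hypothetical, chosen to mirror the examples in the bullets; they are not the product's actual configuration keys:

```python
# Hypothetical failure criteria (illustrative names, not product config keys)
criteria = {
    "p95_response_time_ms": 300,   # fail if the 95th percentile exceeds this
    "max_failure_rate_pct": 5.0,   # fail if more than 5% of tests fail
}

def evaluate(p95_ms, failure_rate_pct, criteria):
    """Return 'failed' if any configured criterion is violated,
    otherwise 'passed'. With no criteria set, the test always passes."""
    if not criteria:
        return "passed"
    if p95_ms > criteria.get("p95_response_time_ms", float("inf")):
        return "failed"
    if failure_rate_pct > criteria.get("max_failure_rate_pct", float("inf")):
        return "failed"
    return "passed"

print(evaluate(280, 3.0, criteria))  # passed: both thresholds respected
print(evaluate(350, 3.0, criteria))  # failed: p95 over 300 ms
print(evaluate(280, 6.0, criteria))  # failed: failure rate over 5%
```

An empty criteria dictionary yields "passed" unconditionally, matching the default behavior when no failure criteria are set.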
Additional test settings include duration of test and ramp-up time:
- Duration of test: set a duration for the performance test up to a maximum of 60 minutes.
- Ramp-up time: set the period of time over which the performance test will linearly ramp up from 0 to the configured concurrency of virtual users. After ramp-up, the test will continue at the configured concurrency for the remaining duration of the performance test.
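The ramp-up behavior described above can be sketched as a piecewise function: concurrency rises linearly from 0 to the target during ramp-up, then holds steady. This is a sketch of the concept, not the product's scheduler:

```python
def concurrency_at(t_seconds, ramp_up_seconds, target_users):
    """Virtual users active at time t: linear ramp from 0 to the
    target during ramp-up, then steady at the target."""
    if t_seconds >= ramp_up_seconds:
        return target_users
    return int(target_users * t_seconds / ramp_up_seconds)

# 60 s ramp-up to 100 users: halfway through the ramp, 50 users are active
print(concurrency_at(30, 60, 100))   # 50
print(concurrency_at(90, 60, 100))   # 100: ramp complete, steady state
```

For example, with a 60-second ramp-up to 100 virtual users, roughly 50 users are active 30 seconds in, and the full 100 are active for the rest of the test duration.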
Create your test
Click on the Create Test button to create your performance test!