Application performance is a critical aspect of the user experience, shaping overall software quality and an application's ability to meet end users' expectations. In particular, poor application performance can have a significant business impact: if a customer cannot complete a purchase on your application, they're more likely to turn to a competitor. With performance testing in mabl, you can test business-critical user journeys and validate how your system performs under load.
Mabl performance tests take existing functional tests and run them at a specified load of concurrent virtual users on mabl's scalable, cloud-based platform. Using the test output, your team can understand the overall performance of your application and quickly detect and fix performance issues as they arise.
Early access program
Performance testing is currently an early access program. If you're interested in trying performance testing in early access, you can enable it on the Labs page (Settings > Labs).
During the early access program, participants may use up to 500 virtual user hours (VUH) in total per account. Access and pricing are subject to change once the feature reaches general availability. Prior to general availability, changes to performance testing may be made without notice.
You can use performance tests in mabl to achieve the following:
- Ensure that your application continues to meet service-level agreements (SLAs) as new changes are introduced.
- Compare performance test output from the current release against output from the previous release to check for regressions in app performance.
- Test your application's performance ahead of an anticipated seasonal peak in traffic to make sure your APIs can handle it.
At a high level, you can take the following steps to establish a performance testing workflow in mabl:
To establish baseline performance for your application, create a series of load tests with increasing concurrency. For example, you may run the performance test with 5 users for 30 minutes, 50 users for 30 minutes, 500 users for 30 minutes, and then 1000 users for 30 minutes.
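Before scheduling a ramp like this, it can help to estimate how much of your VUH budget it consumes. The sketch below assumes VUH is computed as concurrent virtual users multiplied by run duration in hours; check your plan's actual accounting, which may differ:

```python
# Estimate virtual user hours (VUH) for a series of baseline runs.
# Assumption: VUH = concurrent virtual users x run duration in hours.
baseline_runs = [
    (5, 30),      # 5 virtual users for 30 minutes
    (50, 30),     # 50 virtual users for 30 minutes
    (500, 30),    # 500 virtual users for 30 minutes
    (1000, 30),   # 1000 virtual users for 30 minutes
]

total_vuh = sum(users * minutes / 60 for users, minutes in baseline_runs)
print(f"Estimated total: {total_vuh:.1f} VUH")  # → Estimated total: 777.5 VUH
```

Running the estimate before you schedule the tests lets you trim run durations or concurrency steps to stay within your account's allotment.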
The output from these baseline runs helps your team establish what it considers "adequate" performance and identify the point at which performance begins to deteriorate.
Using this information, you can identify the appropriate settings for your performance test runs. For example, if the highest acceptable number of concurrent users for your application is 1000, and most API calls had an API response time of 300ms or less at this concurrency, you could configure the following settings for your performance test:
- Concurrency: 1000 virtual users
- Failure criterion: An API response time greater than 300ms at the 95th percentile
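As a sketch of how a percentile-based criterion works: the 95th-percentile response time is the value below which 95% of samples fall, and the run fails if that value exceeds the threshold. The function below is illustrative (nearest-rank method, with made-up sample data), not mabl's implementation:

```python
import math

def percentile(samples, pct):
    """Return the pct-th percentile of samples using the nearest-rank method."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Hypothetical API response times (ms) collected during a load test run.
response_times_ms = [120, 180, 210, 250, 260, 270, 280, 290, 310, 900]

p95 = percentile(response_times_ms, 95)
threshold_ms = 300
print(f"p95 = {p95} ms -> {'FAIL' if p95 > threshold_ms else 'PASS'}")
```

Note that a single slow outlier can push the p95 past the threshold even when most calls are fast, which is exactly why percentile criteria catch tail-latency problems that averages hide.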
After you've identified the concurrency and failure criteria, you can run the performance test as new changes are introduced to your app to identify performance regressions and ensure everything is working as expected.
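One simple way to flag a regression between releases is to compare a summary metric from the current run against the previous one. The 10% tolerance below is an arbitrary example, and the function is a hypothetical helper rather than a mabl feature:

```python
def check_regression(previous_p95_ms, current_p95_ms, tolerance=0.10):
    """Return True if the current p95 response time exceeds the
    previous release's p95 by more than the tolerance (10% by default)."""
    allowed = previous_p95_ms * (1 + tolerance)
    return current_p95_ms > allowed

print(check_regression(280, 290))  # → False (290 ms is within 10% of 280 ms)
print(check_regression(280, 350))  # → True  (350 ms exceeds the 308 ms limit)
```

Wiring a check like this into your release process turns performance output into a pass/fail gate instead of a chart someone has to remember to read.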
To learn more about running performance tests in your workspace, take a look at the following guides: