Application performance is an important part of overall quality. When users interact with a slow or unresponsive site, they may form a negative impression of the brand or organization behind it, or abandon the application altogether. Performance testing offers a way to monitor your application, prevent poor user experiences, and prepare for seasonal peaks in traffic.
With mabl performance tests, you can use existing functional tests to simulate a load on your application and review the output to ensure your application is meeting users' expectations. This tutorial walks through the basics of mabl performance tests for new users.
Performance testing is available as an add-on feature to your subscription. Contact your customer success manager to learn more.
Before you start
Before you create your first performance test, discuss the following questions with your team:
- What tests do we want to use for the performance test?
- What are our current performance expectations?
Choose functional tests
Mabl performance tests reuse existing browser and API tests. Before you create a performance test, work with your team to identify which tests you want to use. A good candidate for a first performance test is a test that regularly passes in mabl and targets an area of your application with known performance issues.
Select functional tests that validate your application in a pre-production or QA environment, or against a development version of your app. By running performance tests before your application is deployed to production, you can identify issues early and avoid performance bottlenecks or failures in your production environment.
To test APIs in private environments, add mabl static IP addresses to an allow list. The specific IP range for performance testing is 34.31.138.224/27.
Mabl Link is not yet supported for performance tests.
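If you maintain your allow list programmatically, Python's standard `ipaddress` module can confirm whether a given address falls inside the documented range. This is a minimal sketch; the function name `is_mabl_perf_ip` is illustrative, not part of any mabl tooling.

```python
import ipaddress

# The static IP range documented for mabl performance testing.
PERF_TEST_RANGE = ipaddress.ip_network("34.31.138.224/27")

def is_mabl_perf_ip(addr: str) -> bool:
    """Return True if addr falls inside the performance testing range."""
    return ipaddress.ip_address(addr) in PERF_TEST_RANGE

# A /27 covers 32 addresses: 34.31.138.224 through 34.31.138.255.
print(is_mabl_perf_ip("34.31.138.230"))  # True
print(is_mabl_perf_ip("34.31.139.1"))    # False
```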
Define expectations
Performance expectations can vary depending on the application. Expectations in mabl performance tests are defined by failure criteria.
- If your application already has clearly defined performance expectations, perhaps in the form of service-level objectives (SLOs), see the article on measuring system performance to learn more about options for failure criteria.
- If your team has not yet defined performance expectations for your application, you can start by running performance tests without failure criteria to get an idea of how your application is performing.
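To make the idea of an SLO-based failure criterion concrete, the sketch below checks a set of response times against a hypothetical target like "the 95th percentile must stay under 500 ms." The helper names and thresholds are illustrative assumptions, not mabl APIs; failure criteria in mabl are configured in the test itself.

```python
# Hypothetical helper: checks response times against an SLO-style
# threshold, e.g. "the 95th percentile must stay under 500 ms".
def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceiling division
    return ordered[rank - 1]

def meets_slo(response_times_ms, pct=95, threshold_ms=500):
    return percentile(response_times_ms, pct) < threshold_ms

# Illustrative latencies (ms) collected from a test run:
latencies = [120, 180, 210, 260, 310, 340, 420, 480, 530, 900]
print(percentile(latencies, 95))  # 900 (nearest-rank, n=10)
print(meets_slo(latencies))       # False
```

Running tests without failure criteria first, as suggested above, gives you real numbers to plug into a check like this before you commit to a threshold.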
Create a test
To create your first performance test, take the following steps:
- From the mabl home page, click on the New test button in the left-hand navigation.
- Select "Performance test."
- Give your test a name.
With your team, discuss what naming conventions and labels you can use to distinguish practice tests from other tests in your workspace. Consider the examples below:
- Naming conventions: "Walkthrough performance test - Your name"
- Labels: "tutorial", "walkthrough", or "practice" are good labels for distinguishing your practice tests.
- Click on the + API test or + Browser test button to add a functional test. For your first performance test, limit the scope to one or a few tests.
- Select a test from the dropdown.
- Set a concurrency. Concurrency represents the number of virtual users.
- Set the duration. Duration represents how long the performance test runs.
- Select a default application and environment to associate with the performance test.
- Click Save to create the test.
For your first performance test, choose a low concurrency and duration. For example, set a concurrency of five users and a duration of 15 minutes.
As you understand more about how your application performs under load, you can gradually increase the concurrency in subsequent tests.
For your first performance test, we recommend skipping the failure criteria if your team does not have defined performance SLOs. Once you have an understanding of how your application performs and you've aligned with your team on performance SLOs, you can set failure criteria accordingly.
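As rough back-of-the-envelope arithmetic, concurrency and duration together determine how much load a run generates. The sketch below assumes each virtual user loops the functional test back-to-back for the whole duration; the function name and numbers are illustrative only.

```python
# Rough sizing estimate for a first performance test, assuming each
# virtual user loops the functional test back-to-back for the whole
# duration. All names and numbers here are illustrative.
def estimate_iterations(concurrency, duration_min, avg_test_min):
    """Approximate total test iterations generated by one run."""
    return concurrency * int(duration_min / avg_test_min)

# 5 virtual users, 15-minute run, functional test takes ~3 minutes:
print(estimate_iterations(5, 15, 3))  # 25
```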
Setting up a performance test
For more details about performance test setup, check out this guide.
Run the test
On the test details page for the new performance test, take the following steps to run the test:
- Click the Run test button.
- In the ad hoc run panel, select the application associated with your performance test.
- Click Start 1 run.
When the test starts running, its status appears below the Start 1 run button. Click on the status to view the performance test output page.
Triggering an ad hoc run of a performance test
For more details on running performance tests, check out this guide.
Review the output
Test output for performance tests consists of two sections:
- A chart showing performance test metrics across the duration of the test.
- A series of tables with more granular details about each metric.
Performance test output
As you understand more about your application's performance, you can use these metrics to determine failure criteria and identify regressions in performance. For more details on reviewing performance test output, check out this guide.
Next steps
After creating and running your first performance test, share the results with your team to discuss the next best steps. Some ideas for next steps include:
- Making a plan to scale up concurrency and duration to understand when your application starts to fail.
- Working with your team to define performance expectations. Use these expectations to set failure criteria.
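One common way to plan the scale-up is a stepped ramp: repeat the run at increasing concurrency levels until you find where performance degrades. The generator below is an illustrative sketch of such a schedule, not a mabl feature; starting point, growth factor, and ceiling are assumptions to adjust with your team.

```python
# Illustrative sketch: a stepped schedule for scaling up concurrency
# across successive runs until a chosen ceiling is reached.
def ramp_schedule(start=5, factor=2, ceiling=80):
    """Yield concurrency levels, multiplying each run up to the ceiling."""
    level = start
    while level <= ceiling:
        yield level
        level *= factor

print(list(ramp_schedule()))  # [5, 10, 20, 40, 80]
```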
Once you identify the baseline performance expectations for your application, you can start testing as new changes are introduced to the app to detect performance regressions before they make it to production.
For more information on performance testing, check out this overview.