Running API tests

This guide outlines the different ways you can run API tests in mabl and how API tests can be used to augment end-to-end browser tests.

Running API tests

API tests can be triggered from within the API Test Editor, on an ad hoc basis (local and cloud runs), or within a plan.

The API Test Editor

If you want quick feedback on a test that you're working on, click the play button in the top-left corner of the API Test Editor to run the entire test. Running an API test from the API Test Editor is equivalent to a local ad hoc run.


Running a test in the API Test Editor

Ad hoc runs

You can trigger a single API test to run locally or in the cloud by clicking the Run Test button on the Test Details page. See our documentation on ad hoc test runs for more information.

Within a plan

When API tests are added to a plan, they can be run ad hoc, on a schedule, or by a deployment trigger within an existing CI/CD workflow.
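
If your plans use a deployment trigger, the trigger is typically fired by notifying mabl after your pipeline deploys. The following is a minimal sketch, assuming Node 18+ and a mabl API key and environment ID stored in pipeline environment variables; it posts a deployment event to the mabl Deployment Events API using basic auth, but verify the exact field names against the mabl API reference for your workflow.

```typescript
// Minimal sketch: send a deployment event to mabl from a CI/CD step so that
// plans with a deployment trigger (including their API tests) start running.
// Assumes Node 18+ (global fetch) and that MABL_API_KEY / MABL_ENVIRONMENT_ID
// are set in the pipeline environment; check field names against the mabl
// API documentation.
const MABL_API_KEY = process.env.MABL_API_KEY ?? "";
const MABL_ENVIRONMENT_ID = process.env.MABL_ENVIRONMENT_ID ?? "";

async function triggerMablDeployment(): Promise<void> {
  const response = await fetch("https://api.mabl.com/events/deployment", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // mabl API keys are sent via HTTP basic auth with the username "key".
      Authorization:
        "Basic " + Buffer.from(`key:${MABL_API_KEY}`).toString("base64"),
    },
    body: JSON.stringify({ environment_id: MABL_ENVIRONMENT_ID }),
  });

  if (!response.ok) {
    throw new Error(`mabl deployment event failed: ${response.status}`);
  }
  console.log("mabl deployment event created:", await response.json());
}

triggerMablDeployment().catch((error) => {
  console.error(error);
  process.exit(1);
});
```

In a real pipeline this would run as a post-deploy step; mabl then executes the triggered plan, including any API tests it contains, against the newly deployed environment.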

📘

Max runtime duration

An API test can run for a maximum of 30 minutes. If you need an API test to run longer than that, please contact our support team via in-app chat or [email protected] to discuss your options.

Integrating API tests and browser tests into a plan

API tests can be added to a plan to set up the conditions for browser tests and, if needed, restore the environment to its original state after the browser tests execute. Data setup and teardown are good candidates for API tests for two reasons:

  1. They run faster than browser tests.
  2. They handle the parts of testing that aren't central to what you want to validate in the browser.

Setup - test - teardown

By organizing a plan sequentially or with plan stages, you can create a setup-test-teardown model that speeds up the testing process and keeps your browser tests focused on a specific end goal.

For example, if you wanted to validate that an ecommerce app shows the correct information in a customer's order history, you could structure the plan as follows (a sketch of the equivalent requests appears after the list):

  1. Setup: Create one or more orders in an API test.

    • Store data about newly created orders in variables.
    • Enable variable sharing in order to pass data to a browser test.
  2. Test: Train a browser test to validate that the application's order history shows the correct information.

    • Log in.
    • Navigate to the order history page.
    • Assert that the correct information appears on the order history. (While training, you can use test data-driven variables as placeholder values.)
    • Enable variable sharing to pass data to the subsequent API test for teardown.
  3. Teardown: Create an API test that deletes the orders.

    • Assert that the delete requests return a successful status code.
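
In mabl, the setup and teardown tests above are built as request steps and assertions in the API Test Editor rather than written as code. Purely as an illustration, the sketch below shows the equivalent HTTP calls in TypeScript against a hypothetical /orders endpoint of the ecommerce app (the base URL, payload, and token are placeholders, not part of mabl): create an order, keep its ID for the browser test, then delete it and check for a successful status code.

```typescript
// Illustrative sketch of the setup and teardown requests in the order-history
// example. The base URL, /orders endpoint, payload, and token are hypothetical
// placeholders for the application under test; in mabl these calls would be
// API test steps, with the order ID stored in a shared variable.
const BASE_URL = "https://shop.example.com/api";
const headers = {
  "Content-Type": "application/json",
  Authorization: `Bearer ${process.env.SHOP_API_TOKEN ?? ""}`,
};

// Setup: create an order and capture its ID so the browser test can look it
// up on the order history page (passed along via variable sharing in mabl).
async function createOrder(): Promise<string> {
  const response = await fetch(`${BASE_URL}/orders`, {
    method: "POST",
    headers,
    body: JSON.stringify({ items: [{ sku: "SKU-123", quantity: 1 }] }),
  });
  if (!response.ok) {
    throw new Error(`Order creation failed: ${response.status}`);
  }
  const order = await response.json();
  return order.id as string;
}

// Teardown: delete the order and assert a successful status code, mirroring
// the assertion in the teardown API test.
async function deleteOrder(orderId: string): Promise<void> {
  const response = await fetch(`${BASE_URL}/orders/${orderId}`, {
    method: "DELETE",
    headers,
  });
  if (response.status < 200 || response.status >= 300) {
    throw new Error(`Order deletion failed: ${response.status}`);
  }
}

async function main(): Promise<void> {
  const orderId = await createOrder(); // setup
  console.log("Created order:", orderId);
  // ...browser test validates the order history using the shared order ID...
  await deleteOrder(orderId); // teardown
  console.log("Deleted order:", orderId);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```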

📘

Testing additional functionality

Depending on what the setup stage creates, the test stage can also include browser tests that validate additional functionality.

For example, the test phase of this sample plan could include other browser tests that validate functionality related to existing orders, such as applying discount codes to an order or modifying the delivery address.

Learn more

If you'd like to learn more about how you can combine API tests with browser tests to optimize your automated tests, check out the following guides: