In mabl, data-driven testing is the practice of loading data external to your functional tests to parameterize them, so the same test can run across several different inputs.
Common approaches to data-driven testing in mabl include DataTables, shared variables, and environment variables.
If you want to assert that your application or API produces a specific outcome with different sets of data, you can use a DataTable. A DataTable runs the same test multiple times, once per scenario, with a different set of data each time.
For example, if your app is localized to different languages, you could create a unique test to validate page content for each language, but each test would have the same steps and just different data. By associating the test with a DataTable, you can create one test that runs the same steps in each language listed in the DataTable.
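To make the idea concrete, here is a minimal sketch of how a DataTable parameterizes one test across localized scenarios. This is plain Python standing in for mabl's mechanism: the column names, locales, and expected headings are illustrative assumptions, not mabl syntax.

```python
# Stand-in for a mabl DataTable: each row is one scenario.
# Column names and values are made up for illustration.
data_table = [
    {"scenario": "English", "locale": "en", "expected_heading": "Welcome"},
    {"scenario": "German",  "locale": "de", "expected_heading": "Willkommen"},
    {"scenario": "French",  "locale": "fr", "expected_heading": "Bienvenue"},
]

def render_heading(locale):
    # Stand-in for the application under test.
    headings = {"en": "Welcome", "de": "Willkommen", "fr": "Bienvenue"}
    return headings[locale]

def run_localized_test(row):
    # The same steps run for every row; only the data differs.
    return render_heading(row["locale"]) == row["expected_heading"]

results = {row["scenario"]: run_localized_test(row) for row in data_table}
```

The key point is that the test body is written once, and each row of the table supplies a fresh set of inputs and expected values.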
By sharing variables, you can pass variable values from a previous test run to the current test run within a plan run. Both browser and API tests can pass and receive shared variables.
A common application of shared variables is the setup-test-teardown pattern. For more information on this testing approach, see our documentation on integrating API and browser tests into a plan.
Environment variables pass values into a test at runtime based on the environment it runs against.
For instance, if your team uses different login credentials per environment, you can add those credentials as environment variables.
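As a rough sketch of that lookup, the snippet below resolves a set of credentials per environment at runtime. The environment names, variable names, and values are all illustrative assumptions rather than mabl configuration.

```python
# Illustrative per-environment values; in mabl these would be
# environment variables configured on each environment.
environments = {
    "staging":    {"USERNAME": "qa-user",   "PASSWORD": "staging-secret"},
    "production": {"USERNAME": "prod-user", "PASSWORD": "prod-secret"},
}

def resolve_variables(environment):
    # At runtime, the test receives the variables for the chosen environment.
    return environments[environment]

staging_creds = resolve_variables("staging")
```

Because the values are resolved at runtime, the same test logic can log in with the correct credentials wherever it runs.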
Making your tests data-driven has a number of benefits, including:
- Reusability: a single test can be executed multiple times with varying inputs.
- Separation of logic: data-driven tests allow for the clean distinction of test logic from the actual test data.
- Efficiency: you can update data for a test outside of the mabl Trainer or API Test Editor.
- Stronger test coverage: you can continually change the input test data and cover a broad range of input scenarios.