In mabl, you can set up your tests to align with your team’s A/B testing implementation method to ensure test coverage across the different versions of your app. This article outlines some guidelines for handling A/B tests in mabl.
## Understand your team’s implementation method
Before creating tests in mabl, make sure you understand how your team implements A/B testing. The following table lists some common A/B implementation methods and how you can accommodate them in your mabl tests:
| A/B implementation method | Corresponding mabl options |
|---|---|
| Query parameters | Use a “Visit URL” step to navigate the mabl test to the appropriate starting URL: + (Add step) > Visit URL. |
| Browser cookie | Add a set cookie step to the mabl test before navigating to the app URL to ensure it shows the correct variant. |
| Feature flags and APIs | Add a mabl API step to send a request to your feature flag provider and enable a specific feature for the test user’s account or environment. |
| UI-based toggles | Create a flow that navigates to the feature toggle dashboard and turns a given feature “on” or “off” before testing the variant. |
| User accounts | Use login flows with different mabl credentials to access the correct variant. |
| Environment-based variants | Put the variant URLs in separate environments in mabl. |
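For the query-parameter method, the starting URL for each variant can be constructed ahead of time and used in the “Visit URL” step. A minimal sketch in Python; the `variant` parameter name and the example domain are assumptions, not mabl conventions:

```python
from urllib.parse import urlencode, urlparse


def variant_url(base_url: str, variant: str, param: str = "variant") -> str:
    """Append an A/B variant query parameter to a base URL."""
    separator = "&" if urlparse(base_url).query else "?"
    return f"{base_url}{separator}{urlencode({param: variant})}"


# Each URL would be the target of a "Visit URL" step for that variant.
url_a = variant_url("https://shop.example.com/checkout", "a")
url_b = variant_url("https://shop.example.com/checkout", "b")
```

Precomputing the URLs this way keeps the variant selection in one place, so tests only differ in which starting URL they visit.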
## Choose a strategy for testing different variants
After identifying how to accommodate your team’s A/B implementation method in mabl, align with your teammates on a strategy for structuring the test logic. This section identifies some strategies for testing different variants:
### Test variants in the same test
If the divergence between the two variants is small, you can create one mabl test and use conditionals to handle the variant steps.
For example, if your A/B tests cover a small part of an e-commerce checkout experience, you could create a conditional to run the appropriate mabl steps depending on the current variant. Or if you’re testing environment-based variants that require the same interactions, you could configure a plan to run the same test against both environments.
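The single-test strategy amounts to ordinary branching: shared steps run unconditionally, and only the diverging steps sit behind a conditional. The step names and variant labels in this sketch are hypothetical:

```python
def run_checkout_steps(current_variant: str) -> list[str]:
    """Run the shared checkout steps, branching only where variants diverge."""
    steps = ["add item to cart", "open checkout"]  # shared by both variants
    if current_variant == "a":
        steps.append("confirm single-page order form")  # variant A only
    else:
        steps.append("step through multi-page order form")  # variant B only
    steps.append("assert order confirmation")  # shared again
    return steps
```

The smaller the conditional branch, the better this strategy works; once the branches dominate the test, separate tests (below in this article’s sense) are easier to maintain.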
### Test variants in separate tests
If the experience differs significantly between one variant and the next, it may be easier to create separate tests for the different variants.
For example, if your A/B testing is evaluating a new “One-Page Checkout” experience and comparing it to the original “Multi-Step Checkout”, you can create distinct tests for both experiences and use flows for any parts that are the same.
### Test variants with DataTable scenarios
If the difference between variants can be managed through data-driven inputs, you could use a DataTable to run the same test against multiple scenarios. This approach works best when the experience across variants is similar enough to be validated by the same functional test steps.
For example, if you are testing several different promotional banners triggered by query parameters, you can store those parameter values in a DataTable. mabl then runs the same test once for each row in the table to verify that each variant displays and functions correctly, without changing the underlying test logic.
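The data-driven approach boils down to running one parameterized check per row. This sketch models that idea in plain Python; the promo parameter values and banner text are made up for illustration:

```python
# Each row stands in for a DataTable scenario: a query-parameter value
# and the banner text expected for that variant (values are illustrative).
scenarios = [
    {"promo": "spring", "expected_banner": "Spring Sale"},
    {"promo": "summer", "expected_banner": "Summer Sale"},
]


def banner_for(promo: str) -> str:
    """Stand-in for loading the page with the promo parameter
    and reading the rendered banner text."""
    return {"spring": "Spring Sale", "summer": "Summer Sale"}[promo]


for row in scenarios:
    # The same test logic runs once per row; only the data changes.
    assert banner_for(row["promo"]) == row["expected_banner"]
```

Adding a new banner variant then means adding a row, not editing test steps.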
### Validate consistent outcomes with API and database steps
Regardless of which strategy you choose, you can use mabl API steps or database query steps to validate that the final data is correct across all variants.
For example, if you’re testing two different checkout experiences that both result in a $20 purchase, you can add a shared flow containing an API or database step to the end of both tests to validate that the backend record is consistent.
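A backend check of this kind can be sketched as a database query. The `orders` table, its columns, and the in-memory SQLite stand-in below are all assumptions for illustration, not your application's actual schema:

```python
import sqlite3

# In-memory stand-in for the application database (schema is assumed).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, variant TEXT, total_cents INTEGER)"
)
conn.execute(
    "INSERT INTO orders (variant, total_cents) "
    "VALUES ('one_page', 2000), ('multi_step', 2000)"
)

# Regardless of which checkout variant produced the order,
# the recorded total should be the same $20.00 (2000 cents).
totals = {v: t for v, t in conn.execute("SELECT variant, total_cents FROM orders")}
assert all(t == 2000 for t in totals.values())
```

Because the assertion runs against the stored record rather than the UI, the same check works unchanged no matter which variant the test exercised.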