When you create a browser test with Test Creation Agent, the agent uses your prompt to create a test outline and then build out the steps in the Trainer. To get the most out of the training session, follow these guidelines for writing an effective browser test prompt.
Before you write your prompt
Make sure you have the following ready:
- A clear goal - know the outcome you want to validate, not just the steps to get there. For example, “Verify that a new customer can complete checkout” is more helpful than a list of clicks.
- Test data - have any credentials, shipping addresses, SKUs, or other input data ready to include in the prompt.
- The correct starting state - confirm that your application is in the state the test expects. For example: verify the app is logged out if the test starts with a login flow.
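These details often fit naturally into the first lines of the prompt. For example (the app and data here are illustrative):
“Starting from the logged-out home page, verify that a new customer can complete checkout. Use the shipping address 100 Maple St. Boston, MA 02111.”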
Plan your test
Start with intent
Focus on intent and overall outcomes. As your app changes over time, the clicks required to reach the outcome may change, but if mabl knows the underlying intent, it can use auto-heal to intelligently adapt.
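For example, compare a click-by-click script with an intent-focused instruction (both hypothetical):
Before: “Click the cart icon, click the ‘Checkout’ button, then click ‘Continue’ on each checkout page.”
After: “Complete checkout for the item in the cart and verify that the order confirmation page appears.”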
Structure your prompt
For longer or more complex test prompts, organize your instructions into the following sections:
- Preconditions - starting state, such as a direct URL or initial data setup.
- Failure cases - invalid inputs or error states you want to test. Run these before success cases to avoid unintended state changes.
- Success cases - the main workflow, or “happy path.”
- Verification - what the test should assert at the end.
- Cleanup - any reset actions, like logging out, so the app is ready for subsequent tests.
Not every test needs all five sections. A simple login test might only need preconditions, a success case, and a verification. But for tests that cover both valid and invalid inputs, this structure helps the agent execute steps in the right order.
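Put together, a structured prompt for a hypothetical paintbrush checkout test might look like this:
“Preconditions: Start logged out on the store home page and log in with the test credentials.
Failure cases: Attempt to check out with an empty cart and verify that an error message appears.
Success cases: Search for a paintbrush, add the first matching item to the cart, and complete checkout with the shipping address 100 Maple St. Boston, MA 02111.
Verification: Verify the order confirmation page shows the item name and shipping address.
Cleanup: Log out so the app is ready for subsequent tests.”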
Write your instructions
Give clear, discrete instructions
Describe the test case as if you were telling a manual tester how to validate the workflow. If the high-level goal of the test is “Purchase a paintbrush”, include discrete instructions on how to interact with the app:
“Search for a paintbrush using the search bar. Select the first matching item. Verify that the item detail page shows more than one image for the item. Perform the checkout process with the following shipping info: 100 Maple St. Boston, MA 02111.”
Detailed instructions can be especially helpful if the test interacts with a confusing area of your app. For example, if you need the test to navigate to a difficult-to-find page, include additional details on that portion of the test:
“Navigate to the user settings by first clicking on configuration, then clicking settings in the sub menu.”
The agent will automatically import relevant flows. If you want to increase the likelihood that a specific flow gets imported, call it out in the prompt. Just be sure to indicate any prerequisite steps before running the flow:
“Use the flow 'Logout - App' after updating contact information.”
Formatting
Formatting can help the model do a better job of interpreting your prompt. If you have a long list of instructions, we recommend putting each item in the list on a new line instead of using a comma-separated list.
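For example, this hypothetical instruction is easier for the model to follow when each action sits on its own line:
Before: “Search for a paintbrush, select the first result, add it to the cart, and check out.”
After:
“Search for a paintbrush.
Select the first result.
Add it to the cart.
Check out.”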
Describe target elements with visible attributes
The agent uses screenshots as a primary source of context when creating steps. In your prompt, make sure to describe target elements based on their visual appearance or with visible attributes, like text. For example:
“Click on the blue ‘Submit’ button in the bottom right corner of the form.”
The agent does not have access to the page’s HTML. It cannot identify elements based on hidden attributes, such as aria-label or data-testid.
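For example, an instruction like this hypothetical one gives the agent nothing it can see on the screen:
“Click the element with the data-testid ‘submit-btn’.”
Instead, describe the same element by its visible text, color, or position, as in the example above.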
Call out important validations
The agent automatically adds assertions to validate your app is working as expected. If you don’t specify validations, the agent adds assertions based on what the model thinks is important. For example, after adding an item to a cart, the agent typically asserts that the item was successfully added.
If you need to validate something specific on the page, include it in the prompt. For example, if you want the test to validate the format of an order confirmation page, call it out in the prompt:
“Verify the order confirmation includes the customer’s name and the item selected.”
Handle special cases
Explain how to use associated resources
The agent has access to all test variables, including any associated DataTables or credentials. If you want the test to use specific variables, call them out explicitly and explain how you want the agent to use them.
For example, if you associate the test with a DataTable to validate localization, you might include the following explanation in the test prompt:
“Use the DataTable variables to validate page headers: validate the login header with {{@login}} and the user settings header with {{@user_settings}}.”
Indicate any actions that require manual intervention
The agent has a bias towards trying to accomplish tasks without asking for help. If you know that the test requires a specific unsupported step type, make a note of it in the test prompt. For example, if you add “Ask for help performing the file upload” to the prompt, the agent knows when to ask for your input.
When you add steps manually, perform only the requested actions and then restart the agent. Adding unrelated steps can cause unexpected behavior from the agent.
Iterate
Customize prompts for your app
If you notice that the agent makes certain mistakes when interacting with your app, refine your prompt to guide it toward the correct behavior.
Make instructions more specific - if the agent isn’t interacting with the right element, try adding more context to the relevant instruction. A strong instruction includes the action you want the agent to take, the context that helps it find the right element, and optionally a constraint that narrows the expected behavior:
Before: “Click the blue button.”
After: “Click the ‘Submit Application’ button to advance to the confirmation screen.”
Add navigation hints - if the agent fails to navigate to the correct page, give more explicit instructions:
“Always navigate by using the menu. Settings are hidden under the preferences option.”
Constrain the agent’s behavior - if the agent adds steps that you didn’t want, include instructions like “Do not add steps not outlined in the test case.”
Add a notes section for known issues - at the end of your prompt, include a notes section that documents past failure patterns and their workarounds. The agent can reference this information during planning. For example:
“Notes:
The modal on the checkout page won’t close by clicking outside of it. Use the Esc key instead.
The search bar requires pressing Enter to submit; it does not auto-search on input.”