Use the mabl Test Creation Agent to jumpstart test authoring and implement best practices. Read on to learn how to set it up for a browser test:
Define the intent of the test
When you create a new browser test, include a description of what you want the test to accomplish. The Test Creation Agent will use this description to start building out your test.
Providing an intent for a test
For guidelines on how to write an effective test prompt, click here.
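To illustrate, a test prompt is a short, plain-language description of the end-to-end behavior you want covered. The example below is illustrative only; the application, credentials, and expected outcome are assumptions, not part of any real workspace:

```
Log in as a standard user, search for a product, add it to the cart,
and verify that the cart icon shows 1 item.
```

A prompt like this gives the agent a clear goal and a verifiable end state, which helps it build a focused plan.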
Launch the agent
After writing the test prompt, fill out the remaining information on the test creation form. If you want the test to use a specific set of credentials or a DataTable, make sure to add them before you click on Create test.
When the Trainer launches, mabl gathers information about the flows in your workspace and creates a plan for your test. The agent starts generating steps using the following context:
- A screenshot of the app
- The task plan
- Any existing test steps located before the cursor
To prevent the agent from getting into an unexpected state, avoid interacting with the page while the agent is generating tasks and steps.
Editing the prompt
If the plan doesn’t align with your prompt, lacks important details, or includes something you don’t want, pause the agent and edit the test prompt. When you restart the agent, it will update the plan based on the edited prompt and any existing steps located before the cursor.
Note
The agent is biased towards English. If you write the test prompt in a non-English language, the Trainer may still generate tasks and assertions in English.
Viewing agent activity
When the agent is running, it adjusts the plan as it learns more about your application. To review all of the agent’s decisions and actions, click on the clock icon to view agent activity.
Manually train steps as needed
The Test Creation Agent supports most common UI interactions, including filling out forms, selecting from date pickers and dropdowns, and clicking on elements. If the task plan includes unsupported interactions, the agent may prompt you to take over training.
For a complete list of supported and unsupported interactions, click here.
In ambiguous situations, the agent may get off track if it doesn't know enough about your app to make good decisions. If this happens, pause the agent and switch to manual mode to train the necessary tasks and steps yourself.
The Trainer in manual mode
To create a new task header, insert an echo step and add the special symbol "-->" before the text.
Inserting a new task header
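For example, an echo step with the following text creates a task header; the header name is illustrative:

```
--> Log in as a standard user
```

Steps trained after this header are grouped under that task when the agent resumes.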
Restart the agent
After manually training the test, you can restart the agent to continue generating steps.
The agent updates its plan based on the test intent and any existing steps that come before the cursor. When generating next steps, it has no awareness of steps located after the cursor.