Compared to using the mabl app or the CLI, working with the mabl MCP can feel much more open-ended. While effective prompts can automate your workflow and save time, unclear prompts and ambiguous instructions can lead to wasted time or unexpected results. To ensure you get the most out of your interactions with the mabl MCP, follow the steps outlined in this article to craft effective prompts with confidence.
Build a starter prompt
To ensure the AI agent makes decisions that match your goals, start with a prompt that provides context and sets ground rules up front. Here is an example of a starter prompt that works with the mabl MCP:
“I want to create a new UI test in the “Staging” environment for the “Admin Dashboard” application. My role is a QA Engineer. I’m going to use the mabl MCP tool. My goal is to create a test that validates the new user registration flow. How would you test this using an end-to-end tool?
Let me know what other information should be included in the prompt to get the best results.”
And here is another starter prompt that combines the mabl MCP with other MCP servers:
“Let’s create a prompt that I can provide you later on to create test cases in JIRA.
Objective: Plan and generate automated tests for the “Database query creation” feature in the mabl app. Create a Jira TEST issue, and then create executable tests via the mabl MCP agent.
Here’s the information I want you to use when working on this:
- Scope: mabl web app (https://app.mabl.com) in prod environment
- The codebase
- Figma designs using the MCP server for Figma
- Playwright MCP to capture acceptance criteria and negative cases
- mabl MCP to check if there are existing test cases and to create new ones
- Atlassian MCP to create/edit the test cases in JIRA
Let me know what other information should be included in the prompt to get the best results.”
Provide helpful context
Providing context in your starter prompt helps prevent misinterpretation. Depending on what you are doing, context can include:
- The mabl workspace, application, and environment you are working on
- The functionality being tested
- The role you want the AI agent to assume
- The MCP tools you plan to use
- A rough idea of the actions to take using the available tools
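The context items above can be assembled into a reusable starter prompt. Here is a minimal sketch in Python; the field names and template wording are illustrative assumptions, not part of the mabl MCP:

```python
# Sketch: assemble a starter prompt from the context fields listed above.
# The field names and template wording are illustrative assumptions.

STARTER_TEMPLATE = """\
I want to work in the "{environment}" environment for the "{application}" \
application in the "{workspace}" workspace. My role is {role}.
I'm going to use these MCP tools: {tools}.
My goal: {goal}.
Proposed approach: {plan}
Let me know what other information would help you get the best results."""

def build_starter_prompt(**context: str) -> str:
    """Fill the template; raises KeyError if a context field is missing."""
    return STARTER_TEMPLATE.format(**context)

prompt = build_starter_prompt(
    workspace="QA Team",
    application="Admin Dashboard",
    environment="Staging",
    role="a QA Engineer",
    tools="the mabl MCP",
    goal="create a test that validates the new user registration flow",
    plan="draft the test steps first, then create the test via the mabl MCP",
)
print(prompt)
```

Keeping the template in one place makes it easy to tweak the wording as you learn what the agent responds to best.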
Set ground rules
For your first interactions, it’s okay if you don’t have clear ground rules. As you start using the mabl MCP, though, you’ll learn how to optimize your interactions. You can use those insights to set ground rules in future starter prompts.
For example, if you want the AI agent to always ask you for confirmation before carrying out tasks, add that as an instruction in the starter prompt: “Always ask for my confirmation before you create a new test or modify an existing one.”
Or if a certain step requires human intervention, make it explicit: “Pause and ask me to perform the login manually. Do not proceed until I reply ‘Logged in’.”
Manage interactions
After providing a starter prompt, use back-and-forth dialogue to progressively refine your intent.
Instead of:
“Analyze the failure of the ‘checkout flow’ test that ran last night. After you find the root cause, create a new Jira issue in the ‘QA’ project, title it ‘Bug: Checkout flow fails to load’, and include the failure analysis in the description. Then, link the test run results to the new issue.”
Better:
User: “Using the mabl MCP, please analyze the failure of the ‘checkout flow’ test from last night.”
AI: “I’ve analyzed the test failure from the run on July 10th. The error was a broken button on the product page.”
User: “That’s not right. The test I’m talking about ran on July 15th, and it was a failure on the payment page, not the product page. Please analyze the correct test run.”
AI: “My apologies, I see the correct run now. The failure was a timeout on the payment API call. Would you like me to create a new Jira issue for this?”
User: “Yes. Can you use the Atlassian MCP to create an issue in the ‘QA’ project? The title should be ‘Bug: Payment API timeout during checkout’.”
AI: “I have created the issue. Do you want me to add the full failure analysis and a link to the test run results to the description?”
User: “Yes, please do.”
Breaking down your tasks into back-and-forth dialogue helps reduce the chances that the AI will misinterpret your instructions.
Keep the context clear throughout your session
Even if you provide a lot of context in your starter prompt, it’s important to keep the context accurate throughout your working session with the AI agent:
- Manage multiple MCPs: be clear about which MCP you want to handle a given task. Otherwise, your AI agent might guess and choose the wrong one.
- Tell the agent not to cache results: sometimes the agent reuses inputs from the last time it carried out a task, even though the data has changed since then. In this case, be explicit: “Data might have changed since the last time you ran this, so make sure to follow the steps carefully and do not skip any steps.”
Prompt your AI agent for a preview
Instead of directly invoking MCP tools, prompt the agent for a preview before finalizing actions. This practice reduces the likelihood that the MCP uses the wrong arguments or performs tasks before you’re ready:
User: How would you test this using an E2E tool? Give me an intent and a list of steps.
AI: (Evaluates the change and provides a list.)
User: Perfect, let’s create a test with that info. Use the URL generated in the terminal.
For guidelines on writing an effective test prompt for mabl’s Test Creation Agent, see the mabl help documentation.
Optimize interactions over time
As you continue to work with MCPs, you’ll get a better sense of what works and what doesn’t. Use that insight to optimize your experience.
Save successful prompts
When a particular prompt works well, save it for future use. As you learn more about what works, you can tweak your saved prompts to optimize them over time.
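Even a simple local file can serve as a prompt library. Here is a hedged sketch; the file name and JSON schema are illustrative, not a mabl convention:

```python
# Sketch: a tiny local "prompt library" for saving prompts that worked well.
# The file name and schema are illustrative; adapt them to your own workflow.
import json
from pathlib import Path

LIBRARY = Path("prompt_library.json")

def save_prompt(name: str, text: str, notes: str = "") -> None:
    """Add or update a named prompt in the JSON library."""
    data = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    data[name] = {"text": text, "notes": notes}
    LIBRARY.write_text(json.dumps(data, indent=2))

def load_prompt(name: str) -> str:
    """Look up a saved prompt by name."""
    return json.loads(LIBRARY.read_text())[name]["text"]

save_prompt(
    "ui-test-starter",
    'I want to create a new UI test in the "Staging" environment...',
    notes="Worked well when the agent asked for confirmation first.",
)
```

The notes field is useful for recording why a prompt worked, so you can apply the same ground rules elsewhere.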
Debug unsuccessful prompts
If something goes wrong, work with the agent to figure out what happened. In your AI client, you should be able to expand on a specific MCP action to see which arguments and inputs were used.
For example, this screenshot shows how you can expand on a get-mabl-deployment action to see what the input was in GitHub Copilot.
If the argument looks wrong, ask the agent about its decision-making process and how you could avoid the mistake:
- “Why did you pick that input?”
- “What should I have told you to avoid this?”
As you learn how the agent makes decisions and what it needs, you can develop ground rules to incorporate into future interactions.
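Once you can see the raw arguments an MCP action received, a quick diff against what you intended makes the mistake obvious. Here is a hypothetical sketch; the argument fields below are illustrative, not the mabl MCP’s actual schema:

```python
# Sketch: compare the arguments an MCP action actually received with the
# arguments you intended. The argument fields here are hypothetical.

def diff_arguments(intended: dict, actual: dict) -> dict:
    """Return {field: (intended, actual)} for every mismatched field."""
    keys = intended.keys() | actual.keys()
    return {
        k: (intended.get(k), actual.get(k))
        for k in keys
        if intended.get(k) != actual.get(k)
    }

intended = {"environment": "Staging", "application": "Admin Dashboard"}
actual = {"environment": "Production", "application": "Admin Dashboard"}

for field, (want, got) in diff_arguments(intended, actual).items():
    print(f"{field}: intended {want!r}, agent used {got!r}")
```

A mismatch like the one above is a good candidate for a new ground rule, such as “Always confirm the target environment with me before calling a mabl tool.”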