To cover previously unsupported use cases, we’re happy to introduce a new option for training mobile tests: visual find!
If you’ve ever tried to create an automated mobile test, you probably know how difficult it can be to rely on text-based selectors alone for certain UI components. Instead of relying on text-based attributes in the page source, visual find leverages GenAI to locate the correct element based on its visual characteristics.
Currently supported for mobile tap steps, visual find is particularly helpful for interacting with pixel-based content that is not exposed in the page source, such as image and canvas elements.
Try it out
Make sure to update the mabl Desktop App to version 2.25.0 or later.
After launching your mobile application in the mabl Trainer, get your application into the desired state and add a tap step: + (Add step) > Tap. Drag your mouse over the area you want to target. mabl sends a screenshot of the target area to the model, which returns an AI-generated description of that area.
Modify the step as needed and click Save when you’re satisfied with the step.
On execution, mabl sends the GenAI description and a run-time screenshot back to the model, which returns bounding boxes for matching elements. To perform the tap, mabl uses the x-y coordinates within the bounding box that were specified during training.
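To make the coordinate step concrete, here is a minimal sketch of how a tap point could be derived from a run-time bounding box. This is purely illustrative and not mabl’s actual implementation: `BoundingBox`, `tap_point`, and the idea of storing the training-time tap as a relative offset within the box are all assumptions for the example.

```python
from dataclasses import dataclass


@dataclass
class BoundingBox:
    """Hypothetical pixel-space box returned by the model at run time."""
    left: int
    top: int
    width: int
    height: int


def tap_point(box: BoundingBox, rel_x: float = 0.5, rel_y: float = 0.5) -> tuple[int, int]:
    """Map a relative offset (e.g. recorded during training) onto the
    run-time bounding box to get absolute tap coordinates.

    rel_x/rel_y are fractions of the box's width/height; the default
    (0.5, 0.5) taps the center of the box.
    """
    return (
        round(box.left + rel_x * box.width),
        round(box.top + rel_y * box.height),
    )


# Example: a 100x50 box at (10, 20); tapping its center yields (60, 45).
print(tap_point(BoundingBox(left=10, top=20, width=100, height=50)))
```

Because the offset is relative, the same step keeps working even if the element moves or resizes between runs, as long as the model returns a correct box.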
For a more detailed walkthrough on how to create tap steps with visual find, check out the help doc.
Visual Assist
Visual find is the first release in Visual Assist, a new suite of capabilities that uses generative AI to understand what UI elements look like, expanding supported use cases and improving test reliability during execution.
Stay tuned for more exciting Visual Assist releases coming soon!