This is a public beta feature available to everyone.
What is ML Tap?
ML Tap uses machine learning to analyze a screenshot of your application, detect the elements on screen, and determine which element you tapped during recording. While editing, it highlights every detected element in a box, so you can see the breakdown of elements found on the page and easily select a different element for the test execution engine to tap.
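Conceptually, resolving a tap works like this: the detector returns a set of bounding boxes, and the recorder picks the smallest box that contains the tap coordinate. The sketch below illustrates that idea only; the `Element` structure and `element_at` function are illustrative assumptions, not the product's actual implementation.

```python
# Hypothetical sketch of tap-to-element resolution: given bounding boxes
# from an ML element detector, pick the smallest box containing the tap.
# All names here are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Element:
    label: str
    x: float       # left edge of the bounding box
    y: float       # top edge of the bounding box
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

    def area(self) -> float:
        return self.width * self.height

def element_at(elements, px, py):
    """Return the smallest detected element containing the tap point."""
    hits = [e for e in elements if e.contains(px, py)]
    return min(hits, key=Element.area) if hits else None

# A tap at (50, 30) lands in both boxes; the smaller one wins.
elements = [
    Element("card", 0, 0, 300, 200),
    Element("title text", 20, 20, 200, 40),
]
tapped = element_at(elements, 50, 30)
```

Preferring the smallest containing box is what makes selecting a nested element (such as text inside a card) behave the way a user expects.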
Why use ML Tap?
ML Tap is much faster than parsing the page source to determine which element was selected. It is particularly useful for selecting text, images, and clearly delineated blocks of elements.
ML Tap may be less effective for small symbols or logos that are not unique on the page. If ML Tap cannot consistently find an element in a test, it may help to select something easier to recognize, for example text instead of an arrow.
How do I use ML Tap?
The default setting for your workspace
Each new Scenario or Step Group you record uses either ML Tap or Page Source. The default for your workspace can be found on the workspace settings page; if ML Tap is set to On, it becomes the default option for all new recordings.
Setting the ML Tap option for each scenario
In addition, you can set the option directly when you start recording a new scenario or step group by using the ML Settings button on the recording settings page. On the ML Settings page, you can then turn ML Tap on or off for that recording.
Scenarios recorded in ML Tap format are labeled "MLUI" on screen.