Best practices for creating browser tests

Setting up your browser tests for success

This guide outlines best practices for making your browser tests more effective and easier to manage, including:

  • Have a clear intention for the test
  • Optimize the test feedback cycle
  • Facilitate collaboration

Have a clear intention for the test

When testing an application, you may have to validate multiple user journeys and outcomes for different features. Before you begin creating a browser test, think about the end goal, using these questions as a guide:

  • What user journey are you trying to validate?
  • What actions do you need to take to validate the outcome of this user journey?
  • How can you assert that each action works as expected?

👍

By focusing on one specific user journey for a single browser test, you can start identifying the actions required to validate that outcome.

If you want to validate that a user can complete a web form over multiple logged-in sessions, you might decide that these are the actions required to validate the correct outcome:

  1. Log in to the application
  2. Complete half of the form
  3. Log out
  4. Log in to the application with the same credentials
  5. Finish the remainder of the form
  6. Submit the form

After identifying the actions required to validate the outcome, think about how you will assert that each action works as expected.

👍

Assertions are essential for confirming that specific functionality works as expected. For more information on assertion best practices, see this guide.

For example, to assert the action "Log into the application", you might record the following steps:

  • Enter username
  • Enter password
  • Click submit
  • Assert that a "Logout" button is present on the page

By working backwards from the user journey and identifying the actions required to validate the outcomes, you can ensure that your test validates the end goal. As an added benefit, by avoiding unnecessary details, you can make your tests easier to interpret and troubleshoot.

Optimize the test feedback cycle

Reducing the time it takes to get feedback on tests helps your team move faster. Some steps you can take to speed up the test feedback cycle include:

  • Running tests in parallel
  • Reducing test run time

Running tests in parallel

If a plan doesn't require a specific, sequential order of execution, then you can speed up the feedback cycle by running tests in parallel. If the environment under test has limited underlying resources, you can set concurrency limits to fine-tune the number of tests that run in parallel.

📘

For more information on parallel test runs and managing concurrency, see our guide on plan stage settings.
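As a rough illustration of a concurrency limit, the sketch below runs eight independent "tests" but allows at most three at a time, using Python's standard library (the `run_test` function is a placeholder for a real browser test run):

```python
# Sketch: running independent tests in parallel with a concurrency limit.
# max_workers caps how many tests hit the environment at once (here, 3 of 8).
from concurrent.futures import ThreadPoolExecutor
import time

def run_test(name):
    time.sleep(0.1)  # stand-in for a real browser test run
    return f"{name}: passed"

tests = [f"test_{i}" for i in range(8)]

with ThreadPoolExecutor(max_workers=3) as pool:  # concurrency limit of 3
    results = list(pool.map(run_test, tests))

print(results)
```

Raising `max_workers` shortens the feedback cycle; lowering it protects an environment with limited resources.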

Reducing test run time

The ideal size for a browser test depends on the end goal that you want to validate, which means that there is no one-size-fits-all approach to testing. Nevertheless, when a test contains hundreds of test steps, a few obstacles emerge:

  1. Maintenance burdens: If a browser test fails on step 142, you may have to spend more time getting your application in the correct state for troubleshooting.
  2. Longer feedback cycles: Longer tests slow down the feedback cycle in your development pipeline. Plans that execute tests in parallel are only as efficient as the test with the longest run time.

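The second point can be illustrated with a quick calculation: when every test in a plan runs in parallel, the plan's wall-clock time is simply its longest test, so splitting one long test shortens the whole cycle (the durations below are made up for illustration):

```python
# Sketch: with full parallelism, a plan's run time equals its longest test.

def plan_run_time(test_minutes):
    """Wall-clock time when all tests run in parallel."""
    return max(test_minutes)

before = [12, 3, 2]      # one long test dominates the plan
after = [4, 4, 4, 3, 2]  # same coverage, long test split into three

print(plan_run_time(before), "->", plan_run_time(after))  # 12 -> 4
```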
If a browser test takes a long time to run and troubleshoot, consider one of the following approaches to reduce test run time:

  1. Break the test into multiple tests: If you can accomplish the testing end goal over multiple user sessions, consider breaking up a single test into multiple smaller tests.
  2. Modify long-running test steps: If a test contains steps that consistently take a long time to execute, take a closer look at those individual steps; the options outlined in Optimizing test performance can help decrease their run time.

Facilitate collaboration

Browser tests are most effective when they are easy to understand. To that end, you should take steps to facilitate collaboration in your browser tests, including:

  • Establishing naming conventions
  • Adding a test description
  • Optimizing flows
  • Using echo steps
  • Renaming steps

Establishing naming conventions

Work with your team to identify the ideal format and terms to use for naming your browser tests. For example, you could name tests using the following format: "Application - Module - Test."

👍

By adhering to an agreed-on set of naming conventions, you can ensure that all members of your workspace can find tests and understand what those tests do.

Adding a test description

The browser test creation form includes an optional field for "test description." Adding an explanation of the test helps other users in the workspace understand what is being tested without reading through the individual steps.

Optimizing flows

Flows are a great way to encapsulate a common series of test steps for others to reuse. To get the maximum benefit from flows, make a plan with your team that addresses the following questions:

  • If someone creates a flow, how does the rest of the team know about it?
  • How can we make flows discoverable?

By establishing best practices, such as naming conventions and a review process, you can ensure that the flows in your workspace are easier to find and understand.

Optimizing your flows can reduce overall test maintenance and prevent team members from making several nearly identical flows to accomplish the same task.

Using echo steps

Add echo steps to clarify the intent behind steps and flows or to visually break up the test into smaller parts for improved readability.

Renaming steps

To clarify what a step is trying to accomplish, rename it: double-click the step in the Trainer window and give it a more descriptive name.