Selenium Automation in Practice: Reliable UI Tests with Page Objects, Explicit Waits, and Test Data

End-to-end (E2E) UI tests are often the first thing teams abandon once the tests get flaky. The good news: most “Selenium is flaky” pain comes from a few fixable causes—timing issues, brittle selectors, and tests that try to do too much.

In this hands-on guide, you’ll build a small Selenium test setup in Python that’s designed to be stable:

  • Use explicit waits instead of time.sleep()
  • Use Page Object classes so tests read like user journeys
  • Prefer resilient selectors (data-testid, roles, stable attributes)
  • Keep test data predictable and isolate state

What you’ll build

We’ll automate a simple login flow and a “create item” flow for a fictional web app:

  • Navigate to login
  • Sign in
  • Create a new item in a list
  • Verify it appears

You can adapt this structure to any app with minimal changes.

Setup: install Selenium and a browser driver

We’ll use:

  • selenium for browser automation
  • pytest as the test runner
  • webdriver-manager to download/configure ChromeDriver automatically

pip install selenium pytest webdriver-manager

(Selenium 4.6+ also ships with Selenium Manager, which can resolve drivers on its own; webdriver-manager still works and keeps the example explicit.)

Create a basic project layout:

tests/
  __init__.py          # makes "tests" importable as a package
  pages/
    __init__.py
    base_page.py
    login_page.py
    items_page.py
  test_e2e_items.py
conftest.py

Rule #1 for stability: Explicit waits (never sleep)

UI tests fail when they click something before it’s ready. Replace time.sleep() with explicit waits that watch for conditions like “element is clickable” or “text is visible”.
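Conceptually, an explicit wait is just a polling loop. Here’s a simplified, browser-free sketch of what WebDriverWait does (my own illustration, not Selenium’s actual source):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value, else raise after `timeout`."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(poll)
```

On top of this loop, WebDriverWait also swallows a configurable set of ignored exceptions (NoSuchElementException by default), which is why it tolerates elements that don’t exist yet while a page renders.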

Let’s implement a BasePage with helper methods that always use explicit waits.

# tests/pages/base_page.py
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

DEFAULT_TIMEOUT = 10


class BasePage:
    def __init__(self, driver, base_url):
        self.driver = driver
        self.base_url = base_url

    def open(self, path: str):
        self.driver.get(self.base_url.rstrip("/") + "/" + path.lstrip("/"))

    def wait_for_visible(self, by, value: str, timeout: int = DEFAULT_TIMEOUT):
        return WebDriverWait(self.driver, timeout).until(
            EC.visibility_of_element_located((by, value))
        )

    def wait_for_clickable(self, by, value: str, timeout: int = DEFAULT_TIMEOUT):
        return WebDriverWait(self.driver, timeout).until(
            EC.element_to_be_clickable((by, value))
        )

    def click(self, by, value: str):
        el = self.wait_for_clickable(by, value)
        el.click()

    def type(self, by, value: str, text: str, clear: bool = True):
        el = self.wait_for_visible(by, value)
        if clear:
            el.clear()
        el.send_keys(text)

    def text_of(self, by, value: str) -> str:
        return self.wait_for_visible(by, value).text
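One small detail in BasePage worth calling out: open() normalizes slashes so any combination of trailing/leading slashes composes into a clean URL. The same logic in isolation:

```python
def join_url(base: str, path: str) -> str:
    # Same normalization BasePage.open() performs before calling driver.get()
    return base.rstrip("/") + "/" + path.lstrip("/")

print(join_url("http://localhost:3000/", "/login"))  # http://localhost:3000/login
```

This lets BASE_URL come from an environment variable with or without a trailing slash and still produce valid navigation targets.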

Choose resilient selectors: add data-testid in your app

If you can influence the app code, add stable test hooks like data-testid. They keep tests from breaking when CSS classes or layout change.

Example HTML in your app (recommended pattern):

<input data-testid="email" />
<input data-testid="password" type="password" />
<button data-testid="login-submit">Sign in</button>

In Selenium, you can target these with CSS selectors like [data-testid="email"].

Implement Page Objects (so tests stay readable)

A Page Object wraps page interactions behind a clean API. This reduces duplication and makes selector changes localized.

Login page object

# tests/pages/login_page.py
from selenium.webdriver.common.by import By

from .base_page import BasePage


class LoginPage(BasePage):
    EMAIL = (By.CSS_SELECTOR, '[data-testid="email"]')
    PASSWORD = (By.CSS_SELECTOR, '[data-testid="password"]')
    SUBMIT = (By.CSS_SELECTOR, '[data-testid="login-submit"]')
    ERROR = (By.CSS_SELECTOR, '[data-testid="login-error"]')

    def go(self):
        self.open("/login")

    def login_as(self, email: str, password: str):
        self.type(*self.EMAIL, text=email)
        self.type(*self.PASSWORD, text=password)
        self.click(*self.SUBMIT)

    def error_message(self) -> str:
        return self.text_of(*self.ERROR)

Items page object

After login, suppose you land on /items where you can create an item.

# tests/pages/items_page.py
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

from .base_page import BasePage, DEFAULT_TIMEOUT


class ItemsPage(BasePage):
    NEW_BTN = (By.CSS_SELECTOR, '[data-testid="new-item"]')
    NAME_INPUT = (By.CSS_SELECTOR, '[data-testid="item-name"]')
    SAVE_BTN = (By.CSS_SELECTOR, '[data-testid="save-item"]')
    TOAST = (By.CSS_SELECTOR, '[data-testid="toast"]')
    LIST = (By.CSS_SELECTOR, '[data-testid="items-list"]')

    def go(self):
        self.open("/items")

    def create_item(self, name: str):
        self.click(*self.NEW_BTN)
        self.type(*self.NAME_INPUT, text=name)
        self.click(*self.SAVE_BTN)
        # Wait until a success toast appears (or any "saved" indicator)
        WebDriverWait(self.driver, DEFAULT_TIMEOUT).until(
            EC.visibility_of_element_located(self.TOAST)
        )

    def item_is_listed(self, name: str) -> bool:
        # Wait for the list container to be visible
        container = self.wait_for_visible(*self.LIST)
        return name in container.text

Wire up pytest: one browser instance per test (or per session)

A clean baseline is “fresh browser per test” for isolation. It’s a bit slower but dramatically reduces shared-state bugs. Later you can optimize to “per session” once stable.

# conftest.py
import os

import pytest
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager


@pytest.fixture
def base_url():
    # Change this to your dev/staging URL
    return os.getenv("BASE_URL", "http://localhost:3000")


@pytest.fixture
def driver():
    options = Options()
    # Headless is great for CI. For local debugging, comment it out.
    options.add_argument("--headless=new")
    options.add_argument("--window-size=1280,900")
    options.add_argument("--disable-gpu")
    options.add_argument("--no-sandbox")

    service = Service(ChromeDriverManager().install())
    drv = webdriver.Chrome(service=service, options=options)

    # Implicit waits often mask issues; prefer explicit waits only.
    drv.implicitly_wait(0)

    yield drv
    drv.quit()

Write the test: small, focused, deterministic

Two key practices here:

  • Keep each test to one “story” (login + create item)
  • Use unique test data so reruns don’t collide
# tests/test_e2e_items.py
import time

from tests.pages.items_page import ItemsPage
from tests.pages.login_page import LoginPage


def unique_name(prefix="item"):
    # Simple uniqueness without extra dependencies
    return f"{prefix}-{int(time.time() * 1000)}"


def test_user_can_create_item(driver, base_url):
    login = LoginPage(driver, base_url)
    items = ItemsPage(driver, base_url)

    login.go()
    login.login_as("[email protected]", "password123")

    items.go()
    name = unique_name()
    items.create_item(name)

    assert items.item_is_listed(name)
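One caveat: a millisecond timestamp can collide when tests run in parallel (two workers can hit the same millisecond). If you use pytest-xdist or similar, a uuid-based variant of the same helper is collision-proof and still standard library only:

```python
import uuid

def unique_name(prefix: str = "item") -> str:
    # uuid4 hex is unique even across parallel test workers
    return f"{prefix}-{uuid.uuid4().hex[:8]}"
```

The rest of the test stays identical; only the uniqueness strategy changes.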

Debugging flakiness: the top 5 fixes

  • Replace brittle selectors. If you’re using long CSS paths or dynamic classes, expect pain. Use data-testid, stable IDs, or accessible roles where possible.

  • Wait on the right thing. Don’t wait for “presence” when you need “clickable”. Prefer conditions like element_to_be_clickable and visibility_of_element_located.

  • Stop testing animations. If a modal slides in, wait for the field inside it to be visible/clickable, not for arbitrary timeouts.

  • Capture artifacts on failure. Screenshot + HTML source is often enough to diagnose.

  • Make tests independent. If test A creates state and test B assumes it exists, you’ll get random failures depending on order.

Bonus: capture screenshots and HTML on failure

This is a small addition that pays off immediately in CI logs. Add a pytest hook to save artifacts when a test fails.

# conftest.py (append this)
import pathlib
from datetime import datetime


def pytest_runtest_makereport(item, call):
    if call.when != "call":
        return
    failed = call.excinfo is not None
    if not failed:
        return
    driver = item.funcargs.get("driver")
    if driver is None:
        return

    artifacts = pathlib.Path("artifacts")
    artifacts.mkdir(exist_ok=True)

    ts = datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    name = item.name.replace("/", "_")
    screenshot_path = artifacts / f"{name}-{ts}.png"
    html_path = artifacts / f"{name}-{ts}.html"

    driver.save_screenshot(str(screenshot_path))
    with open(html_path, "w", encoding="utf-8") as f:
        f.write(driver.page_source)

Now when something fails, you can open artifacts/*.png and artifacts/*.html to see exactly what the browser saw.

Make it CI-friendly: headless + consistent environment

Common CI tips:

  • Run in headless mode (--headless=new)
  • Set a consistent viewport size (--window-size=1280,900)
  • Keep timeouts reasonable (start with 10s, increase for slow environments)
  • Use a dedicated test user and reset the database (or use isolated test environments)

If you can, run UI tests against a stable environment (staging) and avoid pointing them at a local dev server with hot reload and variable performance.

A practical checklist for stable Selenium tests

  • data-testid or other stable selectors for key elements
  • No time.sleep(); use explicit waits for visible/clickable conditions
  • Page Objects for maintainability
  • Unique test data (or cleanup) to avoid collisions
  • Screenshots + HTML artifacts on failure
  • One clear “story” per test

If you adopt just those habits, Selenium stops feeling random and starts behaving like an honest test runner that tells you when the UI actually broke.

