rcastaneda-dev/playwright-assessment
PW_Assessment_260122

Practical test assessment for an Automation QA Engineer role, implemented using:

  • Playwright
  • TypeScript
  • Node

Following these patterns, among others:

  • Page Object Model (POM)
  • Factory pattern
  • Playwright fixtures
  • Arrange-Act-Assert (AAA)
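To illustrate the POM idea, here is a minimal, framework-free sketch. `LoginPage`, its selectors, and the `PageLike` stand-in for Playwright's `Page` are hypothetical names for illustration only; the real page objects live in this repo's source.

```typescript
// A tiny stand-in for Playwright's Page keeps the sketch self-contained
// (in the real suite, the page object would receive an actual Page).
interface PageLike {
  goto(url: string): void;
  fill(selector: string, value: string): void;
  click(selector: string): void;
}

// The page object owns its selectors and exposes intent-level actions,
// so tests read as behavior ("sign in") rather than raw DOM interaction.
class LoginPage {
  constructor(private readonly page: PageLike) {}

  open(): void {
    this.page.goto("/login");
  }

  signIn(username: string, password: string): void {
    this.page.fill("#username", username);
    this.page.fill("#password", password);
    this.page.click("#submit");
  }
}
```

A test would then follow Arrange-Act-Assert: arrange the page object, act via `open()` and `signIn()`, and assert on the resulting application state.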

For code quality and formatting:

  • ESLint
  • Prettier

npm scripts are configured to run tests using Playwright's filtering capabilities.

Getting Started

1. Clone the Repository

> git clone https://github.com/yourusername/PW_Assessment_260122.git
> cd PW_Assessment_260122

2. Install Dependencies

Run the following commands at the root level of the project:

npm install
npx playwright install

Note: npx playwright install downloads the necessary browser binaries for Playwright.

Environment Variables

Required: The following environment variables must be configured before running tests:

| Variable | Description | Example |
| --- | --- | --- |
| BASE_URL | Base URL of the application under test (no trailing slash) | https://my-testing-website.s3.us-east-1.amazonaws.com |
| DEMO_USERNAME | Valid username for authentication tests | myownadminuser |
| DEMO_PASSWORD | Password for the demo user | myownadminuser987 |

Setup Steps

  1. Copy the .env.example file to create your local .env file:

    cp .env.example .env
  2. Update the .env file with your actual values:

    BASE_URL=https://your-app-url.com
    DEMO_USERNAME=your-username
    DEMO_PASSWORD=your-password

Note: Tests will fail with a clear error message if these variables are not set.
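A fail-fast check like the one described above might look as follows. This is a hedged sketch only: the variable names come from the table above, but `requireEnv` and the `env` accessor are hypothetical helpers, not necessarily what this repo implements.

```typescript
// Hypothetical helper: read a required env var or fail with a clear message.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(
      `Missing required environment variable: ${name}. ` +
        `Copy .env.example to .env and fill in the values.`
    );
  }
  return value;
}

// Resolve all required configuration up front, before any test runs.
const env = () => ({
  baseUrl: requireEnv("BASE_URL"),
  username: requireEnv("DEMO_USERNAME"),
  password: requireEnv("DEMO_PASSWORD"),
});
```

Calling `env()` at suite startup surfaces a missing variable immediately, instead of letting tests fail later with an opaque navigation or login error.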

How to run the tests

  1. Run all tests: npm run test

  2. Run all tests in UI mode: npm run test:ui

  3. Run the user registration test: npm run test:user-registration

  4. Run authentication tests (positive and negative): npm run test:auth

  5. Run product management tests (CRUD): npm run test:product-management

  6. Run all tests and generate the report: npm run test:report
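The script names above map to package.json entries. A sketch of what that section might look like, assuming tests are tagged and filtered via Playwright's `--grep` flag (the exact tag names and flags here are assumptions, not copied from the repo):

```json
{
  "scripts": {
    "test": "playwright test",
    "test:ui": "playwright test --ui",
    "test:user-registration": "playwright test --grep @user-registration",
    "test:auth": "playwright test --grep @auth",
    "test:product-management": "playwright test --grep @product-management",
    "test:report": "playwright test --reporter=html"
  }
}
```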

Trade-offs and implementation decisions

  • I continued using the provided credentials to avoid generating excessive test data on every execution. Additionally, there were no specific requirements regarding which credentials should be used per test. This can be improved by introducing a setup fixture that handles authentication and persists the session (cookies/token) into a JSON file, for example:

        await context.storageState({ path: user01AuthFile });

    This state can then be reused across tests using:

        test.use({ storageState: ".auth/user01.json" });

  • The Product Page POM was split into three separate files. This decision was made with scalability in mind, assuming that grid and filter components could be reused across multiple pages. This approach keeps selectors focused on their specific responsibilities (e.g., adding products, handling filters, or interacting with the items grid).

  • Implemented the Factory Pattern for test data generation.

  • Introduced the Fishery package to enhance and standardize factory-based test data creation.

  • Added Faker to generate more realistic and varied test data.
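The Factory pattern mentioned above can be sketched without the actual Fishery and Faker dependencies. The `Product` shape, its fields, and `productFactory` are hypothetical illustrations of the technique; the repo's real factories use `Factory.define` from Fishery with Faker-generated values.

```typescript
// Hypothetical product shape for illustration.
interface Product {
  name: string;
  price: number;
  sku: string;
}

// Sequence counter mimics Fishery's built-in sequence for unique values.
let sequence = 0;

// A factory builds a valid object with sensible defaults, letting each
// test override only the fields it actually cares about.
function productFactory(overrides: Partial<Product> = {}): Product {
  sequence += 1;
  return {
    name: `Product ${sequence}`,
    price: 9.99,
    sku: `SKU-${String(sequence).padStart(4, "0")}`,
    ...overrides,
  };
}
```

A test that only cares about price can call `productFactory({ price: 20 })` and still get a unique, fully-formed product, which keeps test intent visible and data setup out of the test body.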

Enhancements

  • Run tests on CI via GitHub Actions (secrets are already defined at the repository level); a runner matching the ubuntu-latest label is required
  • Add visual tests with snapshots to catch visual regressions
  • Make the sign-in process reusable as a fixture
  • Add test coverage to catch bugs around field validations and/or required fields
