frontend.support

AI-powered UI/UX testing across browsers and resolutions

frontend-support is a command-line tool that automates visual regression testing and UI/UX analysis by capturing screenshots across multiple browsers and resolutions, then using OpenAI's vision models to provide structured feedback on layout, responsiveness, and cross-browser compatibility.

What is frontend-support?

frontend-support helps developers and designers verify that their web applications render correctly and consistently across devices and browsers by:

  1. Automated Multi-Browser Capture: Takes screenshots of your web pages in Chromium, Firefox, and WebKit at configurable resolutions
  2. Time-Sliced Loading Analysis: Captures screenshots at multiple stages during page load to identify loading issues and perceived performance problems
  3. AI-Powered Analysis: Uses OpenAI vision models (GPT-5) to analyze screenshots and provide structured feedback on UI/UX quality
  4. Visual Canvas Composition: Creates side-by-side comparison images for easy visual review
  5. Cross-Browser Compatibility: Automatically identifies and explains visual differences between browsers

How It Works

Input

Configure three JSON files:

  • targets.json: URLs to test, authentication cookies, and optional analysis requirements
  • matrix.json: Browser/resolution matrix and capture timing settings
  • openai.json: AI model configuration (reasoning level, verbosity, etc.)

Process

  1. Launch browsers with specified viewports
  2. Navigate to each URL and capture screenshots at timed intervals
  3. Optionally capture full-page screenshots for comprehensive analysis
  4. Compose visual canvases showing progressive loading and cross-resolution views
  5. Send screenshots and canvases to OpenAI for structured analysis
  6. Generate JSON reports with actionable UI/UX feedback
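The timing in step 2 follows from stage_offsets_seconds: each offset is measured from navigation start, so the successive waits between captures are the differences between the sorted, clamped offsets. A minimal sketch of that arithmetic (the helper name is illustrative, not part of the tool):

```python
def stage_sleeps(stage_offsets, max_time):
    """Convert absolute capture offsets (seconds after navigation)
    into successive sleep durations, clamped to max_time."""
    offsets = sorted(min(float(o), float(max_time)) for o in stage_offsets)
    sleeps, prev = [], 0.0
    for o in offsets:
        sleeps.append(o - prev)  # wait only the remaining gap to the next stage
        prev = o
    return sleeps

stage_sleeps([1, 5, 15], 15)  # -> [1.0, 4.0, 10.0]
```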

Output

Organized directory structure:

outputs/{project}/{date}/{run_id}/
├── screens/          # Raw screenshots
├── canvases/         # Composed comparison images
├── reports/          # JSON analysis reports
├── logs/             # Execution logs
└── manifest.json     # Complete run metadata
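The run directory is keyed by project, UTC date, and a timestamp-derived run ID. A sketch of how such a path could be built (the helper name and base path are illustrative, not the tool's internals):

```python
from datetime import datetime, timezone
from pathlib import Path

def run_output_dir(base, project, now=None):
    """Build outputs/{project}/{date}/{run_id}/ from a UTC timestamp."""
    now = now or datetime.now(timezone.utc)
    run_id = now.strftime("%Y%m%d_%H%M%S")  # e.g. 20251006_220227
    return Path(base) / project / now.strftime("%Y-%m-%d") / run_id

run_output_dir("outputs", "my-app",
               datetime(2025, 10, 6, 22, 2, 27, tzinfo=timezone.utc))
# -> outputs/my-app/2025-10-06/20251006_220227
```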

Key Features

🎯 Comprehensive Testing

  • Multi-Resolution Testing: Test desktop, tablet, and mobile layouts simultaneously
  • Loading State Capture: See how your page looks at different loading stages
  • Full-Page Screenshots: Capture entire scrollable pages (separately sent to AI, not in canvases)
  • Authenticated Sessions: Support for cookie-based authentication

🤖 AI-Powered Insights

  • Structured JSON Output: Get machine-readable feedback matching your defined schemas
  • UI/UX Analysis: Detailed feedback on layout, spacing, typography, colors, hierarchy, and accessibility
  • Compatibility Analysis: Cross-browser visual diff with likely causes and minimal fix recommendations
  • Custom Analysis Requirements: Specify URL-specific considerations to guide the AI's focus
  • Code Context Integration: Provide code/design context for more relevant suggestions

🔧 Developer-Friendly

  • Config-Driven: All settings in version-controllable JSON files
  • Flexible Execution: Run primary only, secondary only, or both phases
  • Dry-Run Mode: Capture screenshots without spending API credits
  • Analyze-Only Mode: Re-run analysis on existing screenshots (test prompts, try different models)
  • Detailed Logging: Console and file logs with UTC timestamps for debugging

📊 Production-Ready

  • Robust Error Handling: Graceful degradation with comprehensive error tracking
  • Self-Describing Filenames: All artifacts include metadata in their names
  • Manifest System: Complete reproducibility with run manifests
  • Concurrency Control: Configurable parallel capture to manage system resources

Installation

Prerequisites

  • Python and Poetry for dependency management
  • An OpenAI API key (needed for analysis; --dry-run captures work without one)
  • Playwright browser binaries (installed in step 3 below)

Setup

  1. Clone the repository

    git clone https://github.com/yourusername/frontend-support.git
    cd frontend-support
  2. Install dependencies

    poetry install
  3. Install Playwright browsers

    poetry run playwright install --with-deps

    Note: On some Linux systems, --with-deps may prompt for elevated privileges to install system dependencies for WebKit. If this fails, try poetry run playwright install chromium firefox webkit and consult Playwright's documentation for OS-specific requirements.

  4. Configure your API key

    export OPENAI_API_KEY='sk-your-key-here'

    Or create a .env file:

    echo "OPENAI_API_KEY=sk-your-key-here" > .env
    set -a; source .env; set +a

Configuration

frontend-support uses three main configuration files. Examples are provided in configs/examples/.

1. targets.json - What to Test

Define your projects and URLs:

{
  "projects": [
    {
      "project": "my-app",
      "urls": [
        { 
          "url": "https://example.com/",
          "additional_considerations": "Ensure hero section is responsive and CTA buttons are visible on all devices."
        },
        { 
          "url": "https://example.com/dashboard", 
          "alias": "dashboard",
          "additional_considerations": "Focus on data table overflow handling at mobile breakpoints."
        }
      ],
      "code_context": "context/my-app-context.md",
      "cookies": "cookies/my-app.txt"
    }
  ]
}

Fields:

  • project: Project name (used in output paths)
  • urls: Array of URLs to test
    • url: Full URL to capture (required)
    • alias: Short name for outputs (optional, derived from URL if not provided)
    • additional_considerations: Specific analysis requirements for this URL (optional)
    • cookies: URL-specific cookie file override (optional)
  • code_context: Path to markdown file with code/design context (optional)
  • cookies: Project-level cookie file for authentication (optional)
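When alias is omitted, a short name is derived from the URL. The actual derivation lives in naming.py and may differ; one plausible sketch:

```python
import re
from urllib.parse import urlparse

def derive_alias(url):
    """Slugify the last non-empty path segment; 'home' for the site root."""
    path = urlparse(url).path.strip("/")
    segment = path.rsplit("/", 1)[-1] if path else "home"
    return re.sub(r"[^a-z0-9]+", "-", segment.lower()).strip("-") or "home"

derive_alias("https://example.com/")           # -> "home"
derive_alias("https://example.com/dashboard")  # -> "dashboard"
```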

2. matrix.json - How to Test

Configure browsers, resolutions, and timing:

{
  "primary": {
    "browser": "chromium",
    "channel": "chrome",
    "resolutions": [
      { "label": "desktop-1920x1080", "width": 1920, "height": 1080 },
      { "label": "mobile-390x844", "width": 390, "height": 844 }
    ],
    "stage_offsets_seconds": [1, 5, 15],
    "max_time_seconds": 15,
    "full_screen": true
  },
  "secondary": {
    "browsers": ["firefox", "webkit"],
    "use_primary_resolutions": true
  }
}

Primary settings:

  • browser: Primary browser (chromium, firefox, webkit)
  • channel: Browser channel (chrome, msedge, etc.) - optional
  • resolutions: Array of viewport sizes to test
  • stage_offsets_seconds: When to capture during page load (seconds after navigation)
  • max_time_seconds: Maximum time to wait for page
  • full_screen: Capture full-page screenshots (sent to AI separately, not in canvases)

Secondary settings:

  • browsers: Additional browsers for compatibility testing
  • use_primary_resolutions: Reuse primary resolutions (true) or define custom ones
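Together, the primary and secondary sections define a browser × resolution capture matrix. A sketch of how it might expand into capture jobs (the function name and job shape are illustrative, not the tool's internals):

```python
def expand_matrix(matrix):
    """Expand a matrix.json dict into (browser, resolution-label) capture jobs."""
    primary = matrix["primary"]
    resolutions = primary["resolutions"]
    jobs = [(primary["browser"], r["label"]) for r in resolutions]
    secondary = matrix.get("secondary", {})
    sec_res = (resolutions if secondary.get("use_primary_resolutions")
               else secondary.get("resolutions", []))
    for browser in secondary.get("browsers", []):
        jobs.extend((browser, r["label"]) for r in sec_res)
    return jobs

matrix = {
    "primary": {"browser": "chromium",
                "resolutions": [{"label": "desktop-1920x1080"},
                                {"label": "mobile-390x844"}]},
    "secondary": {"browsers": ["firefox", "webkit"],
                  "use_primary_resolutions": True},
}
expand_matrix(matrix)  # 6 jobs: 2 primary + 2 secondary browsers x 2 resolutions
```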

3. openai.json - AI Configuration

Configure the OpenAI model and analysis parameters:

{
  "model": "gpt-5",
  "reasoning_effort": "medium",
  "verbosity": "medium",
  "max_output_tokens": 4096,
  "request_timeout_s": 120,
  "max_images_per_request": 20
}

Fields:

  • model: OpenAI model name (gpt-5, gpt-4o, etc.)
  • reasoning_effort: GPT-5 reasoning depth (minimal, low, medium, high)
  • verbosity: Response detail level (low, medium, high)
  • max_output_tokens: Maximum tokens in AI response
  • request_timeout_s: API request timeout
  • max_images_per_request: Max images per API call (GPT-5 supports up to 20)
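When a run produces more screenshots than max_images_per_request, they have to be split across API calls. An illustrative batching helper (not the tool's actual internals):

```python
def chunk_images(paths, max_per_request):
    """Split a screenshot list into batches no larger than max_per_request."""
    if max_per_request < 1:
        raise ValueError("max_per_request must be >= 1")
    return [paths[i:i + max_per_request]
            for i in range(0, len(paths), max_per_request)]

chunk_images([f"shot{i}.png" for i in range(45)], 20)
# -> three batches of 20, 20, and 5 images
```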

Authentication with Cookies

For authenticated pages, export cookies from your browser using a cookie export extension like "Get cookies.txt LOCALLY":

  1. Export cookies to Netscape format
  2. Save to configs/cookies/your-app.txt
  3. Reference the file in your config (project-level or URL-level):

{
  "project": "my-app",
  "cookies": "cookies/my-app.txt",
  "urls": [
    { "url": "https://app.example.com/dashboard" },
    { 
      "url": "https://app.example.com/admin",
      "cookies": "cookies/my-app-admin.txt"
    }
  ]
}

Security: Add configs/cookies/ to .gitignore to avoid committing sensitive authentication data.
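Playwright consumes cookies as dicts, so the Netscape-format file has to be converted at load time. A simplified sketch of that conversion (it skips comment lines, and so ignores #HttpOnly_-prefixed entries and other edge cases the real loader may handle):

```python
def parse_netscape_cookies(text):
    """Parse Netscape-format cookies.txt into Playwright-style cookie dicts."""
    cookies = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        fields = line.split("\t")
        if len(fields) != 7:  # domain, flag, path, secure, expires, name, value
            continue
        domain, _flag, path, secure, expires, name, value = fields
        cookies.append({
            "name": name, "value": value, "domain": domain, "path": path,
            "expires": int(expires), "secure": secure.upper() == "TRUE",
        })
    return cookies

sample = ".example.com\tTRUE\t/\tTRUE\t1760000000\tsession\tabc123"
parse_netscape_cookies(sample)
# -> one cookie dict named "session"
```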

Usage

Basic Commands

Run from the project root using Poetry:

# Run with defaults (primary phase only)
poetry run python -m frontend_support

# Specify config files explicitly
poetry run python -m frontend_support \
  --targets configs/tests/targets.json \
  --matrix configs/defaults/matrix.json \
  --openai configs/defaults/openai.json

# Run specific project only
poetry run python -m frontend_support --project my-app

# Run both primary and secondary phases
poetry run python -m frontend_support --primary --secondary

Common Options

# Capture only (no AI analysis, no API charges)
poetry run python -m frontend_support --dry-run

# Show browsers during capture (debugging)
poetry run python -m frontend_support --headful

# Control concurrency
poetry run python -m frontend_support --max-parallel 4

# Adjust log level
poetry run python -m frontend_support --log-level DEBUG

# Show version
poetry run python -m frontend_support --version

Analyze-Only Mode

Re-run analysis on existing screenshots without recapturing:

poetry run python -m frontend_support \
  --analyze-only outputs/my-app/2025-10-06/20251006_220227 \
  --openai configs/defaults/openai.json

Use cases:

  • Test different prompt templates
  • Try different AI models or settings
  • Recover from API errors
  • Iterate on analysis without expensive recapture

Phase Control

# Primary only (default)
poetry run python -m frontend_support --primary --no-secondary

# Secondary only (requires existing primary canvases)
poetry run python -m frontend_support --no-primary --secondary

# Both phases
poetry run python -m frontend_support --primary --secondary

Output Structure

Every run creates a timestamped directory:

outputs/
└── my-app/
    └── 2025-10-06/
        └── 20251006_220227/
            ├── screens/
            │   ├── 20251006_220227__my-app__dashboard__chromium__1920x1080-desktop__stage03__20251006T220234.png
            │   └── ...
            ├── canvases/
            │   ├── 20251006_220227__my-app__dashboard__chromium__load__1920x1080__20251006T220240.png
            │   ├── 20251006_220227__my-app__dashboard__chromium__compat-ref__5res__20251006T220245.png
            │   └── ...
            ├── reports/
            │   ├── primary_dashboard.json
            │   ├── compat_firefox_dashboard.json
            │   └── ...
            ├── logs/
            │   └── frontend-support.log
            └── manifest.json

Filename Convention

All filenames follow a self-describing pattern:

{run_id}__{project}__{slug}__{browser}__{details}__{timestamp}.{ext}

Example:

20251006_220227__my-app__dashboard__chromium__1920x1080-desktop__stage03__20251006T220234.png

  • 20251006_220227 - run ID
  • my-app - project
  • dashboard - URL slug
  • chromium - browser
  • 1920x1080-desktop - resolution
  • stage03 - capture stage
  • 20251006T220234 - capture timestamp
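Because the fields are joined with double underscores, artifact names can be parsed back into metadata. An illustrative parser (the field names are assumptions, not the tool's API):

```python
def parse_artifact_name(filename):
    """Split a self-describing artifact filename on '__' into its fields."""
    stem, _, ext = filename.rpartition(".")
    run_id, project, slug, browser, *details, timestamp = stem.split("__")
    return {"run_id": run_id, "project": project, "slug": slug,
            "browser": browser, "details": details,
            "timestamp": timestamp, "ext": ext}

parse_artifact_name(
    "20251006_220227__my-app__dashboard__chromium__"
    "1920x1080-desktop__stage03__20251006T220234.png"
)
# details holds the variable middle fields, e.g. resolution and stage
```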

Manifest Structure

manifest.json provides complete run metadata:

{
  "run_meta": {
    "run_id": "20251006_220227",
    "started_at": "2025-10-06T22:02:27Z",
    "headless": true,
    "max_parallel": 2,
    ...
  },
  "project": "my-app",
  "targets": { ... },
  "artifacts": {
    "screens": [ ... ],
    "canvases": [ ... ],
    "reports": [ ... ]
  },
  "errors": [ ... ]
}
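A downstream script (or a mode like --analyze-only) can recover a run's artifacts from the manifest alone. A minimal sketch that treats each artifact entry opaquely, since the exact entry shape is not specified here:

```python
import json
from pathlib import Path

def load_run_artifacts(run_dir):
    """Read manifest.json from a run directory and return its artifact lists."""
    manifest = json.loads(Path(run_dir, "manifest.json").read_text())
    return {kind: list(entries)
            for kind, entries in manifest.get("artifacts", {}).items()}
```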

Troubleshooting

OpenAI API Issues

Problem: Missing API key
Solution: Ensure OPENAI_API_KEY is exported in your shell:

export OPENAI_API_KEY='sk-your-key-here'

Problem: Rate limits or quota exceeded
Solution: Adjust max_images_per_request in openai.json or use --dry-run to capture first

Browser Issues

Problem: Playwright cannot launch browsers
Solution: Run poetry run playwright install --with-deps

Problem: WebKit issues on Linux
Solution: Install system dependencies or exclude WebKit from matrix.json:

{
  "secondary": {
    "browsers": ["firefox"]
  }
}

Performance

Problem: Captures are slow
Solution: Increase --max-parallel (default is 2):

poetry run python -m frontend_support --max-parallel 4

Problem: High memory usage
Solution: Reduce parallel captures or test fewer resolutions

Analysis Quality

Problem: AI feedback is too generic
Solution:

  • Add code_context file with relevant code/design details
  • Use additional_considerations in URL config to guide analysis
  • Increase reasoning_effort to "high" in openai.json

Problem: Analysis validation errors
Solution: Check logs for schema validation issues and adjust prompts in src/frontend_support/prompts/

Development

Running Tests

poetry run pytest -q

Project Structure

src/frontend_support/
├── __init__.py          # Package entry point
├── __main__.py          # Module execution
├── cli.py               # CLI argument parsing
├── runner.py            # Main execution orchestration
├── config.py            # Configuration loading
├── capture.py           # Browser automation (Playwright)
├── canvas.py            # Image composition (Pillow)
├── openai_client.py     # OpenAI API integration
├── analysis.py          # Analysis pipeline
├── manifest.py          # Manifest management
├── types.py             # TypedDict definitions
├── utils.py             # Utility functions
├── naming.py            # Filename generation
├── logging_config.py    # Logging setup
├── exceptions.py        # Custom exceptions
├── version.py           # Version string
├── prompts/             # AI prompt templates
│   ├── primary.md
│   └── compatibility.md
└── schemas/             # JSON schemas for validation
    ├── primary.schema.json
    └── compatibility.schema.json

License

MIT License - see LICENSE file for details

Contributing

Contributions are welcome! Please feel free to submit issues and pull requests.

Credits

Built with:

  • Playwright - browser automation and screenshot capture
  • Pillow - canvas composition
  • OpenAI API - vision-model analysis