AI-powered UI/UX testing across browsers and resolutions
frontend-support is a command-line tool that automates visual regression testing and UI/UX analysis by capturing screenshots across multiple browsers and resolutions, then using OpenAI's vision models to provide structured feedback on layout, responsiveness, and cross-browser compatibility.
frontend-support helps developers and designers verify that their web applications render consistently across devices and browsers by:
- Automated Multi-Browser Capture: Takes screenshots of your web pages in Chromium, Firefox, and WebKit at configurable resolutions
- Time-Sliced Loading Analysis: Captures screenshots at multiple stages during page load to identify loading issues and perceived performance problems
- AI-Powered Analysis: Uses OpenAI vision models (GPT-5) to analyze screenshots and provide structured feedback on UI/UX quality
- Visual Canvas Composition: Creates side-by-side comparison images for easy visual review
- Cross-Browser Compatibility: Automatically identifies and explains visual differences between browsers
Configure three JSON files:
- targets.json: URLs to test, authentication cookies, and optional analysis requirements
- matrix.json: Browser/resolution matrix and capture timing settings
- openai.json: AI model configuration (reasoning level, verbosity, etc.)
- Launch browsers with specified viewports
- Navigate to each URL and capture screenshots at timed intervals
- Optionally capture full-page screenshots for comprehensive analysis
- Compose visual canvases showing progressive loading and cross-resolution views
- Send screenshots and canvases to OpenAI for structured analysis
- Generate JSON reports with actionable UI/UX feedback
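To make the capture flow concrete, here is a minimal Playwright sketch of the primary phase, assuming a single hypothetical target URL, one viewport, and the stage offsets used in the example matrix later in this document. It is an illustration of the approach, not the tool's internal code.

```python
# Minimal sketch of the primary capture phase (illustration only, not the
# tool's internal code): one browser, one viewport, timed stage screenshots,
# plus an optional full-page capture.
import time
from pathlib import Path
from playwright.sync_api import sync_playwright

URL = "https://example.com/"                   # hypothetical target
VIEWPORT = {"width": 1920, "height": 1080}     # one entry from matrix.json
STAGE_OFFSETS = [1, 5, 15]                     # stage_offsets_seconds
OUT = Path("outputs/sketch")
OUT.mkdir(parents=True, exist_ok=True)

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page(viewport=VIEWPORT)
    start = time.monotonic()
    page.goto(URL, wait_until="domcontentloaded")
    for stage, offset in enumerate(STAGE_OFFSETS, start=1):
        # Wait until the configured offset relative to navigation start.
        remaining = offset - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
        page.screenshot(path=OUT / f"stage{stage:02d}.png")
    # Analogous to "full_screen": true in matrix.json.
    page.screenshot(path=OUT / "full.png", full_page=True)
    browser.close()
```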
Organized directory structure:
outputs/{project}/{date}/{run_id}/
├── screens/ # Raw screenshots
├── canvases/ # Composed comparison images
├── reports/ # JSON analysis reports
├── logs/ # Execution logs
└── manifest.json # Complete run metadata
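The canvases/ directory holds the composed comparison images. Conceptually, the composition resembles this minimal Pillow sketch; the compose_strip helper and its layout are assumptions for illustration, not the tool's actual canvas code.

```python
# Illustrative Pillow sketch of a comparison canvas: paste stage screenshots
# side by side into one strip. The compose_strip helper is hypothetical, not
# the tool's actual canvas code.
from PIL import Image

def compose_strip(paths, gap=16, background=(255, 255, 255)):
    images = [Image.open(p) for p in paths]
    width = sum(im.width for im in images) + gap * (len(images) - 1)
    height = max(im.height for im in images)
    canvas = Image.new("RGB", (width, height), background)
    x = 0
    for im in images:
        canvas.paste(im, (x, 0))
        x += im.width + gap
    return canvas

# Example: a progressive-loading strip from three stage screenshots.
# compose_strip(["stage01.png", "stage02.png", "stage03.png"]).save("load-canvas.png")
```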
- Multi-Resolution Testing: Test desktop, tablet, and mobile layouts simultaneously
- Loading State Capture: See how your page looks at different loading stages
- Full-Page Screenshots: Capture entire scrollable pages (separately sent to AI, not in canvases)
- Authenticated Sessions: Support for cookie-based authentication
- Structured JSON Output: Get machine-readable feedback matching your defined schemas
- UI/UX Analysis: Detailed feedback on layout, spacing, typography, colors, hierarchy, and accessibility
- Compatibility Analysis: Cross-browser visual diff with likely causes and minimal fix recommendations
- Custom Analysis Requirements: Specify URL-specific considerations to guide the AI's focus
- Code Context Integration: Provide code/design context for more relevant suggestions
- Config-Driven: All settings in version-controllable JSON files
- Flexible Execution: Run primary only, secondary only, or both phases
- Dry-Run Mode: Capture screenshots without spending API credits
- Analyze-Only Mode: Re-run analysis on existing screenshots (test prompts, try different models)
- Detailed Logging: Console and file logs with UTC timestamps for debugging
- Robust Error Handling: Graceful degradation with comprehensive error tracking
- Self-Describing Filenames: All artifacts include metadata in their names
- Manifest System: Complete reproducibility with run manifests
- Concurrency Control: Configurable parallel capture to manage system resources
- Python 3.11 or higher
- Poetry for dependency management
- An OpenAI API key
- Clone the repository:

  git clone https://github.com/yourusername/frontend-support.git
  cd frontend-support

- Install dependencies:

  poetry install

- Install Playwright browsers:

  poetry run playwright install --with-deps

  Note: On some Linux systems, `--with-deps` may prompt for elevated privileges to install system dependencies for WebKit. If this fails, try `poetry run playwright install chromium firefox webkit` and consult Playwright's documentation for OS-specific requirements.

- Configure your API key:

  export OPENAI_API_KEY='sk-your-key-here'

  Or create a `.env` file:

  echo "OPENAI_API_KEY=sk-your-key-here" > .env
  export $(cat .env | xargs)
frontend-support uses three main configuration files. Examples are provided in configs/examples/.
Define your projects and URLs:
{
"projects": [
{
"project": "my-app",
"urls": [
{
"url": "https://example.com/",
"additional_considerations": "Ensure hero section is responsive and CTA buttons are visible on all devices."
},
{
"url": "https://example.com/dashboard",
"alias": "dashboard",
"additional_considerations": "Focus on data table overflow handling at mobile breakpoints."
}
],
"code_context": "context/my-app-context.md",
"cookies": "cookies/my-app.txt"
}
]
}

Fields:

- `project`: Project name (used in output paths)
- `urls`: Array of URLs to test
  - `url`: Full URL to capture (required)
  - `alias`: Short name for outputs (optional, derived from the URL if not provided)
  - `additional_considerations`: Specific analysis requirements for this URL (optional)
  - `cookies`: URL-specific cookie file override (optional)
- `code_context`: Path to a markdown file with code/design context (optional)
- `cookies`: Project-level cookie file for authentication (optional)
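To sanity-check a targets file before a run, a short script along these lines can help. The derive_alias rule shown is a plausible illustration of how an alias might be derived from a URL, not necessarily the exact rule the tool uses.

```python
# Illustrative sketch: load targets.json and print the alias each URL would
# get. The derive_alias rule is an assumption, not necessarily the tool's.
import json
from urllib.parse import urlparse

def derive_alias(url: str) -> str:
    # Hypothetical rule: last non-empty path segment, else the hostname.
    parsed = urlparse(url)
    path = parsed.path.strip("/")
    return (path.split("/")[-1] if path else parsed.netloc).replace(".", "-")

with open("configs/tests/targets.json") as f:
    targets = json.load(f)

for project in targets["projects"]:
    for entry in project["urls"]:
        alias = entry.get("alias") or derive_alias(entry["url"])
        print(f'{project["project"]}: {entry["url"]} -> {alias}')
```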
Configure browsers, resolutions, and timing:
{
"primary": {
"browser": "chromium",
"channel": "chrome",
"resolutions": [
{ "label": "desktop-1920x1080", "width": 1920, "height": 1080 },
{ "label": "mobile-390x844", "width": 390, "height": 844 }
],
"stage_offsets_seconds": [1, 5, 15],
"max_time_seconds": 15,
"full_screen": true
},
"secondary": {
"browsers": ["firefox", "webkit"],
"use_primary_resolutions": true
}
}

Primary settings:

- `browser`: Primary browser (chromium, firefox, webkit)
- `channel`: Browser channel (chrome, msedge, etc.); optional
- `resolutions`: Array of viewport sizes to test
- `stage_offsets_seconds`: When to capture during page load (seconds after navigation)
- `max_time_seconds`: Maximum time to wait for the page
- `full_screen`: Capture full-page screenshots (sent to the AI separately, not included in canvases)

Secondary settings:

- `browsers`: Additional browsers for compatibility testing
- `use_primary_resolutions`: Reuse the primary resolutions (true) or define custom ones
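For the secondary phase, `use_primary_resolutions: true` means each additional browser is captured at the same viewports as the primary run. A hedged sketch of that loop, using the example resolutions above and a placeholder URL:

```python
# Illustrative sketch of the secondary phase loop: each additional browser is
# captured at the primary resolutions. Not the tool's internal code.
from playwright.sync_api import sync_playwright

PRIMARY_RESOLUTIONS = [
    {"label": "desktop-1920x1080", "width": 1920, "height": 1080},
    {"label": "mobile-390x844", "width": 390, "height": 844},
]
SECONDARY_BROWSERS = ["firefox", "webkit"]
URL = "https://example.com/"  # hypothetical target

with sync_playwright() as p:
    for name in SECONDARY_BROWSERS:
        browser = getattr(p, name).launch(headless=True)
        for res in PRIMARY_RESOLUTIONS:
            page = browser.new_page(
                viewport={"width": res["width"], "height": res["height"]}
            )
            page.goto(URL, wait_until="load")
            page.screenshot(path=f'{name}__{res["label"]}.png')
            page.close()
        browser.close()
```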
Configure the OpenAI model and analysis parameters:
{
"model": "gpt-5",
"reasoning_effort": "medium",
"verbosity": "medium",
"max_output_tokens": 4096,
"request_timeout_s": 120,
"max_images_per_request": 20
}

Fields:

- `model`: OpenAI model name (gpt-5, gpt-4o, etc.)
- `reasoning_effort`: GPT-5 reasoning depth (minimal, low, medium, high)
- `verbosity`: Response detail level (low, medium, high)
- `max_output_tokens`: Maximum tokens in the AI response
- `request_timeout_s`: API request timeout in seconds
- `max_images_per_request`: Maximum images per API call (GPT-5 supports up to 20)
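As a rough picture of how these settings translate into an API call, here is a minimal sketch using the OpenAI Python SDK's Chat Completions API with a single screenshot. The screenshot path is hypothetical, and exact parameter support (for example `reasoning_effort` and `verbosity`) depends on the model and SDK version, so treat this as an outline rather than the tool's actual client code.

```python
# Illustrative sketch (not the tool's client): send one screenshot to a
# vision-capable model and ask for structured feedback.
import base64
from openai import OpenAI  # reads OPENAI_API_KEY from the environment

client = OpenAI()

with open("outputs/sketch/stage03.png", "rb") as f:  # hypothetical screenshot
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-5",                 # "model" in openai.json
    reasoning_effort="medium",     # "reasoning_effort" (reasoning models only)
    max_completion_tokens=4096,    # "max_output_tokens"
    timeout=120,                   # "request_timeout_s"
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Review this screenshot for layout, spacing, and hierarchy issues. Respond as JSON."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```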
For authenticated pages, export cookies from your browser using a cookie export extension like "Get cookies.txt LOCALLY":
- Export cookies in Netscape format
- Save them to `configs/cookies/your-app.txt`
- Reference the file in your config (project-level or URL-level):
{
"project": "my-app",
"cookies": "cookies/my-app.txt",
"urls": [
{ "url": "https://app.example.com/dashboard" },
{
"url": "https://app.example.com/admin",
"cookies": "cookies/my-app-admin.txt"
}
]
}

Security: Add `configs/cookies/` to `.gitignore` to avoid committing sensitive authentication data.
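For reference, Netscape-format cookie files are tab-separated, and Playwright can consume them once converted to its cookie dictionaries. A minimal sketch of that conversion follows; load_netscape_cookies is a hypothetical helper, and real exports may need extra handling (for example lines prefixed with #HttpOnly_).

```python
# Illustrative sketch: convert a Netscape-format cookies.txt into Playwright
# cookies and attach them to a context. Field order in the export is:
# domain, include-subdomains flag, path, secure, expiry, name, value.
# Real exports may need extra handling (e.g. lines prefixed with #HttpOnly_).
from playwright.sync_api import sync_playwright

def load_netscape_cookies(path):
    cookies = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            domain, _flag, cookie_path, secure, expires, name, value = line.split("\t")
            cookies.append({
                "name": name,
                "value": value,
                "domain": domain,
                "path": cookie_path,
                # An expiry of 0 marks a session cookie; -1 tells Playwright the same.
                "expires": float(expires) if expires != "0" else -1,
                "secure": secure.upper() == "TRUE",
            })
    return cookies

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context()
    context.add_cookies(load_netscape_cookies("configs/cookies/my-app.txt"))
    page = context.new_page()
    page.goto("https://app.example.com/dashboard")
    page.screenshot(path="authed.png")
    browser.close()
```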
Run from the project root using Poetry:
# Run with defaults (primary phase only)
poetry run python -m frontend_support
# Specify config files explicitly
poetry run python -m frontend_support \
--targets configs/tests/targets.json \
--matrix configs/defaults/matrix.json \
--openai configs/defaults/openai.json
# Run specific project only
poetry run python -m frontend_support --project my-app
# Run both primary and secondary phases
poetry run python -m frontend_support --primary --secondary

# Capture only (no AI analysis, no API charges)
poetry run python -m frontend_support --dry-run
# Show browsers during capture (debugging)
poetry run python -m frontend_support --headful
# Control concurrency
poetry run python -m frontend_support --max-parallel 4
# Adjust log level
poetry run python -m frontend_support --log-level DEBUG
# Show version
poetry run python -m frontend_support --version

Re-run analysis on existing screenshots without recapturing:
poetry run python -m frontend_support \
--analyze-only outputs/my-app/2025-10-06/20251006_220227 \
--openai configs/defaults/openai.json

Use cases:
- Test different prompt templates
- Try different AI models or settings
- Recover from API errors
- Iterate on analysis without expensive recapture
# Primary only (default)
poetry run python -m frontend_support --primary --no-secondary
# Secondary only (requires existing primary canvases)
poetry run python -m frontend_support --no-primary --secondary
# Both phases
poetry run python -m frontend_support --primary --secondary

Every run creates a timestamped directory:
outputs/
└── my-app/
└── 2025-10-06/
└── 20251006_220227/
├── screens/
│ ├── 20251006_220227__my-app__dashboard__chromium__1920x1080-desktop__stage03__20251006T220234.png
│ └── ...
├── canvases/
│ ├── 20251006_220227__my-app__dashboard__chromium__load__1920x1080__20251006T220240.png
│ ├── 20251006_220227__my-app__dashboard__chromium__compat-ref__5res__20251006T220245.png
│ └── ...
├── reports/
│ ├── primary_dashboard.json
│ ├── compat_firefox_dashboard.json
│ └── ...
├── logs/
│ └── frontend-support.log
└── manifest.json
All filenames follow a self-describing pattern:
{run_id}__{project}__{slug}__{browser}__{details}__{timestamp}.{ext}
Example:

20251006_220227__my-app__dashboard__chromium__1920x1080-desktop__stage03__20251006T220234.png

- `20251006_220227`: run ID
- `my-app`: project
- `dashboard`: URL slug
- `chromium`: browser
- `1920x1080-desktop`: resolution label
- `stage03`: capture stage
- `20251006T220234`: capture timestamp
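Because the separator is a double underscore, the encoded metadata can be recovered programmatically; a small sketch:

```python
# Illustrative sketch: recover the metadata encoded in a screenshot filename.
# (Canvas filenames use different detail segments, so the field count varies.)
from pathlib import Path

name = "20251006_220227__my-app__dashboard__chromium__1920x1080-desktop__stage03__20251006T220234.png"
run_id, project, slug, browser, resolution, stage, timestamp = Path(name).stem.split("__")
print(run_id, project, browser, resolution, stage)
```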
manifest.json provides complete run metadata:
{
"run_meta": {
"run_id": "20251006_220227",
"started_at": "2025-10-06T22:02:27Z",
"headless": true,
"max_parallel": 2,
...
},
"project": "my-app",
"targets": { ... },
"artifacts": {
"screens": [ ... ],
"canvases": [ ... ],
"reports": [ ... ]
},
"errors": [ ... ]
}

Problem: Missing API key
Solution: Ensure OPENAI_API_KEY is exported in your shell:
export OPENAI_API_KEY=sk-your-key-here

Problem: Rate limits or quota exceeded
Solution: Reduce max_images_per_request in openai.json, or use --dry-run to capture first and re-run analysis later with --analyze-only
Problem: Playwright cannot launch browsers
Solution: Run poetry run playwright install --with-deps
Problem: WebKit issues on Linux
Solution: Install system dependencies or exclude WebKit from matrix.json:
{
"secondary": {
"browsers": ["firefox"]
}
}

Problem: Captures are slow
Solution: Increase --max-parallel (default is 2):
poetry run python -m frontend_support --max-parallel 4

Problem: High memory usage
Solution: Reduce parallel captures or test fewer resolutions
Problem: AI feedback is too generic
Solution:
- Add a `code_context` file with relevant code/design details
- Use `additional_considerations` in the URL config to guide analysis
- Increase `reasoning_effort` to "high" in `openai.json`
Problem: Analysis validation errors
Solution: Check logs for schema validation issues and adjust prompts in src/frontend_support/prompts/
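To debug such errors locally, a generated report can be checked against the bundled schema with the jsonschema package (an assumption here, not necessarily a project dependency); the report path below is the example run shown earlier.

```python
# Illustrative sketch: validate a generated report against the bundled schema.
# Assumes the jsonschema package is available (e.g. `poetry run pip install jsonschema`).
import json
from pathlib import Path
from jsonschema import ValidationError, validate

schema = json.loads(Path("src/frontend_support/schemas/primary.schema.json").read_text())
report = json.loads(
    Path("outputs/my-app/2025-10-06/20251006_220227/reports/primary_dashboard.json").read_text()
)

try:
    validate(instance=report, schema=schema)
    print("Report matches the schema.")
except ValidationError as err:
    print(f"Schema violation at {list(err.absolute_path)}: {err.message}")
```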
poetry run pytest -q

src/frontend_support/
├── __init__.py # Package entry point
├── __main__.py # Module execution
├── cli.py # CLI argument parsing
├── runner.py # Main execution orchestration
├── config.py # Configuration loading
├── capture.py # Browser automation (Playwright)
├── canvas.py # Image composition (Pillow)
├── openai_client.py # OpenAI API integration
├── analysis.py # Analysis pipeline
├── manifest.py # Manifest management
├── types.py # TypedDict definitions
├── utils.py # Utility functions
├── naming.py # Filename generation
├── logging_config.py # Logging setup
├── exceptions.py # Custom exceptions
├── version.py # Version string
├── prompts/ # AI prompt templates
│ ├── primary.md
│ └── compatibility.md
└── schemas/ # JSON schemas for validation
├── primary.schema.json
└── compatibility.schema.json
MIT License - see LICENSE file for details
Contributions are welcome! Please feel free to submit issues and pull requests.
Built with:
- Playwright for browser automation
- Pillow for image composition
- OpenAI for vision-based analysis