Last Updated: November 23, 2025
Coverage Goal: 80%+ for all components
- Overview
- Web Application Tests
- Python Backend Tests
- Running Tests
- Test Coverage
- Writing New Tests
- CI/CD Integration
The MAPS testing suite provides comprehensive coverage for both the web frontend (React/TypeScript) and Python backend (FastAPI/Flask). Tests are organized by component type and include unit tests, integration tests, and end-to-end tests.
Web Frontend:
- Framework: Vitest + React Testing Library
- Mocking: Vitest mock functions
- Coverage: V8 code coverage
- Location: web/src/**/*.test.{ts,tsx}
Python Backend:
- Framework: pytest
- Client: FastAPI TestClient
- Coverage: pytest-cov
- Location: tests/test_*.py
web/src/
components/
FileUploader/
FileUploader.tsx
FileUploader.test.tsx
BatchProcessor/
BatchProcessor.test.tsx
Layout/
Header.test.tsx
pages/
Dashboard.test.tsx
Upload.test.tsx
History.test.tsx
Profiles.test.tsx
services/
api.test.ts
test/
setup.ts # Test setup and mocks
test-utils.tsx # Testing utilities and helpers
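The setup.ts file listed above is loaded by Vitest before the test files run (via setupFiles in vitest.config.ts). A minimal sketch of what it might contain, assuming @testing-library/jest-dom for DOM matchers and a window.matchMedia stub; the real file may register more mocks:

```ts
// web/src/test/setup.ts: illustrative sketch of global test setup (assumed contents)
import '@testing-library/jest-dom';   // provides matchers such as toBeInTheDocument()
import { vi } from 'vitest';

// jsdom does not implement window.matchMedia, so stub it for components that query it
Object.defineProperty(window, 'matchMedia', {
  writable: true,
  value: vi.fn().mockImplementation((query: string) => ({
    matches: false,
    media: query,
    onchange: null,
    addEventListener: vi.fn(),
    removeEventListener: vi.fn(),
    dispatchEvent: vi.fn(),
  })),
});
```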
cd web/
# run all tests
npm test
# run tests in watch mode
npm run test:watch
# run tests with coverage
npm run test:coverage
# run tests with UI
npm run test:ui
# run specific test file
npm test -- Dashboard.test.tsx
# run tests matching a name pattern
npm test -- -t "Upload"

The test-utils.tsx file provides the following helpers (sketched below):
- renderWithProviders() - Render components with React Query + Router providers
- createTestQueryClient() - Create a test query client
- createMockFile() - Generate mock files for upload tests
- waitForLoadingToFinish() - Wait for async operations to settle
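A rough sketch of how test-utils.tsx might implement these helpers, assuming @tanstack/react-query and react-router-dom; the project's actual file may differ in defaults:

```tsx
// web/src/test/test-utils.tsx: illustrative sketch, not the exact project implementation
import type { ReactElement } from 'react';
import { render } from '@testing-library/react';
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import { MemoryRouter } from 'react-router-dom';

export function createTestQueryClient() {
  // Disable retries so failing queries surface immediately instead of hanging tests
  return new QueryClient({ defaultOptions: { queries: { retry: false } } });
}

export function renderWithProviders(ui: ReactElement) {
  return render(
    <QueryClientProvider client={createTestQueryClient()}>
      <MemoryRouter>{ui}</MemoryRouter>
    </QueryClientProvider>
  );
}

export function createMockFile(name = 'report.xml', content = '<root/>') {
  return new File([content], name, { type: 'text/xml' });
}

// Re-export Testing Library so tests can import screen, waitFor, etc. from one place
export * from '@testing-library/react';
```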
Example Usage:
import { renderWithProviders, screen, waitFor } from '../../test/test-utils';
test('renders dashboard stats', async () => {
renderWithProviders(<Dashboard />);
await waitFor(() => {
expect(screen.getByText('150')).toBeInTheDocument();
});
});

Mocking API calls:

import { vi } from 'vitest';
import { apiClient } from '../../services/api';
vi.mock('../../services/api');
beforeEach(() => {
vi.mocked(apiClient.getDashboardStats).mockResolvedValue({
total_documents: 150,
total_jobs: 25,
// ...
});
});

Component Tests:
- Render correctly with props
- Handle user interactions
- Display error states
- Show loading states
- Validate accessibility
Page Tests:
- Load data from API
- Handle navigation
- Submit forms correctly
- Display proper content
- Handle errors gracefully
Service Tests (see the sketch after this list):
- API client methods
- Request/response handling
- Error handling
- Data transformation
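For the service tests listed above, a minimal sketch of api.test.ts, assuming apiClient wraps fetch and exposes getDashboardStats() as in the mocking example earlier (if it is axios-based, stub the axios instance instead):

```ts
// web/src/services/api.test.ts: illustrative sketch; error behaviour is an assumption
import { describe, it, expect, vi, afterEach } from 'vitest';
import { apiClient } from './api';

describe('apiClient', () => {
  afterEach(() => {
    vi.unstubAllGlobals();
  });

  it('returns parsed dashboard stats on success', async () => {
    // Stub the network layer so no real request is made
    vi.stubGlobal('fetch', vi.fn().mockResolvedValue(
      new Response(JSON.stringify({ total_documents: 150, total_jobs: 25 }), { status: 200 })
    ));

    const stats = await apiClient.getDashboardStats();
    expect(stats.total_documents).toBe(150);
  });

  it('rejects on a non-2xx response', async () => {
    vi.stubGlobal('fetch', vi.fn().mockResolvedValue(new Response('error', { status: 500 })));
    await expect(apiClient.getDashboardStats()).rejects.toThrow();
  });
});
```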
tests/
test_api_comprehensive.py # All API endpoints
test_integration_workflows.py # End-to-end workflows
test_document_repository.py # Database operations
test_excel_export.py # Excel export functionality
test_pylidc_adapter.py # PYLIDC integration
test_foundation_validation.py # Core validation logic
test_xml_comp.py # XML parsing
# run all tests
pytest
# run with verbose output
pytest -v
# run specific test file
pytest tests/test_api_comprehensive.py
# run specific test class
pytest tests/test_api_comprehensive.py::TestProfileEndpoints
# run specific test method
pytest tests/test_api_comprehensive.py::TestProfileEndpoints::test_list_profiles
# run with coverage
pytest --cov=src --cov-report=html --cov-report=term
# run tests matching pattern
pytest -k "test_upload"
# run tests with markers (see the note after this block)
pytest -m "integration"
# show print statements
pytest -s
# stop on first failure
pytest -x
# run last failed tests
pytest --lf
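Note that `pytest -m "integration"` only selects something if tests are decorated with @pytest.mark.integration and the marker is registered; otherwise pytest warns about an unknown mark. A sketch of the registration, using the ini-style config shown later in this guide:

```ini
# ini-style pytest config (e.g. setup.cfg): register custom markers used with -m
[tool:pytest]
markers =
    integration: end-to-end workflow tests (deselect with -m "not integration")
```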
test_api_comprehensive.py:
- Health check endpoints
- Profile CRUD operations
- File upload endpoints
- Job management
- Export functionality
- Analytics endpoints
- Keyword operations
test_integration_workflows.py:
- XML parsing → DataFrame workflow
- XML → Excel export workflow (see the sketch after this list)
- Database create/read/update/delete
- Keyword extraction → storage
- Multi-format export
- API upload → process → export
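As a rough illustration of the XML → DataFrame → Excel workflow style referenced above, using only pandas and the standard library (the real tests exercise the project's own parser and exporters):

```python
# Illustrative workflow-style test, not the project's actual parser/exporter API
import xml.etree.ElementTree as ET
import pandas as pd

def test_xml_to_excel_roundtrip(tmp_path):
    # Arrange: a tiny XML document standing in for a real input file
    xml_file = tmp_path / "sample.xml"
    xml_file.write_text("<docs><doc id='1' title='A'/><doc id='2' title='B'/></docs>")

    # Act: parse XML into a DataFrame and export it to Excel (requires openpyxl for .xlsx)
    root = ET.parse(xml_file).getroot()
    df = pd.DataFrame([el.attrib for el in root.findall("doc")])
    out_file = tmp_path / "out.xlsx"
    df.to_excel(out_file, index=False)

    # Assert: the export exists and round-trips with the same rows
    result = pd.read_excel(out_file, dtype=str)
    assert len(result) == 2
    assert list(result["title"]) == ["A", "B"]
```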
Example Test:
import pytest
from fastapi.testclient import TestClient
from start_api import app
client = TestClient(app)
class TestMyFeature:
def test_feature_success(self):
"""Test successful feature execution"""
response = client.get("/api/v1/feature")
assert response.status_code == 200
data = response.json()
assert data["status"] == "success"
def test_feature_error_handling(self):
"""Test feature error handling"""
response = client.get("/api/v1/feature?invalid=param")
assert response.status_code == 400
@pytest.fixture
def sample_data(self):
"""Fixture for test data"""
return {"field": "value"}Common Fixtures:
import tempfile
from pathlib import Path

@pytest.fixture
def sample_xml_file():
"""Create temporary XML file"""
with tempfile.NamedTemporaryFile(mode='w', suffix='.xml', delete=False) as f:
f.write(xml_content)
yield f.name
Path(f.name).unlink(missing_ok=True)
@pytest.fixture
def test_profile():
"""Create test profile"""
return {
"profile_name": "test",
"file_type": "xml",
"mappings": [],
"validation_rules": {"required_fields": []},
}
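The API test templates later in this guide receive a client argument; that is a shared fixture, typically defined in tests/conftest.py. A minimal sketch, assuming the FastAPI app is importable from start_api as in the example above:

```python
# tests/conftest.py: illustrative sketch of shared fixtures
import pytest
from fastapi.testclient import TestClient

from start_api import app

@pytest.fixture
def client():
    """Provide a TestClient bound to the FastAPI app for each test."""
    with TestClient(app) as test_client:
        yield test_client
```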
Coverage targets:

| Component | Target | Current |
|---|---|---|
| Web Components | 80% | TBD |
| Web Pages | 80% | TBD |
| Web Services | 90% | TBD |
| Python API | 85% | TBD |
| Python Core | 80% | TBD |
| Python Exporters | 80% | TBD |
Web Coverage:
cd web/
npm run test:coverage
# view HTML report
open coverage/index.html

Python Coverage:
pytest --cov=src --cov-report=html --cov-report=term
# view HTML report
open htmlcov/index.html

Coverage configuration:

Web (vitest.config.ts):
coverage: {
provider: 'v8',
reporter: ['text', 'json', 'html', 'lcov'],
lines: 80,
functions: 80,
branches: 80,
statements: 80,
}
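For context, that coverage block lives inside the test section of vitest.config.ts. A trimmed sketch (plugins omitted); note that Vitest 1.x and later nest the numeric thresholds under coverage.thresholds, while older versions accept them directly under coverage as in the fragment above:

```ts
// vitest.config.ts: illustrative sketch; adjust to the installed Vitest version
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'jsdom',                  // DOM APIs for React Testing Library
    globals: true,
    setupFiles: ['./src/test/setup.ts'],   // global matchers and mocks
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html', 'lcov'],
      thresholds: { lines: 80, functions: 80, branches: 80, statements: 80 },
    },
  },
});
```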
Python (setup.cfg):

[tool:pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
[coverage:run]
source = src
omit =
*/test/*
*/tests/*
*/__pycache__/*
*/venv/*
[coverage:report]
precision = 2
show_missing = True
skip_covered = False

Web component test template:

import { describe, it, expect, vi, beforeEach } from 'vitest';
import userEvent from '@testing-library/user-event';
import { renderWithProviders, screen, waitFor } from '../../test/test-utils';
import { MyComponent } from './MyComponent';
describe('MyComponent', () => {
beforeEach(() => {
vi.clearAllMocks();
});
it('renders correctly with props', () => {
renderWithProviders(<MyComponent prop="value" />);
expect(screen.getByText('Expected Text')).toBeInTheDocument();
});
it('handles user interaction', async () => {
const mockCallback = vi.fn();
renderWithProviders(<MyComponent onAction={mockCallback} />);
const button = screen.getByRole('button', { name: /action/i });
await userEvent.click(button);
expect(mockCallback).toHaveBeenCalledTimes(1);
});
it('displays loading state', () => {
renderWithProviders(<MyComponent loading={true} />);
expect(screen.getByText(/loading/i)).toBeInTheDocument();
});
it('handles errors gracefully', () => {
renderWithProviders(<MyComponent error="Error message" />);
expect(screen.getByText(/error message/i)).toBeInTheDocument();
});
});

Python API test template:

import pytest
from fastapi.testclient import TestClient
class TestMyEndpoint:
"""Test MyEndpoint functionality"""
def test_success_case(self, client):
"""Test successful request"""
response = client.get("/api/v1/endpoint")
assert response.status_code == 200
data = response.json()
assert "expected_field" in data
def test_validation_error(self, client):
"""Test request validation"""
response = client.post("/api/v1/endpoint", json={})
assert response.status_code == 422
def test_not_found(self, client):
"""Test 404 response"""
response = client.get("/api/v1/endpoint/nonexistent")
assert response.status_code == 404
@pytest.fixture
def sample_data(self):
"""Fixture providing test data"""
return {"field": "value"}- Follow AAA Pattern: Arrange, Act, Assert
- One Assertion Per Test: Focus each test on one behavior
- Clear Test Names: Use descriptive names that explain what's being tested
- Mock External Dependencies: Isolate the code under test
- Test Edge Cases: Don't test only the happy paths
- Use Fixtures: Reuse common setup code
- Keep Tests Fast: Avoid slow operations like file I/O when possible
- Test Error Paths: Verify error handling
- Maintain Test Data: Use realistic but small test datasets
- Clean Up: Always clean up test artifacts
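A small sketch tying several of these points together (AAA structure, a reusable fixture, and pytest's built-in tmp_path fixture, which avoids manual cleanup):

```python
# Illustrative best-practice sketch; the data and behaviour under test are placeholders
import json
import pytest

@pytest.fixture
def sample_record():
    """Small, realistic test data reused across tests."""
    return {"document_id": "doc-1", "keywords": ["nodule", "ct"]}

def test_record_round_trips_through_json(tmp_path, sample_record):
    # Arrange: tmp_path is a per-test temporary directory managed by pytest
    target = tmp_path / "record.json"

    # Act: the behaviour under test (here, just serialising the record)
    target.write_text(json.dumps(sample_record))

    # Assert: one focused assertion on the observable outcome
    assert json.loads(target.read_text())["document_id"] == "doc-1"
```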
Create .github/workflows/test.yml:
name: Test Suite
on:
push:
branches: [main, develop]
pull_request:
branches: [main, develop]
jobs:
web-tests:
runs-on: ubuntu-latest
defaults:
run:
working-directory: ./web
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
cache-dependency-path: web/package-lock.json
- name: Install dependencies
run: npm ci
- name: Run linter
run: npm run lint
- name: Run tests
run: npm run test:run
- name: Generate coverage
run: npm run test:coverage
- name: Upload coverage
uses: codecov/codecov-action@v4
with:
files: ./web/coverage/lcov.info
flags: web
python-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.12'
cache: 'pip'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install pytest pytest-cov
- name: Run tests
run: pytest --cov=src --cov-report=xml --cov-report=term
- name: Upload coverage
uses: codecov/codecov-action@v4
with:
files: ./coverage.xml
flags: python

Create .git/hooks/pre-commit:
#!/bin/bash
echo "Running pre-commit tests..."
# run web tests
cd web
npm run test:run --silent
WEB_EXIT=$?
# run python tests
cd ..
pytest --tb=short -q
PYTHON_EXIT=$?
# check results
if [ $WEB_EXIT -ne 0 ] || [ $PYTHON_EXIT -ne 0 ]; then
echo " Tests failed. Commit aborted."
exit 1
fi
echo " All tests passed!"
exit 0

Make it executable:
chmod +x .git/hooks/pre-commit

Troubleshooting:

Web Tests:
- "Cannot find module" errors:
  - Check import paths
  - Verify tsconfig.json paths configuration
  - Ensure vitest.config.ts resolve aliases are correct
- Tests timeout:
  - Increase the timeout in the test: it('test', { timeout: 10000 }, async () => {})
  - Check for unmocked API calls
  - Verify async operations complete
- Mocking not working:
  - Use vi.mock() before imports
  - Clear mocks in beforeEach()
  - Verify the mock implementation
Python Tests:
- Import errors:
  - Verify PYTHONPATH includes src/ (see the snippet after this list)
  - Check for circular imports
  - Ensure __init__.py files exist
- Database connection errors:
  - Use a test database or mocks
  - Check environment variables
  - Verify database migrations
- Fixture errors:
  - Check fixture scope
  - Verify fixture dependencies
  - Use pytest --fixtures to list available fixtures
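For the PYTHONPATH point above, one option on pytest 7+ is to let pytest add src/ to sys.path itself instead of relying on the environment; a sketch for the ini-style config used earlier:

```ini
# setup.cfg (or pytest.ini with a [pytest] section); requires pytest >= 7.0
[tool:pytest]
pythonpath = src
```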
Test Checklist:
- All new features have tests
- Tests cover happy path and edge cases
- Error handling is tested
- Coverage meets minimum thresholds
- Tests run successfully in CI
- No flaky tests
- Test names are descriptive
- Tests are independent
- Mocks are properly cleaned up
- Documentation is updated
Maintenance:
- Review and update tests when features change
- Remove tests for deprecated features
- Refactor tests when patterns improve
- Keep test data realistic and current
- Monitor test execution time
- Fix flaky tests immediately
- Update dependencies regularly