This document explains how to set up and run tests for the Suntrace GeospatialAnalyzer project.
Install dependencies:

```bash
pip install -r requirements.txt
```

Run basic tests:

```bash
make test  # or ./run_tests.sh
```

Run tests with coverage:

```bash
make test-coverage  # or ./run_tests.sh -c
```
The test suite is organized into several files:
- `tests/test_geospatial_analyzer.py` - Main test suite for GeospatialAnalyzer
- `tests/test_utilities.py` - Unit tests for utility methods
- `tests/test_integration.py` - Integration tests requiring actual data files
- `tests/conftest.py` - Shared test fixtures and configuration
Tests are marked with different categories:
- `@pytest.mark.unit` - Fast unit tests that don't require data files
- `@pytest.mark.integration` - Tests that require actual data files
- `@pytest.mark.geospatial` - Tests that work with geospatial data
- `@pytest.mark.slow` - Tests that take longer to run
- `@pytest.mark.visualization` - Tests that create visual outputs
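Markers are selected on the command line with pytest's `-m` option. A short illustrative example of marking tests (the test bodies here are hypothetical, not part of the project):

```python
import pytest

# Hypothetical unit test: marked so that `pytest -m unit` selects it and
# `pytest -m "not slow"` keeps it in fast runs.
@pytest.mark.unit
def test_bbox_centroid():
    """A data-free check: centre of a (minx, miny, maxx, maxy) bounding box."""
    minx, miny, maxx, maxy = 0.0, 0.0, 10.0, 4.0
    cx, cy = (minx + maxx) / 2, (miny + maxy) / 2
    assert (cx, cy) == (5.0, 2.0)

# Markers can be stacked on a single test.
@pytest.mark.integration
@pytest.mark.slow
def test_full_tile_pipeline():
    """Would only run when slow/integration tests are enabled."""
```

Marker expressions combine with boolean operators, e.g. `pytest -m "geospatial and not slow"`.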
Run fast tests (the default):

```bash
make test
# or
./run_tests.sh -f
```

Run all tests, including slow and integration tests:

```bash
make test-all
# or
./run_tests.sh -s -i
```

Run integration tests only:

```bash
make test-integration
# or
./run_tests.sh -i
```

Run tests with coverage:

```bash
make test-coverage
# or
./run_tests.sh -c
```

Run tests in parallel:

```bash
./run_tests.sh -p
```

Tests are configured via pyproject.toml:
- Test discovery patterns
- Coverage settings
- Test markers
- Black and isort configuration
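As a rough sketch, the relevant pyproject.toml sections might look like the following (illustrative values only; consult the project's actual pyproject.toml for the real settings):

```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
markers = [
    "unit: fast unit tests that don't require data files",
    "integration: tests that require actual data files",
    "geospatial: tests that work with geospatial data",
    "slow: tests that take longer to run",
    "visualization: tests that create visual outputs",
]

[tool.coverage.run]
source = ["src"]

[tool.black]
line-length = 88

[tool.isort]
profile = "black"
```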
Integration tests require data files in the data/ directory:
- `data/lamwo_buildings_V3.gpkg` - Building polygons
- `data/updated_candidate_minigrids_merged.gpkg` - Minigrid locations
- `data/Lamwo_Tile_Stats_EE.csv` - Tile statistics
- `data/lamwo_sentinel_composites/lamwo_grid.geojson` - Tile geometries
- `data/sample_region_mudu/mudu_village.gpkg` - Sample test region
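Such a presence check can also be done programmatically. A minimal sketch based on the file list above (this is not the project's actual `check-data` implementation):

```python
from pathlib import Path

# Required data files, as listed above.
REQUIRED_DATA_FILES = [
    "data/lamwo_buildings_V3.gpkg",
    "data/updated_candidate_minigrids_merged.gpkg",
    "data/Lamwo_Tile_Stats_EE.csv",
    "data/lamwo_sentinel_composites/lamwo_grid.geojson",
    "data/sample_region_mudu/mudu_village.gpkg",
]

def missing_data_files(project_root="."):
    """Return the subset of required data files that are absent."""
    root = Path(project_root)
    return [rel for rel in REQUIRED_DATA_FILES if not (root / rel).is_file()]

if __name__ == "__main__":
    missing = missing_data_files()
    if missing:
        print("Missing data files:", *missing, sep="\n  ")
    else:
        print("All required data files present.")
```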
You can check if data files are present:
```bash
make check-data
```

To write new tests, group related cases in a class and mark them appropriately:

```python
class TestYourFeature:
    """Test suite for YourFeature."""

    @pytest.fixture
    def sample_data(self):
        """Create sample data for testing."""
        return create_test_data()

    @pytest.mark.unit
    def test_basic_functionality(self, sample_data):
        """Test basic functionality."""
        assert sample_data is not None

    @pytest.mark.geospatial
    def test_geospatial_operation(self, analyzer, sample_region):
        """Test geospatial operations."""
        result = analyzer.some_geospatial_method(sample_region)
        assert result is not None
```

Common fixtures available in conftest.py:

- `project_root_path` - Path to the project root
- `data_dir_path` - Path to the data directory
- `sample_data_paths` - Dictionary of data file paths
- `check_data_files` - Ensures required data files exist
For tests that don't need real data:
```python
from unittest.mock import Mock, patch

@pytest.fixture
def mock_analyzer(self):
    """Create analyzer with mocked data."""
    with patch('utils.GeospatialAnalyzer.gpd.read_file') as mock_read:
        mock_read.return_value = gpd.GeoDataFrame(...)
        return GeospatialAnalyzer(...)
```

For CI/CD pipelines, use:

```bash
make ci
```

This runs linting and tests with coverage.
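The mocking pattern shown earlier can be exercised end to end. Here is a self-contained sketch using stand-ins (`FakeAnalyzer` and `read_file` are hypothetical, not project code):

```python
from unittest.mock import patch

def read_file(path):
    """Stand-in for gpd.read_file; a real call would hit the disk."""
    raise IOError("unit tests should not read real data")

class FakeAnalyzer:
    """Stand-in with the same shape as an analyzer that loads data on init."""
    def __init__(self, path):
        self.records = read_file(path)

    def count_buildings(self):
        return len(self.records)

def test_count_buildings_with_mock():
    # Patch where the function is *looked up*, not where it is defined.
    with patch(f"{__name__}.read_file") as mock_read:
        mock_read.return_value = [{"id": 1}, {"id": 2}]  # fake rows
        analyzer = FakeAnalyzer("data/lamwo_buildings_V3.gpkg")
    assert analyzer.count_buildings() == 2
```

The `patch` target is the namespace where the function is looked up at call time, which is also why the earlier example patches `gpd.read_file` inside the analyzer's module.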
Common issues:

- Import Errors:
  - Ensure you're running tests from the project root
  - Check that `src/` is on the Python path (handled by conftest.py)

- Missing Data Files:
  - Integration tests will be skipped if data files are missing
  - Use `make check-data` to verify data file presence

- Slow Tests:
  - Use the `-f` flag to run only fast tests
  - Mark long-running tests with `@pytest.mark.slow`

- Memory Issues:
  - Large geospatial data can consume significant memory
  - Consider using smaller test datasets or mocking
  - Use `pytest-xdist` for parallel execution: `./run_tests.sh -p`

To speed up development runs:

- Skip slow tests during development: `./run_tests.sh -f`
- Use specific test files: `pytest tests/test_utilities.py`
- Use test selection: `pytest -k "test_count_buildings"`
Coverage reports are generated in the htmlcov/ directory when using the -c flag.
Target coverage goals:
- Overall: > 80%
- Critical functions: > 90%
- New features: 100%
For development, you may want to:
- Create smaller test datasets
- Use synthetic data for unit tests
- Mock external data sources
- Version control test data separately
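For example, deterministic synthetic point data can replace the real .gpkg files in unit tests. A minimal sketch (the bounding box and helper name are illustrative, not project API):

```python
import random

def make_synthetic_buildings(n=10, seed=42, bbox=(32.7, 3.4, 33.0, 3.7)):
    """Return n (lon, lat, area_m2) tuples inside a Lamwo-like bounding box.

    A fixed seed keeps the data deterministic, so assertions are reproducible.
    """
    rng = random.Random(seed)
    minx, miny, maxx, maxy = bbox
    return [
        (rng.uniform(minx, maxx), rng.uniform(miny, maxy), rng.uniform(20.0, 400.0))
        for _ in range(n)
    ]

buildings = make_synthetic_buildings()
assert len(buildings) == 10
assert all(32.7 <= lon <= 33.0 and 3.4 <= lat <= 3.7 for lon, lat, _ in buildings)
```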
The project includes VS Code configuration for:
- Running tests from the editor
- Debugging test failures
- Coverage visualization
Use the Test Explorer in VS Code or run tests via the command palette.
- Test Organization:
  - Group related tests in classes
  - Use descriptive test names
  - Follow the Arrange-Act-Assert pattern

- Test Data:
  - Use fixtures for reusable test data
  - Mock external dependencies
  - Clean up after tests

- Assertions:
  - Use specific assertions (e.g. `assert isinstance(result, int)`)
  - Test edge cases and error conditions
  - Verify both positive and negative cases

- Performance:
  - Mark slow tests appropriately
  - Use sampling for large datasets
  - Profile test execution when needed
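The Arrange-Act-Assert pattern and specific assertions combine naturally; a small hypothetical example (the `haversine_km` helper is illustrative, not project API):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km; a toy helper standing in for real code."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def test_haversine_one_degree_at_equator():
    # Arrange: two points one degree of longitude apart on the equator
    p1, p2 = (0.0, 0.0), (0.0, 1.0)
    # Act
    dist = haversine_km(p1[0], p1[1], p2[0], p2[1])
    # Assert: a specific type check plus a tolerance, not just "is not None"
    assert isinstance(dist, float)
    assert abs(dist - 111.19) < 0.5  # ~111.19 km per degree at the equator
```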