This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
gcpath is a CLI utility for querying Google Cloud Platform (GCP) resource hierarchy paths. It translates between GCP resource names (e.g., folders/12345) and human-readable paths (e.g., //example.com/department/team).
Key features:

- Dual API modes: Cloud Asset API (default, fast) and Resource Manager API (slower, different permissions)
- Commands: `ls`, `tree`, `name` (path → resource name), `path` (resource name → path)
- Scoped loading to improve performance for large hierarchies
- Support for organizationless projects (`//_` prefix)
Requirements:

- Python 3.12+
- `uv` for dependency management

Common development commands:

```shell
make install    # or: uv sync
make run ls
make run tree folders/123   # or directly: uv run gcpath ls
make test       # Run all tests
make coverage   # Run with coverage report
make lint       # Check with ruff
make typecheck  # Check with mypy
make format     # Format and fix with ruff
```

The codebase is organized into focused, single-responsibility modules:
- `core.py`: Core data structures and hierarchy loading coordination. Contains resource models and the main `Hierarchy` class.
- `loaders.py`: GCP API loading logic for both the Resource Manager and Cloud Asset APIs, including SQL query builders.
- `parsers.py`: Parses Asset API responses, handling protobuf STRUCT/MapComposite complexity.
- `formatters.py`: Display formatting logic for paths, trees, and resource filtering.
- `cli.py`: CLI commands and entry points using the Typer framework.
- Hierarchy Loading (`Hierarchy.load()`):
  - Determines which loader to use (Resource Manager vs. Asset API)
  - Loads organizations from `search_organizations()`
  - For each org, loads folders and projects using the selected API
  - Returns a `Hierarchy` object with lookup maps for O(1) resolution
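The load flow described above can be sketched roughly as follows. This is a hypothetical minimal shape, not the real implementation: the actual `Hierarchy` class in `core.py` works with `google.cloud.resourcemanager_v3` protobuf objects, and the loader method names besides `search_organizations()` (here `list_folders`/`list_projects`) are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Hierarchy:
    # Lookup maps keyed by resource name for O(1) resolution.
    _orgs_by_name: dict = field(default_factory=dict)
    _folders_by_name: dict = field(default_factory=dict)
    _projects_by_name: dict = field(default_factory=dict)

    @classmethod
    def load(cls, loader, scope_resource=None):
        h = cls()
        # The loader choice (Resource Manager vs. Asset API) is decided
        # before this point and injected as `loader`.
        for org in loader.search_organizations():
            h._orgs_by_name[org["name"]] = org
            # For each org, fetch folders and projects via the selected API;
            # scope_resource narrows the fetch when scoped loading is active.
            for folder in loader.list_folders(org["name"], scope_resource):
                h._folders_by_name[folder["name"]] = folder
            for project in loader.list_projects(org["name"], scope_resource):
                h._projects_by_name[project["name"]] = project
        return h

class _FakeLoader:
    """Stub loader used to exercise the sketch without touching GCP."""
    def search_organizations(self):
        return [{"name": "organizations/123"}]
    def list_folders(self, org_name, scope=None):
        return [{"name": "folders/456", "parent": org_name}]
    def list_projects(self, org_name, scope=None):
        return [{"name": "projects/p1", "parent": "folders/456"}]

h = Hierarchy.load(_FakeLoader())
```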
- API Modes:
  - Cloud Asset API (default): Fast bulk loading via SQL queries. Supports scoped loading with `parent_filter` and `ancestors_filter` parameters.
  - Resource Manager API: Iterative loading via list/get operations. Slower but simpler permissions model.
- Scoped Loading: When a specific resource is targeted (e.g., `ls folders/123`):
  - Passes `scope_resource` to `Hierarchy.load()`
  - Loaders use filters to fetch only descendants of that resource
  - Significantly reduces API calls and latency for large hierarchies
- Display: CLI uses formatters to present data:
  - Direct children filtering for `ls` (non-recursive mode)
  - Tree recursion with depth limiting
  - Path display with URL encoding via `path_escape()`
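A plausible sketch of `path_escape()`, assuming it percent-encodes path segments via the standard library; the real function in `formatters.py` may preserve a different set of characters.

```python
from urllib.parse import quote

def path_escape(segment: str) -> str:
    """Percent-encode one path segment so a '/' inside a display name
    cannot be confused with the '/' hierarchy separator.

    Hypothetical sketch of the path_escape() mentioned above.
    """
    return quote(segment, safe="")
```

For example, a folder whose display name is `dev/test` would render as `dev%2Ftest` inside a path, keeping the path unambiguous.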
- Lookup Maps: `Hierarchy` maintains `_orgs_by_name`, `_folders_by_name`, and `_projects_by_name` for O(1) lookups
- Protobuf Objects: Use actual `google.cloud.resourcemanager_v3` protobuf objects (not mocks) throughout the data layer
- Optional Organization: Projects can be organizationless (no parent organization)
- Lightweight Path Resolution: `Hierarchy.resolve_ancestry()` traverses up the hierarchy without loading full state
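The lightweight ancestry traversal can be illustrated as a simple walk up parent links. The `parents` dict here is a stand-in for whatever per-resource parent lookup the real `Hierarchy.resolve_ancestry()` performs; the point is that only the chain of ancestors is touched, never the full hierarchy.

```python
def resolve_ancestry(resource_name: str, parents: dict[str, str]) -> list[str]:
    """Walk parent links upward without loading full hierarchy state.

    `parents` maps each resource name to its parent's resource name
    (illustrative stand-in for the real per-resource lookups).
    Returns the chain leaf-first, organization last.
    """
    chain = [resource_name]
    while resource_name in parents:
        resource_name = parents[resource_name]
        chain.append(resource_name)
    return chain

parents = {"projects/p1": "folders/456", "folders/456": "organizations/123"}
ancestry = resolve_ancestry("projects/p1", parents)
# ancestry is ["projects/p1", "folders/456", "organizations/123"]
```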
The Asset API returns STRUCT fields as MapComposite objects. Key handling:
- Access data directly as a dictionary (`.fields` is not available)
- Use `IN UNNEST(ancestors)` for ancestry filtering in SQL
- Extract nested STRUCT data like `resource.data.parent.id` carefully
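Because MapComposite supports dictionary-style (Mapping) access, parsing code can be written against plain `Mapping` semantics and tested with ordinary dicts. A sketch of the careful nested extraction, with a hypothetical helper name and a plain dict standing in for a real Asset API row:

```python
from collections.abc import Mapping

def extract_parent_id(row: Mapping):
    """Pull resource.data.parent.id out of an Asset API result row.

    Illustrative helper (not necessarily gcpath's own): each level is
    guarded with .get() because nested STRUCT fields may be absent,
    and .fields-style protobuf access is not available on MapComposite.
    """
    resource = row.get("resource") or {}
    data = resource.get("data") or {}
    parent = data.get("parent") or {}
    return parent.get("id")

row = {"resource": {"data": {"parent": {"id": "folders/456"}}}}
```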
Resource name formats:

- Organizations: `organizations/[ORG_ID]`
- Folders: `folders/[FOLDER_ID]`
- Projects: `projects/[PROJECT_ID]`
- Organizationless projects use their display name with a `//_/` prefix
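The three resource name formats above share a `type/id` shape, so a small parser can split them. This is an illustrative helper, not necessarily the parsing gcpath itself does (it ignores the organizationless `//_/` display-name form):

```python
import re

# Matches exactly one of the three documented resource name formats.
_RESOURCE_RE = re.compile(r"^(organizations|folders|projects)/([^/]+)$")

def parse_resource_name(name: str) -> tuple[str, str]:
    """Split a GCP resource name into (type, id), e.g.
    'folders/12345' -> ('folders', '12345'). Hypothetical helper."""
    m = _RESOURCE_RE.match(name)
    if m is None:
        raise ValueError(f"not a resource name: {name!r}")
    return m.group(1), m.group(2)
```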
Scoping filters used by the Asset API loaders:

- `parent_filter`: Returns direct children only (non-recursive)
- `ancestors_filter`: Returns all descendants, i.e., resources with the target in their ancestors list (recursive)
- Default (no filter): Returns org-level or root-level resources
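One way the two filters could translate into SQL `WHERE` clauses, using the `IN UNNEST(ancestors)` pattern noted earlier. This is a guessed shape for illustration only; the real query builders live in `loaders.py` and their exact column names and quoting may differ.

```python
def build_where_clause(parent_filter=None, ancestors_filter=None) -> str:
    """Hypothetical sketch: map scoping filters to SQL WHERE clauses."""
    if parent_filter:
        # Direct children only: match the immediate parent field.
        return f"WHERE resource.data.parent = '{parent_filter}'"
    if ancestors_filter:
        # All descendants: the target appears anywhere in the ancestors list.
        return f"WHERE '{ancestors_filter}' IN UNNEST(ancestors)"
    return ""  # no filter: org-level or root-level resources
```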
Test files mirror source organization:
- `test_core.py`: Data structures and hierarchy logic
- `test_loaders.py`: GCP API loading functions
- `test_parsers.py`: Asset API response parsing
- `test_formatters.py`: Display formatting
- `test_cli.py`: CLI command integration
Run a specific test file:

```shell
uv run pytest tests/test_parsers.py -v
```

The project uses semantic versioning with an automated release workflow via GitHub Actions.
- Version is defined in `pyproject.toml` under `[project]` → `version`
- Uses `semantic-release` to automatically manage versions and tags
- Configured to stay in the `0.x.y` range (won't bump to 1.0.0)
- CHANGELOG.md is automatically updated on release
1. Create a feature branch and make your changes (follow conventional commit messages)
2. Commit with conventional commit format:
   - `feat: ...` for new features (bumps minor version)
   - `fix: ...` for bug fixes (bumps patch version)
   - `BREAKING CHANGE: ...` in the commit body for major changes
   - See CONTRIBUTING.md for full guidelines
3. Create a pull request to merge into `main`
4. Merge to `main`. This triggers the automated release workflow:
   - `semantic-release` analyzes commits since the last tag
   - Updates the version in `pyproject.toml`
   - Updates CHANGELOG.md
   - Creates a git tag (e.g., `v0.2.4`)
   - Creates a GitHub Release with built artifacts
5. Manual release (if needed):

   ```shell
   # The workflow runs automatically on merges to main
   # Manual trigger is rarely needed
   ```
- Package built with the `hatchling` backend
- GitHub Actions workflow (`.github/workflows/release.yml`) handles automated releases
- Built wheels are uploaded to GitHub Releases, not PyPI (`upload_to_pypi = false`)
- Install via `pip install gcpath` or `uv add gcpath`
- Use `logging.getLogger(__name__)` in modules (logging is configured at the CLI entry point)
- Type hints are required for mypy compliance
- Ruff is configured for linting and formatting
- All exceptions inherit from the `GCPathError` base class
- CLI commands should wrap GCP API calls with error handling
- Follow conventional commits for automatic version bumping
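The logging, exception, and error-wrapping conventions above could combine into something like the following minimal sketch. `ResourceNotFoundError` and `run_command` are hypothetical names invented for illustration; only `GCPathError` and the `logging.getLogger(__name__)` pattern come from the conventions themselves.

```python
import logging

logger = logging.getLogger(__name__)  # module-level logger, per convention

class GCPathError(Exception):
    """Base class for all gcpath errors (per the convention above)."""

class ResourceNotFoundError(GCPathError):
    """Hypothetical subclass for lookups that find nothing."""

def run_command(fn, *args):
    """Illustrative wrapper: CLI commands catch GCPathError from GCP
    API calls and exit cleanly instead of dumping a traceback."""
    try:
        return fn(*args)
    except GCPathError as exc:
        logger.error("command failed: %s", exc)
        raise SystemExit(1) from exc
```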