This guide explains how to design, structure, and maintain PolicyEngine rules inside the /rules/ folder for the 4th.GRC Agentic Governance Platform.
Rules define the evaluative logic used by PolicyEngine to assess systems against governance profiles (ISO 42001, NIST AI RMF, SOC 2, etc.).
Profiles reference rules, but rules themselves act as reusable building blocks.
Example:
```
rules/
│
├── bias_fairness.yaml
├── security_encryption.yaml
├── data_retention.yaml
└── explainability.yaml
```
Each file contains one rule module, which may include multiple related checks.
Rules:
- Provide atomic, testable evaluations
- Are referenced by profiles via:

  ```yaml
  rules:
    - id: bias_fairness.check_mitigation
  ```

- Are reusable across multiple governance standards
- Enable versioned, modular policy-as-code
Profiles = the “framework”
Rules = the “logic engine”
Each rule YAML file should follow this pattern:

```yaml
rule_id: bias_fairness
version: 1.0.0

metadata:
  title: "Bias & Fairness"
  description: "Checks mitigation steps, data diversity, and fairness indicators."
  tags: ["fairness", "bias", "ethics"]

checks:
  - id: check_mitigation
    title: "Model includes documented bias mitigation steps"
    severity: medium
    evidence_key: model_card
    evaluator: equals
    params:
      path: has_bias_mitigation
      value: true

  - id: diversity_check
    title: "Training data covers sufficient demographic diversity"
    severity: high
    evidence_key: dataset_report
    evaluator: contains
    params:
      path: demographics
      value: ["age", "gender", "race"]
```

| Field | Required | Description |
|---|---|---|
| `rule_id` | Yes | A unique namespace for the rule module |
| `version` | Yes | Semantic version |
| `metadata` | Yes | Title, description, tags |
| `checks[]` | Yes | List of independent rule checks |
| `checks[].id` | Yes | Unique within `rule_id` |
| `checks[].title` | Yes | Human-readable summary |
| `checks[].severity` | Yes | `low` / `medium` / `high` / `critical` |
| `checks[].evidence_key` | Yes | Which evidence object to inspect |
| `checks[].evaluator` | Yes | `equals` / `contains` / `exists` / `not_exists` / `regex` / `numeric_range` |
| `checks[].params` | Optional | Evaluator-specific arguments |
Evaluators control how rules inspect evidence:

```yaml
evaluator: equals
params:
  path: has_bias_mitigation
  value: true
```

```yaml
evaluator: contains
params:
  path: demographics
  value: ["gender", "age"]
```

```yaml
evaluator: regex
params:
  path: model_description
  pattern: "differential privacy"
```

```yaml
evaluator: numeric_range
params:
  path: drift_score
  min: 0
  max: 0.1
```
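Each evaluator is essentially a predicate over one evidence object. A minimal sketch of how the six evaluator types could be implemented (this is illustrative, not the platform's actual engine):

```python
import re

def evaluate(evaluator: str, params: dict, evidence: dict) -> bool:
    """Apply one evaluator to an evidence object (illustrative sketch)."""
    value = evidence.get(params.get("path"))
    if evaluator == "equals":
        return value == params["value"]
    if evaluator == "contains":
        # Pass only if every expected item appears in the evidence value.
        return value is not None and all(v in value for v in params["value"])
    if evaluator == "exists":
        return value is not None
    if evaluator == "not_exists":
        return value is None
    if evaluator == "regex":
        return value is not None and re.search(params["pattern"], str(value)) is not None
    if evaluator == "numeric_range":
        return value is not None and params["min"] <= value <= params["max"]
    raise ValueError(f"unknown evaluator: {evaluator}")

evidence = {"drift_score": 0.05, "demographics": ["age", "gender", "race"]}
assert evaluate("numeric_range", {"path": "drift_score", "min": 0, "max": 0.1}, evidence)
assert evaluate("contains", {"path": "demographics", "value": ["gender", "age"]}, evidence)
```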
Before pushing, run:

```shell
python scripts/validate_profiles.py
python scripts/run_unit_tests.py
python scripts/run_integration_tests.py
```

Follow this checklist:
- Create a new file in `/rules/`
- Use lowercase + underscores for the filename
- Add:
  - `rule_id`
  - `version`
  - `metadata`
  - one or more checks
- Validate:

  ```shell
  python scripts/validate_profiles.py
  ```

- Reference the rule in a profile under `/profiles/`
- Commit the rule + updated profile(s)
Rules are versioned with semantic versions:

- `1.0.0`: first stable release
- `1.1.0`: new non-breaking checks
- `2.0.0`: breaking changes or renamed checks
Profiles should reference exact rule versions to guarantee reproducibility.
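For example, a profile entry could pin a rule to an exact version. The `version` key shown here is an assumed illustration of such pinning, not confirmed profile syntax:

```yaml
rules:
  - id: bias_fairness.check_mitigation
    version: 1.0.0
```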
Profiles include rules like this:

```yaml
rules:
  - id: bias_fairness.check_mitigation
    weight: 0.3
  - id: security_encryption.at_rest
    weight: 0.2
```

- Keep rules atomic (one responsibility per check)
- Use human-readable titles
- Include good metadata
- Avoid mixing unrelated concerns in one file
- Prefer reusable checks across multiple frameworks
- Add test evidence samples in `/tests/data/`
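The per-rule weights in a profile can be rolled up into an overall compliance score. A minimal sketch, assuming each check yields a pass/fail result (the function and result fields are illustrative, not the platform's actual API):

```python
def weighted_score(results: list[dict]) -> float:
    """Combine weighted pass/fail check results into a score in [0, 1].

    Each result is e.g. {"id": "bias_fairness.check_mitigation",
                         "passed": True, "weight": 0.3}.
    """
    total = sum(r["weight"] for r in results)
    if total == 0:
        return 0.0
    # Normalize so scores stay comparable even if weights don't sum to 1.
    return sum(r["weight"] for r in results if r["passed"]) / total

results = [
    {"id": "bias_fairness.check_mitigation", "passed": True, "weight": 0.3},
    {"id": "security_encryption.at_rest", "passed": False, "weight": 0.2},
]
assert abs(weighted_score(results) - 0.6) < 1e-9
```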
Planned enhancements:
- Composite rules (multi-step logic)
- ML-based evaluators
- Remote rule registries
- Rule dependency graphs
- Explainability outputs (“rule fired because…”)
Dr. Freeman A. Jackson
4th.GRC™ – Agentic AI Governance Platform
Fourth Industrial Systems (4th)