# LGGT+ — Logic-Guided Graph Transformers Plus

A neuro-symbolic AI architecture for verifiable reasoning over expert knowledge.



> "The first AI architecture combining Łukasiewicz t-norms, graph transformers, and auditable proof trails simultaneously." — Systematic review of 12 neuro-symbolic architectures, March 2026

## Citation

If you use LGGT+ or this benchmark in your research, please cite:

```bibtex
@misc{laabs2026tnorm,
  author    = {Laabs, Adam},
  title     = {T-Norm Operators for EU AI Act Compliance Classification:
               An Empirical Comparison of Łukasiewicz, Product, and Gödel
               Semantics in a Neuro-Symbolic Reasoning System},
  year      = {2026},
  publisher = {Zenodo},
  version   = {1.0.0},
  doi       = {10.5281/zenodo.19147739},
  url       = {https://doi.org/10.5281/zenodo.19147739}
}
```

Zenodo DOI: https://doi.org/10.5281/zenodo.19147739


## What is LGGT+?

LGGT+ is a neuro-symbolic reasoning engine that combines:

- **Formal logic** (Łukasiewicz many-valued logic, t-norms) for verifiable inference
- **Knowledge graphs** (ontological directed graphs with typed nodes and edges)
- **Proof trails** (step-by-step auditable derivation of every conclusion)

Unlike black-box AI systems, every LGGT+ classification comes with a formal proof — a structured record of which rules were applied, in what order, and with what confidence at each step.

```python
from core.reasoning.engine import ReasoningEngine
from core.graph.knowledge_graph import KnowledgeGraph

engine = ReasoningEngine()
result = engine.reason(query="is_system_high_risk", source="system_A", target="high_risk")

print(result.confidence)     # 0.87
print(result.proof_trail)    # Step-by-step logical derivation
print(result.audit_json())   # NIS2 / EU AI Act Annex IV ready JSON
```

Output:

```json
{
  "proof_trail": [
    {"step": 1, "check": "employment_context", "confidence": 0.94, "passed": true},
    {"step": 2, "check": "automated_decision", "confidence": 0.91, "passed": true},
    {"step": 3, "check": "recruitment_or_promotion", "confidence": 0.88, "passed": true}
  ],
  "fusion_method": "lukasiewicz_t_norm_chain",
  "final_score": 0.87,
  "rule": "high_risk_employment",
  "passed": true,
  "t_norm_context": "Runtime inference — Łukasiewicz t-norm applied deterministically."
}
```

## Key Mathematical Property

The Łukasiewicz t-norm is used as a runtime inference operator (not as a training loss), which is a deliberate architectural choice:

```text
T_L(a, b) = max(0, a + b - 1)

T_L(0.8, 0.9) = 0.7   ← not 0.72 (product t-norm)
```

The hard boundary (T_L = 0 when a + b ≤ 1) models legal conjunction semantics: a condition is either definitively present or it is not. Partial evidence does not accumulate. This is the core theoretical contribution described in our arXiv preprint.
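This boundary behaviour is easy to verify in pure Python. The sketch below is illustrative only (plain functions for the two formulas above, not the repo's `core/logic/lukasiewicz.py` API):

```python
def t_lukasiewicz(a: float, b: float) -> float:
    """Łukasiewicz t-norm: conjunction with a hard zero boundary."""
    return max(0.0, a + b - 1.0)

def t_product(a: float, b: float) -> float:
    """Product t-norm: evidence multiplies and never hits a hard boundary."""
    return a * b

# T_L(0.8, 0.9) = 0.7, not 0.72 as under the product t-norm:
assert abs(t_lukasiewicz(0.8, 0.9) - 0.70) < 1e-9
assert abs(t_product(0.8, 0.9) - 0.72) < 1e-9

# Hard boundary: once a + b <= 1, Łukasiewicz conjunction collapses to 0,
# while the product t-norm still accumulates partial evidence:
assert t_lukasiewicz(0.4, 0.5) == 0.0
assert t_product(0.4, 0.5) > 0.0
```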


## Architecture

```text
L1: core/logic/lukasiewicz.py        — Łukasiewicz + Product t-norms
    core/logic/log_fuzzy_logic.py    — Log-space fuzzy logic (logLTN)
    core/logic/logic_attention.py    — Logic-Augmented Attention (LAA)
    core/logic/aggregators.py        — LogMeanExp / LogSumExp
    core/logic/gumbel_relaxation.py  — Gumbel-Softmax annealing
    core/logic/hypergraph_encoder.py — N-ary hyperedge encoding (LKHGT)
    core/logic/sgat_modulator.py     — SGAT-MS bipartite attention

L2: core/graph/knowledge_graph.py    — Ontological directed graph (NetworkX)
    core/graph/temporal_node.py      — Temporal validity windows (TFLEX)
    core/graph/ontology_patcher.py   — Incremental ontology updates
    core/graph/adaptive_filter.py    — PSL triple filtering

L3: core/reasoning/engine.py         — ReasoningEngine + TNormMode
    core/reasoning/proof_tree_builder.py — Formal proof trees (NeqLIPS)
    core/logic/proof_trail.py        — Auditable proof trails
```

Domain modules (not included in this repository — see Commercial):

```text
domains/eu_ai_act/knowledge_base.py  — EU AI Act ontology (proprietary)
domains/eu_ai_act/classifier.py      — Risk category classifier (proprietary)
generators/annex_iv/                 — PDF Annex IV generator (GoviX SaaS)
```

## Installation

```bash
# Requirements: Python 3.11+
git clone https://github.com/TriStiX-LS/LggT-core.git
cd LggT-core
pip install -r requirements.txt  # networkx, pydantic, fastapi — no ML frameworks
```

No PyTorch. No TensorFlow. No JAX. Pure Python + stdlib math.
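For a flavour of what "stdlib math only" looks like in practice, here is a minimal sketch of log-space conjunction and aggregation in the logLTN spirit. The function names are hypothetical illustrations, not the repo's actual `LogFuzzyLogic` or `aggregators` API:

```python
import math

def log_conjunction(log_truths):
    """Product-t-norm conjunction in log space: log T_P(a, b, ...) = sum of logs.
    Chaining many conjuncts in log space avoids numerical underflow."""
    return sum(log_truths)

def logsumexp(xs):
    """Numerically stable log(sum(exp(x))), stdlib only: shift by the max first."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Fusing three step confidences in log space recovers their product exactly:
truths = [0.94, 0.91, 0.88]
fused = math.exp(log_conjunction([math.log(t) for t in truths]))
assert abs(fused - 0.94 * 0.91 * 0.88) < 1e-9
```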


## Quick Start

### 1. Define your knowledge graph

```python
from core.graph.knowledge_graph import KnowledgeGraph, KnowledgeNode, KnowledgeEdge

kg = KnowledgeGraph()

# Add typed nodes
kg.add_node(KnowledgeNode(
    node_id="automated_decision",
    node_type="fact",
    label="System makes automated decisions",
    confidence=0.91
))
kg.add_node(KnowledgeNode(
    node_id="employment_context",
    node_type="fact",
    label="System operates in employment domain",
    confidence=0.94
))
kg.add_node(KnowledgeNode(
    node_id="high_risk",
    node_type="conclusion",
    label="System is high-risk under EU AI Act",
    confidence=1.0
))

# Add weighted edges
kg.add_edge(KnowledgeEdge(
    source="automated_decision",
    target="high_risk",
    relation="implies",
    weight=0.88
))
```

### 2. Run inference with proof trail

```python
from core.reasoning.engine import ReasoningEngine, TNormMode

# Default: Łukasiewicz t-norm (runtime inference)
engine = ReasoningEngine(knowledge_graph=kg)
result = engine.reason(
    query="classify_risk",
    source="automated_decision",
    target="high_risk"
)

print(f"Confidence: {result.confidence}")          # T_L chain result
print(f"Steps: {len(result.proof_trail.steps)}")   # Auditable steps
```

### 3. Hypergraph encoding for N-ary rules

```python
from core.logic.hypergraph_encoder import HypergraphEncoder, HyperedgeQuery

encoder = HypergraphEncoder()

# Encode a 3-ary EU AI Act rule as a hyperedge
query = HyperedgeQuery(
    query_id="art5_1d_check",
    nodes=[
        {"node_id": "real_time_processing", "node_type": "fact", "confidence": 0.97},
        {"node_id": "public_space",          "node_type": "fact", "confidence": 0.96},
        {"node_id": "biometric_id",          "node_type": "rule", "confidence": 0.98},
    ],
    query_type="conjunction",
    rule_id="prohibited_realtime_biometric"
)

result = encoder.encode(query)
print(f"Logical score: {result.logical_score:.3f}")   # ~0.97
print(f"Proof: {result.proof_contribution}")           # Per-node contribution
```

### 4. Build a formal proof tree

```python
from core.reasoning.proof_tree_builder import ProofTreeBuilder

builder = ProofTreeBuilder("classification_001")

# Add observations (leaves)
builder.add_observation("obs_1", "employment_context",       0.94, "Annex III §4")
builder.add_observation("obs_2", "automated_decision",       0.91, "Annex III §4")
builder.add_observation("obs_3", "recruitment_or_promotion", 0.88, "Annex III §4")

# Build conjunction
builder.add_conjunction("conj", ["obs_1", "obs_2", "obs_3"], "all_conditions", 0.87)

# Add threshold check and conclusion
builder.add_threshold_check("thr", "conj", threshold=0.5, confidence=0.87)
builder.add_conclusion("root", ["thr"], "high_risk", 0.87, "high_risk_employment")

tree = builder.build(
    "This AI system is HIGH RISK under EU AI Act Annex III §4.",
    "high_risk",
    "high_risk_employment"
)

# Validate the proof
is_valid, errors = tree.is_valid()
print(f"Valid: {is_valid}")          # True
print(f"Depth: {tree.depth()}")      # 4

# Export for legal audit
audit = tree.to_audit_json()
# → NIS2 Art. 23 / EU AI Act Art. 13 / Annex IV ready JSON

# Human-readable transcript
print(tree.to_human_readable())
# PROOF TRANSCRIPT — classification_001
# Conclusion: This AI system is HIGH RISK under EU AI Act Annex III §4.
# (1) employment_context observed [conf=0.940] by condition_check [Art. III §4]
# (2) automated_decision observed [conf=0.910] by condition_check [Art. III §4]
# ...
# QED. Category: high_risk
```

## Benchmark: T-norm Comparison for EU AI Act Classification

We evaluated three t-norm operators as logical conjunction mechanisms for EU AI Act compliance classification (n=1035 cases, 14 rules, 4 categories; clear n=630, marginal n=325, borderline n=80).

| T-norm | Overall | Clear | Marginal | Borderline | False Positives |
|---|---|---|---|---|---|
| Łukasiewicz T_L | 78.5% | 96.3% | 56.9% | 25.0% | 0 |
| Product T_P | 81.2% | 99.5% | 56.9% | 35.0% | 0 |
| Gödel T_G | 84.5% | 100.0% | 54.5% | 85.0% | 8 (0.8%) |

Key findings: At n=1035, all three operators differ significantly (McNemar p<0.001). T_L and T_P maintain zero false positives — a conservative property required for regulatory AI — but miss borderline cases. T_G achieves highest accuracy (84.5%) and best borderline recall (85%), at the cost of 8 false positives (0.8%) via min-semantics over-classification. T_L and T_P are no longer decision-equivalent at this scale (28 discordant pairs, p<0.001). The dominant finding is that rule base completeness matters more than operator choice. Full results in the arXiv preprint.
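The over-classification mechanism behind Gödel's false positives can be seen on a single borderline conjunction. The numbers below are illustrative, not drawn from the benchmark dataset:

```python
def t_l(a, b): return max(0.0, a + b - 1.0)   # Łukasiewicz
def t_p(a, b): return a * b                   # Product
def t_g(a, b): return min(a, b)               # Gödel

# Two weak-ish conditions and a 0.5 decision threshold (hypothetical values):
a, b, threshold = 0.6, 0.55, 0.5

scores = {"T_L": t_l(a, b), "T_P": t_p(a, b), "T_G": t_g(a, b)}
passed = {name: score >= threshold for name, score in scores.items()}

# Gödel's min-semantics keeps the weakest conjunct unattenuated (0.55 >= 0.5),
# so it alone classifies this case as positive; T_L (~0.15) and T_P (~0.33)
# stay conservative. This is the trade-off in the table: better borderline
# recall, at the cost of false positives.
assert passed == {"T_L": False, "T_P": False, "T_G": True}
```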

Benchmark code and dataset: `benchmark/`


## Tests

```bash
pytest tests/ -v --tb=short
# 201/201 passed

# Key invariant (must always pass):
pytest tests/unit/test_reasoning_engine.py::test_reason_confidence_uses_lukasiewicz -v
# T_L(0.8, 0.9) = 0.7 — the mathematical contract
```

Test coverage by module:

| Module | Tests | Status |
|---|---|---|
| LukasiewiczLogic | 30 | ✅ 30/30 |
| LogFuzzyLogic | 14 | ✅ 14/14 |
| HypergraphEncoder | 13 | ✅ 13/13 |
| ProofTrail + ProofTreeBuilder | 14 | ✅ 14/14 |
| ReasoningEngine | 18 | ✅ 18/18 |
| LogicAugmentedAttention | 8 | ✅ 8/8 |
| Aggregators | 7 | ✅ 7/7 |
| GumbelRelaxation | 6 | ✅ 6/6 |
| TemporalNode | 8 | ✅ 8/8 |
| OntologyPatcher | 9 | ✅ 9/9 |
| SGATModulator | 7 | ✅ 7/7 |
| AdaptiveFilter | 9 | ✅ 9/9 |
| API + Integration | 56 | ✅ 56/56 |

## Research Context

LGGT+ is developed by TriStiX S.L. (Alicante, Spain).

### Research Hypotheses

- **H1 (Expressiveness):** An architecture combining graph transformers with Łukasiewicz t-norms can express any domain-ontological implication with guaranteed differentiability via Logic-Augmented Attention (LAA).

- **H2 (Verifiability):** The Proof Trail constitutes a formal proof in the sense of Łukasiewicz logic — each step is an inference rule instance, and the t-norm chain satisfies monotonicity.

- **H3 (Generalisation):** LGGT+ trained on one domain ontology preserves its reasoning structure when transferred to a new domain, requiring only a new ontological graph.
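The monotonicity property invoked by H2 can be property-checked exhaustively on a coarse grid with a few lines of stdlib Python. This is a sketch of the invariant, not the repo's actual test suite:

```python
import functools
import itertools

def t_l(a: float, b: float) -> float:
    """Łukasiewicz t-norm."""
    return max(0.0, a + b - 1.0)

def fuse(confidences) -> float:
    """Left-fold a Łukasiewicz t-norm chain over step confidences."""
    return functools.reduce(t_l, confidences)

# Monotonicity: raising any single step confidence never lowers the fused score.
grid = [i / 10 for i in range(11)]
for a, b, c in itertools.product(grid, repeat=3):
    base = fuse([a, b, c])
    bumped = fuse([min(a + 0.1, 1.0), b, c])
    assert bumped >= base - 1e-12  # tolerance guards float rounding only
```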

### Related Work

| System | Reference | Relation to LGGT+ |
|---|---|---|
| LTNtorch | Badreddine et al., AIJ 2022 | T-norm framework — no graph transformer |
| IBM/LNN | Riegel et al., AAAI 2025 | Łukasiewicz logic — no proof trail |
| GNN-QE | Zhu et al., ICML 2022 | Graph + fuzzy logic — product t-norm |
| SGAT-MS | NeurIPS 2025 Spotlight | Graph attention + SAT — no legal domain |
| LKHGT | April 2025 | Hypergraph transformer — inspiration for L1 |
| logLTN | Badreddine & Serafini 2023 | Log-space logic — inspiration for LogFuzzyLogic |

LGGT+ is the first system combining all three: Łukasiewicz t-norms + graph transformers + auditable proof trails.


## Commercial Use

The lggt-core repository (this repo) contains the mathematical foundation layers (L1–L3) and is released under Apache 2.0.

The domain modules (EU AI Act ontology, Annex IV generator, GoviX SaaS platform) are proprietary and available through TriStiX S.L. commercial offerings.

Why open-source the core? The mathematical framework is our scientific contribution to the community. The domain knowledge (ontologies, rules, patches for delegated acts) is our commercial moat. You get the engine; we keep the maps.


## Citing the Journal Article

To cite the journal version of this work (working draft):

```bibtex
@article{laabs2026lggt,
  title     = {T-Norm Operators for EU AI Act Compliance Classification:
               An Empirical Comparison of Łukasiewicz, Product, and Gödel
               Semantics in a Neuro-Symbolic Reasoning System},
  author    = {Laabs, Adam},
  journal   = {Artificial Intelligence and Law},
  year      = {2026},
  note      = {Working draft. arXiv:2026.XXXXX},
  url       = {https://arxiv.org/abs/2026.XXXXX}
}
```



## License

Apache 2.0 — see LICENSE.

Domain modules (EU AI Act ontology, GoviX) are proprietary. Contact: Adam.Laabs@TriStiX.com

