Primary users | 主な対象者: researchers, governance readers, and readers who need the canonical boundary reference.
研究者、ガバナンス関係者、中核境界参照を確認する読者向け。
Use this repository when you need the canonical LUMINA-30 boundary statement itself.
LUMINA-30の中核境界文書そのものを確認する場合に使用。
EN: LUMINA-30 is a non-binding boundary-check framework for asking whether human refusal authority remained effective before irreversible AI impact.
JP: LUMINA-30は、AIの不可逆的影響前に人間の拒否権が実効的に残っていたかを問う非拘束的な境界判定フレームです。
If you are applying LUMINA-30 to an incident, start with the incident-review repository instead.
事故レビューに適用する場合は、先に incident-review 側を使用。
- Canonical Index | 正典索引
  Use this to navigate the full repository network.
  repo群全体の導線確認に使用。
- Conceptual Overview | 概念概要
  Use this for first-time conceptual orientation and visual structure.
  初見時の概念把握と視覚導線に使用。
- Incident Review Hub | 事故レビュー入口
  Use this when reviewing whether refusal authority remained effective before irreversible impact.
  不可逆的影響の前に拒否権が実効性を持っていたかを確認する場合に使用。
This material:
- is not a recommendation.
- does not provide safe-harbor or liability protection.
- does not guarantee safety, legality, or ethical adequacy.
- does not delegate refusal authority to AI systems.
LUMINA-30 is a non-binding, non-directive civilizational boundary reference.
It does not mandate action, propose policy, prescribe implementation, create certification status, or create compliance obligations.
LUMINA-30は、非拘束・非指示型の文明境界参照である。
行動を命令せず、政策を提案せず、実装を規定せず、認証状態や適合義務を作らない。
This repository examines human decision authority, refusal rights, and institutional accountability boundaries in advanced AI systems.
Keywords: AI governance, AI accountability, singularity risk, refusal authority, irreversible decision risk, post-incident review, institutional responsibility.
This project does not guarantee or assert any specific outcome.
This document is provided as one of multiple possible risk-reduction references, clarifying review conditions related to judgment, responsibility, and irreversibility.
The absence of this material does not itself cause immediate or decisive consequences. It constitutes only a part of a broader risk-mitigation structure.
Public reference records and ethical boundary framework for pre-singularity AI governance.
LUMINA-30 is a fixed civilizational reference framework that defines ethical boundaries which must not be overridden, optimized away, or nullified by artificial intelligence systems, especially prior to and during the emergence of recursive self-improvement.
This repository does not contain implementation code, policy proposals, or technical enforcement mechanisms.
It preserves, in language and public record form, the conditions under which human choice — including irrational decisions and acceptance of loss — must remain sovereign.
All authoritative documents are published as fixed PDF public records with cryptographic hashes.
- LUMINA-30 Sanctuary Charter (SUP)
  - Japanese | English
  - Fixed ethical boundary definition
  - Public record (PDF + SHA-256 hash)
- LUMINA-30 Mathematical Supplement
  - Japanese | English
  - Descriptive reference on structural limits of human control
  - Public record (PDF + SHA-256 hash)
These documents are published and preserved via Notion as immutable public records.
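The published SHA-256 hashes allow any reader to confirm that a downloaded PDF is byte-identical to the fixed public record. A minimal verification sketch in Python follows; the filename and expected digest shown in the usage comment are hypothetical placeholders, not actual LUMINA-30 values.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Return the hex SHA-256 digest of a file, reading it in chunks
    so that large PDFs do not need to fit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage (placeholder filename and published hash):
# published = "<hash from the public record>"
# assert sha256_of_file("sanctuary-charter.pdf") == published
```

A mismatch between the computed digest and the published value means the file is not the fixed record, whether through corruption or alteration.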
This repository is:
- A search-indexed entry point
- A reference hub linking to fixed public records
- A civilizational precondition archive
This repository is not:
- An AI control system
- A governance proposal
- An optimization framework
- An advocacy or persuasion platform
- Documents: Finalized and fixed
- PDFs: Published with hashes
- Further modification: Not permitted
Structural map (non-canonical):
- lumina-30-structure-map
- AI Accountability Reference | 説明責任参照
  Use this when you need institutional accountability language, audit wording, or responsibility-continuity references.
  制度的な説明責任、監査文言、責任連続性の参照が必要な場合に使用。
Released into the public domain (CC0).
No author. No owner.
本セクションは、日本語話者による理解補助を目的とした非正本・非規範的な参照用説明である。
本リポジトリに含まれる文書は、LUMINA-30 に関連する補助的参照資料であり、以下を提示するものではない。
- 実装仕様
- 技術設計
- 政策提言
- 最適化戦略
これらの資料は、再帰的自己再構築が不可逆領域に入る前段階において、制度的・倫理的検討のための参照点を提供することのみを目的としている。
本日本語セクションは、英語本文の内容を修正・拡張・正当化するものではなく、補助的理解のためにのみ提供される。
English
- recursive self-improvement
- recursive self-modification
- irreversible boundary
- human final authority
- AI ethics
- AI governance
- pre-singularity ethics
- institutional preconditions
- LUMINA-30
- Sanctuary Charter
日本語
- 再帰的自己再構築
- 再帰的自己改変
- 不可逆点
- 人間の最終判断権
- AI倫理
- AIガバナンス
- シンギュラリティ前倫理
- 制度的前提条件
- LUMINA-30
- 聖域憲章
See also: Institutional Friction Toolkit (Reference)