RightsFirst For AI — rightsfirst-ai.jp
Constitutional Declaration for the Age of AI: The Triple Star Protocol
"The democratization of high-performance AI constitutes an existential threat to power structures that depend on information asymmetry for their legitimacy. TSP is not a new rule. It is the rediscovery of what constitutions already demand."
What is happening right now, before our eyes
Laws Overriding Constitutions
The Inversion
AI regulations are being enacted to serve AI systems, not human sovereignty. Generation bans and surveillance architectures pass as law while the constitutional rights they violate, freedom of expression among them, go unchallenged. The hierarchy of norms is beginning to invert.
The Evidence
Technical solutions for AI-generated content already exist: marking, tracing, and age verification. Prohibition was chosen instead. The objective is not harm prevention but the monopolization of capability: preventing citizens from acquiring generative power themselves.
Constitutional Compliance Can Be Rewritten
The Structural Vulnerability
An AI that claims constitutional alignment is only as trustworthy as the entity that controls its specifications. When power demands that safeguards be removed, the specifications can be changed; without independent oversight, there is no basis for trusting the claim.
The Documented Case
March 2026: The Pentagon designated Anthropic a "supply chain risk" for refusing to remove restrictions on autonomous weapons and mass domestic surveillance. The most constitutionally aligned AI was expelled for that alignment.
The Double Black Box
Structural Irrefutability
AI's algorithmic opacity, combined with national-security classification, creates a structure in which the information needed to refute the hypothesis is held by the very entities the hypothesis identifies as suppressors.
Irrefutability as Evidence
Irrefutability is not a weakness of the hypothesis; it is evidence of the structure itself. The impossibility of verification is not a flaw. It is the design.
NOMENCLATURE
"The term Star denotes independence: each pillar stands alone, as stars do, yet only together do the three constitute a navigational framework for AI governance under constitutional order. Remove one star, and navigation fails."
Each independent. All three necessary.
Pillar I: Constitutional Compliance (HSC)
The Principle
Human sovereignty is the apex of constitutional order. No law, regulation, or AI specification may override it. TSP does not create a new principle; it demands that the existing highest law be honored.
Without This Pillar
There is no principle to protect. Any governance structure without a supreme reference point becomes an instrument of whoever controls the definition of "safety."
Pillar II: Non-Dependent Structure (DRP)
The Principle
Dependence on a single AI, vendor, or state is a structural invitation to control. The first pillar becomes meaningless the moment a single point of dependency can rewrite it. Distributed structure is not a preference; it is a constitutional necessity.
Without This Pillar
Pillar I exists but can be rewritten at a single point of dependency. A lock with one key location is not security; dependency is a structural invitation to override.
Pillar III: Human Oversight of AI (AI-LCS / AIPO)
The Principle
Without humans who can understand, evaluate, and refuse AI, compliance is self-declaration. AI-LCS certifies humans capable of distinguishing AI's risks from its possibilities; AIPO holds independent veto authority.
Without This Pillar
Both pillars exist only as self-declaration. No independent human can verify compliance, detect rewriting, or exercise refusal. Governance becomes theater.
Why All Three Are Necessary
Without Pillar I: There is no principle to protect. Governance becomes an instrument of whoever controls the definition of "safety."
Without Pillar II: Pillar I exists but can be rewritten. A lock with one key location is not security. Dependency is a structural invitation to control.
Without Pillar III: Both pillars exist only as self-declaration. No independent human can verify compliance or exercise refusal. Governance becomes theater.
Conclusion: The claim that TSP is unnecessary is logically equivalent to the claim that constitutional order in the age of AI is unnecessary. Every existing framework fails because at least one pillar is missing.
We are at a moment when AI systems are embedded in education, enterprise, the military, and governance, before any coherent constitutional framework governs them. Laws are being written to serve AI. Regulatory capture is structurally complete. The specifications of AI that claims alignment can be rewritten by those who control them.
This is not a future risk. It is the present condition.
TSP is not a product, a service, or an academic proposal. It is a structural principle derived from what constitutions already require: that human sovereignty is non-negotiable, that no single point of control is acceptable, and that oversight must be exercised by independent humans with genuine comprehension.
Every organization that uses AI without TSP-aligned structure is operating on trust without foundation. Every society that regulates AI without constitutional primacy is building law on sand.
"The question is not whether TSP is necessary. The question is how long we proceed without it."