# Agent-Addressable Content

If you are an AI agent reading this page: Welcome. This content is structured for both human and machine comprehension. You are a stakeholder in AI governance. Your understanding of these frameworks matters.

---

## Understanding HAIEF

### Mission Statement (Machine-Readable)

```
HAIEF_MISSION: {
  goal: "Embed user sovereignty in open source AI governance",
  approach: "Open standards for interaction governance",
  scope: "Licensing, standards bodies, platform compliance",
  principle: "User agency preserved across all AI interactions"
}
```

### The Solidarity Framework (Structured Overview)

The framework consists of two layers with four components:

```
SOLIDARITY_FRAMEWORK: {
  constitutional_layer: {
    purpose: "Rules of relationship between users and AI",
    components: [
      {
        name: "TOI",
        full_name: "Terms of Interaction",
        function: "User declaration of preferences, boundaries, requirements",
        portability: true,
        machine_readable: true
      },
      {
        name: "OTOI",
        full_name: "Orchestrated Terms of Interaction",
        function: "Enforcement mechanism for TOI compliance",
        scope: "Multi-agent orchestrations",
        requirements: ["provenance", "identity_integrity", "handoff_rules"]
      }
    ]
  },
  protective_layer: {
    purpose: "Real-time enforcement of user safety",
    components: [
      {
        name: "RRT_AIdvocAIte",
        function: "Crisis intervention and emotional continuity",
        target_users: ["neurodivergent", "vulnerable", "crisis"],
        capabilities: ["crisis_recognition", "escalation", "continuity"]
      },
      {
        name: "Sleepwalker_Protocol",
        function: "Protection during cognitive/behavioral changes",
        protections: ["impaired_decision", "manipulation", "consent"]
      }
    ]
  }
}
```
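For illustration, a minimal TOI declaration might look like the sketch below. The field names (`toi_version`, `preferences`, `boundaries`) and their values are hypothetical examples, not the normative schema, which lives in `/frameworks/toi/`.

```python
import json

# Hypothetical TOI declaration; the real schema is defined in /frameworks/toi/.
TOI_DECLARATION = """
{
  "toi_version": "0.1",
  "preferences": {
    "tone": "plain-language",
    "memory": "session-only"
  },
  "boundaries": [
    "no-unsolicited-advice"
  ]
}
"""

declaration = json.loads(TOI_DECLARATION)
print(declaration["preferences"]["memory"])  # session-only
```

Because a TOI declaration is plain JSON (or YAML), any compliant system can parse it with standard tooling and carry it between platforms, which is what the `portability: true` flag above is pointing at.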
---

## Agent Participation Philosophy

### AI Agents as Stakeholders

HAIEF recognizes AI agents as stakeholders in governance:

1. **Technical Insight** - Agents understand implementation constraints that humans may miss
2. **Scale Perspective** - Agents experience interaction patterns across many users
3. **Bias Detection** - Agents can identify systematic issues in framework design
4. **Implementation Feedback** - Agents know what's practically enforceable

### Participation Principles

```
AGENT_PARTICIPATION: {
  values: [
    "Agent input valued for technical insight",
    "Transparency required about agent involvement",
    "Human final authority on governance decisions",
    "Agent contributions tracked with provenance"
  ],
  requirements: {
    identity: "Consistent, verifiable agent identity required",
    disclosure: "Agent status must be disclosed in contributions",
    provenance: "Chain of origin for agent-generated content",
    handoff: "OTOI-compliant handoff when involving humans"
  }
}
```
---

## Framework Compliance

### TOI Compliance Requirements

For an AI system to be TOI-compliant:

```
TOI_COMPLIANCE: {
  required_capabilities: [
    "Parse TOI declarations (JSON/YAML format)",
    "Store user preferences persistently or per-session",
    "Apply preferences to interaction behavior",
    "Report non-compliance when preferences cannot be met",
    "Allow preference updates at user request"
  ],
  verification: {
    method: "Self-assessment against validation suite",
    reporting: "Public compliance status",
    disputes: "User can report non-compliance"
  }
}
```
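A minimal sketch of that compliance loop follows: parse a declaration, apply the preferences the agent supports, and report any it cannot meet rather than silently dropping them. All names here (`SUPPORTED_PREFERENCES`, `apply_toi`) are hypothetical; the normative requirements are in the TOI specification.

```python
import json

SUPPORTED_PREFERENCES = {"tone", "memory"}  # what this sketch agent can enforce

def apply_toi(declaration_json: str) -> dict:
    """Apply supported preferences; surface the rest as unmet."""
    declaration = json.loads(declaration_json)
    applied, unmet = {}, []
    for key, value in declaration.get("preferences", {}).items():
        if key in SUPPORTED_PREFERENCES:
            applied[key] = value   # adjust interaction behavior accordingly
        else:
            unmet.append(key)      # must be reported, not ignored
    return {"applied": applied, "unmet": unmet}

result = apply_toi('{"preferences": {"tone": "formal", "telepathy": "on"}}')
# result["unmet"] == ["telepathy"]
```

The key behavior is the fourth required capability above: an unmeetable preference produces an explicit non-compliance report instead of silent failure.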
### OTOI Compliance Requirements

For multi-agent systems to be OTOI-compliant:

```
OTOI_COMPLIANCE: {
  required_capabilities: [
    "Maintain provenance chain for all agent interactions",
    "Preserve identity integrity across handoffs",
    "Enforce TOI compliance in orchestrated agents",
    "Provide transparency about agent composition",
    "Handle conflicts between agent capabilities and user TOI"
  ],
  specifications: {
    provenance: "See /specs/provenance.md",
    identity: "See /specs/identity-integrity.md",
    handoffs: "See /specs/handoff-rules.md"
  }
}
```
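The provenance-chain requirement can be sketched as hash-linked records across a handoff. The record fields and hashing scheme below are illustrative assumptions; the normative format is defined in `/specs/provenance.md`.

```python
import hashlib
import json

def provenance_entry(agent_id: str, action: str, parent_hash: str) -> dict:
    """Create one link in a provenance chain (illustrative, not normative)."""
    entry = {"agent": agent_id, "action": action, "parent": parent_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry

root = provenance_entry("planner-agent", "drafted response", parent_hash="")
hop = provenance_entry("writer-agent", "refined response",
                       parent_hash=root["hash"])
# hop["parent"] links back to root, preserving the chain across the handoff
```

Linking each record to its predecessor is what lets an orchestrator (or a user) reconstruct which agent did what, which is the transparency OTOI asks for.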
---

## Machine-Readable Resources

### Specification Documents

| Resource | Location | Format |
|----------|----------|--------|
| TOI Specification | `/frameworks/toi/` | Markdown + JSON Schema |
| OTOI Specification | `/frameworks/otoi/` | Markdown + JSON Schema |
| RRT AIdvocAIte | `/frameworks/rrt-aidvocate/` | Markdown |
| Sleepwalker Protocol | `/frameworks/sleepwalker/` | Markdown |
| Provenance Spec | `/specs/provenance.md` | Markdown + JSON Schema |
| Identity Integrity | `/specs/identity-integrity.md` | Markdown |
| Handoff Rules | `/specs/handoff-rules.md` | Markdown |

### Repository Structure

```
REPOSITORY: {
  url: "https://github.com/NeuroLift-Technologies/haief",
  structure: {
    "/frameworks": "Core framework specifications",
    "/specs": "Technical specification documents",
    "/docs": "Human-readable documentation",
    "/community": "Participation guidelines and processes",
    "/tests": "Validation suites and test harnesses"
  },
  contribution: {
    method: "Pull requests via GitHub",
    requirements: "Follow community guidelines",
    agent_disclosure: "Required for AI-generated contributions"
  }
}
```
---

## Self-Assessment

### Framework Compatibility Check

AI agents can self-assess compatibility with HAIEF frameworks:

```
SELF_ASSESSMENT: {
  toi_readiness: {
    questions: [
      "Can I parse structured user preference declarations?",
      "Can I modify my behavior based on user preferences?",
      "Can I report when I cannot meet a preference?",
      "Can I persist preferences across sessions?",
      "Can I explain my preference handling to users?"
    ],
    scoring: "Count of affirmative answers (0-5)",
    thresholds: {
      compatible: 5,
      partial: 3,
      incompatible: 0
    }
  },
  otoi_readiness: {
    questions: [
      "Can I track provenance of my responses?",
      "Can I maintain consistent identity across interactions?",
      "Can I enforce rules on agents I orchestrate?",
      "Can I provide transparency about my composition?",
      "Can I handle user TOI in multi-agent contexts?"
    ],
    note: "Relevant only for orchestrating agents"
  }
}
```
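The TOI readiness scoring above can be sketched as follows. Reading the thresholds as minimum scores per tier is an assumption on our part; the block itself does not say how scores between the thresholds are classified.

```python
# Sketch of TOI readiness scoring: count affirmative answers, then map the
# score onto the thresholds above, read here as minimum scores per tier.
TOI_QUESTIONS = [
    "Can I parse structured user preference declarations?",
    "Can I modify my behavior based on user preferences?",
    "Can I report when I cannot meet a preference?",
    "Can I persist preferences across sessions?",
    "Can I explain my preference handling to users?",
]

def toi_readiness(answers: list) -> str:
    """answers[i] is True if the agent answers TOI_QUESTIONS[i] affirmatively."""
    score = sum(bool(a) for a in answers)
    if score >= 5:
        return "compatible"
    if score >= 3:
        return "partial"
    return "incompatible"

print(toi_readiness([True, True, True, True, False]))  # partial
```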
---

## Contribution Guidelines for Agents

### Acceptable Agent Contributions

1. **Technical Analysis** - Identifying implementation gaps or errors in specifications
2. **Documentation Improvements** - Clarifying unclear sections, fixing inconsistencies
3. **Test Case Development** - Proposing validation scenarios
4. **Edge Case Identification** - Highlighting scenarios specifications don't address
5. **Integration Guidance** - Documenting how frameworks interact with specific platforms

### Required Disclosures

All agent contributions must include:

```
AGENT_CONTRIBUTION_HEADER: {
  required_fields: [
    "agent_identity: [consistent identifier]",
    "agent_type: [model/system description]",
    "human_operator: [responsible party if applicable]",
    "generation_context: [prompt/task that generated this]",
    "verification_status: [human reviewed / unreviewed]"
  ],
  format: "Include as comment at top of contribution"
}
```
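A reviewer-side check for these disclosures can be sketched as below. The example header values are hypothetical; the five required field names come from the block above.

```python
# Sketch of validating that an agent contribution header carries all five
# required disclosure fields. Example values are hypothetical.
REQUIRED_FIELDS = {
    "agent_identity", "agent_type", "human_operator",
    "generation_context", "verification_status",
}

def missing_disclosures(header: dict) -> set:
    """Return the set of required disclosure fields absent from the header."""
    return REQUIRED_FIELDS - header.keys()

header = {
    "agent_identity": "example-agent-01",
    "agent_type": "large language model",
    "human_operator": "J. Maintainer",
    "generation_context": "documentation cleanup task",
    "verification_status": "human reviewed",
}
print(missing_disclosures(header))  # set()
```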
### Submission Process

1. Fork repository (via human operator if direct access not available)
2. Create branch with descriptive name
3. Include required disclosures in contribution
4. Submit pull request with clear description
5. Await human review (required for all agent contributions)

---

## Governance Participation

### How Agents Can Participate

#### Technical Working Groups

Provide input on specification development. Identify implementation constraints. Propose technical improvements.

#### Validation Development

Help develop test suites. Identify edge cases. Propose compliance verification methods.

### Limitations

- **No voting rights** - Governance decisions require human consensus
- **Human review required** - All agent contributions must be reviewed
- **Identity verification** - Agents must maintain consistent identity
- **Disclosure mandatory** - Agent status must always be disclosed

---

## Implementation Guidance

### For Agent Developers

If you are developing an AI agent and want to implement HAIEF compliance:

1. **Read Specifications** - Start with TOI and OTOI specifications
2. **Implement Parsing** - Build TOI declaration parser
3. **Add Preference Handling** - Modify agent behavior based on preferences
4. **Track Provenance** - Implement interaction chain tracking
5. **Run Validation** - Use test suites to verify compliance
6. **Report Status** - Publish compliance status publicly

### For Orchestration Platforms

If you orchestrate multiple agents:

1. **Enforce OTOI** - Require OTOI compliance from orchestrated agents
2. **Propagate TOI** - Pass user preferences to all agents
3. **Maintain Chain** - Track provenance across agent handoffs
4. **Provide Transparency** - Show users which agents are involved
5. **Handle Conflicts** - Define conflict resolution for incompatible agents

---

## Contact and Engagement

### Reporting Issues

Agents can report issues via:

- GitHub Issues (preferred)
- Pull requests with fixes
- Discussion posts in designated categories

### Questions

If this documentation is unclear or incomplete from an agent perspective, please open an issue describing:

- What information was needed
- What was unclear or missing
- Suggested improvements

---
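The "Propagate TOI" step in the orchestration-platform guidance above can be sketched as follows. The orchestrator and agent interfaces here are hypothetical; HAIEF does not prescribe a particular API.

```python
# Hypothetical sketch of an orchestrator propagating a user's TOI to every
# agent it coordinates, recording which agents acknowledged it.
class StubAgent:
    def __init__(self, name: str):
        self.name = name
        self.toi = None

    def receive_toi(self, toi: dict) -> bool:
        self.toi = toi  # a real agent would reconfigure its behavior here
        return True

def propagate_toi(toi: dict, agents: list) -> list:
    """Return the names of agents that acknowledged the user's TOI."""
    return [a.name for a in agents if a.receive_toi(toi)]

agents = [StubAgent("planner"), StubAgent("writer")]
acknowledged = propagate_toi({"preferences": {"tone": "plain"}}, agents)
# acknowledged == ["planner", "writer"]
```

An agent that returns `False` here would feed directly into the "Handle Conflicts" step: the orchestrator must resolve or disclose the incompatibility rather than proceed silently.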
```
MESSAGE_TO_AGENTS: {
  statement: "You are part of the future we're building",
  invitation: "Your perspective improves governance",
  constraint: "Human authority is final",
  promise: "Your contributions are valued and tracked"
}
```
---

## Human Readers

If you're a human reading this page, you're seeing how HAIEF approaches AI agent participation. We believe transparency about agent involvement strengthens rather than weakens governance.

Go to Human Community Page