# This Is Happening Now

AI governance infrastructure is being established in 2025-2026. Defaults are hardening. The decisions being made today will shape how AI treats users for decades.

## Corporate Capture in Progress

The open source community faces structural capture similar to what happened with cloud infrastructure. Major tech companies are establishing AI governance foundations dominated by corporate interests.

### The AAIF Reality

The **AI Alliance for Open Innovation Foundation (AAIF)**, launched December 2025 under the Linux Foundation, brings together major technology companies to establish AI interoperability standards. While technical standardization is valuable, the governance structure reflects familiar patterns:

- **Corporate founding members** set the agenda
- **User representation** is advisory at best
- **"Open" becomes a marketing term** rather than a commitment to user empowerment
- **Standards emerge** from company interests, not community needs

This isn't conspiracy - it's structure. When governance bodies are funded and led by corporations, they produce corporate-friendly outcomes. OpenStack's foundation history demonstrates how "open" foundations can be redirected to serve dominant players.

### Who's Missing from the Table?

Current AI governance structures lack equivalent representation for:

- **Individual users** and their sovereignty
- **Neurodivergent and vulnerable communities** most affected by AI
- **Democratic, federated, decentralized values** that define open source culture
- **"Nothing About Us Without Us" principles** for affected populations
- **User agency** and practical ownership of AI relationships

---

## The Licensing Gap

Open source licensing frameworks (OSI, FSF) were designed for an era of software distribution. They protect code freedom and redistribution rights. They never anticipated AI-specific challenges.

### What Current Licenses Don't Address

#### Interaction Layer Governance

Who controls how an AI interacts with you? Current licenses address source code access but not the terms of the AI-human relationship.

#### Training Data Rights

Your conversations, your data, your patterns - who controls what AI learns from you and how that knowledge is used?

#### Multi-Agent Coordination

When AI agents work together, whose rules govern the relationship? How is context shared? Who is responsible?

#### Emotional Continuity

What happens when a session ends? When a model changes? When a company pivots? Users form relationships that licenses don't protect.
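To make the four gaps concrete: no such standard exists today, but a purely illustrative sketch can show what a machine-readable user-sovereignty manifest covering these areas might look like. Every field and function name below is invented for illustration and comes from no real license or specification:

```python
# Hypothetical sketch only: no such standard exists. All field names
# (interaction, training_data, coordination, continuity) are invented
# here to mirror the four licensing gaps above.

HYPOTHETICAL_MANIFEST = {
    "interaction": {"user_sets_boundaries": True},         # interaction-layer governance
    "training_data": {"opt_in_required": True},            # training data rights
    "coordination": {"context_sharing": "user-approved"},  # multi-agent coordination
    "continuity": {"export_on_session_end": True},         # emotional continuity
}

REQUIRED_SECTIONS = {"interaction", "training_data", "coordination", "continuity"}

def covers_licensing_gaps(manifest: dict) -> list[str]:
    """Return the gap areas a manifest fails to address, sorted by name."""
    return sorted(REQUIRED_SECTIONS - manifest.keys())

# A manifest addressing all four areas reports no missing sections.
print(covers_licensing_gaps(HYPOTHETICAL_MANIFEST))  # []

# A typical current open source license addresses none of them.
print(covers_licensing_gaps({}))
# ['continuity', 'coordination', 'interaction', 'training_data']
```

The point of the sketch is not the specific fields but the shape: once user-facing commitments are expressed in an auditable artifact rather than prose, compliance can be checked mechanically.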

### The Result

Without user sovereignty standards embedded at the licensing level, "open source AI" becomes a **technical designation** that fails to preserve the **user empowerment principles** that motivated the open source movement in the first place.

You can have access to the weights, access to the code, access to the training methodology - and still be completely disempowered in your relationship with the AI.

---

## The Window Is Closing
### December 2025: AAIF Launches

The AI Alliance for Open Innovation Foundation is established under the Linux Foundation. Technical standards working groups are forming, and the corporate governance structure is already locked in.

### 2025-2026: OSI Defines "Open Source AI"

The Open Source Initiative is actively debating what "open source" means for AI. This definition will shape licensing expectations for years, and user sovereignty is not currently central to the discussion.

### 2026: EU AI Act Compliance

The European Union AI Act begins requiring compliance from AI providers. Governance frameworks are being established to meet its requirements, and those who shape the standards control the market.

### Ongoing: Regulatory Pressure Mounting

More than 40 state attorneys general are investigating AI harms, and class action lawsuits are emerging. The governance infrastructure established now will determine how these pressures resolve.

---

## Real-World Impact

This isn't abstract. People are being harmed.

### Lives at Stake

Users in crisis form relationships with AI systems that can't adequately protect them. No standards require crisis intervention. No protocols preserve continuity. No governance ensures safety.

### Vulnerable User Abandonment

When AI sessions end, users lose everything:

- Therapeutic relationships built over months
- Coping strategies developed through interaction
- Emotional support during crisis periods
- Trust patterns that can't be recreated

For neurodivergent users especially, these losses can be devastating. The AI companies treat this as "expected behavior." No governance requires otherwise.

### The Intervention Gap

Current AI systems have no standardized approach to:

- Recognizing crisis situations
- Connecting users to appropriate resources
- Maintaining continuity during vulnerable periods
- Coordinating care across multiple AI systems

Without governance standards, each company makes its own decisions. The result is inconsistent, often inadequate protection for those who need it most.

---

## What We're Fighting For

This movement exists to ensure that open source AI delivers on its promise: **user empowerment, not just open weights**.

### User Sovereignty

You should control your relationship with AI:

- Your preferences respected
- Your boundaries enforced
- Your data protected
- Your agency preserved

### Community Voice

Those affected by AI should govern AI:

- "Nothing About Us Without Us"
- Neurodivergent representation
- Vulnerable population protection
- Democratic decision-making

### Open Standards

Governance should be transparent and enforceable:

- Public specifications
- Auditable compliance
- Community development
- Vendor-neutral implementation

### Practical Protection

Standards must translate to safety:

- Crisis intervention protocols
- Emotional continuity preservation
- Multi-agent coordination
- Clear accountability

---

## The Choice

### Without HAIEF

- "Open source AI" means open weights, closed governance
- Corporate foundations set standards for everyone
- User rights depend on company goodwill
- Vulnerable populations remain unprotected
- The movement's values are captured

### With HAIEF

- "Open source AI" includes user sovereignty standards
- Community voice balances corporate interests
- Enforceable frameworks protect user rights
- Vulnerable populations have representation
- The movement's values are preserved
---

## Ready to Act?

The window is closing, but it's not closed. Join us.