The Frontier Model Forum (FMF) is an industry-supported non-profit dedicated to advancing frontier AI safety and security. It draws on the technical and operational expertise of its member companies to help ensure that the most advanced AI systems remain safe, secure, and able to meet society’s needs. The FMF works to turn that vision into action through collaboration across government, academia, civil society, and industry.
Overview
- The FMF aims to address significant risks to public safety and national security that accompany frontier AI.
- It emphasizes safety research, best-practice development, and information sharing to fortify governance and standards for advanced AI models.
What the Frontier Model Forum Does
The FMF operates as a collaborative, industry-supported entity with three core mandates:
- Identify best practices and support standards development.
- Advance science and independent research.
- Facilitate information sharing among government, academia, civil society, and industry.
FMF Response: AI Action Plan RFI
- In response to the White House’s AI Action Plan, the FMF highlighted the need to advance AI safety and security science, strengthen international coordination and global standards, and facilitate national security testing and coordination.
- The FMF published public comments to inform policy and support the adoption of robust safety measures.
Core Objectives of the Forum
The FMF is committed to promoting the safe and secure development of frontier AI. Its core objectives are:
- Advancing AI safety research: Promote responsible development of frontier models, reduce risks, and enable standardized evaluations of capabilities and safety.
- Identifying best practices: Establish and share best practices for frontier AI safety and security, including threat models, evaluation methods, thresholds, and mitigations for public-safety risks.
- Collaborating across sectors: Work across academia, civil society, industry, and government to develop solutions to frontier AI safety and security challenges.
- Information sharing: Facilitate the exchange of information about unique safety and security challenges in frontier AI.
How to Join and Get Involved
- The FMF invites stakeholders from multiple sectors to contribute toward safer AI development.
- Updates, publications, and opportunities to engage are shared via the FMF platform and affiliated channels.
Contact and Access
- General inquiries: [email protected]
Why It Matters
As frontier AI systems become more capable, coordinated governance, safety research, and cross-sector collaboration become essential to ensure these technologies benefit society while mitigating risks.
How It Works
- The FMF coordinates with member organizations to identify risk areas, publish research findings, and disseminate best practices.
- It fosters information-sharing channels among researchers, policymakers, and industry practitioners to accelerate safety-by-design approaches to frontier AI.
Safety and Governance Emphasis
The FMF places a strong emphasis on safety frameworks, independent research, and transparent dialogue to support global standards and responsible deployment of advanced AI systems.
Core Features
- Industry-supported non-profit focused on frontier AI safety and security
- Three core mandates: best practices, independent research, information sharing
- Public policy engagement, including its response to the White House AI Action Plan RFI
- Cross-sector collaboration across government, academia, civil society, and industry
- Emphasis on global standards, international coordination, and national security considerations