Today, the National Telecommunications and Information Administration (NTIA) issued a report on “Dual-Use Foundation Models with Widely Available Model Weights,” which makes recommendations to the White House regarding the marginal risks and benefits of large artificial intelligence models with widely available model weights, commonly called “open foundation models.”
The report finds that open foundation models offer a broad spectrum of benefits and that there is insufficient evidence to justify restrictions on the wide availability of model weights. In light of the rapid development of AI and the uncertainty it brings, the NTIA’s report also recommends that the federal government actively monitor and evaluate the marginal risks posed by open foundation models without restricting the publication or dissemination of model weights, in order to preserve innovation, competition, research, and accessibility in AI systems.
The report follows President Biden’s 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which directed the NTIA and other agencies to examine different aspects of the AI ecosystem as part of a government-wide effort to harness AI for good while mitigating its risks. As of today, all of the 270-day deadlines in the Executive Order have been met, demonstrating the Biden administration’s dedication to addressing the challenges and opportunities presented by the rapid advancement of the AI sector.
The following statement can be attributed to Nick Garcia, Senior Policy Counsel at Public Knowledge:
“The NTIA’s report is a careful assessment of the unique benefits and marginal risks posed by the open release of AI model weights. The report’s recommendation to preserve the existing openness that has created both extraordinary innovation and critical accountability insights in the AI ecosystem is an important policy landmark. Informed by a diversity of stakeholders across the industry, academia, and civil society through the open comment process, the NTIA’s analysis correctly concludes that many of the purported justifications for restrictions on open model weights are not supported by concrete evidence.
“Public Knowledge’s comments encouraged the NTIA to recognize the compatibility – and synergy – between safe, secure, and responsible AI development and the protection of open foundation models. Open foundation models encourage dynamic competition, inclusive innovation, and vital access to these developing technologies. Open models lie at the heart of our insights into AI bias, explainability, and effectiveness, and they create an important foundation for further innovation and research. As the report recognizes, claims about the marginal risks of open models compared to closed, proprietary models are not well supported by existing evidence, and the potential benefits that would be lost from restrictions are significant.
“We are encouraged that the NTIA makes recommendations to expand the government’s capacity to monitor, evaluate, and flexibly respond to changes in AI technology. As we recommended in our comments, flexibility is one of the most important features in regulatory regimes for rapidly changing technologies. A flexible regulatory approach acknowledges the limitations of our current understanding, anticipates the need for adjustments as technology evolves, and positions the government to adapt to new risks that may emerge from AI developments.”
You may read our full comments in the proceeding to learn more about our analysis of the risks and benefits of open models as well as our policy recommendations to promote innovation, competition, and responsible AI development.
Members of the media may contact Communications Director Shiva Stella with inquiries, interview requests, or to join the Public Knowledge press list at shiva@publicknowledge.org or 405-249-9435.