Update | 27 March 2024: Mozilla has submitted its comments to the NTIA consultation on openness in AI models originally referenced in this blog post. Drawing on Mozilla’s own history as part of the open source movement, the submission seeks to help guide difficult conversations about openness in AI. First, we shine a light on the different dimensions of openness in AI, including the different components across the AI stack and development lifecycle. Second, we argue that openness in AI can spur competition and help diffuse innovation and its benefits more broadly across the economy and society as a whole; that it can advance open science and progress in the entire field of AI; and that it advances accountability and safety by enabling more research and supporting independent scrutiny as well as regulatory oversight. Openness has long been a key tenet of U.S. leadership in technology, including in recent progress in AI, but ill-conceived policy interventions could jeopardize that leadership. We also recently published the technical and policy readouts from the Columbia Convening on Openness and AI to serve as a resource to the community, both for this consultation and beyond.
Civil society and academics are joining together to defend AI openness and transparency. Mozilla and the Center for Democracy & Technology (CDT), along with members of civil society and academia, have united to underscore the importance of openness and transparency in AI. Nearly 50 signatories sent a letter to Secretary Gina Raimondo in response to the U.S. Commerce Department’s request for comment on openness in AI models.
“We are excited to collaborate with expert individuals and organizations who are committed to seeing more transparent AI innovation,” said Jenn Taylor Hodges, Director of US Public Policy & Government Relations at Mozilla. “Open models in AI will promote trustworthiness and accountability that will better serve society. Mozilla has a long history of promoting open source and fighting corporate consolidation on the Internet. We are bringing those values and experiences to the AI era, making sure that everyone has a say in shaping the future of AI.”
There has been a noticeable shift in the AI landscape toward closed systems, a trend that Mozilla has diligently worked to counter. As detailed in the recently released Accelerating Progress Toward Trustworthy AI report, prominent AI entities are adopting closed systems, prioritizing proprietary control over collaborative openness. These companies have advocated for increased opacity, citing fears of misuse. However, beneath these arguments lies a clear agenda to stifle competition and limit oversight in the AI market.
The joint letter was sent in advance of the Department of Commerce’s comment deadline on openness in AI models, which closes March 27. Endorsed by science policy think tanks, advocates against housing discrimination, and computer science luminaries, it argued:
- Open models have significant benefits to society: They help advance innovation, competition, research, civil and human rights protections, and safety and security.
- Policy should look at the marginal risks of open models compared to closed models: Commerce should look to recent Stanford and Princeton research, which emphasizes that there is limited evidence that open models create new risks not already present in closed models.
- Policy should focus more on AI applications, not models: Where openness makes AI risks worse, policy interventions are more likely to succeed by targeting how the AI system is deployed rather than by restricting the sharing of information about AI models.
- Policy should proactively advance openness: Policy on this topic must be developed and vetted by more than just national security agencies, and should promote more R&D into open approaches for AI and better standards for testing and releasing open models.
“The range of participants in this effort – from civil liberties to civil rights organizations, from progressive groups to more market-oriented groups, with advocates for openness in both government and industry, and a broad range of academic experts from law, policy, and computer science – demonstrates how the future of open innovation around powerful AI models is critically important to a wide variety of communities,” said Kevin Bankston, Senior Advisor on AI Governance for CDT. “As our letter highlights, the benefits of open models over closed models for competition, innovation, security and transparency are rather clear, while the risks compared to closed models aren’t. Therefore the White House and Congress should exercise great caution when considering whether and how to regulate the publication of open models.”
Mozilla’s upcoming, longer submission to the Commerce Department’s request for comment will go into greater detail, expanding on Mozilla’s long history of increasing privacy, security, and functionality across the internet through its products, investments, and advocacy. It highlights key findings from the recent Columbia Convening on Openness and AI, and explains how openness is vital to innovation, competition, and accountability, including safety and security as well as the protection of rights and freedoms. It also takes on some of the most prominent arguments driving the push to limit access to AI models, such as claims of “unknown unknown” security risks.
The joint letter and Mozilla’s upcoming response to the call for comments demonstrate how openness can be an enabler of a better future: one where everyone can help build, shape, and test AI so that it works for everyone. That is the future we need, and it’s the one we must keep working toward through policy, technology, and advocacy alike.