Managing Misuse Risk for Dual-Use Foundation Models — Mozilla Submits Comments to NIST

In July 2024, the U.S. AI Safety Institute (AISI), housed within the National Institute of Standards and Technology (NIST), released draft guidance on Managing Misuse Risk for Dual-Use Foundation Models. The draft, released for public comment, focuses specifically on foundation models (the largest and most advanced AI models available), particularly those built by closed-model developers at big tech labs. The AI Safety Institute’s framework, as laid out in the document, “focuses on managing the risk that models will be deliberately misused to cause harm…”

According to the AISI, the document is meant to build on NIST’s existing AI Risk Management Framework (to which Mozilla provided comments), addressing both the technical and social aspects of misuse risk by providing best practices for organizations.

Mozilla takes seriously its role as a steward of good practices, especially when it comes to protecting open source and privacy and fighting for the principles in Mozilla’s Manifesto. We’ve led the way in advancing safer and more trustworthy AI, releasing an in-depth report on Creating Trustworthy AI in 2020 and bringing together forty AI leaders to discuss critical questions related to openness and AI at the 2024 Columbia Convening. As such, Mozilla encourages legislators and regulators to do their part to protect the interests of individuals and make technology more useful and accessible for all.

However, while the AISI’s draft guidance does an excellent job of highlighting the theoretical risks posed by foundation models created by large and largely private developers, it takes a narrow view of the way AI is developed today, including at the current technology frontier. In our full comments, we encouraged the AISI to expand the lens through which it examines how AI is developed. In particular, we believe the AISI should work to ensure that its guidelines are adapted to take into account the unique nature of open source. Below are highlights from Mozilla’s comments on the existing draft:

  • The current draft focuses on AI services deployed on the internet and accessed through an interface or API. In reality, the majority of AI research and development occurs on locally deployed models that are collaboratively developed and freely distributed. NIST should rework the draft’s front matter and glossary to better capture the state of the AI ecosystem.
  • The practices outlined in the draft place a disproportionate burden on AI developers outside the small handful of very large AI companies. Mozilla believes NIST should ensure its requirements are applicable to organizations of all sizes and capability levels, and should take into account how the potential negative impact of misuse differs across organizational scales.
  • The recommendations for implementing the practices outlined in the draft assume that the AI model is centrally controlled and deployed. Open-source and collaborative development environments don’t fit this model, rendering the guidance inapplicable, unhelpful, or, at worst, harmful. Given the strong evidence that open source helps mitigate risk and make software safer, NIST should ensure open-source AI is considered and supported in its work.
  • The document should define “gradients of access” to provide a framework for AI risk management discussions and decision-making. These gradients should represent incremental steps of access to an AI model (e.g., chat interface, prompt injection, training, direct weights visibility, local download), each accompanied by its associated risks; a rough sketch of the idea follows this list.
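
To make the “gradients of access” idea concrete, here is a minimal Python sketch of what such a framework might look like as a data structure. The gradient ordering, the names (AccessGradient, GradientRisk, RISK_REGISTER, risks_at), and the example risks are hypothetical illustrations, not definitions taken from the NIST draft or from Mozilla’s comments.

```python
from dataclasses import dataclass
from enum import IntEnum


class AccessGradient(IntEnum):
    """Incremental steps of access to an AI model (illustrative ordering)."""
    CHAT_INTERFACE = 1      # hosted UI, fully provider-mediated
    PROMPT_INJECTION = 2    # ability to manipulate system/user prompts
    TRAINING = 3            # ability to fine-tune or continue training
    WEIGHTS_VISIBILITY = 4  # direct inspection of model weights
    LOCAL_DOWNLOAD = 5      # full local copy, outside provider control


@dataclass
class GradientRisk:
    gradient: AccessGradient
    example_risks: list[str]  # hypothetical examples, not NIST's enumeration


# Hypothetical risk register keyed by access gradient; each step of access
# inherits the risks of the steps below it and adds its own.
RISK_REGISTER = [
    GradientRisk(AccessGradient.CHAT_INTERFACE,
                 ["harmful content generation via the hosted interface"]),
    GradientRisk(AccessGradient.PROMPT_INJECTION,
                 ["bypassing provider guardrails through crafted prompts"]),
    GradientRisk(AccessGradient.TRAINING,
                 ["fine-tuning away safety behavior"]),
    GradientRisk(AccessGradient.WEIGHTS_VISIBILITY,
                 ["extracting memorized training data from weights"]),
    GradientRisk(AccessGradient.LOCAL_DOWNLOAD,
                 ["unmonitored redistribution and modification"]),
]


def risks_at(gradient: AccessGradient) -> list[str]:
    """Collect the cumulative risks for a given level of access."""
    return [risk for entry in RISK_REGISTER
            if entry.gradient <= gradient
            for risk in entry.example_risks]


if __name__ == "__main__":
    # A discussion about, say, permitting fine-tuning could then enumerate
    # every risk up to and including that gradient.
    for risk in risks_at(AccessGradient.TRAINING):
        print("-", risk)
```

The point of such a structure is that risk management conversations can anchor on a specific gradient rather than debating “open vs. closed” as a binary, which is the framing Mozilla’s comments argue the draft currently lacks.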

We hope that the AI Safety Institute continues to build on its foundational work in the field and develops guidelines, recommendations, and best practices that not only stand the test of time but also take into account the broader field of participants in the AI ecosystem. When such guidance is well designed, it propels the AI sector toward a safer and more trustworthy future. Mozilla’s full comments on Managing Misuse Risk for Dual-Use Foundation Models can be found here.
