Navigating the AI policy roadmap: How to create trustworthy safety standards
Reva Schwartz explains how NIST's artificial intelligence policy framework will shape responsible development and promote trust and safety.
As artificial intelligence rapidly evolves and permeates every sector of our society, the need to minimize bias and ensure AI safety has become increasingly urgent.
AI safety can remain "futureproof" and sector-agnostic by building safety standards into platforms sooner rather than later, said Reva Schwartz, Principal Investigator for AI Bias at the National Institute of Standards and Technology (NIST). "We want what we put out to be futureproof to the extent possible, sector-agnostic, and as broad as possible and flexible as it can be," Schwartz told me in a recent interview. This approach enables the creation of technology that builds public trust and confidence.
In their work at NIST, Schwartz and her colleagues focus on developing a framework for AI risk management that is open, transparent, and collaborative. Released on January 26th, the NIST framework is designed to help organizations manage the risks associated with AI systems and to promote the trustworthy and responsible development and use of those systems.