US AI Safety Institute Consortium Established to Ensure Safe AI Innovation

Over 200 AI stakeholders, including local innovator Sonar, are members of the consortium.

Written by Ashley Bowden
Published on Feb. 09, 2024

The Department of Commerce’s National Institute of Standards and Technology has established the U.S. AI Safety Institute Consortium, or AISIC. The consortium aims to promote the development and deployment of safe and trustworthy artificial intelligence and brings together more than 200 stakeholders, including AI creators and users, academic institutions, government and industry researchers, and civil society organizations.

The consortium plans to collaborate with the U.S. government to establish the measurements and standards needed to maintain America’s competitive edge while developing AI responsibly. It will develop guidelines for red-teaming, risk management, safety and security, and watermarking synthetic content.

The consortium’s membership list includes clean code innovator Sonar. The company helps developers, including those working with AI coding assistants, write code that is consistent, intentional, adaptable and responsible.

Sonar’s CEO noted that AI-generated code, like code written by humans, can contain bugs and errors as well as readability, maintainability and security issues, and that generated code should be thoroughly reviewed before deployment to ensure quality. Sonar views the AISIC as critical to building a scalable model for the safe development and use of generative AI.
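The review step the CEO describes can be partly automated. The sketch below is not Sonar’s product or API; it is a minimal, hypothetical illustration using Python’s standard-library ast module to flag two issues that static analysis commonly catches in generated code: calls to eval()/exec() and bare except clauses. The generated_code snippet is an assumed stand-in for AI-generated output under review.

```python
import ast

# Hypothetical stand-in for AI-generated code awaiting review.
generated_code = """
def run(cmd):
    try:
        return eval(cmd)   # arbitrary code execution
    except:                # swallows every error
        pass
"""

def review(source: str) -> list[str]:
    """Flag a few common issues static analysis catches in generated code."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # eval()/exec() calls are a classic security smell.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in {"eval", "exec"}):
            findings.append(f"line {node.lineno}: call to {node.func.id}() is a security risk")
        # A bare 'except:' hides real failures and hurts maintainability.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare except hides failures")
    return findings

for issue in review(generated_code):
    print(issue)
```

In practice, checks like these run inside dedicated static analysis tools and CI pipelines rather than ad hoc scripts, but the principle is the same: generated code is scanned before it ships.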
