OpenAI announced on Tuesday the formation of a new safety and security committee, headed by top executives, to replace a previous oversight board dissolved in mid-May. The committee is tasked with advising OpenAI’s board on key safety and security matters across the company’s projects and operations.
The announcement coincides with OpenAI’s latest push in AI development: the company disclosed that it has begun training what it calls its “next frontier model,” which it expects to mark a major step toward artificial general intelligence (AGI), a level of intelligence that matches or surpasses human cognition. OpenAI outlined the plans in a recent blog post, saying it anticipates the resulting systems will advance its pursuit of AGI.
The safety and security committee will include CEO Sam Altman alongside Bret Taylor, Adam D’Angelo, and Nicole Seligman, all of whom serve on OpenAI’s board of directors. The new body marks a restructured approach to oversight, coming after OpenAI disbanded its Superalignment team, the group devoted to long-term AI risk. That team was led by co-founder Ilya Sutskever and researcher Jan Leike, both of whom have since left the Microsoft-backed company.
Leike publicly criticized OpenAI’s safety priorities on his way out, arguing that the company’s focus on new products had at times overshadowed its commitment to safety. Responding on X, Altman expressed regret over Leike’s departure, acknowledged his contributions, and conceded that OpenAI still has “a lot more to do” to improve its safety culture.
The next three months will be a critical test for the committee: it will conduct a thorough evaluation of OpenAI’s safety processes and safeguards and make recommendations to the full board, after which OpenAI plans to publicly share an update on the safety measures it adopts.
AI safety has become an increasingly prominent topic across the industry, especially as the large models underlying products like ChatGPT continue to grow in capability. Developers and experts are debating not only the potential benefits of more powerful systems but also the risks that could arise if AGI is achieved.