Miles Brundage, OpenAI’s senior adviser for artificial general intelligence (AGI) readiness, issued a stark warning as he announced his departure on Wednesday: no one, including OpenAI, is fully prepared for the reality of human-level artificial intelligence.
“Neither OpenAI nor any other leading lab is ready [for AGI], and the world is also not prepared,” Brundage wrote, reflecting on his six-year tenure shaping OpenAI’s AI safety strategies. He added that this view is not controversial among OpenAI’s leadership, and that readiness today is a separate question from whether the company and the world will be ready by the time AGI actually arrives.
Brundage’s exit marks another in a string of high-profile departures from OpenAI’s safety teams. Jan Leike, a senior safety researcher, left after voicing concerns that “safety culture and processes” had taken a backseat to commercial product development. OpenAI co-founder Ilya Sutskever also departed, launching Safe Superintelligence, his own venture focused on building safe superintelligence.
The recent disbanding of Brundage’s “AGI Readiness” team comes just months after OpenAI dissolved its “Superalignment” team, which focused on mitigating long-term AI risks. These moves reflect growing friction between OpenAI’s original mission of safe AI research and its increasing commercial focus. The company reportedly faces pressure to convert from its nonprofit structure to a for-profit public benefit corporation within two years; otherwise, it may have to return some of the $6.6 billion it recently raised from investors. That commercialization push has long concerned Brundage, who expressed unease back in 2019 when OpenAI first created its for-profit division.
Brundage cited growing restrictions on his research and on his ability to publish independently as a significant reason for his departure. He stressed the importance of independent voices in AI policy, arguing for governance perspectives that stand apart from corporate interests. Having advised OpenAI’s leadership on internal readiness for AGI, Brundage now believes he can have a greater impact on global AI governance from outside the organization.
His departure underscores a cultural divide within OpenAI: many researchers who joined to advance fundamental AI research now find themselves in an environment increasingly driven by product goals. The allocation of internal resources has reportedly become contentious, with Leike’s team denied the computing power it needed for its safety research before being ultimately dissolved.
Despite these tensions, Brundage said OpenAI has offered to support his future work with funding, API credits, and early access to models, with no conditions attached.