OpenAI is digging in its heels on industry self-governance, announcing a revamped safety and security team following several public resignations and the dissolution of its former oversight body.
The Safety and Security Committee, as it's been renamed, is led by board members Bret Taylor (Sierra), Adam D'Angelo (Quora), Nicole Seligman, and — of course — OpenAI CEO Sam Altman. The committee also includes internal "OpenAI technical and policy experts," among them the heads of "Preparedness," "Safety Systems," and "Alignment Science."
“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI,” wrote OpenAI. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”
The committee’s first task is to “evaluate and further develop OpenAI’s processes and safeguards over the next 90 days,” the company wrote in its announcement, with feedback from outside experts who are already on OpenAI’s external oversight roster, like former NSA cybersecurity director Rob Joyce.
The announcement is a timely response to a swirling management controversy at OpenAI, though it may do little to reassure advocates of external oversight. This week, former OpenAI board members called for more stringent government regulation of the AI sector, specifically calling out poor management decisions and a toxic culture fostered by Altman as the company's leader.
“Even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives,” they argued.
OpenAI's new committee is being thrown straight into the fire with its 90-day mandate to evaluate the company's AI safeguards. But for critics demanding independent oversight, even that may not be enough.