OpenAI has announced significant changes to its safety and security practices, including the establishment of a new independent board oversight committee. The move comes with a notable change: CEO Sam Altman is no longer a member of the committee, a departure from the previous structure.
The newly formed Safety and Security Committee (SSC) will be chaired by Zico Kolter, director of the Machine Learning Department at Carnegie Mellon University. Other key members include Quora CEO Adam D’Angelo, retired US Army General Paul Nakasone, and Nicole Seligman, former EVP and General Counsel of Sony Corporation.
This new committee replaces the previous Safety and Security Committee formed in May 2024, whose members also included Altman. The original committee was tasked with making recommendations on critical safety and security decisions for OpenAI projects and operations.
The SSC’s responsibilities now extend beyond recommendations. It will have the authority to oversee safety evaluations for major model releases and to monitor model launches, including the power to delay a release until safety concerns are adequately addressed.
This reorganization comes after a period of scrutiny of OpenAI’s commitment to AI safety. The company has faced criticism in the past for disbanding its superalignment team and for the departure of key safety-focused personnel. Altman’s removal from the committee appears to be an attempt to address concerns about potential conflicts of interest in the company’s safety oversight.
OpenAI’s latest safety initiative also includes plans to enhance security measures, increase transparency about its work, and collaborate with external organizations. The company has already signed agreements with the US and UK AI Safety Institutes to collaborate on research into emerging AI safety risks and standards for trustworthy AI.
OpenAI has begun previewing a new tool called Operator that can navigate within a web browser. According to a blog post published on Thursday, the tool is powered by a model the company calls a Computer-Using Agent (CUA). “CUA is trained to interact with graphical user interfaces (GUIs) — the buttons, menus, and text fields that people see on screens — just as humans do,” OpenAI said of the model.
“This allows it to perform digital tasks without using OS- or web-specific APIs.” The current release of Operator is based on OpenAI’s GPT-4o model. It combines that model’s vision capabilities with “advanced reasoning” trained through reinforcement learning.
Operator has the ability to “break down tasks into multi-step plans and adaptively self-correct when challenges arise.” According to OpenAI, this capability marks the next step in AI development.
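OpenAI has not published Operator’s internals, but the description above maps onto a simple observe, decide, act loop: capture the screen, ask a vision model for the next GUI action, perform it, and repeat while checking the result. The sketch below illustrates that loop in Python; the function names, the Action schema, and the stub implementations are assumptions made for illustration, not OpenAI’s actual CUA or Operator API.

```python
import time
from dataclasses import dataclass


@dataclass
class Action:
    kind: str          # "click", "type", "scroll", or "done"
    x: int = 0
    y: int = 0
    text: str = ""


def capture_screen() -> bytes:
    # Stand-in for grabbing a screenshot of the current browser window.
    return b""


def plan_next_action(screenshot: bytes, goal: str, history: list) -> Action:
    # Stand-in for the vision-model call: given pixels and the user's goal,
    # choose the next GUI action. This stub stops immediately so the
    # skeleton runs as-is.
    return Action(kind="done")


def execute(action: Action) -> None:
    # Stand-in for a browser/OS automation backend that performs the
    # click, keystroke, or scroll on the real interface.
    print(f"executing {action.kind} at ({action.x}, {action.y})")


def run_agent(goal: str, max_steps: int = 20) -> None:
    history: list = []
    for _ in range(max_steps):
        action = plan_next_action(capture_screen(), goal, history)
        if action.kind == "done":
            break
        execute(action)
        history.append(action)   # keep context so multi-step plans can build on prior actions
        time.sleep(0.5)          # let the page settle before re-observing


run_agent("find the cheapest flight to Berlin")
```

Because the model re-observes the screen after every action, a loop like this can notice when a click missed or a page failed to load and try a different step, which is the kind of adaptive self-correction OpenAI describes.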