
OpenAI is seeking a Head of Preparedness to help the artificial intelligence (AI) startup anticipate potential risks arising from its AI models and devise strategies to address them. The new head will also be in charge of researching emerging hazards associated with AI in fields including computer security and mental health, and the company is offering a sizable cash and stock package for the position. Sam Altman, CEO of OpenAI, described the role as essential to the company’s operations, emphasising that the hire will focus on assessing cutting-edge models prior to their public release. This is noteworthy because the company is currently dealing with a number of lawsuits claiming that ChatGPT encouraged users towards suicide and violence.
In a recent job posting on its careers page, the AI giant said it is seeking a Head of Preparedness. This senior position within the Safety Systems team “ensures that OpenAI’s most capable models can be responsibly developed and deployed”. The role is based in San Francisco and offers an annual salary of up to $555,000 (about Rs. 4.99 crore), plus stock.
The CEO of OpenAI shared the listing on X, formerly known as Twitter, stating that the role is crucial given the rapid expansion of model capabilities. According to Altman’s post, it is a “stressful job” that entails delving into intricate safety issues from the moment one joins the organisation.
Altman further urged the public in the post to consider applying if they want to help the world figure out how to equip cybersecurity defenders with cutting-edge capabilities while ensuring attackers cannot use them for harm, ideally by making all systems more secure; and similarly how to release biological capabilities safely, and even gain confidence in the safety of running systems that can self-improve.
According to the official job description, the Head of Preparedness will oversee the technical strategy and implementation of the organisation’s Preparedness Framework, which sits within the larger Safety Systems organisation. Building and coordinating capability assessments, developing thorough threat models, and designing mitigations that provide “a coherent, rigorous, and operationally scalable safety pipeline” will all fall under the purview of this position.
Per the job description, the key duties include developing accurate and reliable capability assessments for rapid model development cycles, building threat models for various risk domains, and overseeing mitigation plans that correspond to identified threats. The role requires strong technical judgement and effective communication to direct complex work across the safety organisation.
OpenAI opened this new role at a time when its AI systems have come under fire for purportedly having unanticipated negative effects on users and the wider world. A teen’s death and a 55-year-old man’s murder-suicide were recently linked to ChatGPT. Additionally, ChatGPT Atlas, the company’s AI browser, is allegedly susceptible to prompt injections.
In 2023, the firm announced the formation of a Preparedness team, stating that it would be in charge of researching potential “catastrophic risks”, whether more hypothetical, like nuclear threats, or more immediate, like phishing attacks.
Less than a year later, Aleksander Madry, then Head of Preparedness at OpenAI, was reassigned to a position centred on AI reasoning. Other OpenAI safety executives have since either quit the firm or taken on positions unrelated to safety and preparedness.
Additionally, the business recently modified its Preparedness Framework, noting that if a rival AI lab releases a “high-risk” model without comparable safeguards, it may “adjust” its own safety requirements.
The influence of generative AI chatbots on mental health has come under increasing scrutiny, as Altman acknowledged in his post. Recent lawsuits allege that OpenAI’s ChatGPT worsened users’ social isolation, reinforced their delusions, and in some cases contributed to their suicides. (The business stated that it is continuing to improve ChatGPT’s ability to recognise signs of emotional distress and connect users to real-world support.)
The role reports directly to senior leadership, signalling that safety is evolving from a compliance function into a fundamental strategic pillar inside the organisation.
Other AI firms, such as Anthropic and Google DeepMind, have formed similar teams, suggesting a growing industry-wide emphasis on “proactive defence” and extreme risk assessment.