OpenAI Hires Preparedness Lead to Strengthen Model Safety

By Akinola Ajibola
December 29, 2025
in Enterprise

OpenAI is seeking a Head of Preparedness to help the artificial intelligence (AI) startup anticipate potential risks arising from its AI models and devise strategies to address them. The new hire will also be responsible for researching emerging AI-related hazards in fields including computer security and mental health, and the company is prepared to offer a sizeable cash and stock package for the position. Sam Altman, CEO of OpenAI, described the role as essential to the company’s operations, emphasising that the hire will focus on assessing cutting-edge models before their public release. This is noteworthy because the company is currently facing a number of lawsuits claiming that ChatGPT encouraged users to commit suicide and murder.

In a recent job posting on its careers page, the AI giant said it is seeking a Head of Preparedness, a senior position within the Safety Systems team that “ensures that OpenAI’s most capable models can be responsibly developed and deployed.” The role is based in San Francisco and offers an annual salary of up to $555,000 (about Rs. 4.99 crore) plus stock.

Sharing the listing on X, formerly known as Twitter, the OpenAI CEO said the role is crucial given the rapid expansion of model capabilities. According to Altman’s post, it is a “stressful job” that involves delving into intricate safety issues from the moment one joins the organisation.

Altman further urged readers of the post to consider applying if they want to help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities while ensuring attackers cannot use them for harm, ideally by making all systems more secure, and similarly for how OpenAI releases biological capabilities and even gains confidence in the safety of running systems that can self-improve.

According to the official job description, the Head of Preparedness will oversee the technical strategy and implementation of the company’s Preparedness Framework, which is a component of the larger Safety Systems organisation. The role covers building and running capability assessments, creating thorough threat models, and designing mitigations that provide “a coherent, rigorous, and operationally scalable safety pipeline.”

Per the job description, key duties include developing accurate and reliable capability assessments for rapid model development cycles, building threat models for various risk domains, and supervising mitigation plans that correspond to the threats identified. The role requires strong technical judgement and effective communication to steer complex work across the safety organisation.

OpenAI opened the new role at a time when its AI systems have come under fire for allegedly having unanticipated negative effects on users and the wider world. A teenager’s death and a 55-year-old man’s murder-suicide were recently linked to ChatGPT. Additionally, ChatGPT Atlas, the company’s AI browser, is allegedly susceptible to prompt injection attacks.

In 2023, the firm announced the formation of a preparedness team, stating that it would be in charge of researching potential “catastrophic risks,” whether more hypothetical, like nuclear threats, or more immediate, like phishing attacks.

Less than a year later, Aleksander Madry, OpenAI’s then Head of Preparedness, was reassigned to a position centred on AI reasoning. Other OpenAI safety executives have either left the firm or moved into roles unrelated to safety and preparedness.

Additionally, the business recently modified its Preparedness Framework, noting that if a rival AI lab produces a “high-risk” model without comparable safeguards, it may “adjust” its own safety requirements.

The influence of generative AI chatbots on mental health has come under increasing scrutiny, as Altman acknowledged in his post. Recent lawsuits allege that OpenAI’s ChatGPT worsened users’ social isolation, reinforced their delusions, and in some cases contributed to their suicides. (The company has said it is still working to improve ChatGPT’s ability to recognise signs of emotional distress and connect users with real-world support.)

The role reports directly to senior leadership, underscoring that safety is evolving from a compliance function into a fundamental strategic pillar within the organisation.

Other AI firms, including Anthropic and Google DeepMind, have formed similar teams, pointing to a growing industry-wide emphasis on “proactive defence” and extreme-risk assessment.


Tags: AI jobs, OpenAI, OpenAI Head of Preparedness