OpenAI Seeks Head of Preparedness to Strengthen Model Safety

By Akinola Ajibola
December 29, 2025
in Enterprise

OpenAI is seeking a Head of Preparedness to help the artificial intelligence (AI) company anticipate potential risks arising from its AI models and devise strategies to address them. The new head will also be responsible for researching emerging AI-related hazards in fields including computer security and mental health, and the company is offering a sizeable cash and stock package for the position. Sam Altman, CEO of OpenAI, described the role as essential to the company’s operations, emphasising that the hire will focus on assessing cutting-edge models before their public release. This is noteworthy because the company is currently facing a number of lawsuits claiming that ChatGPT encouraged users towards suicide and murder.

The AI giant announced that it is seeking a Head of Preparedness in a recent job posting on its careers page. This senior position, within the Safety Systems team, “ensures that OpenAI’s most capable models can be responsibly developed and deployed.” The role is based in San Francisco and offers an annual salary of up to $555,000 (about Rs. 4.99 crore), plus stock.

The CEO of OpenAI shared the listing on X, formerly known as Twitter, stating that the role is crucial given the rapid expansion of model capabilities. According to Altman’s post, it is a “stressful job” that entails delving into intricate safety issues from the moment one joins the organisation.

Altman further urged prospective applicants in the post, writing that people should consider applying if they want to help the world figure out how to equip cybersecurity defenders with cutting-edge capabilities while ensuring attackers cannot use them for harm, ideally by making all systems more secure; similarly, for how biological capabilities are released, and even for gaining confidence in the safety of running systems that can self-improve.

According to the official job description, the Head of Preparedness will oversee the technical strategy and implementation of the organisation’s Preparedness framework, which is a component of the larger Safety Systems organisation. Building and organising capability assessments, developing thorough threat models, and designing mitigations that provide “a coherent, rigorous, and operationally scalable safety pipeline” will all fall under the purview of this position.

According to the job description, the key duties include developing accurate and reliable capability assessments for fast model development cycles, building threat models across various risk domains, and supervising mitigation plans that correspond to identified threats. To direct complicated work across the safety organisation, the role requires strong technical judgement and effective communication.

OpenAI posted this new role at a time when its AI systems have come under fire for allegedly having unanticipated negative effects on users and the wider world. A teenager’s death and a 55-year-old man’s murder-suicide were recently linked to ChatGPT. Additionally, ChatGPT Atlas, the company’s AI browser, is allegedly susceptible to prompt injections.

In 2023, the firm announced the formation of a Preparedness team, stating that it would be in charge of researching potential “catastrophic risks,” whether more hypothetical, like nuclear threats, or more immediate, like phishing attacks.

Less than a year later, Aleksander Madry, OpenAI’s Head of Preparedness, was reassigned to a position centred on AI reasoning. Other OpenAI safety executives have either quit the firm or taken on positions unrelated to safety and preparedness.

Additionally, the company recently modified its Preparedness Framework, noting that if a rival AI lab produces a “high-risk” model without comparable safeguards, it may “adjust” its own safety requirements.

The influence of generative AI chatbots on mental health has come under increasing scrutiny, as Altman acknowledged in his post. According to recent lawsuits, OpenAI’s ChatGPT worsened users’ social isolation, reinforced their delusions, and even contributed to some users taking their own lives. (The company has said it is still working to improve ChatGPT’s ability to recognise signs of emotional distress and connect users to real-world support.)

The role reports directly to senior leadership, underscoring that safety is evolving from a compliance function into a fundamental strategic pillar inside the organisation.

Other AI firms, including Anthropic and Google DeepMind, have formed similar teams, reflecting a growing industry-wide emphasis on “proactive defence” and extreme-risk assessment.


Tags: AI jobs, OpenAI, OpenAI Head of Preparedness