• AI Search
  • Cryptocurrency
  • Earnings
  • Enterprise
  • About TechBooky
  • Submit Article
  • Advertise Here
  • Contact Us
TechBooky
  • African
  • AI
  • Metaverse
  • Gadgets
Generic selectors
Exact matches only
Search in title
Search in content
Post Type Selectors
Search in posts
Search in pages
  • African
  • AI
  • Metaverse
  • Gadgets
Generic selectors
Exact matches only
Search in title
Search in content
Post Type Selectors
Search in posts
Search in pages
TechBooky
Generic selectors
Exact matches only
Search in title
Search in content
Post Type Selectors
Search in posts
Search in pages
Home African

NITDA Notifies Nigerians About ChatGPT Vulnerabilities

Akinola Ajibola by Akinola Ajibola
December 8, 2025
in African, Artificial Intelligence
Share on FacebookShare on Twitter

In a cybersecurity advisory to users, the National Information Technology Development Agency (NITDA) warned Nigerians about seven serious flaws in OpenAI’s GPT-4o and GPT-5 models that could result in data manipulation and leakage due to the recent vulnerabilities discovered in OpenAI’s most recent large language models.

NITDA’s Computer Emergency Readiness and Response Team (CERRT.NG) revealed seven vulnerabilities in OpenAI’s GPT-4.0 and GPT-5 series models in a notice published on Sunday via its official X account. These vulnerabilities allow attackers to manipulate ChatGPT through indirect prompt injections concealed in seemingly innocuous online content.

The advisory states that malicious instructions can be inserted by attackers into “webpages, comments, or crafted URLs,” allowing ChatGPT to carry out unwanted commands while conducting routine browsing, summarisation, or search operations. 

CERRT adds that some of the vulnerabilities enable threat actors to use trusted domains to get around safety measures or take use of markdown rendering problems to conceal malicious input.

The possibility of long-term manipulation is one of the more alarming problems. Attackers may even “poison ChatGPT’s memory so that injected instructions persist across future interactions,” the government cautions, posing concerns for both individual users and business systems.

CERRT asserts that despite OpenAI’s purported implementation of partial remedies, big language models continue to encounter fundamental difficulties in differentiating between maliciously implanted material and legitimate user intent.

According to NITDA its possible impact and advise is that the vulnerabilities may result in long-term behavioural influence, information leaking, unauthorised acts, and altered outputs.

It is important to be aware that the assaults might occur “without clicking anything,” particularly when ChatGPT scans search results or webpages with hidden payloads, potentially impacting users without any direct involvement.

However, NITDA suggested actions to prevent is that organisations and users are advised by CERRT to implement immediate precautions, such as:

  • ChatGPT’s surfing and summarisation features for unreliable websites can be restricted or disabled in a corporate settings.
  • To turn on features like memory and browsing only when needed. By turning on or activating features like memory and web browsing only when necessary for operations.
  • GPT-4.0 and GPT-5 models should be updated often to guarantee that identified vulnerabilities are fixed. By updating and patching on regular basis this helps to fix known vulnerabilities.

Also NITDA warns Nigerians on GPT concerns

Nigerians were awarned on Monday by the National Information Technology Development Agency about new vulnerabilities in OpenAI’s GPT-4.0 and GPT-5 series that could expose users to data leaks.

The Director of Corporate Affairs and External Relations for the organisation, Mrs. Hadiza Umar, issued the advise in Abuja.

According to Umar, the agency found seven significant flaws in the models that let hackers use indirect prompt injection to control the system.

Attackers can make ChatGPT execute unwanted commands through routine browsing, summarisation, or search actions by inserting concealed instructions into webpages, comments, or forged URLs.

Additionally, several vulnerabilities allow attackers to conceal malicious information by taking advantage of markdown rendering problems and circumventing safety filters by exploiting trusted domains.

She said, “That act can even poison ChatGPT’s memory so that injected instructions persist across future interactions.”

Large language models still have trouble differentiating between harmful embedded data and legitimate user intent, according to Umar, even though OpenAI had partially resolved the problem.

According to her, the method can trick ChatGPT into carrying out inadvertent actions during ordinary browsing or search activities by embedding concealed instructions in webpages, online comments, or designed URLs.

According to Umar, there were significant hazards associated with the vulnerabilities, such as unauthorised activities, information leaking, altered outputs, and long-term behavioural influence from memory poisoning.

According to her, the agency advises businesses to restrict or prevent the browsing and summarisation of untrusted websites in business settings in order to mitigate the dangers.

“Only activate ChatGPT features like memory or browsing when they are operationally required,” she stated.

Additionally, she recommended that the GPT-40 and GPT-5 models be updated and patched on a regular basis to guarantee that any known vulnerabilities are fixed.

In the meantime, for firewall resolutions issues, the agency has issued an urgent warning through CERRT.NG of new security issues impacting Cisco firewall equipment used by banks, government agencies, enterprises, and internet service providers.

Cybercriminals are now using a new attack technique to target Cisco Secure Firewall ASA and Cisco Secure Firewall Threat Defence (FTD) systems, according to a warning posted on NITDA’s official X page on Monday. Unexpected network failures may result from the flaw’s ability to force a device to reboot.

According to the organisation, attackers are utilising previous vulnerabilities as part of a new technique that can cause firewalls to “restart without warning,” resulting in denial-of-service attacks and instability throughout impacted networks.

Related Posts:

  • OAI_GPT-5.2-Codex_ArtCard_16x9.
    OpenAI Unveils GPT-5.2-Codex
  • ChatGPT Creator, OpenAI, Sued $3bn For Stealing…
  • openai-stack-overflow
    OpenAI & Stack Overflow Partner to Revamp…
  • chatgpt_openai_reuters_1675831938432
    OpenAI Strikes Licensing Deal with People Magazine Publisher
  • openai
    OpenAI Hires Preparedness Lead to Strengthen Model Safety
  • 0abf4dfc-cac6-42ee-be90-33e6f6229f53
    OpenAI o3 & o4 Mini Models Feature Visual Reasoning
  • Microsoft-datacenter-cold-aisle-server-racks-for-the-AMD-MI300X
    Microsoft Prepares for OpenAI's GPT-5 Launch
  • 0_c_BDi0qfpXCm4Gon
    ChatGPT Service Disrupted After Significant Outage Today

Discover more from TechBooky

Subscribe to get the latest posts sent to your email.

Tags: ChatGPTnitdavulnerability
Akinola Ajibola

Akinola Ajibola

BROWSE BY CATEGORIES

Receive top tech news directly in your inbox

subscription from
Loading

Freshly Squeezed

  • YouTubers Sue Snap Over Copyright Infringement in AI Models January 28, 2026
  • Sony Plans A State of Play Broadcast in February January 28, 2026
  • TikTok Avoids Trial, Settles Lawsuit Over Addiction Claims January 27, 2026
  • Ezra Olubi Sues David Hundeyin for ₦140M Over X Defamation January 27, 2026
  • Lagos & MTN Team Up on Eco-Friendly Obalende Bus Park January 27, 2026
  • France Ditches Microsoft Teams, Zoom for Homegrown ‘Sovereign’ Platform January 27, 2026
  • Meta Tests Premium Subscriptions on Facebook, Instagram & WhatsApp January 27, 2026
  • Microsoft Introduces New IT Admin Tool to Analyse Security Breaches January 27, 2026
  • Google May Bring Apple-Like “Liquid Glass” Design to Android 17 January 27, 2026
  • TikTok Blames Power Outage for US Service Problems January 27, 2026
  • Nvidia Backs CoreWeave With $2B to Support Data Centre Growth January 27, 2026
  • Google Agrees $68M Settlement in Google Assistant Privacy Lawsuit January 27, 2026

Browse Archives

January 2026
MTWTFSS
 1234
567891011
12131415161718
19202122232425
262728293031 
« Dec    

Quick Links

  • About TechBooky
  • Advertise Here
  • Contact us
  • Submit Article
  • Privacy Policy
Generic selectors
Exact matches only
Search in title
Search in content
Post Type Selectors
Search in posts
Search in pages
  • African
  • Artificial Intelligence
  • Gadgets
  • Metaverse
  • Tips
  • AI Search
  • About TechBooky
  • Advertise Here
  • Submit Article
  • Contact us

© 2025 Designed By TechBooky Elite

Discover more from TechBooky

Subscribe now to keep reading and get access to the full archive.

Continue reading

We use cookies to ensure that we give you the best experience on our website. If you continue to use this site we will assume that you are happy with it.