
By now, everyone should be aware that not everything AI tells you can be trusted: large language models (LLMs) can occasionally provide inaccurate information. Threat actors are now using paid search ads on Google to promote shared ChatGPT and Grok conversations that appear to offer tech support instructions but actually direct macOS users to install infostealing malware on their devices.
The campaign is a variation on the ClickFix attack, which typically tricks targets into running harmful commands by using fake CAPTCHA prompts or bogus error messages. Here, however, the instructions are camouflaged as helpful troubleshooting advice hosted on reputable AI platforms.
Kaspersky describes a campaign in which attackers abuse ChatGPT’s shared-conversation feature, built specifically around installing ChatGPT Atlas on macOS. When a user searches for the term “chatgpt atlas,” the first sponsored result is a link to chatgpt.com with the page titled “ChatGPT™ Atlas for macOS – Download ChatGPT Atlas for Mac.” Clicking through takes the user to the official ChatGPT website, where a set of instructions purports to walk through installing Atlas.
The page, however, is a publicly shared conversation between an anonymous user and the AI that serves as a step-by-step malware installation guide. The chat instructs the user to copy, paste, and run a command in the Mac’s Terminal, which installs the AMOS (Atomic macOS Stealer) infostealer and grants it full access to the device.
Further investigation by Huntress found similarly poisoned results in both ChatGPT and Grok for broader troubleshooting queries, such as “how to delete system data on Mac” and “clear disk space on macOS.”
According to BleepingComputer, by obtaining root-level capabilities on macOS, AMOS enables attackers to execute commands, record keystrokes, and deliver additional payloads. The infostealer also targets files on disk, macOS Keychain data, browser data (including cookies, stored passwords, and autofill entries), and cryptocurrency wallets.
It is unwise to blindly follow every AI-generated command when troubleshooting technical issues. Always carefully review any instructions found online, and remember that threat actors often use sponsored search results and social media to spread ClickFix attacks. Never execute a command you don’t fully understand: even if the guidance comes from a trusted search engine or LLM, it may still be malicious, especially if it advises you to run PowerShell or Terminal commands to “fix” a problem.
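ClickFix lures frequently wrap their payload in an encoded string so the pasted one-liner looks opaque. A safer habit, sketched below with a harmless, made-up string (not taken from any real lure), is to decode and read such a command instead of piping it straight into a shell:

```shell
# Hypothetical ClickFix-style snippet. The encoded string here is harmless
# (it decodes to a simple echo); real lures hide a malware download command.
CMD='ZWNobyAiaGVsbG8i'

# Do NOT run: echo "$CMD" | base64 -d | sh
# Instead, decode to the screen and read what the command would actually do:
echo "$CMD" | base64 -d
```

If the decoded text downloads and executes anything, or asks for your password, treat it as hostile.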
As it happens, you may be able to catch the attack simply by asking ChatGPT, in a new conversation, whether the instructions are safe to follow; Kaspersky reports that the AI will tell you they are not.
Everyone can also stay protected by following key security practices: use only official and trusted download sources, verify developer information, be cautious with Terminal or PowerShell commands, and employ reliable security software.







