Google Search has done it again, making hundreds of thousands of user conversations with Elon Musk’s xAI chatbot Grok easy to find, according to Forbes.
Forbes, which covers the tech industry, was the first to report that Grok chats were showing up in search engine results, counting more than 370,000 user conversations indexed by Google.
When a Grok user selects the “share” button during a chat with the chatbot, a special URL is generated that the user can use to share the conversation on social media, via text message, or by email. Forbes claims that search engines like Google, Bing, and DuckDuckGo are indexing those URLs, allowing anyone to look up the discussions online.
Users of Meta’s and OpenAI’s chatbots recently encountered a similar issue. As in those cases, the leaked Grok conversations reveal users’ less-than-respectable requests, including enquiries about how to hack cryptocurrency wallets, explicit chats with an AI persona, and requests for instructions on cooking meth.
Although using xAI’s bot to “promote critically harming human life” or creating “bioweapons, chemical weapons, or weapons of mass destruction” is prohibited by its terms of service, users have clearly continued to solicit Grok for assistance with these tasks.
Conversations indexed by Google reveal that Grok instructed users on how to make fentanyl, listed methods of suicide, gave advice on building bombs, and even offered a detailed plan for Elon Musk’s assassination.
xAI did not immediately reply to a request for comment, which included a question about when Grok conversations began being indexed. The BBC has also approached X for comment on this development but had not received a reply at the time of writing.
The BBC was also able to view chat transcripts in which Musk’s chatbot was requested to generate a strong password, offer weight-loss meal plans, and respond to in-depth enquiries about medical issues.
Users’ attempts to push the boundaries of what Grok might say or do were also seen in some searchable transcripts.
Users of ChatGPT raised the alarm late last month when they noticed that Google was indexing their chats, something OpenAI called a “short-lived experiment.” At the time, Grok claimed it “prioritize[s] privacy” and has “no such sharing feature,” in a post that Musk quote-tweeted with the words “Grok ftw.”
In one instance the BBC saw, from a test it carried out, the chatbot gave detailed directions on how to produce a Class A drug in a laboratory.
This is not the first instance of people’s interactions with AI chatbots being more widespread than they may have first thought when they used “share” features.
OpenAI recently walked back an “experiment” in which users’ shared ChatGPT conversations appeared in search engine results.
A representative told BBC News that the company was “testing ways to make it easier to share helpful conversations, while keeping users in control” at the time.
According to the spokesperson, chats were private by default, and users had to specifically choose to share them.
Meta came under fire earlier this year when user discussions with its chatbot, Meta AI, were made public in the app’s “discover” feed.
Even though users’ account information may be anonymised or hidden in shared chatbot transcripts, the prompts themselves can still contain sensitive, private information about them.
Experts say this highlights growing worries about user privacy.
Prof. Luc Rocher, an associate professor at the Oxford Internet Institute, told the BBC that AI chatbots are a privacy catastrophe in the making.
They said that “leaked conversations” from chatbots have revealed user data, including complete identities, locations, and private information about relationships, business operations, and mental health.
“Once leaked online, these conversations will stay there forever,” they stated.
Carissa Veliz, an associate professor of philosophy at the Institute for Ethics in AI at Oxford University, stated that it is “problematic” that users are not informed that their shared chats may show up in search results.
She added that the technology does not even tell users what it is doing with their data, which she called a problem.