
Instagram, TikTok Face Charges Over Harmful Teen Content

By Akinola Ajibola
August 21, 2025
in Social Media

Instagram and TikTok have come under scrutiny for allegedly serving minors content about suicide and self-harm more frequently than they did two years ago.

Following the death of his 14-year-old daughter, Molly, after exposure to harmful content on social media, Ian Russell founded the Molly Rose Foundation, which analysed hundreds of posts on the platforms recommended to accounts registered as a 15-year-old girl in the UK.

The organisation asserted that under-16s who had previously interacted with similar content were still exposed to a “tsunami” of videos featuring “suicide, self-harm, and intense depression” among the clips suggested by algorithms on the For You pages.

One in ten of the harmful posts identified had at least a million likes, and the researchers reported that the average number of likes was 226,000.

Mr. Russell told Sky News that the findings were “horrifying” and indicated that rules pertaining to online safety are ineffective.

“It is astounding that eight years after Molly’s passing, extremely harmful content about depression, suicide, and self-harm is still prevalent on social media,” he said. 

The scale of harm being recommended to vulnerable users, he said, is not reflected in Ofcom’s current child safety guidelines, which ultimately do little to prevent further tragedies like Molly’s.

Despite the efforts of governments, regulators, and campaigners like himself, he added, the situation has got worse rather than better. According to the report, it is nearly impossible to avoid falling down the rabbit hole of harmful suicide and self-harm content.

The government has said that new rules introduced under the Online Safety Act will protect children online, but the findings pre-date the new regulations taking effect.

“For over a year, this entirely preventable harm has been happening on the prime minister’s watch and where Ofcom have been timid it is time for him to be strong and bring forward strengthened, life-saving legislation without delay.”

After her death in 2017, a coroner determined that Molly had been depressed and that the content she had viewed online had contributed to her death “in a more than minimal way”.

Bright Data researchers examined 242 TikToks and 300 Instagram Reels to see whether they featured “themes of intense hopelessness, misery, and despair,” promoted and glorified suicide and self-harm, or contained references to suicidal ideation or methods.

The posts were collected between November 2024 and March 2025, before the Online Safety Act’s new children’s codes for tech companies took effect in July 2025.

For Instagram, the Molly Rose Foundation claims that the platform’s algorithms continue to recommend high volumes of severely harmful material.

According to the researchers, 97% of the videos recommended to an adolescent girl’s Instagram Reels account after she had already viewed this kind of content were deemed harmful.

Of those, 44% specifically referenced suicide and self-harm. The researchers also asserted that Instagram’s recommendation emails sent to users contained harmful content.

A representative of Meta, Instagram’s parent company, said it disagreed with the report’s assertions and with the limited methodology behind it.

“With built-in safeguards that restrict who can reach them, what they can view, and how much time they spend on Instagram, tens of millions of teenagers are now using Instagram Teen Accounts.

“99% of information that promotes suicide and self-harm is proactively removed by our automated technology before it is reported to us. To help protect kids online, we created Teen Accounts, and we’re still working nonstop to achieve that goal.”

For TikTok, the assessment deemed 96% of the videos harmful and accused the platform of suggesting “an almost uninterrupted supply of harmful material.”

It was reported that more than half (55%) of the For You posts were related to suicide and self-harm, and that a single search surfaced posts encouraging suicidal behaviour, risky stunts, and dangerous challenges.

The report also claimed that since 2023, there has been a spike in the quantity of hazardous hashtags, many of which were shared on accounts with a large following that produced “playlists” of dangerous content.

According to a TikTok spokeswoman, “Parents can further customise 20+ content and privacy settings through Family Pairing, and teens’ accounts on TikTok have over 50 features and settings designed to help them safely express themselves, discover, and learn.”

“With over 99% of violative content proactively removed by TikTok, the findings don’t reflect the real experience of people on our platform which the report admits.”

TikTok says it blocks harmful hashtags, directs users who search for them to helplines, and does not permit content that depicts or promotes suicide or self-harm.

The brutal reality is that young people can give negative feedback on harmful content recommended to them on both platforms; however, the researchers found they can also give that content favourable feedback and continue receiving it for the following 30 days.

“These numbers demonstrate a harsh reality – for far too long, tech companies have allowed the internet to spread hateful content to children, ruining young lives and even shattering some families,” said Technology Secretary Peter Kyle.

However, he said, companies can no longer turn a blind eye. Under the Online Safety Act, which came into force earlier this year, platforms must shield all users from unlawful content and protect children from the most dangerous material, such as content that promotes or encourages suicide and self-harm. There are currently 45 sites under investigation.

Separately, a report released yesterday by the Children’s Commissioner found that, due in part to algorithms, the percentage of children who have viewed pornography online has increased over the previous two years.

According to Rachel de Souza, the content that young people are exposed to is “violent, extreme, and degrading” and frequently unlawful. She also stated that the findings of her office should be viewed as a “snapshot of what rock bottom looks like.”

More than half (58%) of respondents to the poll said that, as children, they had seen pornography featuring strangulation, while 44% reported seeing a portrayal of rape, specifically of someone who was unconscious.

The average age at which 1,020 respondents between the ages of 16 and 21 first saw pornography was 13. Some reported being six or younger, while more than a quarter (27%) claimed to be eleven.

Tags: instagram, internet safety, teens, tiktok