As a new case against TikTok heads to court, explosive documents are coming to light for the first time. The documents, which NPR obtained through an accidental disclosure by the Kentucky Attorney General's Office, suggest that TikTok was aware of its platform's addictiveness and its adverse effects on mental health, particularly among children.
Reportedly, TikTok’s own research found that “compulsive usage correlates with a slew of negative mental health effects like loss of analytical skills, memory formation, contextual thinking, conversational depth, empathy, and increased anxiety.”
In addition, TikTok found that "compulsive usage" of the app cut into users' sleep and into the time they spent on schoolwork, work, and loved ones.
What else did the leaked TikTok court documents say?
Disturbingly, the documents also seemed to imply that TikTok is well aware of “filter bubbles” on its platform. According to NPR, filter bubbles occur when users “encounter only information and opinions that conform to and reinforce their own beliefs, caused by algorithms that personalize an individual’s online experience.”
Examples of negative filter bubbles cited in the documents include "painhub" and eating disorder content. One TikTok employee reported that it took him only twenty minutes to fall into a negative filter bubble.
The documents also suggest that TikTok knew its time-management tool was ineffective, as it noted that “[m]inors do not have executive function to control their screen time, while young adults do.”
Furthermore, while the tool is intended to limit young people's use of the app to 60 minutes a day, the documents say that young people still spent 107 minutes a day on it. Before the tool was introduced, they were averaging 108.5 minutes a day, so the feature reduced usage by only about 1.5 minutes, a negligible difference.
In a statement responding to the publication of these documents, TikTok spokesman Alex Haurek said:
"It is highly irresponsible of NPR to publish information that is under a court seal. Unfortunately, this complaint cherry-picks misleading quotes and takes outdated documents out of context to misrepresent our commitment to community safety."
“We have robust safeguards, which include proactively removing suspected underage users, and we have voluntarily launched safety features such as default screentime limits, family pairing, and privacy by default for minors under 16.”