In September, the non-profit Global Witness tested the political ad moderation systems of TikTok, Facebook, and YouTube. The organization submitted eight ads to each platform containing election disinformation, including claims that voters must pass an English test and calls for another Jan. 6 insurrection. On Oct. 17, it published the results of the test.
According to the results, Facebook performed best, approving just one of the eight ads containing disinformation about voter eligibility. YouTube and TikTok each rejected only four of the eight.
YouTube did, however, block the account from running any more ads until it passed further identity checks.
However, as the Washington Post points out, the failure is especially damning for TikTok, which has ostensibly banned all political ads since 2019.
In an interview with the Washington Post, Ava Lee, Global Witness’s campaign lead for digital threats, said that the approved ads were “pretty seriously bad.”
“We were pretty surprised to see that TikTok was doing so badly,” she said.
When approached for comment by the Washington Post, TikTok spokesperson Ben Rathe said: “Four ads were incorrectly approved during the first stage of moderation but did not run on our platform. We do not allow political advertising and will continue to enforce this policy on an ongoing basis.”
This isn’t the first time TikTok’s ban on political ads has proven ineffective. Back in September, NBC News found 52 videos with political rhetoric on the platform tagged with the “Paid Partnership” label or hashtags like #ad or #sponsored.
With legislation against the platform pending and less than three weeks to go before Election Day in the US, TikTok has a lot of work to do, not just within the app itself but in restoring people's trust.