UK regulator Ofcom is calling on social media platforms to “tame toxic algorithms” that recommend harmful content to children, in a new draft of its Children’s Safety Codes of Practice.
On Wednesday, May 8, Ofcom outlined “more than 40 practical measures” social media sites could take as part of its enforcement of the Online Safety Act, including more robust age checks and better content moderation.
What is the Online Safety Act?
The Online Safety Act is a controversial UK law that gives Ofcom enhanced enforcement powers. It puts the onus on social media firms to protect children from legal but harmful material. Ofcom defines “harmful” content as content relating to pornography, eating disorders, self-harm and suicide.
“We want children to enjoy life online. But for too long, their experiences have been blighted by seriously harmful content which they can’t avoid or control,” Melanie Dawes, Ofcom’s chief executive, said in a statement. “Many parents share feelings of frustration and worry about how to keep their children safe. That must change.”
According to the BBC, the Children’s Safety Codes of Practice will come into effect in the second half of 2025. They will target tech platforms that have “a significant number of children who are users of the service” or are “likely to attract a significant number of users who are children.” If a tech company falls into one of these categories, it will need to complete a risk assessment identifying potential harms to children on its service.
“Our proposed codes firmly place the responsibility for keeping children safer on tech firms,” Dawes added. “They will need to tame aggressive algorithms that push harmful content to children in their personalized feeds and introduce age-checks so children get an experience that’s right for their age.”