YouTube quietly introduced a policy that allows users to request the takedown of AI-generated content that simulates their likeness, whether visually or by voice.
The change, rolled out without announcement in June and spotted by TechCrunch on July 1 in an updated YouTube help documentation page, falls under YouTube’s privacy request process.
The updated policy requires first-party claims, meaning the person being imitated must file the privacy complaint themselves.
There are a few exceptions, of course, such as YouTube deepfakes depicting minors, deceased people, or those without computer access. But by and large, users are expected to handle this type of violation by submitting requests themselves.
When will YouTube take down AI deepfakes?
YouTube will weigh complaints against a range of factors, including the public interest, whether the video features public figures, its value as satire or parody, and whether it is disclosed as AI-generated.
If YouTube deems a complaint valid, the uploader has a 48-hour window to act. Removing the content outright immediately closes the complaint.
Otherwise, YouTube will initiate a review, which could end with the video being deleted or, for repeat offenders, the account being suspended.
The uploader can instead blur the individual’s face in the video, but setting the video to private will not be considered a sufficient measure.
“If we remove your video for a privacy violation, do not upload another version featuring the same people. These people will likely file another privacy complaint or report you for harassment,” the site warns. “We’re serious about protecting our users and suspend accounts that violate people’s privacy.”