YouTube has rolled out a new policy that allows individuals to request the removal of synthetic content that simulates their face or voice.
Under the new policy, individuals can request the removal of AI-generated content as a privacy violation, rather than by citing concerns that it is misleading or a deepfake.
YouTube’s updated Help documentation outlines the process, which requires first-party claims, except in cases where the affected individual is a minor, lacks access to a computer, is deceased, or falls under other specified exceptions.
However, submitting a takedown request does not guarantee removal. YouTube will assess the complaint based on various factors, including whether the content is disclosed as synthetic or AI-generated, whether it uniquely identifies a person, and whether it can be considered parody, satire, or of public interest.
The company will also consider whether the AI content features a public figure or well-known individual, and whether it shows them engaging in sensitive behavior, such as criminal activity, violence, or endorsing a product or political candidate.
In the event of a complaint, YouTube will give the content’s uploader 48 hours to act on the request. If the content is removed before the time expires, the complaint is closed. Otherwise, YouTube will initiate a review.
The company also warned users that, in this context, removal means taking the video off the site entirely.
YouTube’s approach to AI-generated content is nuanced, as the company has experimented with generative AI itself, including a comments summarizer and a conversational tool for asking questions about a video or getting recommendations.
In the case of privacy complaints over AI material, YouTube won't penalize the original content creator; instead, the company will focus on addressing the privacy violation, a process it handles separately from Community Guidelines strikes.