YouTube is expanding its pilot program to detect and manage AI-generated content featuring creators’ likenesses, while publicly backing the NO FAKES Act, legislation aimed at preventing the misuse of AI-generated replicas.
The company initially launched the likeness detection pilot program in December 2024 with the Creative Artists Agency (CAA). YouTube’s new technology builds on its existing Content ID system, which identifies copyright-protected material in users’ uploaded videos. The program automatically detects AI-generated simulated faces or voices, allowing the platform to distinguish between authorized content and harmful fakes.
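Conceptually, this kind of detection can be pictured as comparing an embedding extracted from uploaded media against reference embeddings enrolled by a creator, and flagging close matches for review. The sketch below is purely a hypothetical illustration under those assumptions; the embedding step, threshold, and function names are invented for clarity and do not describe YouTube’s actual system.

```python
"""
Hypothetical likeness-detection sketch: flag an upload whose embedding is
very similar to a creator's enrolled reference embeddings. All names and
thresholds are illustrative assumptions, not YouTube's implementation.
"""
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # assumed cutoff for flagging a likeness match


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def flag_possible_likeness(upload_embedding: np.ndarray,
                           enrolled_references: list[np.ndarray]) -> bool:
    """Return True if the upload resembles any enrolled reference closely
    enough to be routed to the creator for review."""
    return any(cosine_similarity(upload_embedding, ref) >= SIMILARITY_THRESHOLD
               for ref in enrolled_references)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for embeddings a face or voice model would produce.
    creator_refs = [rng.normal(size=128) for _ in range(3)]
    upload = creator_refs[0] + rng.normal(scale=0.05, size=128)  # near-duplicate
    print("Flag for review:", flag_possible_likeness(upload, creator_refs))
```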
YouTube collaborated with Sens. Chris Coons (D-DE) and Marsha Blackburn (R-TN), the Recording Industry Association of America (RIAA), and the Motion Picture Association (MPA) on the NO FAKES Act. Coons and Blackburn are reintroducing the legislation at a press conference. The NO FAKES Act would allow individuals to notify platforms about AI-generated likenesses they believe should be removed, giving platforms the information they need to make informed takedown decisions.
The likeness detection system is being tested by top YouTube creators, including MrBeast, Mark Rober, Doctor Mike, the Flow Podcast, Marques Brownlee, and Estude Matemática. During the testing period, YouTube will work with these creators to scale the technology and refine its controls. The company plans to expand the program to more creators in the coming year, although a broader public launch date has not been specified.
In addition to the likeness detection pilot, YouTube has updated its privacy request process to allow users to request the removal of synthetic content that simulates their likeness. The company has also added likeness management tools that let people detect and manage how AI is used to depict them on YouTube.
YouTube’s support for the NO FAKES Act and its expansion of the likeness detection pilot program demonstrate its efforts to address the challenges associated with AI-generated content. The company recognizes the potential for AI to “revolutionize creative expression” but also acknowledges the risks of misuse or harm. By working with creators, lawmakers, and industry players, YouTube aims to balance protection with innovation and provide a safer online environment.