YouTube, the video-sharing platform owned by Google, has expanded its AI likeness detection tool in response to rising concerns about digital impersonation and deepfake content.
The company announced Tuesday that the technology will now help protect the identities of a broader group of public figures, including government officials, journalists, and political candidates, as AI-generated content becomes more widespread online.
“As AI-generated content evolves, the individuals at the center of these conversations need reliable tools to protect their identities,” YouTube said in a statement.
Expanded protection against AI impersonation
The likeness detection tool was initially introduced in September 2025 and was primarily available to creators enrolled in the YouTube Partner Program. The expansion broadens eligibility to include individuals who are often targeted in misinformation campaigns or identity misuse.
According to YouTube, the system scans AI-generated videos for matches with an enrolled participant’s face or likeness. If a match is detected, such as a deepfake using someone’s image, the affected individual can review the flagged content and request its removal if it violates the platform’s privacy policies.
“We’re starting with this cohort to ensure the tool meets their unique needs, with plans to significantly expand access over the coming months,” the company said.
Participation requires users to enroll in the system. Their verification data is then used to confirm their identity when they submit a request to remove AI-generated content that imitates them.
Balancing protection and free expression
YouTube noted that the tool will not automatically remove all AI-generated impersonations. Content such as satire or parody may remain on the platform if it falls within accepted guidelines.
Instead, each removal request will undergo evaluation to determine whether the content violates privacy rules or constitutes legitimate expression.
To prevent misuse of the reporting mechanism, access to the removal request process is currently limited to enrolled participants: government officials, journalists, political candidates, and creators within the YouTube Partner Program.
Rising concerns over deepfake technology
The expansion comes amid increasing global concern over AI-generated impersonations on social media platforms. Deepfakes, videos or images manipulated by artificial intelligence to convincingly mimic real individuals, are being used for a range of purposes, from satire and entertainment to scams, cyberbullying, and political disinformation.
In January 2026, controversy emerged involving xAI and its chatbot Grok, after the tool was accused of generating explicit deepfake images that circulated on X. The incident sparked legal challenges and criticism over safeguards against the misuse of generative AI systems.
Similarly, writing assistant developer Grammarly recently faced legal action linked to its AI-powered “Expert Review” feature, which allegedly generated text revisions that imitated well-known authors and academics without their consent.
The feature, launched this week, uses AI agents designed to replicate the writing styles of subject-matter experts, raising concerns about intellectual property rights and digital identity protections.
Growing pressure on tech platforms
As generative AI tools become increasingly accessible, technology companies are under mounting pressure to establish safeguards that protect individuals from identity misuse while preserving legitimate creative expression.
YouTube’s expanded likeness detection initiative reflects a broader shift within the technology sector toward developing systems that can identify and address AI-driven impersonation before it spreads widely online.
Industry analysts say such tools could become an essential component of digital governance as deepfake technology continues to evolve.
