YouTube rolls out new content labelling system to boost transparency 

In the ever-evolving world of AI, YouTube is taking extra precautions to ensure the transparency of published content by adopting what's known as the C2PA verification labelling system.

What is it?  

C2PA stands for the Coalition for Content Provenance and Authenticity. The new labelling system is designed to provide transparency about the origins and editing history of digital content. The end goal is to combat misinformation, disinformation and deepfakes, which have been on the rise in recent years. The system allows users to verify the authenticity of content and see whether it has been altered or generated.

Truepic, a digital content authentication service, uploaded the first video to YouTube showcasing the new labelling system on the 15th of October. In the description of the YouTube video, a heading 'How this content was made' is followed by a note saying, 'captured with camera…this content was captured using a camera or other recording device'.

How does it work?  

Currently, this verification system only works on devices with built-in C2PA support. This functionality attaches secure metadata to the piece of content. The metadata verifies the content's origin, indicating whether its audio or visuals have been altered, in line with YouTube's policy on 'altered and synthetic content'. When YouTube detects this metadata, it relays that the content was 'Captured with a camera' and applies the disclosure.
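The core idea behind this kind of provenance metadata can be illustrated with a short sketch. The code below is a simplified, hypothetical model only: real C2PA manifests use certificate-based signatures (COSE/X.509) and a standardised binary format, not the shared-secret HMAC and JSON claim shown here. It demonstrates the two checks a verifier performs: that the metadata itself is untampered, and that the content still matches the hash recorded at capture time.

```python
import hashlib
import hmac
import json

# Hypothetical device signing key; real C2PA uses per-device certificates.
SIGNING_KEY = b"device-secret-key"

def create_manifest(content: bytes, source: str) -> dict:
    """Attach provenance metadata: a content hash plus a signature over the claim."""
    claim = {
        "source": source,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the claim is untampered and the content matches its recorded hash."""
    claim = manifest["claim"]
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # metadata was altered after signing
    return claim["content_sha256"] == hashlib.sha256(content).hexdigest()

video = b"raw camera footage bytes"
manifest = create_manifest(video, "captured-with-camera")
print(verify_manifest(video, manifest))            # True: content unaltered
print(verify_manifest(video + b"edit", manifest))  # False: content was modified
```

Because the hash is bound to the content at capture time, any later edit breaks verification, which is what lets a platform like YouTube decide whether to show the 'Captured with a camera' disclosure.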

Like any new feature, there are a few limitations to keep in mind. YouTube has explained that the 'Captured with camera' label will only appear when creators use C2PA technology during filming. So, if the label is missing, it doesn't necessarily mean the content has been altered. The label also depends on data provided by third parties, like camera manufacturers, which means it's not foolproof. For example, if someone takes a photo of a digitally altered image on a screen, it might still be labelled as 'Captured with camera', even though the original content was synthetic.

Why have they rolled this out?  

Earlier this year, YouTube also introduced guidelines requiring creators to label videos as 'Altered' or 'Synthetic' when appropriate. According to Laurie Richardson, Google's vice president of trust and safety, these developments are part of YouTube's broader strategy for handling AI-generated content.

The steps YouTube is taking to ensure content is authentic and clearly labelled are an effort to strengthen trust with users, while also serving its commercial interests by demonstrating a transparent and safe environment for advertisers.

Source: Silicon Republic 
