
YouTube has officially endorsed the “Nurture Originals, Foster Art, and Keep Entertainment Safe” (NO FAKES) Act of 2024. The bipartisan legislation, spearheaded by Senators Chris Coons and Marsha Blackburn, is designed to protect individuals from unauthorized deepfakes and their spread across digital platforms.
The NO FAKES Act seeks to establish accountability by holding individuals or companies liable for creating unauthorized digital replicas of a person in a performance, while digital platforms would also bear responsibility if they knowingly host such content. Defining accountability for hosting these replicas remains a complex area of legal responsibility, but YouTube says it is committed to working with lawmakers on the new rules and adapting its systems to comply.
According to the bill:
“Generative AI has unlocked new realms of creative possibilities, equipping millions with tools that inspire them to explore their artistic abilities. However, alongside these creative advantages, these technologies can also enable users to misuse another individual’s voice or visual likeness by generating highly realistic digital replicas without obtaining permission.”
The bill highlights significant incidents, including an AI-generated song that mimicked Drake’s voice and an advertisement featuring an AI-generated portrayal of Tom Hanks.
Such misuse will only become more sophisticated as the technology advances, which underscores the importance of legislation that provides clear legal avenues for addressing these misrepresentations.
In response, YouTube has committed to upholding the provisions of the NO FAKES Act across its platform.
As stated by YouTube:
“We recognize that AI possesses immense potential, but unlocking this potential responsibly necessitates the implementation of protective measures. The NO FAKES Act offers a strategic path forward by focusing on the most effective way to balance protection with innovation: empowering individuals to alert platforms about AI-generated likenesses they believe should be removed. This notification process is crucial as it enables platforms to differentiate between authorized content and harmful fakes.”
YouTube’s commitment to combating AI-generated fakes is further highlighted by its support for the TAKE IT DOWN Act, which criminalizes the distribution of non-consensual intimate imagery.
Additionally, YouTube has introduced likeness control tools designed to help people detect and manage how AI is used to depict them on the platform, and it has launched a pilot program with partners in the creative industry that gives some of the world’s most prominent figures access to advanced detection and removal request tools.
This remains a significant concern. While the anticipated wave of AI-generated fakes around last year’s U.S. election never materialized, the use of AI-generated depictions of people continues to rise, and with it, audience confusion.
Thus, this is a significant advancement, and we can expect more platforms to align with the principles of the NO FAKES Act in the near future.
NOTE: Major companies including Amazon, Google, Meta, and X have all previously expressed support for the NO FAKES Act.