Google-owned YouTube said Wednesday it is altering its algorithms to reduce invalid copyright infringement claims on its video-sharing site and will begin manually reviewing some claims instead of the system automatically blocking disputed footage.
UPDATE 10/5/2012: Actually Google subsequently ‘clarified’ its blog post to reflect that YouTube will *not* be doing any manual reviews of questionable algorithmic copyright matches.
The development comes a month after First Lady Michelle Obama’s speech at the Democratic National Convention was wrongly flagged by algorithms just after it aired. YouTube, the official streaming partner of the Democratic National Convention, automatically put a copyright blocking message on the livestream video of the event shortly after it ended.
Thabet Alfishawi, rights management product manager for YouTube, said “mistakes can and do happen” because of the sheer volume of video uploads and the number of copyrighted reference clips fed into its automated Content ID service. We at Wired have labeled the algorithm “streaming video’s robotic overlord.”
To address the issue of false positives and outright abuse of the system, he said, “We’ve improved the algorithms that identify potentially invalid claims. We stop these claims from automatically affecting user videos and place them in a queue to be manually reviewed.”
YouTube five years ago engineered a filtering system enabling rights holders to upload music and videos they own to a “fingerprinting” database — 500,000 hours of reference files to date. When account holders upload videos, the algorithm known as Content ID scans them against that database for matches.
If a full or partial match is found, the alleged rights holder can have the video automatically removed, or can place advertising on it and make money every time somebody clicks on the video.
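For readers who think in code, here is a minimal, hypothetical sketch of the kind of matching-and-claim flow described above. The names — ReferenceFile, scan_upload, the similarity threshold — are our own illustration, not YouTube’s actual system, and the fingerprinting itself is treated as a black box.

```python
# Hypothetical sketch of a Content ID-style matching flow.
# These names and the similarity measure are illustrative only.

from dataclasses import dataclass
from enum import Enum


class ClaimPolicy(Enum):
    BLOCK = "block"        # have the matched upload removed/blocked
    MONETIZE = "monetize"  # run ads on it and route revenue to the claimant


@dataclass
class ReferenceFile:
    owner: str
    fingerprint: set        # simplified stand-in for an audio/video fingerprint
    policy: ClaimPolicy     # what the rights holder wants done with matches


def scan_upload(upload_fingerprint: set,
                references: list[ReferenceFile],
                threshold: float = 0.8):
    """Compare an upload's fingerprint against every reference file and
    return the first full or partial match, if any."""
    for ref in references:
        overlap = len(upload_fingerprint & ref.fingerprint) / max(len(ref.fingerprint), 1)
        if overlap >= threshold:
            return ref   # matched: apply the owner's chosen policy
    return None          # no match: the video is published normally
```

The point of the sketch is simply that the claim action is applied automatically at match time, which is why an overbroad match or a bogus reference file can affect an uploader with no human in the loop.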
The idea was to solve the problem of large copyright holders constantly complaining about copyright violations. The compromise lets people post homemade videos set to a favorite musician’s song or a snippet from a movie, while allowing the original creator to benefit from the exposure and ad dollars if they so choose.
But if Content ID overmatches, or if a rogue manages to feed the filter content it doesn’t own, a YouTube user could see her video hijacked through a false copyright claim, because until now Content ID has largely functioned on autopilot.
Under the new rules announced Wednesday, however, if the uploader challenges the match, the alleged rights holder must abandon the claim or file an official takedown notice under the Digital Millennium Copyright Act. (Falsely representing ownership in a DMCA flap can expose the claimant to monetary damages.)
“Prior to today, if a content owner rejected that dispute, the user was left with no recourse for certain types of Content ID claims (e.g., monetize claims). Based upon feedback from our community, today we’re introducing an appeals process that gives eligible users a new choice when dealing with a rejected dispute,” Alfishawi said. “When the user files an appeal, a content owner has two options: release the claim or file a formal DMCA notification.”
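The new appeals process amounts to a small state machine: a dispute that gets rejected can now be appealed, and an appealed claim can only end with the claim being released or escalated to a formal DMCA notice. The sketch below is our own illustrative model of that flow, using made-up state names rather than anything from YouTube’s actual implementation.

```python
# Hypothetical model of the dispute/appeal flow described in YouTube's post.
# State names and transitions are illustrative, not YouTube's terminology.

from enum import Enum, auto


class ClaimState(Enum):
    CLAIMED = auto()      # Content ID claim applied to the upload
    DISPUTED = auto()     # uploader disputes the claim
    REJECTED = auto()     # claimant rejects the dispute
    APPEALED = auto()     # uploader appeals (new under these rules)
    RELEASED = auto()     # claimant abandons the claim
    DMCA_NOTICE = auto()  # claimant files a formal DMCA takedown


ALLOWED = {
    ClaimState.CLAIMED: {ClaimState.DISPUTED},
    ClaimState.DISPUTED: {ClaimState.REJECTED, ClaimState.RELEASED},
    ClaimState.REJECTED: {ClaimState.APPEALED},
    # After an appeal, the claimant must pick one of two terminal outcomes.
    ClaimState.APPEALED: {ClaimState.RELEASED, ClaimState.DMCA_NOTICE},
}


def advance(current: ClaimState, nxt: ClaimState) -> ClaimState:
    """Move a claim to its next state, rejecting transitions the rules don't allow."""
    if nxt not in ALLOWED.get(current, set()):
        raise ValueError(f"cannot move from {current.name} to {nxt.name}")
    return nxt
```

The practical effect of the change is the APPEALED state: before Wednesday, a rejected dispute on certain claim types was a dead end for the uploader.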
Under U.S. copyright law, Google is not required to deploy copyright filters, but rights holders are embracing Content ID as a way to make money online. Even Viacom, which is suing YouTube for $1 billion over copyright violations, uses Content ID, and its lawsuit covers only alleged violations that occurred before the filter’s deployment.
A month before the Michelle Obama bungle, an official NASA recording of the Mars landing was blocked hours after the successful landing, due to a rogue complaint by a news network.
Google only allows large media companies and video networks to join the Content ID program, which has about 3,000 registered participants.