Why Companies Keep Folding to Copyright Pressure, Even If They Shouldn’t

The giant record labels, their trade association, and their lobbyists have succeeded in getting a number of members of the U.S. House of Representatives to pressure Twitter into paying money it does not owe to labels that have no claim to it, against the interests of its users. This is a playbook we’ve seen before, and it seems to work almost every time. For once, let us hope a company sees this extortion attempt for what it is and stands up to it.

Here is the deal. Online platforms that host user content are not liable for copyright infringement done by those users so long as they fulfill the obligations laid out in the Digital Millennium Copyright Act (DMCA). One of those obligations is to give rightsholders an unprecedented ability to have speech removed from the internet, on demand, with a simple notice sent to a platform identifying the offending content. Another is that companies must have some policy to terminate the accounts of “repeat infringers.”

Not content with being able to remove content without a court order, the giant companies that hold the most profitable rights want platforms to do more than the law requires. They do not care that their demands result in other people’s speech being suppressed. Mostly, they want two things: automated filters, and to be paid. In fact, the letter sent to Twitter by those members of Congress asks Twitter to add “content protection technology”—for free—and heavily implies that the just course is for Twitter to enter into expensive licensing agreements with the labels.

Make no mistake, artists deserve to be paid for their work. However, the complaints that the RIAA and record labels make about platforms are less about what individual artists earn and more about labels’ control. In 2020, according to the RIAA, U.S. recorded music revenues rose almost 10 percent, to $12.2 billion. And Twitter, whatever else it is, is not where people go for music.

But the reason the RIAA, the labels, and their lobbyists have gone with this tactic is that, up until now, it has worked. Google set the worst precedent possible in this regard. Trying to avoid a fight with major rightsholders, Google voluntarily created Content ID. Content ID is an automated filter that scans uploads to see if any part—even just a few seconds—of the upload matches the copyrighted material in its database. A match can result in either a user’s video being blocked, or monetized for the claiming rightsholder. Ninety percent of Content ID partners choose to automatically monetize a match—that is, claim the advertising revenue on a creator’s video for themselves—and 95 percent of Content ID matches made to music are monetized in some form. That gives small, independent YouTube creators only a few options for how to make a living. Creators can dispute matches and hope to win, sacrificing revenue while they do and risking the loss of their channel. Fewer than one percent of Content ID matches are disputed. Or, they can painstakingly edit and re-edit videos, or avoid including almost any music whatsoever and hope that Content ID doesn’t register a match on static or a cat’s purr.

While any creator has the right to use copyrighted material without paying rightsholders in circumstances where fair use applies, Content ID routinely diverts money away from creators like these to rightsholders in the name of policing infringement. Fair use is an exercise of your First Amendment rights, but Content ID forces you to pay for that right. WatchMojo, one of the largest YouTube channels, estimated that over six years, roughly two billion dollars in ads have gone to rightsholders instead of creators. YouTube does not shy away from this effect. In its 2018 report “How Google Fights Piracy,” the company declares that “the size and efficiency of Content ID are unparalleled in the industry, offering an efficient way to earn revenue from the unanticipated, creative ways that fans reuse songs and videos.” In other words, Content ID allows rightsholders to take money away from creators who are under no obligation to obtain a license for their lawful fair uses.

That doesn’t even include the times these filters just get things completely wrong. Just the other week, a programmer live-streamed his typing and a claim was made for the sound of “typing on a modern keyboard.” A recording of static got five separate notices placed on it by the automated filter. These things don’t work.

YouTube also encourages people to simply use only the things that they have a license for or are in a library of free resources. That ignores that there is a fair use right to use copyrighted material in certain cases, and lets companies argue that no one has to use their work without paying since these free options exist.

So, when the labels make a lot of disingenuous noise about how inadequate the DMCA is and how platforms need to do more, they have YouTube to point to as a “voluntary” system that should be replicated. And companies will fold, especially if they are inundated with DMCA takedowns—some bogus—and if they believe the only other option is being required to do it by law: the implicit threat of a letter like the one Twitter received.

This tactic works. Twitch found itself buried under DMCA takedowns last year, handled that poorly, and then, like Twitter, found itself blamed by the RIAA for taking money out of the hands of musicians. Twitch now makes it easier to remove claimed music and portions of videos, makes it easier for users to delete clips, and has adopted a repeat infringer policy similar to YouTube’s. Snap, owner of Snapchat, went the route of getting a license, paying labels to make music available to its users.

Creating a norm of licensed or free music, monetization, or automated filters functionally eviscerates fair use. Even if people have the right to use something, they won’t be able to. On YouTube, reviewers don’t use the clips of the music or movies that are the best example of what they’re talking about—they pick whatever will satisfy the filter. That is not the model we want as a baseline. The baseline should be more protective of legal speech, not less.

Unfortunately, when the tech companies face off against the largest rightsholders, it’s users who most often lose. Twitter is only the latest target; we hope it becomes the one to stand up for its users.

Published August 06, 2021 at 08:59PM
Read more on eff.org
