The latest tool in the fight against IP infringement is technology, but it remains to be seen how effective those tools will be. Technology can be fast and efficient, but it fails the test when it comes to things like judgment and intuition that humans (ideally) do better. A machine scanning for certain markers or data points cannot render the same verdict on a photo or video as a person who can literally look at the entire picture and understand context and nuance.

Tools like automated filters are also subject to abuse by those looking to weaponize copyright enforcement for their own purposes, whether to harm competitors or rivals, or simply to assert ownership over public domain works that obviously don't belong to them.

The most recent example of this problem is the many automated takedown notices issued against copies of the Report On The Investigation Into Russian Interference In The 2016 Presidential Election, known colloquially as the Mueller report. As a government document, it is in the public domain and free to all, and it has been uploaded to a number of different sites for dissemination. But as Above the Law reports, many of the copies uploaded to the site Scribd have been flagged and removed for infringing upon a copyright that doesn't exist.

The explanation is, of course, that there are those trying to make money off of the highly anticipated document. Publishers have begun selling bound copies of the report in bookstores and through other booksellers, and have uploaded their copies to sites such as Scribd to guard against infringement, despite the fact that the report is, again, in the public domain.

The snafu demonstrates the trouble with trying to use a technological solution to police and arbitrate matters of infringement. Without the ability to recognize bad faith, understand the concept of fair use, or weigh any broader context surrounding a work, machines will make the wrong call in these matters again and again. And while this might seem like a minor point, especially if automation does its job correctly the vast majority of the time, it still stands as a failure on balance; after all, don't we employ technology to eliminate mistakes? If we were looking for an imperfect arbiter, we could simply go back to having humans do the job and trade these particular incidents for innocent mistakes.
