EU Proposal on Terrorist Content Misguided
Many European policymakers are rightly concerned about the threat of harmful content online. Alongside online disinformation, content promoting terrorism and the increasing prevalence of hate speech have both made headlines recently. To address this problem, in September 2018 the European Commission proposed a regulation aimed at “preventing the dissemination of terrorist content online,” which would require any Internet company offering services in the EU to monitor and remove content that incites terrorist acts. Unfortunately, there are two major problems with the proposal as it stands.
First, the Commission’s proposal sets an unreasonable requirement for how quickly companies must take down content: it would require them to remove harmful content within one hour of notification by the government. This time frame is impractical for large companies and even more unworkable for those with fewer resources, yet the regulation would apply equally to all types of Internet companies, big and small. It is worth remembering that many online services operate on thin margins; Twitter, for example, only became profitable in late 2017. The proposal would thus limit competition, as startups would be unlikely to have the capacity to monitor and take down content fast enough and may instead simply avoid the European market.
Second, because companies that fail to act promptly would face severe financial penalties of up to 4 percent of global turnover, they will have a strong incentive to err on the side of removing content to avoid the risk of fines. As a result, users will enjoy less free speech, as platforms will likely remove lawful content, including content of important public interest, out of an abundance of caution.
EU policymakers should have anticipated these problems, because similar problems of excessive blocking of legal content have already arisen with Germany’s controversial Network Enforcement Law (NetzDG). Adopted in 2017, this law requires large social networks to remove illegal content within 24 hours of notification or face stiff penalties. The non-profit group Reporters Without Borders reports that the law has spurred companies to “delete large amounts of content that was in fact legal in an effort to ensure that they will not be punished.”
Rather than imposing unreasonable requirements on platforms and forcing them to over-police their users, EU policymakers should encourage more industry-government collaboration. After all, some companies have already made rapid progress in removing significant amounts of illegal content without a regulatory mandate. These companies are also free to explore alternatives to removing content, such as facilitating counter-speech to challenge and discredit hate speech.
Now that the digital copyright reforms have passed, policymakers working on the digital single market have shifted their attention to this file. However, they should not rush it through out of legislative fatigue or to serve political agendas. When objectives are shared (in this case, guaranteeing a healthy Internet in a free society), broader ex-ante consultation with online platforms, users, and others would avoid shortcomings such as unrealistic requirements, misinformed demands on digital platforms, and unjustifiable government overreach.