The UK Should Clarify Its Online Safety Bill To Provide Greater Protection for Free Expression

The United Kingdom (UK) Online Safety Bill—legislation to combat the spread of harmful online content—is currently wrapping up line-by-line scrutiny in Parliament’s Public Bill Committee. The legislation requires online services to monitor and remove various forms of both legal and illegal online content. While well-intentioned, the bill’s current approach to online safety is overbroad, and it does not clearly define what content online services must remove. To avoid running afoul of the legislation, online services will likely overmoderate, taking down permitted content and hurting free expression. Therefore, the UK government should revise the bill to either restrict only illegal content online or clearly define the specific types of legal content it requires services to moderate.

The bill imposes a set of obligations—“duties of care”—on online services to monitor and remove “legal but harmful” content. Yet Parliament has not defined exactly what content falls under this obligation, leaving the final details to later legislation. This lack of clarity is compounded by the fact that the Secretary of State and Ofcom—the UK’s broadcast, telecommunications, and postal regulator—may continuously adjust and redefine the bill’s specific obligations on online services. The standards may also change depending on the party in power. In practice, constant redefinition could leave companies confused about what content they should proactively moderate and will likely push them to overmoderate for fear of hefty fines for noncompliance.

The bill requires online services to restrict and remove content these services “reasonably believe” contains words or images that violate the new policy. But reasonable belief is a subjective standard that will be difficult to apply consistently. For example, online services may struggle to pick up on the nuances that differentiate a user encouraging self-harm from someone raising awareness of postpartum depression, or photos affirming breastfeeding from exploitative nudity. Online services will have to weigh the context behind user-generated content against the bill’s obligations. At best, these services will accurately guess what content to moderate. At worst, they could impede swaths of legal speech wrongfully deemed harmful as they attempt to enforce vaguely defined rules.

The bill does try to preserve important but potentially contentious speech online by creating exemptions for content of journalistic value. Unfortunately, even these exemptions face the same overmoderation concerns. The bill defines journalistic content as either news publisher content or user-created content “generated for the purposes of journalism.” The second part of this definition aims to protect citizen journalism. But unlike a news outlet, which can signal the journalistic value of its content through its publisher status, a citizen journalist can only hope services accurately recognize the newsworthiness of their posts. And because the bill fails to clarify which content falls into the “legal but harmful” category and which has journalistic value, platforms may easily conflate the two when drafting and applying their terms of service. For instance, a citizen of a war-torn country may post distressing images intended to raise global awareness of the situation at home. If services fail to recognize the democratic importance of this seemingly harmful post, they will likely remove speech vital to sustaining a healthy democracy.

If the Online Safety Bill more clearly defined what content is and is not in scope, it could better balance free expression and user safety. Parliament should fix this problem in one of two ways. One option is to amend the bill to restrict only unlawful content and include provisions that move certain types of content from the lawful to the unlawful category—similar to how the bill already criminalizes cyberflashing. Doing so would ensure the protection of legal speech online while still targeting the most egregious forms of content. Alternatively, the UK government could clearly define what legal but harmful content online services should monitor and remove before the bill passes, rather than leaving the question open for others to decide in the future. With greater definitional clarity, platforms could strike a fairer balance between free expression and online safety.

While the Online Safety Bill currently fails to balance legal free speech and online safety, there are ways to clarify the bill, such as defining legal but harmful content or limiting its obligations to illegal content, that would better protect users’ right to free expression. If the UK wants to lead the world in online safety, it should do so in ways that do not infringe upon the civil liberties of UK Internet users and instead strike a better balance between digital free speech and online safety.
