– UK gov wants ‘unsavoury’ web content censored (Wired, March 15, 2014):
The UK minister for immigration and security has called for the government to do more to deal with “unsavoury”, rather than illegal, material online.
James Brokenshire made the comments to the Financial Times in an interview related to the government’s alleged ability to automatically request YouTube videos be taken down under “super flagger” status.
A flagger is anyone who uses YouTube’s reporting system to highlight videos that breach guidelines. The Home Office explained to Wired.co.uk that the Metropolitan Police’s Counter Terrorism Internet Referral Unit (CTIRU), responsible for removing illegal terrorist propaganda, does not have “super flagger” status, but has simply attained the platform’s Trusted Flagger accreditation, a status for users who regularly and correctly flag questionable content.
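By way of illustration, the sketch below shows how a triage queue that gives priority to accredited flaggers might work in principle. The names, structure and priorities are our own invention for the sake of the example, not a description of YouTube’s internal systems.

```python
# Hypothetical sketch of a "trusted flagger" triage queue (not YouTube's code):
# reports from accredited accounts with a strong track record are reviewed
# ahead of ordinary one-off reports.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class FlagReport:
    priority: int                        # lower number = reviewed sooner
    video_id: str = field(compare=False)
    flagger_id: str = field(compare=False)
    reason: str = field(compare=False)

def enqueue_report(queue, video_id, flagger_id, reason, trusted_flaggers):
    """Push a report onto the review queue, prioritising trusted flaggers."""
    priority = 0 if flagger_id in trusted_flaggers else 1
    heapq.heappush(queue, FlagReport(priority, video_id, flagger_id, reason))

# Example: an accredited account's report jumps ahead of an ordinary user's.
queue, trusted = [], {"accredited_unit"}
enqueue_report(queue, "vid123", "ordinary_user", "guideline breach", trusted)
enqueue_report(queue, "vid456", "accredited_unit", "terrorist propaganda", trusted)
print(heapq.heappop(queue).video_id)     # "vid456" is reviewed first
```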
The FT published its article in the context of growing concerns around the radicalisation of Britons travelling to take part in the ongoing conflict in Syria, and the Home Office told Wired.co.uk that any videos flagged by the CTIRU for review were ones found to be in breach of counter-terrorism laws (29,000 have been removed across the web since February 2010).
This seems to be the impetus for the kinds of extended controls Brokenshire told the FT the government should be looking into, namely, dealing with material “that may not be illegal but certainly is unsavoury and may not be the sort of material that people would want to see or receive”.
“Terrorist propaganda online has a direct impact on the radicalisation of individuals and we work closely with the internet industry to remove terrorist material hosted in the UK or overseas,” Brokenshire told Wired.co.uk in a statement.
YouTube already has a flagging system in place for just these purposes, and will review every complaint. However, with 100 hours of video being uploaded to the site every minute, the concern is that there is no feasible way of playing whack-a-mole fast enough. This is one issue. How a member of government could propose that the authorities do more to deal with material that is simply “unsavoury”, though, is another matter entirely. And it’s hard to see how any suggestion of this kind is not censorship.
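A rough back-of-envelope calculation shows the scale of the problem. The 100-hours-a-minute figure comes from YouTube; the eight-hour reviewing day is our own assumption.

```python
# Back-of-envelope estimate of how many people it would take just to watch
# one day's uploads once (assumption: an eight-hour reviewing day).
upload_hours_per_minute = 100
hours_uploaded_per_day = upload_hours_per_minute * 60 * 24    # 144,000 hours
reviewer_hours_per_day = 8
reviewers_needed = hours_uploaded_per_day / reviewer_hours_per_day
print(f"{hours_uploaded_per_day:,} hours uploaded per day; "
      f"{reviewers_needed:,.0f} reviewers needed to watch it all once")
```

That is roughly 18,000 full-time reviewers before a single frame is watched twice, which is why moderation leans so heavily on flagging in the first place.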
“It is [censorship],” Jaani Riordan, a barrister specialising in technology litigation, told Wired.co.uk. “Removal of lawful material by government simply because it offends governmental or public policy is without justification. Conversely, a private enterprise, such as YouTube, would always remain free to remove content which offends its Terms of Use or other policies, and there is very limited if any recourse against it for doing so.”
If the government were to force YouTube to remove such content, it would risk breaching Article 10 of the European Convention on Human Rights, which protects freedom of expression and permits restrictions only on the limited grounds set out in Article 10(2).
This is why, as with self-harm content or even explicit content, the government prefers to put pressure on private companies to self-censor. In the situation we find ourselves in today, it is of course impossible to know where these lines will eventually be drawn.
In his interview with the FT, Brokenshire said the government is considering a “code of conduct” for internet service providers and companies, and a potential system whereby search engines and social media platforms actually alter their algorithms so that said “unsavoury” content is less likely to appear.
“Google has already modified its algorithms to accommodate government and rights-holder requests,” says Riordan. “For example, penalising sites with a high number of takedown requests and removing auto-complete suggestions for pirate content. These changes would likely be couched in terms of helping consumers find relevant content. It’s a dangerous precedent.”
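As a purely illustrative sketch of what that kind of demotion could look like, the function below penalises a result’s relevance score once its host site passes an arbitrary number of takedown requests. The figures and signals are invented for the example; they are not Google’s actual ranking factors.

```python
# Simplified, hypothetical ranking demotion: a result's relevance score is
# reduced once its host site accumulates "too many" takedown requests.
def demoted_score(relevance: float, takedown_requests: int,
                  penalty_per_request: float = 0.02,
                  threshold: int = 50) -> float:
    """Shrink the relevance score for every takedown request above a threshold,
    pushing the result further down the page without removing it outright."""
    if takedown_requests <= threshold:
        return relevance
    excess = takedown_requests - threshold
    return max(0.0, relevance * (1.0 - penalty_per_request) ** excess)

# Two equally relevant pages; the one on a heavily-reported site sinks.
print(demoted_score(0.9, takedown_requests=10))     # unchanged: 0.9
print(demoted_score(0.9, takedown_requests=200))    # roughly 0.04
```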
Furthermore, the government has already piled on the pressure for service providers to implement internet filters. It is the further expansion of those filtering systems, which began by blocking child abuse content, then extended to pornography and now to everything from nudity to alcohol-related content, that is gravely concerning.
“Through proposals from the Extremism Taskforce announced by the Prime Minister in November, we will look to further restrict access to material which is hosted overseas — but illegal under UK law,” Brokenshire told Wired.co.uk in a statement. But there was more: “…and help identify other harmful content to be included in family-friendly filters.”
The Home Office told Wired.co.uk the government has no interest in preventing access to legitimate and legal material (though we’re not sure why the inclusion of the word “legitimate” was necessary — the government need only be concerned with legality). But it went on to say that even though it may be legal, some extremist content can be distressing and harmful. As such, it is working with industry to support its efforts in identifying material that can be included in “family friendly filters”, which can be turned off if the user wishes.
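To make the mechanics concrete, here is a minimal sketch of an opt-out, category-based filter of the kind being described. The categories, labels and URLs are entirely hypothetical; the point is that what gets blocked depends on labels chosen by the filter’s operator, not on what the law actually prohibits.

```python
# Minimal sketch of an opt-out "family friendly" filter. Whether a page is
# blocked depends entirely on the category labels assigned to it.
BLOCKED_CATEGORIES = {"pornography", "extremism", "alcohol"}   # operator's defaults

URL_LABELS = {
    "example-news-site.co.uk/syria-report": {"news"},
    "example-forum.com/extremist-thread": {"extremism"},
}

def is_blocked(url: str, filter_enabled: bool = True) -> bool:
    """Return True if filtering is on and the URL carries a blocked label."""
    if not filter_enabled:               # the user has opted out of filtering
        return False
    return bool(URL_LABELS.get(url, set()) & BLOCKED_CATEGORIES)

print(is_blocked("example-forum.com/extremist-thread"))                        # True
print(is_blocked("example-forum.com/extremist-thread", filter_enabled=False))  # False
```

Mislabel a page in a scheme like this and it disappears for everyone who has left the default switched on, which is exactly the kind of inadvertent blocking described next.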
The government had already admitted in January, mere weeks after ISP filters had been implemented, that they were inadvertently blocking content.
“The government has a history of conjuring the spectre of ‘regulation,'” said Riordan. “It didn’t work in 2008 and 2009, when the government sought to encourage ISPs to agree a code of conduct on repeat copyright infringers; this culminated in the Digital Economy Act. CleanFeed blocking of the [Internet Watch Foundation] watchlist is voluntary, but clearly encouraged. One could easily imagine similar threats being made in relation to filtering of extremist materials. The potential for mission creep is extremely concerning.”
Referring to Brokenshire’s comment that the government will help industry “identify other harmful content”, the barrister added: “The passage is certainly suggestive of mission creep — and potentially of great impact, since opt-out filtering now affects the vast majority of British residential internet connections. But I think blocking (assuming no false positives) is probably less harmful than outright removal at source… Of course, the lack of clarity and coherence in content policy is itself deeply concerning.”
The Home Office told Wired.co.uk a large part of the new effort by the CTIRU is to be centred on taking down “terrorist” content overseas, where much of it is being posted. If the police have a good relationship with industry, it’s possible that material can still be taken down, and the Home Office said it has such relationships and now wants to use them to take forward these new developments, i.e. tackling the threat of radicalisation.
The Home Office in fact compared the situation to the restriction of child abuse images, to which industry (most prominently Google, as mentioned earlier) has already conceded. But the issues the Home Office refers to as “new developments” cover legal material that may be considered “harmful”.
“I don’t think content should be restricted simply because someone thinks it is ‘unsavoury’ — we need to know what criteria would be applied,” Jacob Rowbottom, from the University of Oxford’s law faculty, told Wired.co.uk. The author of “Leveson, Press Freedom and the Watchdogs” said we would need to know whether there would be an accountability or legal supervision system for decisions made by the government to flag material or to request that certain sites receive lower search engine rankings. “There is a danger that informal arrangements between government and private companies can lack sufficient safeguards.”
“If there’s one thing that remains constant, politicians have proved to be terrible arbiters of taste. If you don’t think much of their suits and haircuts, you’re not going to think much of what they think acceptable or unsavoury for public consumption,” Danny O’Brien, International Director of the Electronic Frontier Foundation, told Wired.co.uk. “We have free speech because there’s no one person, no one organisation, and certainly no single political party that can determine what’s true, acceptable, unsavoury or revolutionary. The internet is about letting us all speak, and then leaving it to posterity to see who was right and who was wrong. A system that could let us make that judgement in advance wouldn’t be an algorithm — it would be a time machine.”
Emma Carr, deputy director of Big Brother Watch, added: “Governments shouldn’t be deciding what we can see online. Google must be fully transparent about how the British Government uses this system to reassure people freedom of speech is not being chilled.”
For any “code of conduct” — which the Home Office confirmed it is still looking into — to be even remotely acceptable, the conversation about how it is formulated has to happen in public. We need to know, Riordan points out, how that code will be enforced, who the signatories will be and whether there will be any kind of penalty for non-compliance. “Co-regulation of this kind is nothing new, but might potentially amount to an interference with the freedom to conduct a business under article 16 of the EU Charter of Fundamental Rights if the duties it imposes are onerous and disproportionate.”
For O’Brien, no amount of explanation will justify the end result. “In China, they call it ‘self-discipline’,” he said. “If politicians are terrible censors, companies are even worse at implementing such censorship. They don’t have the resources to snoop on every video and picture when dealing with billions of users, and they shouldn’t be given that excuse.”
One particular comment made to Wired.co.uk by the Home Office did nothing to assuage concerns. It said that although the Met uses the Trusted Flagger scheme to quickly flag illegal terrorist propaganda, YouTube may also choose to remove legal extremist content if it breaches its terms and conditions. The moment the Home Office starts speaking on behalf of industry, speculating about what it may or may not do in the context of a proposed code of conduct (which would amount to a new set of terms and conditions for that industry) and more intrusive filtering, a fair few flags of our own pop up.