In an agreement on Tuesday that free speech advocates find disturbing, the European Commission and the U.S.-based companies Facebook, Twitter, YouTube and Microsoft agreed to a "Code of Conduct" on "hate speech". Its pretext is to stop terrorist-related propaganda on social media, but a reading of the EU's announcement suggests it may also serve as a way to censor commentary on negative news about Muslims. The EU describes it this way:
The IT Companies support the European Commission and EU Member States in the effort to respond to the challenge of ensuring that online platforms do not offer opportunities for illegal online hate speech to spread virally. They share, together with other platforms and social media companies, a collective responsibility and pride in promoting and facilitating freedom of expression throughout the online world. However, the Commission and the IT Companies recognise that the spread of illegal hate speech online not only negatively affects the groups or individuals that it targets, it also negatively impacts those who speak out for freedom, tolerance and non-discrimination in our open societies and has a chilling effect on the democratic discourse on online platforms.
The EU further describes the purpose for the new rules as follows:
In order to prevent the spread of illegal hate speech, it is essential to ensure that relevant national laws transposing the Council Framework Decision on combating racism and xenophobia are fully enforced by Member States in the online as well as in the offline environment. While the effective application of provisions criminalising hate speech is dependent on a robust system of enforcement of criminal law sanctions against the individual perpetrators of hate speech, this work must be complemented with actions geared at ensuring that illegal hate speech online is expeditiously reviewed by online intermediaries and social media platforms, upon receipt of a valid notification, in an appropriate time-frame. To be considered valid in this respect, a notification should not be insufficiently precise or inadequately substantiated.
The problem is: what is hate speech? It is well known that the EU has often twisted the idea of hate speech from a battle with Islamic extremists and terrorists into a fight to silence those opposing its extreme positions. For instance, is it hate speech to draw a political cartoon of Mohammed, as the French satirical newspaper Charlie Hebdo did, an act that resulted in the murder of 12 members of its staff by radical Islamic extremists? By Western standards of free speech, obviously not. But from the EU's point of view, maybe.
Many see these new rules as Orwellian and find it distressing that U.S.-based social media companies would agree to censorship of views the EU doesn't agree with. From Breitbart:
Janice Atkinson MEP told Breitbart London: “It’s Orwellian. Anyone who has read 1984 sees its very re-enactment live.
“The Commission has been itching to shut down free speech in the Parliament and now they’re attacking social media. We have already seen Facebook ‘policing’ so-called right-wing postings.
“If an MEP, such as the centre-right Hungarians, the Danish People’s Party, the Finns, the Swedish Democrats, the Austrian FPO, say no to migration quotas because they cannot cope with the cultural and religious requirements of Muslims across the Middle East who are seeking refugee status, is that a hate crime? And what is their punishment? It’s a frightening path to totalitarianism.”
UKIP’s Justice and Home Affairs spokeswoman Diane James MEP told Breitbart London:
“This legislation is so vague that it is the thin end of the wedge, not just to curb hate speech but free speech as well.
“Different people and cultures across Europe have different ways of communicating. The Liberal tradition in Britain for instance is more open and very different from that of dictatorial former Communist countries in the East.
“The EU was sold to people as a Common Market, it became a political union and now wishes to decide and compromise our civil liberties as a people. This is unacceptable to a free people who have a right to know where all this legislation is leading to.
“In my opinion, if the EU still allows me to have an opinion, I believe this matter should be decided by national parliaments rather than the unelected European Commission.”
In response to these new restrictive rules on speech by the EU and American social media companies, European Digital Rights (EDRi) announced that it is pulling out of future discussions with the Commission:
Faced with this lamentable outcome, and with no possibility to provide meaningful input to this process, the Commission has left us with no other choice but to withdraw from the discussion,
said Estelle Massé, EU Policy Analyst at Access Now.
It is ironic that the Commission is threatening to take Member States to court for failing to implement EU law on racism and xenophobia while it is persuading companies like Google and Facebook to sweep offences under the carpet,
added Joe McNamee, Executive Director at European Digital Rights.
In a release, the EDRi explains why they have a problem with this new Code of Conduct agreement:
What is in today’s code of conduct?
- an explicit statement that companies will “take the lead” in policing controversial speech online, which means that law enforcement authorities will not be taking the lead;
- an undertaking that IT companies will ban content that should already be legally banned;
- an undertaking to review notifications against company terms of service first and then, “if necessary” to review them against the law. In practice, this means that the legal procedures for testing the legality of content against the law will never be used as the code of conduct asks for illegal content to be banned by terms of service.
In short, the “code of conduct” downgrades the law to a second-class status, behind the “leading role” of private companies that are being asked to arbitrarily implement their terms of service. This process, established outside an accountable democratic framework, exploits unclear liability rules for companies. It also creates serious risks for freedom of expression, as legal but controversial content may well be deleted as a result of this voluntary and unaccountable takedown mechanism.
This means that this “agreement” between only a handful of companies and the European Commission is likely in breach of the EU Charter of Fundamental Rights, under which restrictions on fundamental rights should be provided for by law. It will, in practical terms, overturn case law of the European Court of Human Rights on the defense of legal speech.