Does Online Hate End With Parler?
“The lie outlasts the liar,” writes the historian Timothy Snyder. Donald Trump’s deployment of social media to erase any “distinction between what feels true and what actually is true” will go down in history as his most dangerous legacy.
The complicity of social media companies that enabled Donald Trump’s hateful rhetoric is equally dangerous.
By: M Yakovlev (First published on MDI)
Only after the Trump-inspired attempt at a violent “putsch” failed on 6th January 2021 did Silicon Valley finally start enforcing its own guidelines. First, the platforms removed accounts spreading Trumpian disinformation and hate. By the 8th, Trump’s own Facebook and Twitter accounts had been suspended.
Another high-profile ‘casualty’ was Parler. The self-anointed “unbiased” social media platform rose to infamy as millions of new users – most of them alt-right adherents and conspiracy theorists – flocked to it in protest after Twitter began adding ‘disinformation’ labels to some of Trump’s Tweets, a move they saw as “censorship”. MDI explored this exodus at the time.
Now, Parler’s anything-goes approach has come back to bite it. Twenty-four hours after the breach of the Capitol, Google, then Amazon, then Apple pulled it from their app stores. The final blow came when Amazon confirmed it would also remove Parler from Amazon Web Services, its cloud hosting service.
Is this the end of the road for Parler and the kinds of hateful voices it attracted?
Dr Ed Bracho-Polanco, Lecturer in Media and Communications at the University of Westminster, tells MDI that Parler is likely to get back online. At the time of writing, the site has already made a partial comeback with help from DDoS-Guard, a Russian digital infrastructure company. Users remain locked out, but anyone trying to access the site will see a landing page promising a full return.
Still, “Parler’s banning by the major tech corporations became a point of no return in terms of Parler’s potential to become a mainstream social media platform that could compete with Twitter or Facebook,” explains Dr Bracho-Polanco, even if Parler can regain “its status as a niche and fringe site for specific ideologically-driven groups – namely the very same right-wing and hyper-conservative groups and militias.”
If Parler doesn’t make it, “there are lots of far-right sites and any number of them could easily grow to fill the gap left by Parler,” opines Dr Eric Heinze, Professor of Law and Humanities at Queen Mary University of London.
Dr Bracho-Polanco agrees: “the right-wing militia Proud Boys gained 8,000 followers on Telegram alone, only five days after they” lost access to Twitter, Facebook and the now-shuttered Parler in the aftermath of the Capitol riot.
“Groups like this easily found ways to move to smaller sites once Parler was taken out of their reach. And this exodus follows a logic of political polarisation and radicalisation. We’re bound to find less diverse and more radical user-produced content on these smaller sites,” he continues.
Similarly, Jillian C. York – free-speech advocate and Director for International Freedom of Expression at the Electronic Frontier Foundation – warns that “bans like this might be satisfying in the short term, and they might prevent short-term harm. But, one of the things that I have realised after moving to Germany is that banning Nazi symbols here hasn’t got rid of Nazis. Banning dangerous discourse simply makes it harder to spot.”
From a legal perspective, pushing far-right agitators off conventional social media and further into the dark web and encrypted messaging services deprives law enforcement of evidence.
“One interesting thing about the perpetrators on 6th January is that they were so open about their views on social media. Evidence like this is gold in court because it proves motive. If someone was Tweeting ‘let’s go hang Mike Pence’, then they won’t be able to claim that they were innocent bystanders in court,” explains Dr Heinze.
“This also makes it easier for police to track threats ahead of time – something the Capitol Police completely failed to do, as we know. There is a lot of concern about that.”
In turn, the banning of Parler in and of itself raises serious concerns about the power of private corporations, like Amazon and Google, to decide what counts as acceptable speech.
“While these debates have always been strong in the US because of the First Amendment, they are no longer a purely American concern, like they were twenty or thirty years ago. Even Angela Merkel criticised Twitter’s suspension of Trump’s account, straightforwardly in the name of free speech,” stresses Dr Heinze.
For Jillian C. York, the lack of transparency about the way these companies wield their power to police speech represents a grave danger to freedom of expression.
Often, social media companies censor speech globally in line with US or EU policy. “In the case of removing COVID-19 disinformation, that’s a good alliance.”
“But, on the other side you also see them going against the power of states that they don’t like. Things get really muddled when you look at the broader picture. Companies like Facebook align with the Turkish government, which is a powerful state and, frankly, a profitable market. So, social media companies will take down anything that is illegal in Turkey, although Turkey’s laws on speech are not aligned with human rights. It’s the same with Saudi Arabia. At the same time, you’ll see social media not complying with requests from Pakistan or Iran.”
For York, the solution is ethical self-regulation. “The first step is to create consistent rules, which let all users understand what is or is not acceptable. Whether you are a member of Hezbollah or you are the President of the United States, these are the lines you can’t cross.”
Is Silicon Valley ready to change?
“There will be guidelines. And, people will be kicked off. And, posts will be taken down. Ultimately, however, if people are clicking on certain things, Mark Zuckerberg doesn’t want to take them down, unless his advertisers are threatening to pull. Controversy – whatever gets the most clicks – drives profit,” Dr Heinze tells MDI.
Since its launch in 2016, the UK-based MDI partner Stop Funding Hate has pursued, with remarkable success, its pioneering strategy of empowering ordinary consumers to push brands to pull advertising from hateful outlets like The Daily Mail. But pressure on advertisers alone does not seem enough to force social media giants to take real action on hateful and dangerous speech.
According to Dr Bracho-Polanco, only government regulation will do the job. “Many voices, including those belonging to progressive stances, have expressed concern about possible curbing or even violation of the First Amendment were this to happen. Yet, I believe that, in the context of the US, regulation regarding social media content could be elaborated in a pragmatic and ethical manner, resorting to the notion that some online content or platforms might present a threat to national and international security.”
“Self-expression and freedom of opinion are indeed essential democratic values, but not when these disrespect or put at risk the freedom of others – and this is what we saw with much content uploaded on Parler,” he continues.
Jillian C. York is sceptical about any government policing of a medium that has enabled countless individuals around the world to voice legitimate dissent against authoritarian regimes, starting with the Arab Spring uprising in Tunisia in 2011.
What is more, there is no guarantee that state legislation would even work.
At best, Silicon Valley giants will stick with ‘geoblocking’. As MDI’s Eline Jeanné explained in a 2019 piece, “geoblocking acts as a sticking plaster” allowing social media companies to comply with specific governments “without truly tackling the issue” of hate, which is not confined by borders.
At worst, market incentives will prevail over legal obligations even in states where the likes of Facebook and Twitter claim to be complying with the law. For example, QAnon 2: Spreading Conspiracy Theories on Twitter – a recent report by Get the Trolls Out, an MDI-led project – found that German is the second most common language of QAnon-related Tweets, and that those tweets tend to use especially vitriolic anti-Semitic language, despite Germany’s notoriously restrictive Network Enforcement Act and equally stringent laws criminalising anti-Semitism.
Is there a solution?
For now, it seems that Dr Heinze is right about the absence of a perfect solution. With a comprehensive internet treaty unlikely to be agreed at the UN level anytime soon, state-level legislation, grassroots activism or some combination of the two are the only available options.
Either will only be effective insofar as someone is listening on the other side, at the top of these companies.
“The best way to ensure this is by fostering actual diversity in internal policy making. One of the things that I found interviewing people of colour who had worked at Facebook and YouTube, for my latest book Silicon Values: The Future of Free Speech Under Surveillance Capitalism, is that a lot of them felt as though they were tokenized in the decision-making process,” concludes York.