It is time to get tough with the social media giants

Maria Miller

GUEST POST: Maria Miller is a former Culture Secretary and the MP for Basingstoke. Follow on Twitter

I want 2021 to be the year that we finally grasp the nettle of online abuse – to create a safer, more respectful online environment, one that will lead to a kinder politics too. The need has never been greater. Abuse, bullying, and harassment on social media platforms are ruining lives, undermining our democracy, and splintering society.

As an MP, I have had to become accustomed to a regular bombardment of online verbal abuse, rape threats, and even death threats. In this I am far from alone. Female colleagues across the House are routinely targeted online with abusive, sexist, threatening comments. As Amnesty has shown, black female MPs are the most likely to be subjected to unacceptable and even unlawful abuse.

And while women and people from an ethnic minority background are more likely than most to receive abuse online, they are not alone. Hate-filled trolls and disruptive spammers consider anyone with a social media presence to be fair game: one in four people have experienced some kind of abuse online, and online bullying and harassment have been linked to increased rates of depression, anxiety, and suicide.

While the personal impact of online abuse is intolerable, we must not underestimate the societal effect it is having. Research by the think-tank Compassion in Politics found that 27 per cent of people are put off posting on social media because of retributive abuse. We cannot have an open, honest, and pluralist political debate online in an atmosphere in which people are scared to speak up.

Which is why I am working cross-party with MPs and Peers to ensure that the upcoming Online Harms Bill is as effective as possible in tackling the scourge of online abuse.

First, the Bill must deal with the problem of anonymous social media accounts. Anonymous accounts generate the majority of the abuse and misinformation spread online, and while people should have the option to act incognito on social media, the harm these accounts cause must be addressed.

I support a twin-track system: giving social media users the opportunity to create a “verified” account by supplying a piece of personal identification, and the ability to filter out “unverified” accounts. This would give choice to verified users while continuing to offer protection to those, such as whistleblowers, who want to access social media anonymously.
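To make the twin-track idea concrete, here is a minimal, purely illustrative sketch in Python of how an opt-in filter for unverified accounts might work. The account and post structures, and the visible_posts function, are hypothetical and do not reflect any platform’s actual systems.

from dataclasses import dataclass
from typing import List

@dataclass
class Account:
    handle: str
    verified: bool  # True if the user has supplied a piece of personal identification

@dataclass
class Post:
    author: Account
    text: str

def visible_posts(feed: List[Post], hide_unverified: bool) -> List[Post]:
    # Return the posts a reader should see, honouring that reader's own filter setting.
    if not hide_unverified:
        return feed
    return [post for post in feed if post.author.verified]

# A reader who opts in to the filter sees only posts from verified accounts;
# anonymous accounts still exist and remain visible to everyone else.
feed = [
    Post(Account("@anonymous_whistleblower", verified=False), "Posted incognito."),
    Post(Account("@verified_user", verified=True), "Posted under a verified identity."),
]
print([p.text for p in visible_posts(feed, hide_unverified=True)])

The point of the design is that the choice sits with the reader: anonymous accounts are not removed from the platform, they are simply hidden from users who prefer not to see them.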

The public back this idea. Polling by Opinium for Compassion in Politics reveals that 81 per cent of social media users would be willing to provide a piece of personal identification (most probably a passport, driving licence, or bank statement) to gain a verified account. Nearly three in four (72 per cent) believe that social media companies need to take a more interventionist role to wipe out the abuse on their platforms.

Of course, this approach would need to be coupled with enforcement, and I believe that can be achieved by introducing a duty of care on social media companies, along the lines suggested in the Government’s White Paper.

For too long, they have escaped liability for the harm they cause by citing legal loopholes, arguing that they are platforms for content, not producers or publishers of it. The legal environment that has facilitated social media companies’ growth is not fit for purpose – it must change to better reflect their previously unimaginable reach and influence. Any company that sells a good to a customer already has to abide by health and safety standards, and there is no reason to exempt social media companies. Any failure by those companies to take effective measures to limit the impact of toxic accounts should result in legal sanctions.

Alongside a duty of care, we need more effective laws to give individuals protection, particularly when it comes to the posting of images online without consent. Deepfakes, revenge pornography, and upskirting are hideous inventions of the online world. I want new laws to make it a crime to post, or threaten to post, an intimate image without consent, and for victims to be offered the same anonymity as others subjected to a sexual offence, so that the law no longer has to play continual ‘catch-up’ as new forms of online abuse emerge.

Finally, the Government should make good on its promise to invest an independent organisation with the power and resources to regulate social media companies in the UK. All the signs suggest that Ofcom will be asked to undertake that role, and I can see no problem with that proposal as long as the regulator is given truly wide-ranging and independent powers, and personnel with the knowledge to take on the social media giants.

In making these recommendations to Government, my intention is not to punish social media companies or to stifle online debate. Far from it. I want a more respectful, representative, and reasonable discourse online. So, let’s work together over the coming 12 months to make this Bill genuinely world-leading in the protection it will create for social media users, the inclusivity it will foster, and the respect it will engender.

If you have ideas for the group or would like to get involved, please email us.

This piece was written for ConservativeHome.com

The consequences of ‘gagging’ Trump

GUEST POST: Mario Creatura is Head of Strategic and Digital Communications at Virgin Money. Follow on Twitter. Connect on LinkedIn

Late on Wednesday evening, Facebook chose to ban President Trump from its platform and from Instagram indefinitely – at the very least for the duration of his presidency.

Twitter temporarily froze his account, but then took the more drastic decision of banning him permanently on Friday. Given that his words directly led to the violence on Capitol Hill, who could blame them for taking this potentially preventative action?

While social media companies have for some time now been encouraged to remove accounts perceived to be harmful or criminal, this is nevertheless a watershed moment for the core definition of these organisations – one that will shape the role they and regulators play in curating our digital world.

This could not be more important. It all centres on the ongoing debate about whether social media companies are ‘publishers’ (with an editorial policy akin to a traditional newspaper’s) or ‘platforms’ (passive hosts through which any and all content can be shared).

For years now they have maintained the façade that they are platforms – in short, that they are not to blame for much of the biased, twisted material shared through their services. But if they are making choices about whom to ban, what content is permissible, and what action is justified in the policing of their sites, then their argument quickly deteriorates to the point of ridiculousness.

This is not a semantic, academic debate for media lawyers. In late November last year, Prince Harry sued the publishers of The Mail on Sunday over a story claiming that he had fallen out of touch with the Royal Marines. If Facebook is a platform, then they are broadly protected from similar lawsuits. If they are acknowledged to be a publisher, that changes the ballgame entirely: it would leave them open to such libel actions and could remove them from the protections of Section 230 of the Communications Decency Act.

The banning of President Trump from social platforms will likely have a huge impact on clarifying this debate. Social media companies are undeniably taking an editorial stance, one that many will agree with in this instance. But once that premise is accepted, how can we object to future judgements that we are less keen on?

Too little, too late?

In this very specific situation, what will the impact of Facebook and Twitter’s decision be on Trump’s advocates?

His removal from these platforms takes away his primary means of communicating with some of his increasingly aggressive base of supporters. One possibility is that, over time, the ban will hurt him and his populist philosophy by making him seem unconnected and ineffective. They could think: ‘If Twitter can silence the great Trump, is he really the all-powerful leader we think he is?’

The alternative, far more dangerous path is that it will further embolden his fanatics. In that scenario, the elite, wealthy tech giants are painted as being in hock to the out-of-touch Democrats, so terrified of Trump speaking the truth that they will do anything to silence him. ‘They stole the election, now they’re trying to gag him!’ In this version of events, where do these people go? Do they continue to spout their views on mainstream channels, without an obvious leader to corral them?

The editorial decisions made by social media companies could quite feasibly create a digital Hydra – they can try to cut off the head, but many more will grow in its place, spawning yet more hyper-partisan, populist campaigners to join his already large following of loyal lieutenants.

After all, it is simply too late to punish Trump now by removing his bully pulpit. He is on his way out and, frankly, the damage has been done. And he has not done it alone: dozens of Senators, Congressmen, political staffers, and loyal media outlets have stoked the rhetoric that led to the violence in DC. It has already spread too far to be halted by simply banning Trump.

What’s next?

While Trump’s gagging on social channels sends a clear signal that the tech giants are taking their curating role seriously, it needs to be more than a Democrat-wooing PR exercise. Lawmakers and the press urgently need to take personal responsibility for the content they individually publish, whether or not digital companies are finally identified as publishers. We simply cannot wait yet more years for this debate to play out, or for social media companies to regulate free expression retrospectively.

For one: it will cause resentment of the social channels among the side that perceives itself as oppressed. If Trump is censored by Twitter, then Trump supporters will turn their guns on Twitter.

For another: social media companies are significantly more adept than legislators at adapting to the shifting needs of the digital sphere. There is already a fear that any attempt by legislators to regulate social media will be outdated and irrelevant by the time the lengthy legislative process is complete.

Whose job is it to police the digital police, if they exist beyond traditional borders with little meaningful accountability?

The decision to ban Trump has already unleashed waves of criticism – some arguing that it’s an attack on free speech, others that it’s a more serious assault on democratic institutions. That pales into insignificance when compared to the mass of calls for an entirely reasonable principle: fairness. Many are calling for Twitter to ban Ayatollah Khamenei for the same reasons as Trump – will social media companies be able to operate their content moderation policies consistently?

It took Twitter three days to remove a post from the Chinese Embassy trying to spin justifications for the Uyghur genocide – do they have the capacity to apply their policies fairly? The pressure on them to be consistent, in both speed and judgement, will only keep growing.

Trump may have done most to create an environment ripe for sedition, but many agents played their part in advancing it. Obfuscating social media companies, slow legislators, and partisan communicators must all share in the blame for last Wednesday’s violence.

For that accountability to happen, influencers need to get to grips with their responsibility for the consequences of their own content, and we all need to understand the true role of the social media giants.

If you have ideas for the group or would like to get involved, please email us.

This piece was written for Influence.