In her recent TED talk, UK journalist Carole Cadwalladr suggested that the Facebook–Cambridge Analytica scandal is a symptom of a wider malaise: the breakdown of democracy. Cadwalladr, who in 2018 broke the news about how Cambridge Analytica’s misuse of Facebook user data influenced the outcomes of the UK Brexit referendum and the US presidential election, stated that Facebook, Google, and Twitter (but mostly Facebook) all played a role in this breakdown by enabling their platforms and users to be manipulated – unwittingly or otherwise. This manipulation took the form of false or misleading advertising and fake news published on the social platforms: content that was targeted at specific audiences, often hateful, and, in the case of advertising, often paid for by unknown individuals or entities.
The question now is: Can the genie be put back in the bottle? That is, do the technology companies that Cadwalladr holds responsible for diluting the power of democracy have the will to make the changes to their platforms, their business models, and their operating procedures that are needed to restore the true value of a vote?
There is no doubt that these companies have the resources to develop the tools and institute the internal processes that will be required on a global scale – they are billion-dollar internet companies, and they can move quickly. But as an industry, social platforms need to fully commit to the Herculean task of recognising and taking down harmful content on their platforms, including fake news and false or misleading advertising that contains hate speech, even as, like the heads of the Hydra, such content continues to regenerate. They also need to effectively identify and ban all individuals and organisations responsible for spreading that content – not just the most prominent ones.
Social platforms have already committed to identifying to users the individuals or organisations that have paid for political advertising. But, ultimately, they must ensure that all the efforts they are making are more than just lip service. Facebook’s continued growth will depend on maintaining the confidence and trust of brands, especially given that the company’s number of active advertisers has levelled off over the past year and a half (see Figure 1).
Just a month ago, Facebook put out a call for governments to regulate social media platforms. But given that it might take some time for governments to put that regulation in place, perhaps Facebook, Google, and Twitter should take the initiative and put together a team of smart people – a council – sourced from within their organisations, the wider industry, and their user bases, to jointly develop a code of conduct specifically for social platforms. Ideally, such a code would outline, among other things, what constitutes harmful content and misuse of social platforms, along with the penalties incurred for noncompliance. Compliance with the code would need to be monitored by an independent third party, and the code itself could perhaps eventually be enacted in legislation. And again, the social platforms will need to push themselves to make the code as strong as it can possibly be, even if it means that they suffer a little.
The situation is not irretrievable, but it is becoming imperative that the social platforms not only act but act meaningfully.