We all know how easy it is to create new email addresses and user accounts on certain websites. But social media, or at least Big Social Media, is supposed to be different, with the likes of Facebook, Instagram and Twitter purportedly working around the clock to remove fake users, fake posts and fake engagement.
However, while there’s little doubt that these companies make a public effort to limit fake and misleading activity on their networks, a growing body of evidence suggests that they aren’t enjoying much success. And as they fail to significantly reduce the illusory, misleading elements of their sites, it becomes apparent that social media increasingly functions – intentionally or not – to deceive the public into accepting a false version of reality.
This point was made painfully clear in a study published earlier this month. Authored by researchers at the NATO Strategic Communications Centre of Excellence in Latvia, it found that it’s surprisingly easy to purchase tens of thousands of comments, likes and views on Facebook, Twitter, YouTube and Instagram. Not only that, but the vast majority of such ‘astroturfed’ engagement is likely to remain online for weeks or even months, even when flagged by users.
Specifically, the researchers hired 16 companies (mostly based in Russia) to provide fake online engagement for 105 posts on the four aforementioned networks. They spent €300 ($330), which bought them 3,530 comments, 25,750 likes, 20,000 views and 5,100 followers. Most astonishingly, 80% of this fake activity remained online four weeks later, while 95 out of 100 accounts reported as fake were still online after three weeks.
In conclusion, the researchers wrote, “Overall social media companies are experiencing significant challenges in countering the malicious use of their platforms.”
The researchers also found that most fake social media engagement relates to commercial rather than political activity. However, at a time when the UK is holding a general election and the US is gearing up for another presidential one, the prevalence of such fakery is alarming, particularly when it’s so easy to post manipulative and misleading content.
The NATO study would appear to be the tip of the iceberg when it comes to fake social media activity. A 2017 New York Times investigation, conducted in conjunction with cybersecurity firm FireEye, found that “hundreds or thousands of fake accounts” regularly posted anti-Hillary Clinton messages on Facebook and Twitter in the run-up to the 2016 US election. Similarly, this year the Independent analysed tweets surrounding the May EU elections, finding that 12% of all tweets using hashtags supportive of far-right parties were automated, and that 6% of all tweets with political hashtags came from bots.
Again in 2017, research from the University of Oxford concluded that “the lies, the junk, the misinformation” of corporate and governmental propaganda is prevalent on social media and “supported by Facebook or Twitter’s algorithms.” In particular, it found that 45% of highly active Twitter accounts in Russia aren’t real people, but rather bots. Similarly, it also highlighted a campaign against Taiwanese President Tsai Ing-wen, which involved thousands of heavily co-ordinated accounts spreading Chinese propaganda.
In other words, social networks are platforms for creating fake people and fake worlds, where individuals are fed a distorted version of reality and lied to about what other people believe. Sure, there’s plenty of activity and engagement from real people on Facebook and Twitter, but there’s also a significant percentage of orchestrated activity, designed to mislead people into accepting a version of reality that doesn’t exist.
The social networks are well aware of this, and while they emphasize their efforts to limit fake activity, the sheer extent and regularity of false engagement indicates that they’re fighting a losing battle. In its most recent Transparency Report from November, Facebook revealed that it “disabled” 5.4 billion fake accounts between January and September 2019. This represents a 63.6% increase over the 3.3 billion fake accounts it removed in all of 2018.
Clearly, the fact that 5.4 billion fake accounts can sprout up so soon after 3.3 billion are removed indicates that masses of fake accounts are being created all the time. Facebook says it takes down most of these during the registration process, but given that it’s dealing with billions of fake accounts, only a small percentage need to get through for fake content to have an impact. Indeed, Facebook admits that 5% of its active monthly users (i.e. 120 million accounts) are fake, although critics have claimed the real figure is much higher.
Put differently, Facebook is engaged in an industrial-scale game of whack-a-mole, and it’s a game it’s losing. The point of fake accounts and fake engagement is not to remain on Facebook (or Twitter) in perpetuity (or even for a month), but to have a short-lived yet widespread impact. Think of all the fake news articles that ranked among the most popular news items during the 2016 election, or the posts from fake accounts during the current UK election claiming that a photo of a sick four-year-old boy was staged. The aim of such fake content is to change hearts and minds within a time frame of a few hours. It’s largely irrelevant whether Facebook, Twitter or YouTube takes accounts and posts down after a few days or a week.
“Fake engagement tactics remain a challenge facing the entire industry,” Facebook said in a statement in response to the recent NATO study. “We’re making massive investments to find and remove fake accounts and engagement every day.”
There’s no doubt that Facebook, like other social networks, is investing massively in removing fake accounts and engagement. But the evidence suggests that these endeavors aren’t entirely successful, and that social media regularly confronts its users with content and accounts that aren’t what they seem. Social media has become a tool for reframing our perception of reality, all too often making us act, think and speak in terms of things that aren’t actually real.