Social media and fake news: a 2020 round-up

2020 has been a wild ride and with its highs and lows being largely experienced and documented online, it’s had us turning to social media more than ever before. In fact, UK adults were, on average, spending a record high of over four hours a day online during April 2020 – we can’t be the only ones regretting not buying shares in Zoom, right?

In between the photos of banana bread and speculative posts about what really happened to Carole Baskin's husband, the uncertainty of life and the extra time spent on social media have meant that fake news is a growing issue.

In fact, it's such an issue that a college student from York fooled us all with a social experiment: he set up a fake account claiming that Woolies was making a return, to demonstrate how quickly fake news can gain social momentum. It trended so fast that people were planning their pick & mix before it was debunked and he owned up to it being a media project. The scary thing is, it was believed without a second thought.

Whilst each social channel has a very different content offering, there is a common theme of storytelling across them all and each platform has had to step up to monitor and police fake news.

Some content can be easy to see through – when your aunty is announcing the next lockdown on Facebook, it's pretty easy to work out that she doesn't have Boris on direct dial – but when it's statistics? Things that could actually make sense? Something written by a (questionable) media outlet?

That’s where fake news comes in.

As marketers, it's easy to forget that reading between the lines and knowing that fake news does the rounds is simply part of the job. For the wider world, though, the concept of 'fake news' tends to come with a meme-like association with Donald Trump's tweets.

With fake news widespread and largely unavoidable on social media, we've looked at the measures each platform has put in place to help combat the spread of misinformation in 2020.


Facebook

After the Cambridge Analytica scandal, which broke in 2018, Facebook made a huge effort to reduce the spread of misinformation in the run-up to the 2020 US election. They prevented new political ads from running in the final week, created a voting information centre on the platform and increased their safety and security team to 35,000 staff members.

Their introduction of third-party fact-checkers allows them to limit the spread of other fake news – some of which is financially motivated, with hoaxes shared on pages that masquerade as legitimate news sites – and they have made it harder for people posting fake news to buy ad space.

Facebook is also continuing to work with the News Literacy Project and on their News Integrity Initiative to help people on Facebook make informed decisions about the content they read, trust and share.


Twitter

Twitter has worked hard in 2020 to eliminate fake news on the platform. A fact-check feature, rolled out in October 2020, labels misleading tweets for all users, and the platform also warns users before they share information that is perceived as false.

The fact-check feature rolled out just in time for the 2020 US election, with the intention of stopping a presidential candidate from claiming a premature victory. The result? Trump's own 'victory' tweets were labelled as misleading by the platform. Awkward.

Twitter also worked to remove harmful content about the coronavirus – specifically content that "goes directly against guidance from authoritative sources of global and local public health information". This means that content telling you to inject yourself with Dettol to cure Covid will be deleted before anyone has a chance to give it a go.


Instagram

In between every wannabe travel blogger posting snaps from their 2016 gap year in Thailand, fake news continues to be a big problem for Instagram. To combat this, Instagram introduced a new fact-checking feature in 2019, which allows users to report posts they believe contain fake news. Once flagged, the post is verified by a third-party fact-checker. Should a post be marked as containing fake news, it is down-ranked in the algorithm to limit its reach.

More recently, Covid-19 accounts were removed from account recommendations and Covid-19 content was removed from the explore page, unless posted by someone in the biz – i.e. a credible health organisation or government body.


LinkedIn

Between January and June this year, LinkedIn removed 22.8k pieces of misinformation – a lot, considering most people only use the platform for job searching or to follow like-minded people in their industry. Whilst LinkedIn uses AI to detect misleading content, reports from members are crucial to getting it taken down as quickly as possible.


YouTube

With more time at home, there was a huge spike in video content consumption in 2020 (where else would we learn how to make a sourdough starter?) and this trend shows no sign of slowing.

Fact-check boxes, added in September 2020, mean search results now flag which information is legitimate before users watch a video. The platform has also pledged to remove Covid-19 content which isn't from an official source or which contradicts official guidelines – so far, over 200k videos have been removed.


Snapchat

Snapchat's setup, in which content is deleted after 24 hours as standard, prevents posts from 'going viral'. Whilst this doesn't stop peer-to-peer sharing of false information, the volume of misleading content being shared is likely far lower, particularly as there is no public newsfeed.

With the Discover section of the app closely monitored by Snapchat staff, who also carefully consider which brands they partner with, it is very much a curated and regulated source of information.


TikTok

It turns out TikTok isn't just for teens! Even Matt Hancock got involved with the 'stay home, protect the NHS' messaging (admittedly this was filmed whilst he was stood outside his workplace).

TikTok has hired over 10,000 staff globally to moderate content and enforce its policies on misleading content. In April, in-app reporting was added so people can flag anything they believe to be misleading. Covid aside, TikTok has also banned political advertising to 'prevent the spread of misinformation'.

Although it’s clear (and good!) to see that social media platforms have stepped up their collective efforts to combat fake news, they still have some way to go to eradicate the problem entirely.

Written by:

Lizi Lege, PR & Social Media Manager

