Social Media’s Role in Turning Lies to Likes

Spreading fake news is becoming increasingly easy for those who seek to manipulate and control their audience. With over 4.8 billion users worldwide, social media, an unfathomably enormous platform for communication, has become the ideal hub for disinformation. In 1969, poet and rock icon Jim Morrison expressed his displeasure with the media’s ability to control public perspectives and opinions, declaring, “Whoever controls the media [inevitably] controls the mind.” Given that 85 percent of internet users have been duped by fake news on social media at least once, this increasingly relevant statement should force us to ask the hard questions: how is social media controlling us, how can we avoid being negligent in our news consumption, and how can our social media platforms be held accountable?

The Alarming Power of Social Media Algorithms

The way various social media platforms can grab our attention and hold it in a vice grip for hours is neither coincidental nor unexpected. Simple yet effective algorithms are the engines driving the endless content appearing in our feeds every minute. These algorithms can be split into two categories: content processors and content propagators. Processing algorithms are responsible for sorting, annotating, and transcribing videos and posts, whereas propagating algorithms handle the recommendation and delivery of content to users.

This figure is based on a table by Arvind Narayanan, which explains primary social media algorithms.

These algorithms each work in their own way to keep us on social media, but those primarily responsible for captivating and engaging us are colloquially known as recommender systems. These recommender algorithms sift through millions of pieces of content and determine which videos and images to display to individual users. Recommenders follow a specific pipeline that gradually narrows down the options: first removing undesirable item categories, then ranking the remaining content by how likely a user is to interact with it.

Pipeline of a modern recommender system used by today's social media apps.
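The filter-then-rank pipeline described above can be sketched in a few lines of Python. Everything here – the field names, the engagement score, the blocked categories – is a hypothetical illustration of the two-stage idea, not the code of any real platform.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    category: str
    predicted_engagement: float  # model's estimated chance of a like/share

def recommend(posts, blocked_categories, k=3):
    # Stage 1: filter out undesirable item categories.
    candidates = [p for p in posts if p.category not in blocked_categories]
    # Stage 2: rank what remains by how likely the user is to interact with it.
    ranked = sorted(candidates, key=lambda p: p.predicted_engagement, reverse=True)
    return ranked[:k]

feed = recommend(
    [Post(1, "news", 0.9), Post(2, "spam", 0.99), Post(3, "sports", 0.5)],
    blocked_categories={"spam"},
)
# The spam post is filtered out; the rest are ordered by predicted engagement.
```

Note that ranking purely by predicted engagement is exactly the mechanism that lets highly engaging fake content rise to the top of a feed.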

It is this recommender system that is responsible for the virality of the fake news you see on your social media platforms. Due to the algorithm’s selection process, fake videos and images are often those that generate the most engagement (likes, re-posts, comments, shares), and as a result their distribution is more widespread. A 2018 study led by researchers at the Massachusetts Institute of Technology (MIT) found that fake news was spread primarily by people rather than bots, travelled roughly six times faster than true stories, and was 70 percent more likely to be shared, ultimately amplifying readers’ misconceptions.

Elevating the Effect of Fake News

Recommender algorithms may be largely responsible for the spread of fake content, but additional effects exacerbate our current age of disinformation. One example is the echo chamber phenomenon, in which recommender algorithms serve users isolated information in line with their existing beliefs and views, causing them to consume a distorted, one-sided version of the news. Large numbers of people can become entangled in echo chambers built on fake news, creating polarized misinformation environments online. A study published in the Proceedings of the National Academy of Sciences (PNAS) concluded that users are more likely to share content reflecting a one-sided narrative and to ignore other viewpoints, supporting the idea that exposure to an echo chamber can make a user more susceptible to recurring fake news. This echo chamber effect is exacerbated by companies through a practice known as ‘identity marketing’, in which companies intentionally take a polarized, strong stance on political or controversial topics in order to draw in additional readers or viewers. This tactic, coupled with the enormous number of new players in the news market, is a significant reason for the declining clarity of online news.

Reactionary tactics such as clickbait and sensationalism further augment the way fake news spreads. Sensationalism sacrifices accuracy through exaggeration to generate a reaction online, while clickbait adds an element of dishonesty to the advertisement of a video, page, or image in order to “get clicks.” These tactics are intertwined with fake news in the way they stray from the truth to garner viewership, and they are increasingly used to draw users into echo chambers of fake news. Cognitive research demonstrates that by toying with emotions, sensationalism and related tactics can promote a fervent belief in fake news.

Ongoing Crisis

Through these algorithms, systems, and effects, fake news on social media and other online forums remains relevant worldwide. One major target of fake news continues to be the COVID-19 pandemic. A survey run in early 2023 concluded that between 30 and 45 percent of individuals had been misled by false news regarding the virus.

This graph shows, for a number of countries worldwide, the percentage of individuals who witnessed false or misleading information about key topics over one week in February 2023.

Misinformation has also permeated recent world news, including the Israel-Palestine conflict. Widespread graphic footage depicting the war in the Middle East can now be seen online – but not all of it is real. A TikTok video promoting fake conspiracy theories about the origins of the Hamas and Israel attacks amassed over 300,000 views in mere days. Footage taken from the popular video game “Arma 3” was reposted to social media with a description claiming it was live video of the war, and over 100,000 people viewed it. Fake news about the conflict is appearing in such excess on the social media platform ‘X’ that owner Elon Musk has been warned of potential penalties if the content is not purged.

Is Fake News Fixable?

The misinformation and disinformation we see on social media may not be entirely removable, but they are combatable. Several initiatives are beginning to be implemented by social media applications as a means of decreasing fake news traffic. Using fact-checking tools such as Media Smarts, AFP, and Snopes is one preventative technique the Government of Canada recommends for social media users. These fact-checking tools debunk news websites and verify their accuracy, but they are less effective on social media apps because they only check a primary source – clearing up reposts is more challenging.

Solutions for avoiding fake news on social media cannot be contingent on a viewer’s ability to discern truth from lies – the answer lies within the apps we use. Mandates need to be established for technology and software that can effectively sort out false information before it goes viral. A Stanford study found that to adequately curb the spread of fake news, apps must limit person-to-person distribution. Nevertheless, if some form of censorship or limitation is established to prevent the spread of fake news, it must not cross the line of our right to freedom of speech online. It is essential that we preserve the integrity of our ability to communicate effectively and safely on social media while maintaining an authentic space for interaction. This can only be accomplished through changes to the way algorithms allocate content, or the implementation of new technology that catches fake news before it can be shared. Ultimately, if we want fake news to become old news, and social media to transform into a reliable source, it will take a cumulative effort from applications and users alike.
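Why limiting person-to-person distribution works can be illustrated with a toy cascade model (an assumption of mine, not the model the Stanford study used): if each viewer reshares a post to a fixed number of others, capping how many hops a reshare can travel collapses the post’s total reach.

```python
def total_reach(fanout, depth_cap):
    """Toy model: one original poster; each sharer passes the post to
    `fanout` new people, and the platform stops resharing after `depth_cap` hops."""
    reach, sharers = 0, 1
    for _ in range(depth_cap):
        reach += sharers * fanout
        sharers *= fanout
    return reach

uncapped = total_reach(fanout=3, depth_cap=6)  # unrestricted cascade: 1092 people
capped = total_reach(fanout=3, depth_cap=2)    # person-to-person limit: 12 people
```

Because reach grows geometrically with each hop, even a modest cap on reshare depth shrinks a viral cascade by orders of magnitude without touching the original post.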
