Trust in the media during COVID-19

In a post-truth society, where objective facts have become less influential in shaping public opinion, social media users are exposed to avalanches of false information. As media giants continue to fend off regulation, individual users remain vulnerable to harmful online content. Considering this, many studies have started offering tangible solutions to combat disinformation, usually by way of intervention messages and fact-checking systems.

Quality news has been impacted by disinformation during the COVID-19 pandemic. In particular, credible and trustworthy news sources have become harder to identify as problematic information continues to spread across social media. Some researchers have found that exposure to disinformation, or fake news, can create long-lasting false beliefs propagated by dangerous echo chambers. When people are exposed to misleading news sources, they have a harder time judging the quality of the content and may become biased in their assessment of the information.


Fact-checkers, warnings, and other algorithmic changes may help limit the spread of disinformation and conspiracy theories on social media. Algorithmic changes include technologies that suppress fake news and conspiracy theories by learning from labelled samples. Through these samples, machines can prevent disinformation and conspiracies from rising to the top of search results; they are also useful for understanding the common characteristics of disinformation campaigns.
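The "learning from samples" idea can be made concrete with a toy sketch. Everything below — the sample texts, the word-frequency scoring rule, and the demotion threshold — is hypothetical and invented for illustration; real platforms use far more sophisticated machine-learning systems.

```python
# Illustrative sketch only: a toy classifier that learns word frequencies
# from labelled samples and demotes likely disinformation in a ranking.
from collections import Counter

def train(samples):
    """Count word frequencies per label ('disinfo' or 'credible')."""
    counts = {"disinfo": Counter(), "credible": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def disinfo_score(text, counts):
    """Fraction of words seen more often in disinformation samples."""
    words = text.lower().split()
    flagged = sum(
        1 for w in words
        if counts["disinfo"][w] > counts["credible"][w]
    )
    return flagged / len(words) if words else 0.0

def rank(items, counts, threshold=0.5):
    """Push items scoring above the threshold to the bottom of the feed."""
    return sorted(items, key=lambda t: disinfo_score(t, counts) > threshold)

# Hypothetical labelled samples and feed items.
samples = [
    ("miracle cure doctors hate this", "disinfo"),
    ("vaccine trial results published in journal", "credible"),
]
counts = train(samples)
feed = ["miracle cure doctors hate this one trick",
        "vaccine trial results published today"]
ranked = rank(feed, counts)  # credible item surfaces first
```

The same frequency counts that drive the demotion can also be inspected directly, which is the second benefit noted above: the most over-represented words in the "disinfo" counter are a crude fingerprint of a campaign's common characteristics.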

A 2016 study found that prevention messages warning users that their news source might contain false information discouraged them from sharing it.

When presented with an intervention message, participants viewed the article as having a negative consequence and were therefore less likely to share it. These intervention messages might help mitigate the spread of disinformation and conspiracy theories; however, they demand high exposure to false information before being put into place. For example, some articles are flagged as containing disinformation only after many people have reported them.
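A minimal sketch makes the drawback concrete: a simple report counter (the threshold value and warning text here are hypothetical) attaches an intervention message only after enough users have already seen and flagged the article.

```python
# Illustrative sketch only: community flagging with a report threshold.
# The threshold and message text are assumptions, not any platform's policy.
class FlagTracker:
    def __init__(self, threshold=10):
        self.threshold = threshold  # reports needed before a warning appears
        self.reports = {}

    def report(self, article_id):
        """Record one user report against an article."""
        self.reports[article_id] = self.reports.get(article_id, 0) + 1

    def warning_label(self, article_id):
        """Return an intervention message only once enough reports exist."""
        if self.reports.get(article_id, 0) >= self.threshold:
            return "Warning: this article may contain false information."
        return None
```

Until the threshold is crossed, `warning_label` returns nothing, so every reader before that point sees the article unlabelled — precisely the exposure problem described above.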

Instead of leveraging the crowd, a perhaps more viable solution would be the introduction of fact-checkers. Under this system, fact-checkers can identify which articles are gaining traction and what information is being intentionally published to sway public opinion. As quality news has become harder to identify, these intervention messages and community flagging systems can work to limit the spread of false information in the COVID-19 era and beyond.

Video: UNESCO

Within the current news ecosystem, disinformation can spread rapidly. This rapid spread has affected individually formed networks of trust, which are usually peer-to-peer and found on social media. Content created or shared by family members within these networks of trust helps propel the spread of misinformation.

Considering that people are limiting their news intake to sources and people they trust, high-quality information must be widely available. Several news organisations, such as The New York Times, The Washington Post, and the South Carolina Courier, have introduced a metered paywall for online news readers. While those who can afford to invest in quality journalism receive premium content, those who cannot afford a subscription must rely on the open internet and social media. Of course, there are news organisations, such as the BBC and the Guardian, that provide high-quality news online for free.

The dilemma between high-quality content behind a paywall and unregulated content for free helps illustrate the current infodemic and how disinformation and fake news sources can sway public opinion. During the COVID-19 era, with quality news harder to identify, offering social media users credible and high-quality news sources can better protect them from disinformation and fake news.

Through my own research, I have discovered disinformation campaigns disguised as clickbait, a term for online content created to generate 'clicks' or revenue. Since the start of COVID-19, many of these articles have exploited the sale of medication, supplements, tests, and other procedures that are not vetted or confirmed by the medical community. While most people have access to the internet, few have the medical knowledge or the capacity to separate medical fact from fiction.


Efforts to hold media giants accountable for disinformation and violence have fallen on deaf ears; the current COVID-19 crisis is a testament to that. I therefore argue that researchers need to begin studying the behavioural, cognitive, and emotional sides of disinformation in the media to truly combat it. By understanding why people interact with media in the ways they do, governments will have a clearer picture of how to solve the issue.

Some governments are starting to tackle disinformation through legislation, task forces, bills, and media literacy initiatives, but individual users have still had to tackle the bulk of the issue themselves. This problem illustrates a unique challenge: how can social media users be warned about the potential harms of disinformation online?

One thing is certain: the COVID-19 pandemic has revealed the dark underbelly of social media, and the research conducted on this topic so far can only be considered a first step towards solving the issue. Media distrust will continue to flourish until there is ample research to understand the behaviours and reach of disinformation hubs before they get out of control. Along with battling a global pandemic, society needs to take the necessary measures to protect itself from an increasingly dangerous infodemic.