Results of the Study


Key Takeaways

BY MEASURING the topical and volumetric features of disinformation on Twitter, this study yielded several interesting and unexpected results:

Overall, most Twitter conversations relating to COVID-19 were negative. There were also significant levels of disinformation and conspiracy theories in the external URLs shared by the top 50 users (from a sample of 5,000). Finally, and most damning, this study uncovered an increase in accounts demonstrating bot-like behaviour.

For people interested in reading the full study, I have included a glossary of research terms. Please see below for in-depth explanations of the data and interactive graphs.


 
[Image: glossary of research terms]
 

Sentiment Study

To gain better insight into the conversations among all users, the following graph demonstrates the overall metrics for each keyword in terms of vertices, edges, and sentiment (total words: positive, negative, hostile/violent).

This graph highlights that overall sentiment was negative across all keywords except Hydroxychloroquine, which displayed more positive than negative sentiment.

The keyword QAnon was an outlier for negative and hostile/violent sentiment, with more than double the number of negative words compared to positive words. As an outlier, QAnon skewed the averages for the total number of positive, negative, and hostile/violent words.

A tweet's 280 available characters average out to approximately 55 words. Of those words, this study found that 3.04% were positive, 3.45% were negative, and 0.06% were hostile or violent. The next graph demonstrates the average percentage of each sentiment for each keyword. Again, the keyword QAnon was an outlier: 2.74% of total words were positive and 6.34% were negative. It also had the highest share of hostile/violent words, at 0.19%.
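For readers curious how such percentages can be computed, here is a minimal sketch of a lexicon-based count in Python. The word lists are illustrative placeholders drawn from the example terms discussed below, not the study's actual sentiment dictionary:

```python
# Illustrative word lists; the study's real sentiment lexicon is not reproduced here.
POSITIVE = {"positive", "safe", "protected", "support"}
NEGATIVE = {"fatal", "virus", "dies", "propaganda", "conspiracy", "evil"}
HOSTILE = {"hurt", "destroy", "kill", "hate"}

def sentiment_percentages(tweets):
    # Tokenize crudely on whitespace and strip common punctuation.
    words = [w.lower().strip(".,!?\"'") for t in tweets for w in t.split()]
    total = len(words) or 1  # avoid division by zero on empty input
    share = lambda lexicon: 100 * sum(w in lexicon for w in words) / total
    return {"positive": share(POSITIVE),
            "negative": share(NEGATIVE),
            "hostile/violent": share(HOSTILE)}

print(sentiment_percentages(["The vaccine is safe and I support it",
                             "This virus is fatal"]))
```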

The figure above reveals that general sentiment was neither overwhelmingly positive nor negative for the keywords COVID-19, China, and vaccine. However, sentiment for Hydroxychloroquine and QAnon displayed higher percentages of positive and negative words.

Tweets containing positive sentiment, across each of the five keywords, included terms such as “positive,” “safe,” “protected,” “support,” and “Trump.” Example tweets containing negative sentiment for COVID-19, China, vaccine, and Hydroxychloroquine included terms such as “fatal,” “virus,” “dies,” and “propaganda.” The keyword QAnon displayed high numbers of negative words such as “conspiracy” and “evil.” Finally, all five keywords contained hostile/violent sentiment, usually in the form of terms such as “hurt,” “destroy,” “kill,” and “hate.”

Disinformation Search

The next step was to sift through users to identify any disinformation or conspiracy theories. To do this, the top 10 networks (users) were ranked in descending order by their betweenness centrality; nodes with high betweenness centrality are important controllers of power or information. For ethical reasons, this study could not identify or explore individual users; therefore, the top uniform resource locators (URLs) were reviewed for disinformation and conspiracy theories.
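As an illustration of the ranking step, here is a minimal sketch using Python's networkx; the edge list is hypothetical, and networkx stands in for whatever network-analysis tool produced the study's figures:

```python
import networkx as nx

# Hypothetical directed edge list: an edge (a, b) means user a
# retweeted, mentioned, or replied to user b.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"),
         ("d", "e"), ("e", "b"), ("c", "e")]
G = nx.DiGraph(edges)

# Betweenness centrality counts how often a node sits on the shortest
# paths between other nodes; high values mark "controllers" of
# information flow between otherwise separate parts of the network.
bc = nx.betweenness_centrality(G)

# Rank users in descending order and keep the top 10.
for user, score in sorted(bc.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{user}: {score:.3f}")
```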

Of the 18 URLs mentioned within the top 50 networks, six originated from unreliable websites/blogs, and four of those contained disinformation and conspiracy theories. Moreover, of the four accounts that shared those URLs, three were flagged for bot-like activity. Therefore, of the 18 URLs connected to the top 50 users across all five keywords, 22.2% (four of 18) contained disinformation and conspiracy theories.

The figure below outlines the top 50 tweets and the URLs attached to them.

[Figure: the top 50 tweets and their attached URLs]

The following graph demonstrates the number of URLs, out of the 18, that contained disinformation and conspiracy theories. The 18 URLs were reviewed by analysing their publisher and content; some pieces of disinformation could not be traced to an original publisher.

In fact, two of the accounts that demonstrated bot-like activity shared the same deepfake of the North Korean and Russian leaders criticising American democracy. Furthermore, some of the problematic medical advice took the form of journal-style publications. However, these documents carried warning labels with the message: “this article is a preprint and has not been peer-reviewed. It reports new medical research that has yet to be evaluated and so should not be used to guide clinical practice.” The preprint also asked that news media not report its findings.

[Figure: number of the 18 URLs containing disinformation and conspiracy theories]

In-degree and Out-degree Search

After identifying the tweets that contained disinformation and conspiracy theories, this study analysed the in-degree and out-degree centrality of the URLs. This was essential for mapping how information is used within the top 50 networks. Figure 8 illustrates the in-degree centrality of the 18 URLs; the links containing disinformation and conspiracy theories are shown in red.

[Figure 8: in-degree centrality of the 18 URLs; disinformation links in red]

The graph above demonstrates the number of ties a URL receives. Here, a source containing disinformation and conspiracy theories received the highest number of mentions, retweets, or tags: the problematic URL was interacted with by 51 more people than the legitimate source with the highest in-degree.
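To make the two measures concrete, the sketch below reads in-degree and out-degree off a hypothetical networkx graph in which an edge into a URL records a mention, retweet, or tag of it, and an edge out of a URL records a user mentioned by the tweet that carried it:

```python
import networkx as nx

# Hypothetical edges: user -> URL means the user mentioned, retweeted,
# or tagged the URL; URL -> user means the tweet carrying the URL
# mentioned that user.
edges = [("user1", "urlA"), ("user2", "urlA"), ("user3", "urlA"),
         ("user3", "urlB"), ("urlA", "user4"), ("urlB", "user5"),
         ("urlB", "user6")]
G = nx.DiGraph(edges)

for url in ("urlA", "urlB"):
    # In-degree = ties received; out-degree = users mentioned in turn.
    print(f"{url}: in-degree {G.in_degree(url)}, out-degree {G.out_degree(url)}")
```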

Below, the graph demonstrates the out-degree centrality of the 18 URLs within the top 50 vertices, highlighting the URLs that originate at a vertex and mention other people. Here, a legitimate source had the highest out-degree centrality; however, the problematic URLs were still prominent. The problematic URL with the highest out-degree originated from a bot account and mentioned 14 other users.

[Figure: out-degree centrality of the 18 URLs]

Bot Search

The interactive pie chart illustrates the percentage of bot-like accounts found among the top 50 users. Analysis using Bot Sentinel found that 26% of the individual vertices were not real people. Again, bot accounts demonstrated exceptionally high in-degree centrality and shared disinformation and conspiracy theories.
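The underlying calculation is a simple share. The sketch below assumes hypothetical per-account labels rather than Bot Sentinel's actual output format:

```python
# 1 = flagged as bot-like, 0 = likely human; these labels are hypothetical
# but chosen so that 13 of the top 50 users (26%) are bot-like, matching
# the figure reported above.
labels = [1] * 13 + [0] * 37

bot_share = 100 * sum(labels) / len(labels)
print(f"Bot-like accounts among top 50 users: {bot_share:.0f}%")
```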