Packages

In order to run these analyses you will need tidyverse for data wrangling, rtweet for getting the Twitter data, tidytext for working with text, knitr for tidy tables, igraph and ggraph for making pretty network plots, and viridis for the colour scales used in the plots.

In order to get data from Twitter you will need to have a Twitter account and gain access to Twitter's API. There are instructions for doing so here.

library(tidyverse)
library(knitr)
library(tidytext)
library(rtweet)
library(igraph)
library(ggraph)
library(viridis) # for scale_fill_viridis(), used in the plots below
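
The analyses below assume the tweets are already sitting in an object called tweets. I won't rerun the search here, but if you're following along, the call would look something like this (the exact query and n are illustrative, not necessarily what I originally ran):

# not run: search for recent tweets containing either hashtag.
# 18000 is the most you can get in a single call unless you also
# set retryonratelimit = TRUE
tweets <- search_tweets("#GoT OR #ForTheThrone", n = 18000, include_rts = FALSE)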

Data wrangling

We need to do a little bit of tidying as there's a lot of data and we don't need it all. I'm also going to add a tweet counter, which will help when we move between wide and long-form. rtweet provides a huge amount of data, more than we're going to use in this example, so have a look through to see what you have access to; there are some great examples of how this data can be used here and here.

In the following code, I have added an identifier column tweet_number and then selected text (the actual text of the tweets), created_at (the timestamp of the tweet), source (iPhone/Android etc.), followers_count (how many followers the accounts the tweets came from have), and country (like lang, this is the country specified on the account, not necessarily the country the tweeter is in).

dat <- tweets %>%
  mutate(tweet_number = row_number()) %>%
  select(tweet_number, text, created_at, source, followers_count, country) %>%
  as_tibble()

The first thing we need to do is tidy up the text by getting rid of punctuation, numbers, and links that aren’t of any interest to us. We can also remove the hashtags because we don’t want those to be included in any analysis.

dat <- dat %>%
  mutate(text = str_replace_all(text, "[^\x01-\x7F]", ""), # remove non-ASCII characters (emoji etc.)
         text = str_replace_all(text, "#GoT", ""), # remove the hashtags we searched on
         text = str_replace_all(text, "#ForTheThrone", ""),
         text = str_replace_all(text, "\\.|[[:digit:]]+", ""), # remove full stops and numbers
         text = str_replace_all(text, "https|amp|t\\.co", "")) # remove link fragments (note the escaped dot in t\\.co)

Time series

We can plot when the tweets were sent. Right now this is somewhat uninteresting; however, when GoT actually starts you can use this to see the spikes in tweets as people watch each episode live (different timezones will muddle this a little; you could perhaps filter by country, as in the sketch below).

ts_plot(tweets, by = "1 hours")
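
For example, restricting the plot to accounts that list a particular country might look like this (a rough sketch; country is self-reported and often blank, and "United States" is just an illustrative value):

tweets %>%
  filter(country == "United States") %>%
  ts_plot(by = "1 hours")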

You can change the time interval with the by argument and you can also change the time zone. ts_plot creates a ggplot object so you can also add the usual ggplot layers to customise appearance.

ts_plot(tweets, by = "10 mins", tz = "GMT") +
  theme_minimal() +
  scale_y_continuous(name = "Number of tweets")

Tidy text and word frequencies

First, we can produce frequency plots for words used in all tweets to see which words are used most often in #GoT and #ForTheThrone tweets. To do this, we have to create a tidy dataset, just like we do when working with numerical data. We're going to use the unnest_tokens function from tidytext, which will separate each word onto a new line, similar to using gather (or pivot_longer as it will soon be known). Helpfully, this function will also convert all of our words to lower case, which makes them a bit easier to work with.

The second part of the code removes all the stop words. Stop words are words that are commonly used but are of no real interest, for example function words like "the", "a", "it". You can make your own list but tidytext helpfully comes with several databases. Look at the help documentation if you want to know more about these or change the defaults.
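
If you want a sense of what's in there, stop_words is just a data frame with a word column and a lexicon column telling you which database each word comes from:

# how many stop words does each bundled lexicon contribute?
stop_words %>%
  count(lexicon)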

# create tidy text

dat_token <- dat %>%  
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = "word")

We can plot the 20 most frequent words used in all the tweets.

dat_token %>%
  na.omit() %>%
  count(word, sort = TRUE) %>%
  head(20) %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(x = word, y = n)) +
  geom_col() +
  coord_flip()

There are quite a few words here that aren't that helpful to us, so it might be best to get rid of them (essentially we're building our own list of stop words).

custom_stop <- c("gameofthrones", "hbo", "season", "game", "thrones", "episode", "marathon", "forthethrone", "im", "tco", "mountaindew", "mtndewsweepstakes")

dat_token <- dat_token %>%
  filter(!word %in% custom_stop)

Now we can try plotting the words again and make them pretty.

dat_token %>%
  na.omit() %>%
  count(word, sort = TRUE) %>%
  head(20) %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(x = word, y = n, fill = word)) +
  geom_col(show.legend = FALSE) +
  coord_flip() +
  scale_fill_viridis(discrete = TRUE)

To be honest, this isn't that interesting because the season hasn't started yet, so there are a lot of words about watching episodes and looking forward to GoT rather than the actual content. This will likely change when the season starts and people are tweeting about the actual episodes, but for now it might be more interesting to see how often each of the main characters is being mentioned.

One problem is that we need the exact spellings, which means that for Daenerys, Jon, and Jaime the chances that people will have spelled their names wrong are quite high (as my level 2 students who watched me live code the first version of this will attest), so first we're going to correct those.

dat_token2 <- dat_token %>%
  mutate(word = recode(word, "khaleesi" = "daenerys",
                       "dany" = "daenerys",
                       "jamie" = "jaime",
                       "john" = "jon"))

characters <- c("jon", "daenerys", "bran", "arya", "sansa", "tyrion", "cersei", "jaime")

Now we can plot a count of how many times each name has been mentioned.

dat_token2 %>%
  filter(word %in% characters) %>%
  count(word, sort = TRUE) %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(x = word, y = n, fill = word)) +
  geom_col(show.legend = FALSE) +
  coord_flip() +
  scale_y_continuous(name = "Number of mentions") +
  scale_x_discrete(name = "Character") +
  scale_fill_viridis(discrete = TRUE)

Bigram analysis

Rather than looking at individual words, we can look at which words tend to co-occur. We want to use the data set where we've corrected the spelling, so this is going to require us to transform from long to wide and then back to long because the night is dark and full of terrors. DID YOU SEE WHAT I DID THERE.

dat_bigram <- dat_token2 %>%
  group_by(tweet_number) %>%
  summarise(text = str_c(word, collapse = " ")) %>% # this puts it back into wide-form
  unnest_tokens(bigram, text, token = "ngrams", n = 2, collapse = FALSE) %>% # and then this turns it into bigrams in a tidy format
  na.omit()

dat_bigram %>%
  count(bigram, sort = TRUE) %>%
  head(20) %>%
  kable(align = "c")

bigram                      n
id sacrifice             1010
fan acanhasnoname         755
sacrifice fan             732
iron throne               259
watch begins              252
jon snow                  247
begins rooting            240
winter coming             212
fans check                190
check wait                188
night king                183
se tvtime                 144
watched se                139
acanhasnoname sacrifice   130
rooting finalfour         122
chance win                107
white walkers              99
sacrifice acanhasnoname    96
limited edition            87
arya stark                 85

Again there's a bit of nonsense here and it's a bit uninteresting, but as the season progresses it might turn up something fun. Now that we've got our bigrams, we can plot them to see the connections between the different words. First, we're going to use separate to put the two words into different columns, then we'll count them up and plot them. If you want more information about this, see the tidytext book online, as I am entirely cribbing this from that book. The plot requires the packages igraph and ggraph.

bigrams_separated <- dat_bigram %>%
  separate(bigram, c("word1", "word2"), sep = " ")

# new bigram counts:
bigram_counts <- bigrams_separated %>% 
  count(word1, word2, sort = TRUE)

bigram_counts
## # A tibble: 79,740 x 3
##    word1     word2             n
##    <chr>     <chr>         <int>
##  1 id        sacrifice      1010
##  2 fan       acanhasnoname   755
##  3 sacrifice fan             732
##  4 iron      throne          259
##  5 watch     begins          252
##  6 jon       snow            247
##  7 begins    rooting         240
##  8 winter    coming          212
##  9 fans      check           190
## 10 check     wait            188
## # ... with 79,730 more rows
# filter for only relatively common combinations (more than 20 occurrences)
bigram_graph <- bigram_counts %>%
  filter(n > 20) %>%
  graph_from_data_frame()

# make a pretty network plot

ggraph(bigram_graph, layout = "fr") +
  geom_edge_link() +
  geom_node_point() +
  geom_node_text(aes(label = name), vjust = 1, hjust = 1) +
  theme_void()

Again this is less interesting at the moment because the season hasn't started, although it does make it easy to spot that there is some sort of competition where you can win a prize, and the connections between words all seem fairly sensible, so if nothing else it's reassuring that I haven't borked my code. Also, hold the door. Sniff. I am still figuring out how to customise the aesthetics of ggraph.
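
One tweak borrowed from the tidytext book's examples is to map edge transparency to the bigram count, so that the more frequent pairings stand out (a sketch; the colours here are just my choices):

ggraph(bigram_graph, layout = "fr") +
  geom_edge_link(aes(edge_alpha = n), show.legend = FALSE) + # fainter edges = rarer bigrams
  geom_node_point(colour = "lightblue", size = 3) +
  geom_node_text(aes(label = name), vjust = 1, hjust = 1) +
  theme_void()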

Sentiment analysis

Sentiment analyses look at whether the expressed opinion in a bit of text is positive, negative, or neutral, using information from databases about the valence of different words. We can perform a sentiment analysis on the tweets that contain each character's name to see whether, e.g., Jon is mentioned in tweets that are largely positive or if Jaime is mentioned in tweets that are largely negative.
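
If you haven't met a sentiment lexicon before, the Bing lexicon we'll use below is just a long list of words, each tagged as positive or negative:

# one row per word, labelled positive or negative
get_sentiments("bing")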

To do this, we first need to do a bit of wrangling. We're going to transform the data back to wide-form, then add a column that says whether each character was mentioned in the tweet, then transform it back to long-form.

dat_mentions <- dat_token2 %>%
  group_by(tweet_number) %>%
  summarise(text = str_c(word, collapse = " ")) %>%
  mutate(jon = str_detect(text, "\\bjon\\b"), # str_detect already returns TRUE/FALSE; \\b matches a word boundary
         daenerys = str_detect(text, "\\bdaenerys\\b"),
         bran = str_detect(text, "\\bbran\\b"),
         arya = str_detect(text, "\\barya\\b"),
         sansa = str_detect(text, "\\bsansa\\b"),
         tyrion = str_detect(text, "\\btyrion\\b"),
         cersei = str_detect(text, "\\bcersei\\b"),
         jaime = str_detect(text, "\\bjaime\\b")) %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = "word")
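
As a quick sanity check, we can count how many tweets mention each character (this uses across, which needs dplyr 1.0 or later; summarise_at would do the same job in older versions):

# one row per tweet, then sum the logical mention columns
dat_mentions %>%
  distinct(tweet_number, jon, daenerys, bran, arya, sansa, tyrion, cersei, jaime) %>%
  summarise(across(jon:jaime, sum))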

Once we've done this, we can then run a sentiment analysis on the tweets for each character. I still haven't quite cracked iteration so this code is a bit repetitive; if you can give me a better way of doing this that's less prone to copy-and-paste errors, please do (one possible purrr version is sketched after the bind_rows step below).

jon <- dat_mentions %>%
  filter(jon) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(index = tweet_number, sentiment) %>%
  spread(sentiment, n, fill = 0) %>%
  mutate(sentiment = positive - negative,
         character = "jon")

daenerys <- dat_mentions %>%
  filter(daenerys) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(index = tweet_number, sentiment) %>%
  spread(sentiment, n, fill = 0) %>%
  mutate(sentiment = positive - negative,
         character = "daenerys")

bran <- dat_mentions %>%
  filter(bran) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(index = tweet_number, sentiment) %>%
  spread(sentiment, n, fill = 0) %>%
  mutate(sentiment = positive - negative,
         character = "bran")

arya <- dat_mentions %>%
  filter(arya) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(index = tweet_number, sentiment) %>%
  spread(sentiment, n, fill = 0) %>%
  mutate(sentiment = positive - negative,
         character = "arya")

sansa <- dat_mentions %>%
  filter(sansa) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(index = tweet_number, sentiment) %>%
  spread(sentiment, n, fill = 0) %>%
  mutate(sentiment = positive - negative,
         character = "sansa")

tyrion <- dat_mentions %>%
  filter(tyrion) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(index = tweet_number, sentiment) %>%
  spread(sentiment, n, fill = 0) %>%
  mutate(sentiment = positive - negative,
         character = "tyrion")

cersei <- dat_mentions %>%
  filter(cersei) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(index = tweet_number, sentiment) %>%
  spread(sentiment, n, fill = 0) %>%
  mutate(sentiment = positive - negative,
         character = "cersei")

jaime <- dat_mentions %>%
  filter(jaime) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(index = tweet_number, sentiment) %>%
  spread(sentiment, n, fill = 0) %>%
  mutate(sentiment = positive - negative,
         character = "jaime")


dat_sentiment <- bind_rows(jon, daenerys, bran, arya, sansa, tyrion, cersei, jaime) %>%
  group_by(character) %>%
  summarise(positive = sum(positive),
            negative = sum(negative),
            overall = sum(sentiment)) %>%
  gather(positive:overall, key = type, value = score) %>%
  mutate(type = factor(type, levels = c("positive", "negative", "overall")))
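
Since I asked: one way to cut out the copy and paste is to write a small helper function and loop over the characters vector with purrr's map_df (purrr loads with tidyverse). This is just a sketch of the same pipeline as above; sentiment_for is my name for the helper, not anything official:

# helper: run the Bing sentiment pipeline for one character column
sentiment_for <- function(char) {
  dat_mentions %>%
    filter(.data[[char]]) %>% # keep tweets that mention this character
    inner_join(get_sentiments("bing"), by = "word") %>%
    count(index = tweet_number, sentiment) %>%
    spread(sentiment, n, fill = 0) %>%
    mutate(sentiment = positive - negative,
           character = char)
}

# map over the character names and bind the results into one data frame
dat_sentiment <- map_df(characters, sentiment_for) %>%
  group_by(character) %>%
  summarise(positive = sum(positive),
            negative = sum(negative),
            overall = sum(sentiment)) %>%
  gather(positive:overall, key = type, value = score) %>%
  mutate(type = factor(type, levels = c("positive", "negative", "overall")))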

Now that we've done all that, we can display a table of positive, negative and overall sentiment scores. Bear in mind that not all words have an associated sentiment score, particularly if they're non-standard usages of English (as an aside, this makes RuPaul's Drag Race very difficult to analyse because tidytext will think a sickening death drop is a bad thing).

dat_sentiment %>%
  spread(type, score) %>%
  arrange(desc(overall)) %>%
  kable(align = "c")

character   positive   negative   overall
daenerys         244        284       -40
tyrion           111        162       -51
bran              98        150       -52
jaime             86        191      -105
sansa            113        254      -141
jon              337        487      -150
arya             166        322      -156
cersei           139        358      -219

Because there are different numbers of tweets for each character, it might be more helpful to convert the counts to percentages to make them easier to compare.

dat_sentiment %>%
  spread(type, score) %>%
  group_by(character) %>%
  mutate(total = positive + negative) %>%
  mutate(positive_percent = positive / total * 100,
         negative_percent = negative / total * 100,
         sentiment = positive_percent - negative_percent) %>%
  select(character, positive_percent, negative_percent, sentiment) %>%
  arrange(desc(sentiment)) %>%
  kable(align = "c")

character   positive_percent   negative_percent    sentiment
daenerys            46.21212           53.78788    -7.575758
jon                 40.89806           59.10194   -18.203883
tyrion              40.65934           59.34066   -18.681319
bran                39.51613           60.48387   -20.967742
arya                34.01639           65.98361   -31.967213
jaime               31.04693           68.95307   -37.906137
sansa               30.79019           69.20981   -38.419618
cersei              27.96781           72.03219   -44.064386

They’re all a bit negative (tbf, you don’t watch Game of Thrones for the feel good factor) but there’s some face validity to the analysis with Cersei and Jaime bottom of the pile and Daenerys and Tyrion coming out most positively (fight me). I think the sentiment analysis is an interesting one to repeat each episode as you can see how the reaction to the characters is changing.

Let’s make that into a graph cause graphs are great.

dat_sentiment %>%
  spread(type, score) %>%
  group_by(character) %>%
  mutate(total = positive + negative) %>%
  mutate(positive_percent = positive / total * 100,
         negative_percent = negative / total * 100,
         sentiment = positive_percent - negative_percent) %>%
  select(character, positive_percent, negative_percent, sentiment) %>%
  arrange(desc(sentiment)) %>%
  ggplot(aes(x = reorder(character, sentiment), y = sentiment)) +
  geom_col(show.legend = FALSE) +
  scale_y_continuous(name = "Overall sentiment (percent)") +
  scale_x_discrete(name = "Character")

rtweet is such a cool package, and I've found that the limits of what you can do with it are set more by your imagination than by the package itself. There's much more you could do with it; when I first ran these analyses I found that tracking RuPaul's Drag Race was a fun way to learn a new package, as it gave an insight into the fan reactions to one of my favourite shows, and hopefully the same will be true of #GoT. If you have any feedback on this tutorial you can find me on twitter: [@emilynordmann](https://twitter.com/emilynordmann).

Also, Dany for life, preferably with Yara by her side.