Introduction

This tutorial introduces network analysis using R. Network analysis is a visualization method that can be used to represent relationships in various types of data. In addition to being a visualization technique, networks have statistical properties that can be compared, which makes network analysis a very useful procedure. To this end, this tutorial shows how to create and modify network graphs. The entire R Markdown document for the sections below can be downloaded here. This tutorial builds on a tutorial on plotting collocation networks by Guillaume Desagulier, a tutorial on network analysis offered by Alice Miller from the Digital Observatory at the Queensland University of Technology, and this tutorial by Andreas Niekler and Gregor Wiedemann.

How can you display the relationship between different elements, be they authors, characters, or words?

The most common way to visualize such relationships is a network (Silge and Robinson 2017, 131–37). Networks, also called graphs, consist of nodes (typically represented as dots) and edges (typically represented as lines), and they can be directed or undirected.

In directed networks, the direction of edges is captured; an example would be the exports of countries, where an edge runs from the exporting to the importing country. In such cases, the edges typically have arrows to indicate direction. The thickness of edges can also be used to encode information such as frequency of contact. In undirected networks, by contrast, an edge simply indicates that a symmetric relationship holds between two nodes, for example that two characters appear in the same scene.
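To make nodes, edges, and direction concrete, here is a minimal, self-contained sketch using the igraph package (which we install and load below); the country names and export volumes are purely illustrative.

# a tiny directed network of invented export relations
library(igraph)
# edges run from exporter to importer; the weight column holds invented volumes
exports <- data.frame(from   = c("A", "A", "B"),
                      to     = c("B", "C", "C"),
                      weight = c(10, 5, 2))
toy <- graph_from_data_frame(exports, directed = TRUE)
# arrows show the direction of the edges, line thickness encodes the weight
plot(toy, edge.width = E(toy)$weight, edge.arrow.size = 0.5)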

The example that we will be concerned with is an undirected network, since co-occurrence in a scene is a symmetric relationship, and this is by far the most common way in which such relationships are coded. To show how to create a network, we will have a look at the network that the characters in William Shakespeare’s Romeo and Juliet form.

Preparation and session set up

This tutorial is based on R. If you have not installed R or are new to it, you will find an introduction to R and more information on how to use it here. For this tutorial, we need to install certain packages so that the scripts shown below are executed without errors. Before turning to the code below, please install the packages by running the code below this paragraph. If you have already installed the packages mentioned below, then you can skip ahead and ignore this section. To install the necessary packages, simply run the following code - it may take some time (between 1 and 5 minutes to install all of the packages), so you do not need to worry if it takes a while.

# install packages
install.packages("flextable")
install.packages("GGally")
install.packages("ggraph")
install.packages("gutenbergr")
install.packages("igraph")
install.packages("Matrix")
install.packages("network")
install.packages("quanteda")
install.packages("sna")
install.packages("tidygraph")
install.packages("tidyverse")
install.packages("tm")
install.packages("tibble")
# install remotes and klippy for copy-to-clipboard button in code chunks
install.packages("remotes")
remotes::install_github("rlesur/klippy")

Next, we load the packages.

# set options
options(stringsAsFactors = F)         # no automatic data transformation
options("scipen" = 100, "digits" = 4) # suppress math annotation
# activate packages
library(flextable)
library(GGally)
library(ggraph)
library(gutenbergr)
library(igraph)
library(Matrix)
library(network)
library(quanteda)
library(sna)
library(tidygraph)
library(tidyverse)
library(tm)
library(tibble)
# activate klippy for copy-to-clipboard button
klippy::klippy()

Once you have installed R and RStudio and initiated the session by executing the code shown above, you are good to go.

1 Creating a matrix

This section shows how to create a network visualization with the network and the GGally packages. The network we will generate shows how often characters in William Shakespeare’s Romeo and Juliet occur in the same scene. The issue we want to investigate here relates to networks of personas in Shakespeare’s Romeo and Juliet, and we thus load data based on this famous work of fiction. The data needs to be split into scenes, and scenes during which personas leave or enter need to be split further, so that we arrive at a table that contains the personas that are present during each sub-scene; the table we load below already has this format.

# load data
rom <- read.delim("https://slcladal.github.io/data/romeo_tidy.txt", sep = "\t")

We now transform that table into a co-occurrence matrix.
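The matrix-building code is not reproduced here; a minimal sketch, assuming that the first two columns of rom hold the (sub-)scene identifier and the persona name (one row per persona per sub-scene), could look as follows - inspect the loaded table with head(rom) to check whether this assumption holds.

# a sketch: build a persona-by-persona co-occurrence matrix (this assumes that
# the first two columns of rom contain the sub-scene identifier and the persona)
scene_by_persona <- table(rom[, 1:2])  # sub-scenes in rows, personas in columns
rome <- crossprod(scene_by_persona)    # persona x persona co-occurrence counts
diag(rome) <- 0                        # remove co-occurrences of a persona with itself
romeo <- as.data.frame(rome)           # data frame used in the sections below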

The data shows how often each character appeared together with each other character in the play - only Friar Lawrence and Friar John were excluded because they only appear in one scene, where they talk to each other.

2 Quanteda Networks

The quanteda package contains many very useful functions for analyzing texts. Among these functions is the textplot_network function, which provides a very handy way to display networks. The advantage of the network plots generated with the quanteda package is that you can create them with very little code. However, this comes at a cost: these visualizations cannot be modified easily, which means that their design is not very flexible compared to other methods for generating network visualizations.

In a first step, we transform the romeo co-occurrence data into a document-feature matrix using the as.dfm function.

# create a document feature matrix
romeo_dfm <- quanteda::as.dfm(romeo)
# create feature co-occurrence matrix
romeo_fcm <- quanteda::fcm(romeo_dfm)
# inspect data
head(romeo_fcm)
## Feature co-occurrence matrix of: 6 by 18 features.
##                 features
## features         BALTHASAR BENVOLIO CAPULET FIRST CITIZEN FIRST SERVANT FRIAR LAWRENCE JULIET LADY CAPULET MERCUTIO
##   BALTHASAR              1       25      31            11             6             20     26           31       11
##   BENVOLIO               0       39      93            39            27             53     87           99       42
##   CAPULET                0        0      65            42            39             74    131          117       52
##   FIRST CITIZEN          0        0       0             6            10             18     32           36       24
##   FIRST SERVANT          0        0       0             0             3             17     40           42       12
##   FRIAR LAWRENCE         0        0       0             0             0             15     61           72       23
##                 features
## features         MONTAGUE
##   BALTHASAR            17
##   BENVOLIO             55
##   CAPULET              65
##   FIRST CITIZEN        29
##   FIRST SERVANT        15
##   FRIAR LAWRENCE       32
## [ reached max_nfeat ... 8 more features ]

This feature co-occurrence matrix can then serve as the input for the textplot_network function, which already generates a nice network graph. The network graph can then be modified or customized easily by specifying the arguments of the textplot_network function. To see how and which arguments can be modified, you can use ?textplot_network.

quanteda.textplots::textplot_network(romeo_fcm, 
                                     min_freq = 10, 
                                     edge_alpha = 0.2, 
                                     edge_color = "orange",
                                     edge_size = 5)

3 Tidy Networks

A great way to generate network graphs is to combine functions from the igraph, the ggraph, and the tidygraph packages. The advantages are that the syntax for creating the networks aligns with the tidyverse style of writing R and that the resulting graphs can be modified very easily.

To generate network graphs in this way, we first define the nodes, and we can also add information about the nodes that we can use later on (such as frequency information).

# define nodes (personas) and add their total number of occurrences
va <- romeo %>%
  dplyr::mutate(Persona = rownames(.),
                Occurrences = rowSums(.)) %>%
  dplyr::select(Persona, Occurrences) %>%
  dplyr::filter(!str_detect(Persona, "SCENE"))

Now, we define the edges, i.e., the connections between nodes and, again, we can add information in separate variables that we can use later on.

# define edges (connections between personas) and their co-occurrence frequency
ed <- romeo %>%
  dplyr::mutate(from = rownames(.)) %>%
  tidyr::gather(to, Frequency, BALTHASAR:TYBALT) %>%
  dplyr::mutate(Frequency = ifelse(Frequency == 0, NA, Frequency))

Now that we have generated tables for the edges and the nodes, we can generate a graph object.

ig <- graph_from_data_frame(d=ed, vertices=va, directed = FALSE)

We will also add labels to the nodes as follows:

tg <- tidygraph::as_tbl_graph(ig) %>% 
  activate(nodes) %>% 
  mutate(label=name)

Now, we use the number of occurrences to define the vertex size (or node size).

v.size <- V(tg)$Occurrences
# inspect
v.size
##  [1]  9 34 46 14 12 20 36 45 15 22 38 21  9 22 54 16 15 22

We can also use the frequency information to define edge weights.

E(tg)$weight <- E(tg)$Frequency
# inspect weights
head(E(tg)$weight, 10)
##  [1] NA NA  1 NA NA  1  1  1 NA  1

Finally, we define colors (by family).

# define colors (by family)
mon <- c("ABRAM", "BALTHASAR", "BENVOLIO", "LADY MONTAGUE", "MONTAGUE", "ROMEO")
cap <- c("CAPULET", "CAPULET’S COUSIN", "FIRST SERVANT", "GREGORY", "JULIET", "LADY CAPULET", "NURSE", "PETER", "SAMPSON", "TYBALT")
oth <- c("APOTHECARY", "CHORUS", "FIRST CITIZEN", "FIRST MUSICIAN", "FIRST WATCH", "FRIAR JOHN" , "FRIAR LAWRENCE", "MERCUTIO", "PAGE", "PARIS", "PRINCE", "SECOND MUSICIAN", "SECOND SERVANT", "SECOND WATCH", "SERVANT", "THIRD MUSICIAN")
# create color vectors
Family <- dplyr::case_when(sapply(tg, "[")$nodes$name %in% mon ~ "MONTAGUE",
                           sapply(tg, "[")$nodes$name %in% cap ~ "CAPULET",
                           TRUE ~ "Other")
# inspect colors
Family
##  [1] "MONTAGUE" "MONTAGUE" "CAPULET"  "Other"    "CAPULET"  "Other"    "CAPULET"  "CAPULET"  "Other"    "MONTAGUE"
## [11] "CAPULET"  "Other"    "CAPULET"  "Other"    "MONTAGUE" "Other"    "Other"    "CAPULET"

Now that we have created the different objects and defined their properties, we can visualize the network.

# set seed
set.seed(12345)
# edge size shows frequency of co-occurrence
tg %>%
   ggraph(layout = "fr") +
   geom_edge_arc(colour= "gray50",
                  lineend = "round",
                 strength = .1,
                 aes(edge_width = weight,
                     alpha = weight)) +
   geom_node_point(size=log(v.size)*2, 
                   aes(color=Family)) +
   geom_node_text(aes(label = name), 
                  repel = TRUE, 
                  point.padding = unit(0.2, "lines"), 
                  size=sqrt(v.size), 
                  colour="gray10") +
  scale_edge_width(range = c(0, 2.5)) +
  scale_edge_alpha(range = c(0, .3)) +
  theme_graph(background = "white") +
  theme(legend.position = "top") +
  guides(edge_width = FALSE,
         edge_alpha = FALSE)

4 iGraph Networks

Wiedemann and Niekler (2017) have written a highly recommendable tutorial on co-occurrence analysis in which they propose an alternative way of generating complex network visualizations for co-occurrences. Their approach is to create and customize a graph object using the igraph package. To see how to create sophisticated network graphs using the igraph package, see this tutorial on analyzing collocations or this tutorial.
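As a brief taster (this is not Wiedemann and Niekler’s code), the sketch below plots the Romeo and Juliet co-occurrence network with base igraph plotting functions, re-using the ig object created in the previous section; the scaling factors for node and edge sizes are arbitrary choices.

# a minimal sketch using base igraph plotting (re-uses the ig object from above)
set.seed(12345)
# drop edges without a frequency so that only actual co-occurrences are drawn
ig_plot <- igraph::delete_edges(ig, which(is.na(igraph::E(ig)$Frequency)))
plot(ig_plot,
     layout = igraph::layout_with_fr(ig_plot),               # force-directed layout
     vertex.size = log(igraph::V(ig_plot)$Occurrences) * 2,   # node size by occurrences
     vertex.label.cex = 0.7,                                  # smaller node labels
     edge.width = igraph::E(ig_plot)$Frequency / 20)          # edge width by frequency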

We have reached the end of this tutorial and you now know how to create and modify networks in R and how you can highlight aspects of your data.

Citation & Session Info

Schweinberger, Martin. 2021. Network Analysis using R. Brisbane: The University of Queensland. url: https://slcladal.github.io/net.html (Version 2021.10.02).

@manual{schweinberger2021net,
  author = {Schweinberger, Martin},
  title = {Network Analysis using R},
  note = {https://slcladal.github.io/net.html},
  year = {2021},
  organization = "The University of Queensland, Australia. School of Languages and Cultures},
  address = {Brisbane},
  edition = {2021.10.02}
}
sessionInfo()
## R version 4.1.1 (2021-08-10)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 19043)
## 
## Matrix products: default
## 
## locale:
## [1] LC_COLLATE=German_Germany.1252  LC_CTYPE=German_Germany.1252    LC_MONETARY=German_Germany.1252
## [4] LC_NUMERIC=C                    LC_TIME=German_Germany.1252    
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] tidygraph_1.2.0      sna_2.6              statnet.common_4.5.0 network_1.17.1       Matrix_1.3-4        
##  [6] igraph_1.2.6         ggraph_2.0.5         GGally_2.1.2         rms_6.2-0            SparseM_1.81        
## [11] Hmisc_4.5-0          Formula_1.2-4        survival_3.2-11      lattice_0.20-44      DT_0.19             
## [16] kableExtra_1.3.4     knitr_1.34           lexRankr_0.5.2       janeaustenr_0.1.5    hashr_0.1.4         
## [21] stringdist_0.9.8     koRpus.lang.de_0.1-2 cluster_2.1.2        tm_0.7-8             NLP_0.2-1           
## [26] coop_0.6-3           hunspell_3.0.1       koRpus.lang.en_0.1-4 koRpus_0.13-8        sylly_0.1-6         
## [31] textdata_0.4.1       tidytext_0.3.1       plyr_1.8.6           flextable_0.6.8      forcats_0.5.1       
## [36] stringr_1.4.0        dplyr_1.0.7          purrr_0.3.4          readr_2.0.1          tidyr_1.1.3         
## [41] tibble_3.1.4         ggplot2_3.3.5        tidyverse_1.3.1      gutenbergr_0.2.1     quanteda_3.1.0      
## 
## loaded via a namespace (and not attached):
##   [1] utf8_1.2.2                tidyselect_1.1.1          htmlwidgets_1.5.4         grid_4.1.1               
##   [5] munsell_0.5.0             codetools_0.2-18          withr_2.4.2               colorspace_2.0-2         
##   [9] highr_0.9                 uuid_0.1-4                rstudioapi_0.13           officer_0.4.0            
##  [13] labeling_0.4.2            slam_0.1-48               polyclip_1.10-0           bit64_4.0.5              
##  [17] farver_2.1.0              rprojroot_2.0.2           coda_0.19-4               vctrs_0.3.8              
##  [21] generics_0.1.0            TH.data_1.1-0             xfun_0.26                 R6_2.5.1                 
##  [25] graphlayouts_0.7.1        nsyllable_1.0             reshape_0.8.8             assertthat_0.2.1         
##  [29] scales_1.1.1              vroom_1.5.5               multcomp_1.4-17           nnet_7.3-16              
##  [33] gtable_0.3.0              conquer_1.0.2             klippy_0.0.0.9500         sandwich_3.0-1           
##  [37] rlang_0.4.11              MatrixModels_0.5-0        systemfonts_1.0.2         splines_4.1.1            
##  [41] sylly.en_0.1-3            lazyeval_0.2.2            stopwords_2.2             quanteda.textstats_0.94.1
##  [45] broom_0.7.9               checkmate_2.0.0           yaml_2.2.1                modelr_0.1.8             
##  [49] backports_1.2.1           sylly.de_0.1-2            tokenizers_0.2.1          tools_4.1.1              
##  [53] ellipsis_0.3.2            RColorBrewer_1.1-2        Rcpp_1.0.7                base64enc_0.1-3          
##  [57] rpart_4.1-15              viridis_0.6.1             zoo_1.8-9                 haven_2.4.3              
##  [61] ggrepel_0.9.1             fs_1.5.0                  here_1.0.1                magrittr_2.0.1           
##  [65] data.table_1.14.0         reprex_2.0.1.9000         mvtnorm_1.1-2             SnowballC_0.7.0          
##  [69] matrixStats_0.60.1        hms_1.1.0                 evaluate_0.14             jpeg_0.1-9               
##  [73] readxl_1.3.1              gridExtra_2.3             compiler_4.1.1            crayon_1.4.1             
##  [77] htmltools_0.5.2           proxyC_0.2.1              tzdb_0.1.2                RcppParallel_5.1.4       
##  [81] lubridate_1.7.10          DBI_1.1.1                 tweenr_1.0.2              dbplyr_2.1.1             
##  [85] MASS_7.3-54               rappdirs_0.3.3            cli_3.0.1                 parallel_4.1.1           
##  [89] pkgconfig_2.0.3           foreign_0.8-81            xml2_1.3.2                svglite_2.0.0            
##  [93] webshot_0.5.2             rvest_1.0.1               digest_0.6.27             rmarkdown_2.5            
##  [97] cellranger_1.1.0          fastmatch_1.1-3           htmlTable_2.2.1           gdtools_0.2.3            
## [101] quanteda.textplots_0.94   quantreg_5.86             lifecycle_1.0.1           nlme_3.1-152             
## [105] jsonlite_1.7.2            viridisLite_0.4.0         fansi_0.5.0               pillar_1.6.3             
## [109] fastmap_1.1.0             httr_1.4.2                glue_1.4.2                zip_2.2.0                
## [113] png_0.1-7                 bit_4.0.4                 ggforce_0.3.3             stringi_1.7.4            
## [117] polspline_1.1.19          latticeExtra_0.6-29



References

Silge, Julia, and David Robinson. 2017. Text Mining with R: A Tidy Approach. O’Reilly Media.

Wiedemann, Gregor, and Andreas Niekler. 2017. “Hands-on: A Five Day Text Mining Course for Humanists and Social Scientists in R.” In Proceedings of the Workshop on Teaching NLP for Digital Humanities (Teach4DH2017), Berlin, Germany, September 12, 2017., 57–65. http://ceur-ws.org/Vol-1918/wiedemann.pdf.