This tutorial introduces classification and clustering using R. The entire R markdown document for this tutorial can be downloaded here. A more elaborate and highly recommended introduction to cluster analysis is Kassambara (2017). Other very useful resources are, e.g., King (2015); Kettenring (2006); Romesburg (2004); and Blashfield and Aldenderfer (1988).
Cluster analyses fall within the domain of classification methods which are used to find groups or patterns in data or to predict group membership. As such, they are widely used and applied in machine learning. For linguists, classification is not only common when it comes to phylogenetics but also in annotation-based procedures such as part-of-speech tagging and syntactic parsing.
This tutorial is based on R. If you have not installed R or are new to it, you will find an introduction to R and more information on how to use it here. For this tutorial, we need to install certain packages so that the scripts shown below run without errors. Before turning to the code below, please install the packages by running the code below this paragraph. If you have already installed the packages mentioned below, you can skip ahead and ignore this section. To install the necessary packages, simply run the following code - it may take some time (between 1 and 5 minutes), so you do not need to worry if it takes a while.
# set options
options(stringsAsFactors = F) # no automatic data transformation
options("scipen" = 100, "digits" = 4) # suppress math annotation
# install packages
install.packages("cluster")
install.packages("factoextra")
install.packages("seriation")
install.packages("pvclust")
install.packages("ape")
install.packages("vcd")
install.packages("exact2x2")
install.packages("NbClust")
install.packages("flextable")
install.packages("tidyverse")
install.packages("tibble")
install.packages("gplots")
install.packages("FactoMineR")
# install klippy for copy-to-clipboard button in code chunks
install.packages("remotes")
remotes::install_github("rlesur/klippy")
In a next step, we load the packages.
# load packages
library(cluster)
library(factoextra)
library(seriation)
library(pvclust)
library(ape)
library(vcd)
library(exact2x2)
library(NbClust)
library(flextable)
library(tidyverse)
library(tibble)
library(gplots)
# activate klippy for copy-to-clipboard button
klippy::klippy()
Once you have installed R and RStudio and initiated the session by executing the code shown above, you are good to go.
The most common method used in linguistics to detect groups in data is cluster analysis. Cluster analyses are common in linguistics because they not only detect commonalities based on the frequency or occurrence of features but also allow us to visualize when splits between groups have occurred; they are thus the method of choice in historical linguistics to determine and show genealogical relationships.
The next section focuses on the basic idea that underlies all cluster analyses. We will have a look at some very basic examples to highlight and discuss the principles that cluster analyses rely on.
The underlying idea of cluster analysis is very simple and rather intuitive, as we perform cluster analyses every day in our lives: we group things together under certain labels and into concepts. The first example used to show this deals with types of trees and how we group these types of trees based on their outward appearance.
Imagine you see six trees representing different types of trees: a pine tree, a fir tree, an oak tree, a beech tree, a phoenix palm tree, and a nikau palm tree. Now, you are asked to group these trees according to similarity. Have a look at the plot below and see whether you would have come up with a similar grouping.
An alternative way to group the trees would be the following.
In this display, conifers and broad-leaf trees are grouped together because they are more similar to each other than to palm trees. This poses the question of what is meant by similarity. Consider the display below.
Are the red and the blue line more similar because they have the same shape, or are the red and the black line more similar because they are closer together? There is no single correct answer here. Rather, the plot intends to raise awareness of the fact that how cluster analyses group data depends on how similarity is defined in the respective algorithm.
Let’s consider another example to better understand how cluster analyses determine which data points should be merged when. Imagine you have five students and want to group them together based on their overall performance in school. The data that you rely on are their grades in math, music, and biology (with 1 being the best grade and 6 being the worst).
Student | Math | Music | Biology |
StudentA | 2 | 3 | 2 |
StudentB | 1 | 3 | 2 |
StudentC | 1 | 2 | 1 |
StudentD | 2 | 4 | 4 |
StudentE | 3 | 4 | 3 |
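This data can be recreated in R as a data frame (a minimal sketch: the values are taken from the table above and the object name students is the one used in the distance-matrix code below).
# create data frame with the students' grades (values taken from the table above)
students <- matrix(c(2, 3, 2,
1, 3, 2,
1, 2, 1,
2, 4, 4,
3, 4, 3),
nrow = 5, byrow = T)
students <- as.data.frame(students)
rownames(students) <- c("StudentA", "StudentB", "StudentC", "StudentD", "StudentE")
colnames(students) <- c("Math", "Music", "Biology")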
The first step in determining the similarity among students is to create a distance matrix.
diststudents <- dist(students, method = "manhattan") # create a distance matrix
The distance matrix below shows that Student A and Student B only differ by one grade. Student B and Student C differ by 2 grades. Student A and Student C differ by 3 grades and so on.
Student | StudentA | StudentB | StudentC | StudentD |
StudentB | 1 | |||
StudentC | 3 | 2 | ||
StudentD | 3 | 4 | 6 | |
StudentE | 3 | 4 | 6 | 2 |
Based on this distance matrix, we can now implement a cluster analysis in R.
To create a simple cluster object in R, we use the hclust function from base R's stats package. The resulting object is then plotted to create a dendrogram which shows how students have been amalgamated (combined) by the clustering algorithm (which, in the present case, uses ward.D as the linkage method).
# create hierarchical cluster object with ward.D as linkage method
clusterstudents <- hclust(diststudents, method="ward.D")
# plot result as dendrogram
plot(clusterstudents, hang = 0)
Let us have a look at how the clustering algorithm has amalgamated the students. The amalgamation process takes the distance matrix from above as a starting point and, in a first step, has merged Student A and Student B (because they were the most similar students in the data based on the distance matrix). After collapsing Student A and Student B, the resulting distance matrix looks like the distance matrix below (notice that Student A and Student B now form a cluster that is represented by the means of the grades of the two students).
students2 <- matrix(c(1.5, 3, 2, 1, 2, 1, 2, 4, 4, 3, 4, 3),
nrow = 4, byrow = T)
students2 <- as.data.frame(students2)
rownames(students2) <- c("Cluster1", "StudentC", "StudentD", "StudentE")
diststudents2 <- dist(students2, method = "manhattan")
Student | Cluster 1 | Student C | Student D |
Student C | 2.5 | ||
Student D | 3.5 | 6.0 | |
Student E | 3.5 | 6.0 | 2.0 |
The next lowest distance now is 2.0 between Student D and Student E which means that these two students are merged next. The resulting distance matrix is shown below.
students3 <- matrix(c(1.5,3,2,1,2,1,2.5,4,3.5),
nrow = 3, byrow = T)
students3 <- as.data.frame(students3)
rownames(students3) <- c("Cluster1", "StudentC", "Cluster2")
diststudents3 <- dist(students3,
method = "manhattan")
Student | Cluster 1 | Student C |
Student C | 2.5 | |
Cluster 2 | 3.5 | 6.0 |
Now, the lowest distance value occurs between Cluster 1 and Student C. Thus, Student C and Cluster 1 are merged. In the final step, Cluster 2 is merged with the new cluster encompassing Student C and Cluster 1. This amalgamation process can then be displayed visually as a dendrogram (see above).
How and which elements are merged depends on what is understood as distance. Since "distance" is such an important concept in cluster analyses, we will briefly discuss this notion to understand why there are so many different types of clustering algorithms and thus cluster analyses.
To understand how a cluster analysis determines to which cluster a given data point belongs, we need to understand what different distance measures represent. Have a look at the Figure below which visually represents three different ways to conceptualize distance.
The Figure above depicts three ways to measure distance: the euclidean distance represents the distance between points as the hypotenuse of the x- and y-axis distances, while the maximum distance represents distance as the longer of either the distance on the x-axis or the distance on the y-axis. The manhattan distance (or block distance) is the sum of the distances on the x- and the y-axis.
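To illustrate the difference, the snippet below (a minimal sketch with two made-up points) computes all three measures with the dist function.
# two made-up points to illustrate the three distance measures
pts <- rbind(A = c(1, 2), B = c(4, 6))
dist(pts, method = "euclidean") # sqrt((4-1)^2 + (6-2)^2) = 5
dist(pts, method = "maximum")   # max(|4-1|, |6-2|) = 4
dist(pts, method = "manhattan") # |4-1| + |6-2| = 7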
We will now turn to another example in order to delve a little deeper into how clustering algorithms work. In this example, we will find clusters of varieties of English based on the relative frequency of selected non-standard features (such as the relative frequencies of cleft constructions and tag questions). As a first step, we generate a fictional data set for this analysis.
# generate data
IrishEnglish <- round(sqrt((rnorm(10, 9.5, .5))^2), 3)
ScottishEnglish <- round(sqrt((rnorm(10, 9.3, .4))^2), 3)
BritishEnglish <- round(sqrt((rnorm(10, 6.4, .7))^2), 3)
AustralianEnglish <- round(sqrt((rnorm(10, 6.6, .5))^2), 3)
NewZealandEnglish <- round(sqrt((rnorm(10, 6.5, .4))^2), 3)
AmericanEnglish <- round(sqrt((rnorm(10, 4.6, .8))^2), 3)
CanadianEnglish <- round(sqrt((rnorm(10, 4.5, .7))^2), 3)
JamaicanEnglish <- round(sqrt((rnorm(10, 1.4, .2))^2), 3)
PhillipineEnglish <- round(sqrt((rnorm(10, 1.5, .4))^2), 3)
IndianEnglish <- round(sqrt((rnorm(10, 1.3, .5))^2), 3)
clus <- data.frame(IrishEnglish, ScottishEnglish, BritishEnglish,
AustralianEnglish, NewZealandEnglish, AmericanEnglish,
CanadianEnglish, JamaicanEnglish, PhillipineEnglish, IndianEnglish)
# add row names
rownames(clus) <- c("nae_neg", "like", "clefts", "tags", "youse", "soitwas",
"dt", "nsr", "invartag", "wh_cleft")
Feature | IrishEnglish | ScottishEnglish | BritishEnglish | AustralianEnglish | NewZealandEnglish | AmericanEnglish | CanadianEnglish | JamaicanEnglish | PhillipineEnglish | IndianEnglish |
nae_neg | 9.512 | 9.300 | 5.565 | 7.584 | 6.389 | 4.799 | 4.641 | 1.108 | 1.377 | 1.472 |
like | 9.420 | 8.873 | 5.399 | 6.038 | 7.015 | 4.060 | 4.977 | 1.259 | 1.815 | 1.268 |
clefts | 10.366 | 8.940 | 7.108 | 6.803 | 7.207 | 4.182 | 5.101 | 1.450 | 1.498 | 0.974 |
tags | 9.884 | 9.194 | 5.408 | 6.315 | 7.707 | 5.794 | 4.883 | 1.295 | 1.455 | 1.044 |
youse | 9.815 | 8.923 | 7.601 | 7.182 | 6.497 | 4.211 | 2.820 | 1.731 | 1.147 | 1.213 |
soitwas | 9.125 | 9.224 | 5.657 | 6.279 | 6.604 | 4.535 | 5.884 | 1.249 | 1.763 | 1.824 |
dt | 10.398 | 9.149 | 7.900 | 6.371 | 6.501 | 4.193 | 4.321 | 1.362 | 1.328 | 1.231 |
nsr | 9.319 | 9.834 | 7.223 | 6.038 | 6.596 | 3.883 | 4.083 | 1.658 | 1.092 | 1.556 |
invartag | 8.903 | 8.588 | 5.956 | 6.813 | 6.239 | 4.469 | 4.563 | 1.379 | 1.551 | 1.264 |
wh_cleft | 8.768 | 8.765 | 6.718 | 6.284 | 6.423 | 3.160 | 4.965 | 1.427 | 1.856 | 1.590 |
As a next step, we prepare the data for the cluster analysis by transposing, cleaning, and scaling the data we have just generated.
# clean data
clusm <- as.matrix(clus)
clust <- t(clusm) # transpose data
clust <- na.omit(clust) # remove missing values
clusts <- scale(clust) # standardize variables
clusts <- as.matrix(clusts) # convert into matrix
Variety | nae_neg | like | clefts | tags | youse | soitwas | dt | nsr | invartag | wh_cleft |
IrishEnglish | 9.512 | 9.420 | 10.366 | 9.884 | 9.815 | 9.125 | 10.398 | 9.319 | 8.903 | 8.768 |
ScottishEnglish | 9.300 | 8.873 | 8.940 | 9.194 | 8.923 | 9.224 | 9.149 | 9.834 | 8.588 | 8.765 |
BritishEnglish | 5.565 | 5.399 | 7.108 | 5.408 | 7.601 | 5.657 | 7.900 | 7.223 | 5.956 | 6.718 |
AustralianEnglish | 7.584 | 6.038 | 6.803 | 6.315 | 7.182 | 6.279 | 6.371 | 6.038 | 6.813 | 6.284 |
NewZealandEnglish | 6.389 | 7.015 | 7.207 | 7.707 | 6.497 | 6.604 | 6.501 | 6.596 | 6.239 | 6.423 |
AmericanEnglish | 4.799 | 4.060 | 4.182 | 5.794 | 4.211 | 4.535 | 4.193 | 3.883 | 4.469 | 3.160 |
CanadianEnglish | 4.641 | 4.977 | 5.101 | 4.883 | 2.820 | 5.884 | 4.321 | 4.083 | 4.563 | 4.965 |
JamaicanEnglish | 1.108 | 1.259 | 1.450 | 1.295 | 1.731 | 1.249 | 1.362 | 1.658 | 1.379 | 1.427 |
PhillipineEnglish | 1.377 | 1.815 | 1.498 | 1.455 | 1.147 | 1.763 | 1.328 | 1.092 | 1.551 | 1.856 |
IndianEnglish | 1.472 | 1.268 | 0.974 | 1.044 | 1.213 | 1.824 | 1.231 | 1.556 | 1.264 | 1.590 |
We assess if data is “clusterable” by testing if the data contains non-randomness. To this end, we calculate the Hopkins statistic which indicates how similar the data is to a random distribution.
A Hopkins value of 0.5 indicates that the data is random and that there are no inherent clusters.
If the Hopkins statistic is close to 1, then the data is highly clusterable.
Values of 0 indicate that the data is uniform (Aggarwal 2015, 158).
The n in the get_clust_tendency function represents the number of points that are sampled from the data space when calculating the Hopkins statistic; it must be smaller than the number of observations (here, we use 9 because the data contains 10 varieties).
# apply get_clust_tendency to cluster object
clusttendency <- get_clust_tendency(clusts,
# define number of points from sample space
n = 9,
gradient = list(
# define color for low values
low = "steelblue",
# define color for high values
high = "white"))
clusttendency[1]
## $hopkins_stat
## [1] 0.7436768
The Hopkins value is substantively higher than .5 (randomness) and closer to 1 (highly clusterable) than to .5, thus indicating that there is sufficient structure in the data to warrant a cluster analysis. As such, we can assume that there are actual clusters in the data and continue by generating a distance matrix using euclidean distances.
clustd <- dist(clusts, # create distance matrix
method = "euclidean") # use euclidean (!) distance
Variety | IrishEnglish | ScottishEnglish | BritishEnglish | AustralianEnglish | NewZealandEnglish | AmericanEnglish | CanadianEnglish | JamaicanEnglish | PhillipineEnglish | IndianEnglish |
IrishEnglish | 0.00 | 0.73 | 3.29 | 3.09 | 2.91 | 5.37 | 5.06 | 8.34 | 8.23 | 8.38 |
ScottishEnglish | 0.73 | 0.00 | 2.90 | 2.65 | 2.48 | 4.93 | 4.59 | 7.89 | 7.77 | 7.92 |
BritishEnglish | 3.29 | 2.90 | 0.00 | 1.04 | 1.16 | 2.54 | 2.32 | 5.20 | 5.13 | 5.26 |
AustralianEnglish | 3.09 | 2.65 | 1.04 | 0.00 | 0.77 | 2.42 | 2.20 | 5.34 | 5.22 | 5.37 |
NewZealandEnglish | 2.91 | 2.48 | 1.16 | 0.77 | 0.00 | 2.52 | 2.22 | 5.48 | 5.35 | 5.52 |
AmericanEnglish | 5.37 | 4.93 | 2.54 | 2.42 | 2.52 | 0.00 | 1.03 | 3.09 | 2.97 | 3.13 |
CanadianEnglish | 5.06 | 4.59 | 2.32 | 2.20 | 2.22 | 1.03 | 0.00 | 3.49 | 3.30 | 3.47 |
JamaicanEnglish | 8.34 | 7.89 | 5.20 | 5.34 | 5.48 | 3.09 | 3.49 | 0.00 | 0.41 | 0.34 |
PhillipineEnglish | 8.23 | 7.77 | 5.13 | 5.22 | 5.35 | 2.97 | 3.30 | 0.41 | 0.00 | 0.34 |
IndianEnglish | 8.38 | 7.92 | 5.26 | 5.37 | 5.52 | 3.13 | 3.47 | 0.34 | 0.34 | 0.00 |
Below are other methods to create distance matrices with some comments on when using which metric is appropriate.
# create distance matrix (euclidean method: not good when dealing with many dimensions)
clustd <- dist(clusts, method = "euclidean")
# create distance matrix (maximum method: here the difference between points dominates)
clustd_maximum <- round(dist(clusts, method = "maximum"), 2)
# create distance matrix (manhattan method: most popular choice)
clustd_manhatten <- round(dist(clusts, method = "manhattan"), 2)
# create distance matrix (canberra method: for count data only - focuses on small differences and neglects larger differences)
clustd_canberra <- round(dist(clusts, method = "canberra"), 2)
# create distance matrix (binary method: for binary data only!)
clustd_binary <- round(dist(clusts, method = "binary"), 2)
# create distance matrix (minkowski method: generalization of euclidean (p = 2) and manhattan (p = 1) distance)
clustd_minkowski <- round(dist(clusts, method = "minkowski"), 2)
# daisy (from the cluster package) also handles mixed data types (other possible metrics are "manhattan" and "gower")
clustd_daisy <- round(daisy(clusts, metric = "euclidean"), 2)
If you call the individual distance matrices, you will see that, depending on which distance measure is used, the distance matrices differ dramatically! Have a look at the distance matrix created using the maximum metric (shown below) and compare it to the distance matrix created using the euclidean metric (see above).
clustd_maximum
Variety | IrishEnglish | ScottishEnglish | BritishEnglish | AustralianEnglish | NewZealandEnglish | AmericanEnglish | CanadianEnglish | JamaicanEnglish | PhillipineEnglish | IndianEnglish |
IrishEnglish | 0.00 | 0.43 | 1.40 | 1.21 | 1.17 | 1.97 | 2.13 | 2.76 | 2.72 | 2.86 |
ScottishEnglish | 0.43 | 0.00 | 1.24 | 1.20 | 1.02 | 1.97 | 1.85 | 2.77 | 2.75 | 2.61 |
BritishEnglish | 1.40 | 1.24 | 0.00 | 0.64 | 0.72 | 1.25 | 1.45 | 1.96 | 1.97 | 2.00 |
AustralianEnglish | 1.21 | 1.20 | 0.64 | 0.00 | 0.43 | 1.10 | 1.33 | 2.07 | 1.98 | 1.95 |
NewZealandEnglish | 1.17 | 1.02 | 0.72 | 0.43 | 0.00 | 1.15 | 1.12 | 2.00 | 1.95 | 2.08 |
AmericanEnglish | 1.97 | 1.97 | 1.25 | 1.10 | 1.15 | 0.00 | 0.64 | 1.40 | 1.35 | 1.48 |
CanadianEnglish | 2.13 | 1.85 | 1.45 | 1.33 | 1.12 | 0.64 | 0.00 | 1.61 | 1.43 | 1.41 |
JamaicanEnglish | 2.76 | 2.77 | 1.96 | 2.07 | 2.00 | 1.40 | 1.61 | 0.00 | 0.19 | 0.20 |
PhillipineEnglish | 2.72 | 2.75 | 1.97 | 1.98 | 1.95 | 1.35 | 1.43 | 0.19 | 0.00 | 0.18 |
IndianEnglish | 2.86 | 2.61 | 2.00 | 1.95 | 2.08 | 1.48 | 1.41 | 0.20 | 0.18 | 0.00 |
Next, we create a distance plot using the dissplot function from the seriation package. If the distance plot shows different regions (non-random, non-uniform gray areas), then clustering the data is permissible as the data contains actual structures.
# create distance plot
dissplot(clustd)
The most common method for clustering is called ward.D or ward.D2. Both of these linkage functions seek to minimize variance. This means that they cluster in a way that the amount of variance is at a minimum (comparable to the regression line in an ordinary least squares (OLS) design).
# create cluster object
cd <- hclust(clustd, method="ward.D2")
# display dendrogram
plot(cd, hang = -1)
We will briefly go over some other, alternative linkage methods. Which linkage method should be used depends on various factors, for example, the type of variables (nominal versus numeric) or whether the focus should be placed on commonalities or differences.
# single linkage: cluster with nearest data point
cd_single <- hclust(clustd, method="single")
# create cluster object (ward.D linkage)
cd_wardd <- hclust(clustd, method="ward.D")
# create cluster object (ward.D2 linkage):
# cluster in a way to achieve minimum variance
cd_wardd2 <- hclust(clustd, method="ward.D2")
# average linkage: cluster with closest mean
cd_average <- hclust(clustd, method="average")
# mcquitty linkage
cd_mcquitty <- hclust(clustd, method="mcquitty")
# median linkage: cluster with closest median
cd_median <- hclust(clustd, method="median")
# centroid linkage: cluster with closest prototypical point of target cluster
cd_centroid <- hclust(clustd, method="centroid")
# complete linkage: cluster with nearest/furthest data point of target cluster
cd_complete <- hclust(clustd, method="complete")
Now, we determine the optimal number of clusters based on silhouette widths, which show the ratio of the internal similarity of clusters against the similarity between clusters. If the silhouette widths have values lower than .2, then this indicates that clustering is not appropriate (Levshina 2015, 311). The function below displays the silhouette width values for 2 to 8 clusters.
optclus <- sapply(2:8, function(x) summary(silhouette(cutree(cd, k = x), clustd))$avg.width)
optclus # inspect results
## [1] 0.5957489 0.6355290 0.7037504 0.5851163 0.4715779 0.4102172 0.2644081
optnclust <- which(optclus == max(optclus)) # determine optimal number of clusters
groups <- cutree(cd, k=optnclust) # cut tree into optimal number of clusters
The optimal number of clusters is the cluster solution with the highest silhouette width. We cut the tree into the optimal number of clusters and plot the result.
groups <- cutree(cd, k=optnclust) # cut tree into optimal clusters
plot(cd, hang = -1, cex = .75) # plot result as dendrogram
rect.hclust(cd, k=optnclust, border="red") # draw red borders around clusters
In a next step, we aim to determine which factors are particularly important for the clustering - this step is comparable to measuring the effect size in inferential designs.
# which factors are particularly important
celtic <- clusts[c(1,2),]
others <- clusts[-c(1,2),]
# calculate column means
celtic.cm <- colMeans(celtic)
others.cm <- colMeans(others)
# calculate difference between celtic and other englishes
diff <- celtic.cm - others.cm
sort(diff, decreasing = F)
## youse clefts invartag tags wh_cleft dt nae_neg soitwas like nsr
## 1.616286 1.630785 1.652893 1.654820 1.658887 1.685330 1.688034 1.718544 1.746609 1.750442
plot(sort(diff), # y-values
1:length(diff), # x-values
type= "n", # plot type (empty)
cex.axis = .75, # axis font size
cex.lab = .75, # label font size
xlab ="Prototypical for Non-Celtic Varieties (Cluster 2) <-----> Prototypical for Celtic Varieties (Cluster 1)", # x-axis label
yaxt = "n", # no y-axis tick marks
ylab = "") # no y-axis label
text(sort(diff), 1:length(diff), names(sort(diff)), cex = .75) # plot text into plot
Outer <- clusts[c(6:8),] # data of outer circle varieties
Inner <- clusts[-c(6:8),] # data of inner circle varieties
Outer.cm <- colMeans(Outer) # column means for outer circle
Inner.cm <- colMeans(Inner) # column means for inner circle
diff <- Outer.cm - Inner.cm # difference between inner and outer circle
sort(diff, decreasing = F) # order difference between inner and outer circle
## youse wh_cleft nsr dt clefts like nae_neg invartag soitwas tags
## -0.9521714 -0.9108063 -0.8635592 -0.8492939 -0.7755615 -0.7630863 -0.7562530 -0.7520867 -0.6571798 -0.5829311
plot( # start plot
sort(diff), # y-values
1:length(diff), # x-values
type= "n", # plot type (empty)
cex.axis = .75, # axis font size
cex.lab = .75, # label font size
xlab ="Prototypical for Inner Circle Varieties (Cluster 2) <-----> Prototypical for Outer Circle Varieties (Cluster 1)", # x-axis label
yaxt = "n", # no y-axis tick marks
ylab = "") # no y-axis label
text(sort(diff), 1:length(diff), names(sort(diff)), cex = .75) # plot text into plot
We see that discourse like is typical for other varieties and that the use of youse as 2nd person plural pronoun and invariant tags are typical for Celtic Englishes.
We will now test whether the cluster solution is justified by validating it using bootstrapping.
res.pv <- pvclust(clus, # apply pvclust method to clus data
method.dist="euclidean", # use euclidean distance
method.hclust="ward.D2", # use ward.D2 linkage
nboot = 100) # use 100 bootstrap runs
## Bootstrap (r = 0.5)... Done.
## Bootstrap (r = 0.6)... Done.
## Bootstrap (r = 0.7)... Done.
## Bootstrap (r = 0.8)... Done.
## Bootstrap (r = 0.9)... Done.
## Bootstrap (r = 1.0)... Done.
## Bootstrap (r = 1.1)... Done.
## Bootstrap (r = 1.2)... Done.
## Bootstrap (r = 1.3)... Done.
## Bootstrap (r = 1.4)... Done.
The clustering provides approximately unbiased (AU) p-values and bootstrap probability (BP) values.
plot(res.pv, cex = .75)
pvrect(res.pv)
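In addition to highlighting significant clusters with pvrect, we can extract the clusters with high approximately unbiased p-values using the pvpick function from the pvclust package (a minimal sketch based on the res.pv object created above).
# extract clusters with AU p-values above the threshold (default alpha = 0.95)
pvclust::pvpick(res.pv, alpha = 0.95)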
We can also use other packages to customize the dendrograms.
plot(as.phylo(cd), # plot cluster object
cex = 0.75, # .75 font size
label.offset = .5) # .5 label offset
One useful customization is to display an unrooted rather than a rooted tree diagram.
# plot as unrooted tree
plot(as.phylo(cd), # plot cluster object
type = "unrooted", # plot as unrooted tree
cex = .75, # .75 font size
label.offset = 1) # 1 label offset
So far, all analyses were based on numeric data. However, especially when working with language data, the data is often nominal or categorical rather than numeric. The following will thus show how to implement a clustering method for nominal data.
In a first step, we will create a simple data set representing the presence and absence of features across varieties of English.
# generate data
IrishEnglish <- c(1,1,1,1,1,1,1,1,1,1)
ScottishEnglish <- c(1,1,1,1,1,1,1,1,1,1)
BritishEnglish <- c(0,1,1,1,0,0,1,0,1,1)
AustralianEnglish <- c(0,1,1,1,0,0,1,0,1,1)
NewZealandEnglish <- c(0,1,1,1,0,0,1,0,1,1)
AmericanEnglish <- c(0,1,1,1,0,0,0,0,1,0)
CanadianEnglish <- c(0,1,1,1,0,0,0,0,1,0)
JamaicanEnglish <- c(0,0,1,0,0,0,0,0,1,0)
PhillipineEnglish <- c(0,0,1,0,0,0,0,0,1,0)
IndianEnglish <- c(0,0,1,0,0,0,0,0,1,0)
clus <- data.frame(IrishEnglish, ScottishEnglish, BritishEnglish,
AustralianEnglish, NewZealandEnglish, AmericanEnglish,
CanadianEnglish, JamaicanEnglish, PhillipineEnglish, IndianEnglish)
# add row names
rownames(clus) <- c("nae_neg", "like", "clefts", "tags", "youse", "soitwas",
"dt", "nsr", "invartag", "wh_cleft")
# convert into factors
clus <- apply(clus, 1, function(x){
x <- as.factor(x) })
Variety | nae_neg | like | clefts | tags | youse | soitwas | dt | nsr | invartag | wh_cleft |
IrishEnglish | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
ScottishEnglish | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
BritishEnglish | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 1 |
AustralianEnglish | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 1 |
NewZealandEnglish | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 1 |
AmericanEnglish | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 |
CanadianEnglish | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 |
JamaicanEnglish | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
PhillipineEnglish | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
IndianEnglish | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
Now that we have our data, we will create a distance matrix but in contrast to previous methods, we will use a different distance measure that takes into account that we are dealing with nominal (or binary) data.
# clean data
clusts <- as.matrix(clus)
# create distance matrix
clustd <- dist(clusts, method = "binary") # create a distance object with binary (!) distance
Variety | IrishEnglish | ScottishEnglish | BritishEnglish | AustralianEnglish | NewZealandEnglish | AmericanEnglish | CanadianEnglish | JamaicanEnglish | PhillipineEnglish | IndianEnglish |
IrishEnglish | 0.0 | 0.0 | 0.40 | 0.40 | 0.40 | 0.60 | 0.60 | 0.80 | 0.80 | 0.80 |
ScottishEnglish | 0.0 | 0.0 | 0.40 | 0.40 | 0.40 | 0.60 | 0.60 | 0.80 | 0.80 | 0.80 |
BritishEnglish | 0.4 | 0.4 | 0.00 | 0.00 | 0.00 | 0.33 | 0.33 | 0.67 | 0.67 | 0.67 |
AustralianEnglish | 0.4 | 0.4 | 0.00 | 0.00 | 0.00 | 0.33 | 0.33 | 0.67 | 0.67 | 0.67 |
NewZealandEnglish | 0.4 | 0.4 | 0.00 | 0.00 | 0.00 | 0.33 | 0.33 | 0.67 | 0.67 | 0.67 |
AmericanEnglish | 0.6 | 0.6 | 0.33 | 0.33 | 0.33 | 0.00 | 0.00 | 0.50 | 0.50 | 0.50 |
CanadianEnglish | 0.6 | 0.6 | 0.33 | 0.33 | 0.33 | 0.00 | 0.00 | 0.50 | 0.50 | 0.50 |
JamaicanEnglish | 0.8 | 0.8 | 0.67 | 0.67 | 0.67 | 0.50 | 0.50 | 0.00 | 0.00 | 0.00 |
PhillipineEnglish | 0.8 | 0.8 | 0.67 | 0.67 | 0.67 | 0.50 | 0.50 | 0.00 | 0.00 | 0.00 |
IndianEnglish | 0.8 | 0.8 | 0.67 | 0.67 | 0.67 | 0.50 | 0.50 | 0.00 | 0.00 | 0.00 |
As before, we can now use hierarchical clustering to display the results as a dendrogram.
# create cluster object (ward.D2 linkage) : cluster in a way to achieve minimum variance
cd <- hclust(clustd, method="ward.D2")
# plot result as dendrogram
plot(cd, hang = -1) # display dendrogram
In a next step, we want to determine which features are particularly distinctive for one cluster (the “Celtic” cluster containing Irish and Scottish English).
# create factor with celtic varieties on one hand and other varieties on other
cluster <- as.factor(ifelse(as.character(rownames(clusts)) == "IrishEnglish", "1",
ifelse(as.character(rownames(clusts)) == "ScottishEnglish", "1", "0")))
# convert into data frame
clsts.df <- as.data.frame(clusts)
# determine significance
library(exact2x2)
pfish <- fisher.exact(table(cluster, clsts.df$youse))
pfish[[1]]
## [1] 0.02222222
# determine effect size
assocstats(table(cluster, clsts.df$youse))
## X^2 df P(> X^2)
## Likelihood Ratio 10.008 1 0.0015586
## Pearson 10.000 1 0.0015654
##
## Phi-Coefficient : 1
## Contingency Coeff.: 0.707
## Cramer's V : 1
assocstats(table(cluster, clsts.df$like))
## X^2 df P(> X^2)
## Likelihood Ratio 1.6323 1 0.20139
## Pearson 1.0714 1 0.30062
##
## Phi-Coefficient : 0.327
## Contingency Coeff.: 0.311
## Cramer's V : 0.327
Clustering is a highly complex topic and there are many more complexities to it. However, this should have helped you get started.
Correspondence analysis (CA) represents a multivariate statistical technique that provides a graphic method of exploring the relationship between variables in a contingency table. CA is conceptually similar to principal component analysis (PCA), but applies to categorical rather than continuous data.
CA consists of the following four steps: (1) computing the contingency table of observed frequencies, (2) computing the expected frequencies under independence, (3) computing the deviations (residuals) between observed and expected frequencies, and (4) mapping the row and column profiles onto a shared low-dimensional space.
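A minimal sketch on a made-up contingency table (illustrative values only) shows what observed frequencies, expected frequencies, and their deviations look like before they are mapped.
# a made-up contingency table (illustrative values only)
toy <- matrix(c(20, 5, 10, 15), nrow = 2, byrow = T,
dimnames = list(Amplifier = c("very", "really"),
Adjective = c("good", "nice")))
toy                         # observed frequencies
chisq.test(toy)$expected    # expected frequencies under independence
chisq.test(toy)$residuals   # deviations (Pearson residuals) that CA decomposes and maps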
In this tutorial, we investigate similarities among amplifiers based on their co-occurrences (word embeddings) with adjectives. Adjective amplifiers are elements such as those in 1. to 5.
The similarity among adjective amplifiers can then be used to find clusters or groups of amplifiers that behave similarly and are interchangeable. To elaborate, adjective amplifiers are interchangeable with some variants but not with others (consider 6. to 8.; the question mark signifies that the example is unlikely to be used or grammatically not acceptable by L1 speakers of English).
We start by loading the data and then displaying it. The data is called vsmdata, consists of 5,000 observations of adjectives, and contains two columns: one column with the adjectives (Adjective) and another column with the amplifiers (0 means that the adjective occurred without an amplifier).
# load data
vsmdata <- base::readRDS(url("https://slcladal.github.io/data/vsd.rda", "rb"))
Amplifier | Adjective |
0 | serious |
0 | sure |
so | many |
0 | many |
0 | good |
0 | much |
0 | good |
0 | good |
0 | last |
0 | nice |
For this tutorial, we will reduce the number of amplifiers and adjectives and thus simplify the data to render it easier to understand what is going on. To simplify the data, we remove all instances in which the adjective was not amplified (Amplifier is 0) as well as the adjectives many and much.
In addition, we collapse all amplifiers that occur 20 times or less into a bin category (other) and remove adjectives that occur 10 times or less.
# simplify data
vsmdata_simp <- vsmdata %>%
# remove non-amplifier adjectives
dplyr::filter(Amplifier != 0,
Adjective != "many",
Adjective != "much") %>%
# collapse infrequent amplifiers
dplyr::group_by(Amplifier) %>%
dplyr::mutate(AmpFreq = dplyr::n()) %>%
dplyr::ungroup() %>%
dplyr::mutate(Amplifier = ifelse(AmpFreq > 20, Amplifier, "other")) %>%
# collapse infrequent adjectives
dplyr::group_by(Adjective) %>%
dplyr::mutate(AdjFreq = dplyr::n()) %>%
dplyr::ungroup() %>%
dplyr::mutate(Adjective = ifelse(AdjFreq > 10, Adjective, "other")) %>%
dplyr::filter(Adjective != "other") %>%
dplyr::select(-AmpFreq, -AdjFreq)
Amplifier | Adjective |
very | good |
really | nice |
really | good |
really | bad |
very | nice |
really | nice |
very | hard |
other | good |
really | nice |
really | good |
We now use a balloon plot to see if there are any potential correlations between amplifiers and adjectives.
# 1. convert the data as a table
dt <- as.matrix(table(vsmdata_simp))
# 2. Graph
balloonplot(t(dt), main ="vsmdata_simp", xlab ="", ylab="",
label = FALSE, show.margins = FALSE)
The balloon plot suggests that there are potential correlations as the dots (balloons) are not distributed evenly according to frequency. To check whether there is a significant association between the amplifier types and the adjectives, we apply a \(\chi^2\)-test.
chisq <- chisq.test(dt)
chisq
##
## Pearson's Chi-squared test
##
## data: dt
## X-squared = 124.4, df = 40, p-value = 1.375e-10
The \(\chi^2\)-test confirms that there is a significant correlation between amplifier types and adjectives.
res.ca <- FactoMineR::CA(dt, graph = FALSE)
# inspect results of the CA
#print(res.ca)
eig.val <- get_eigenvalue(res.ca)
eig.val
## eigenvalue variance.percent cumulative.variance.percent
## Dim.1 0.24138007 49.868237 49.86824
## Dim.2 0.14687839 30.344536 80.21277
## Dim.3 0.06125177 12.654392 92.86716
## Dim.4 0.03452547 7.132836 100.00000
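The eigenvalues can also be visualized as a scree plot, for instance with the fviz_screeplot function from the factoextra package (a minimal sketch using the res.ca object created above).
# scree plot: percentage of variance explained by each dimension
fviz_screeplot(res.ca, addlabels = TRUE)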
The display of the eigenvalues provides information on the amount of variance that is explained by each dimension. The first dimension explains 49.87 percent of the variance, the second dimension explains another 30.34 percent of the variance, leaving the remaining dimensions with relatively moderate explanatory power as they only account for about 20 percent of the variance. We now plot and interpret the results of the CA.
# repel= TRUE to avoid text overlapping (slow if many point)
fviz_ca_biplot(res.ca,
repel = TRUE,
col.row = "orange",
col.col = "darkgray")
The results of the CA show that the adjective different collocates with other amplifiers, while very collocates with difficult and important, pretty collocates with big, really collocates with nice, and so collocates with bad.
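If you want to check which amplifiers and adjectives drive the first dimension, you could inspect their contributions, for instance with factoextra's fviz_contrib function (a sketch based on res.ca; in dt, the amplifiers form the rows and the adjectives the columns).
# contributions of amplifiers (rows) and adjectives (columns) to dimension 1
fviz_contrib(res.ca, choice = "row", axes = 1)
fviz_contrib(res.ca, choice = "col", axes = 1)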
Schweinberger, Martin. 2021. Cluster and Correspondence Analysis in R. Brisbane: The University of Queensland. url: https://slcladal.github.io/clust.html (Version 2021.10.02).
@manual{schweinberger2021clust,
author = {Schweinberger, Martin},
title = {Cluster and Correspondence Analysis in R},
note = {https://slcladal.github.io/clust.html},
year = {2021},
organization = {The University of Queensland, Australia. School of Languages and Cultures},
address = {Brisbane},
edition = {2021.10.02}
}
sessionInfo()
## R version 4.1.1 (2021-08-10)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 19043)
##
## Matrix products: default
##
## locale:
## [1] LC_COLLATE=German_Germany.1252 LC_CTYPE=German_Germany.1252 LC_MONETARY=German_Germany.1252
## [4] LC_NUMERIC=C LC_TIME=German_Germany.1252
##
## attached base packages:
## [1] grid stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] gplots_3.1.1 NbClust_3.0 exact2x2_1.6.5 exactci_1.4-2 testthat_3.0.4
## [6] ssanv_1.1 vcd_1.4-8 ape_5.5 pvclust_2.2-0 seriation_1.3.0
## [11] factoextra_1.0.7 cluster_2.1.2 cfa_0.10-0 gridExtra_2.3 fGarch_3042.83.2
## [16] fBasics_3042.89.1 timeSeries_3062.100 timeDate_3043.102 effectsize_0.4.5 lawstat_3.4
## [21] here_1.0.1 knitr_1.34 ggpubr_0.4.0 e1071_1.7-9 flextable_0.6.8
## [26] forcats_0.5.1 stringr_1.4.0 dplyr_1.0.7 purrr_0.3.4 readr_2.0.1
## [31] tidyr_1.1.3 tibble_3.1.4 ggplot2_3.3.5 tidyverse_1.3.1
##
## loaded via a namespace (and not attached):
## [1] readxl_1.3.1 uuid_0.1-4 backports_1.2.1 systemfonts_1.0.2 plyr_1.8.6
## [6] splines_4.1.1 TH.data_1.1-0 digest_0.6.27 foreach_1.5.1 htmltools_0.5.2
## [11] fansi_0.5.0 magrittr_2.0.1 tzdb_0.1.2 openxlsx_4.2.4 modelr_0.1.8
## [16] Kendall_2.2 officer_0.4.0 sandwich_3.0-1 colorspace_2.0-2 rvest_1.0.1
## [21] ggrepel_0.9.1 haven_2.4.3 rbibutils_2.2.3 xfun_0.26 crayon_1.4.1
## [26] jsonlite_1.7.2 survival_3.2-11 zoo_1.8-9 iterators_1.0.13 glue_1.4.2
## [31] registry_0.5-1 gtable_0.3.0 emmeans_1.6.3 car_3.0-11 abind_1.4-5
## [36] scales_1.1.1 mvtnorm_1.1-2 DBI_1.1.1 rstatix_0.7.0 Rcpp_1.0.7
## [41] xtable_1.8-4 klippy_0.0.0.9500 flashClust_1.01-2 foreign_0.8-81 proxy_0.4-26
## [46] DT_0.19 htmlwidgets_1.5.4 datawizard_0.2.0.1 httr_1.4.2 ellipsis_0.3.2
## [51] spatial_7.3-14 pkgconfig_2.0.3 farver_2.1.0 dbplyr_2.1.1 utf8_1.2.2
## [56] reshape2_1.4.4 tidyselect_1.1.1 labeling_0.4.2 rlang_0.4.11 munsell_0.5.0
## [61] cellranger_1.1.0 tools_4.1.1 cli_3.0.1 generics_0.1.0 broom_0.7.9
## [66] evaluate_0.14 fastmap_1.1.0 yaml_2.2.1 fs_1.5.0 zip_2.2.0
## [71] caTools_1.18.2 nlme_3.1-152 leaps_3.1 xml2_1.3.2 compiler_4.1.1
## [76] rstudioapi_0.13 curl_4.3.2 ggsignif_0.6.3 reprex_2.0.1.9000 stringi_1.7.4
## [81] highr_0.9 parameters_0.14.0 gdtools_0.2.3 lattice_0.20-44 Matrix_1.3-4
## [86] vctrs_0.3.8 pillar_1.6.3 lifecycle_1.0.1 Rdpack_2.1.2 lmtest_0.9-38
## [91] estimability_1.3 bitops_1.0-7 data.table_1.14.0 cowplot_1.1.1 insight_0.14.4
## [96] R6_2.5.1 TSP_1.1-10 KernSmooth_2.23-20 rio_0.5.27 codetools_0.2-18
## [101] gtools_3.9.2 boot_1.3-28 MASS_7.3-54 assertthat_0.2.1 rprojroot_2.0.2
## [106] withr_2.4.2 multcomp_1.4-17 mgcv_1.8-36 bayestestR_0.11.0 parallel_4.1.1
## [111] hms_1.1.0 coda_0.19-4 class_7.3-19 rmarkdown_2.5 carData_3.0-4
## [116] scatterplot3d_0.3-41 lubridate_1.7.10 base64enc_0.1-3 FactoMineR_2.4
Aggarwal, Charu C. 2015. Data Mining: The Textbook. Springer.
Blashfield, Roger K, and Mark S Aldenderfer. 1988. “The Methods and Problems of Cluster Analysis.” In Handbook of Multivariate Experimental Psychology, 447–73. Springer.
Kassambara, Alboukadel. 2017. Practical Guide to Cluster Analysis in R: Unsupervised Machine Learning. Vol. 1. Sthda.
Kettenring, Jon R. 2006. “The Practice of Cluster Analysis.” Journal of Classification 23 (1): 3–30.
King, Ronald S. 2015. Cluster Analysis and Data Mining: An Introduction. Stylus Publishing, LLC.
Levshina, Natalia. 2015. How to Do Linguistics with R: Data Exploration and Statistical Analysis. Amsterdam: John Benjamins Publishing Company.
Romesburg, Charles. 2004. Cluster Analysis for Researchers. Lulu Press.