Introduction

This tutorial introduces Text Similarity (see Zahrotun 2016; Li and Han 2013), i.e. the degree to which two pieces of text are alike, either with respect to their use of words or characters (lexical similarity) or with respect to their meaning (semantic similarity). The entire code for the sections below can be downloaded here.

Lexical Similarity provides a measure of the similarity of two texts based on the intersection of their word sets, whether the texts are in the same or in different languages. A lexical similarity of 1 indicates complete overlap between the vocabularies, while a score of 0 indicates that the two texts share no words at all. There are several ways of evaluating lexical similarity, such as the Jaccard similarity, the cosine similarity, or the Levenshtein distance.

Semantic Similarity, on the other hand, measures how similar two texts are in terms of their meaning rather than their surface form. Semantic similarity is highly useful for summarizing texts and for extracting key attributes from large documents or document collections. It can be evaluated using methods such as Latent Semantic Analysis (LSA), the Normalised Google Distance (NGD), or Salient Semantic Analysis (SSA).

As part of this tutorial we will focus primarily on Lexical Similarity. We begin with a brief overview of the relevant concepts and then show how the different measures can be implemented in R.

Jaccard Similarity

The Jaccard similarity is defined as the size of the intersection of the word sets of two texts divided by the size of their union. In other words, it is the number of shared words divided by the total number of unique words in the two texts or documents. The Jaccard similarity of two documents ranges from 0 to 1, where 0 signifies no similarity and 1 signifies complete overlap. The mathematical representation of the Jaccard similarity is shown below:

\[\begin{equation} J(A,B) = \frac{|A \bigcap B|}{|A \bigcup B |} = \frac{|A \bigcap B|}{|A| + |B| - |A \bigcap B|} \end{equation}\]
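
To make the definition concrete, the following minimal base-R sketch (the helper function jaccard_sim is ours, purely for illustration, not part of any package used below) computes the word-level Jaccard similarity of two texts directly from the formula:

# a minimal base-R sketch of word-level Jaccard similarity
jaccard_sim <- function(x, y) {
  A <- unique(unlist(strsplit(tolower(x), "\\s+"))) # word set of text 1
  B <- unique(unlist(strsplit(tolower(y), "\\s+"))) # word set of text 2
  length(intersect(A, B)) / length(union(A, B))
}
jaccard_sim("The quick brown fox jumped over the wall",
            "The fast brown fox leaped over the wall")
# 5 shared words out of 9 unique words overall, i.e. 5/9 ≈ 0.56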

Cosine Similarity

In the case of cosine similarity, the two documents are represented as vectors in an n-dimensional vector space, with each dimension corresponding to a word. The cosine similarity metric then measures the cosine of the angle between these two vectors. For word-count vectors, which have no negative components, the cosine similarity ranges from 0 to 1: a value close to 0 indicates little similarity, whereas a value close to 1 indicates high similarity. The mathematical representation of the cosine similarity is shown below:

\[\begin{equation} similarity = cos(\theta) = \frac{A \cdot B}{||A|| ||B||} = \frac{\sum_{i=1}^{n} A_{i} B_{i}}{\sqrt{\sum_{i=1}^{n} A_{i}^{2}} \sqrt{\sum_{i=1}^{n} B_{i}^{2}}} \end{equation}\]
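
To illustrate the formula, the following base-R sketch (the helper function cosine_sim is again ours, purely for illustration) turns each text into a term-frequency vector over the joint vocabulary and applies the equation above:

# a minimal base-R sketch of cosine similarity over term-frequency vectors
cosine_sim <- function(x, y) {
  tf_x <- table(strsplit(tolower(x), "\\s+")[[1]]) # term frequencies of text 1
  tf_y <- table(strsplit(tolower(y), "\\s+")[[1]]) # term frequencies of text 2
  vocab <- union(names(tf_x), names(tf_y))         # joint vocabulary
  A <- as.numeric(tf_x[vocab]); A[is.na(A)] <- 0   # align vectors, 0 for absent words
  B <- as.numeric(tf_y[vocab]); B[is.na(B)] <- 0
  sum(A * B) / (sqrt(sum(A^2)) * sqrt(sum(B^2)))
}
cosine_sim("The quick brown fox jumped over the wall",
           "The fast brown fox leaped over the wall")
# dot product 8 over sqrt(10) * sqrt(10), i.e. 0.8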

Levenshtein Distance

Levenshtein distance comparison is generally carried out between two words. It determines the minimum number of single-character edits required to change one word into the other; the higher the number of edits, the more different the two words are. An edit is either the insertion of a character, the deletion of a character, or the replacement of a character. For two words a and b, the Levenshtein distance between the first i characters of a and the first j characters of b is defined as follows:

\[\begin{equation} lev_{a,b}(i,j) = \begin{cases} \max(i,j) & \quad \text{if } \min(i,j) = 0,\\ \min \begin{cases} lev_{a,b}(i-1,j)+1 \\ lev_{a,b}(i, j-1)+1 \\ lev_{a,b}(i-1,j-1)+1_{(a_{i} \neq b_{j})} \end{cases} & \quad \text{otherwise.} \end{cases} \end{equation}\]
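
The recursion above translates directly into a small dynamic-programming function. The following base-R sketch (again only for illustration; below we use the stringdist package instead) fills a matrix d in which cell (i+1, j+1) holds the distance between the first i characters of a and the first j characters of b:

# a minimal base-R sketch of the Levenshtein recurrence
lev <- function(a, b) {
  a <- strsplit(a, "")[[1]]
  b <- strsplit(b, "")[[1]]
  d <- matrix(0, length(a) + 1, length(b) + 1)
  d[, 1] <- 0:length(a) # deleting all characters of a
  d[1, ] <- 0:length(b) # inserting all characters of b
  for (i in seq_along(a)) {
    for (j in seq_along(b)) {
      d[i + 1, j + 1] <- min(d[i, j + 1] + 1,          # deletion
                             d[i + 1, j] + 1,          # insertion
                             d[i, j] + (a[i] != b[j])) # substitution
    }
  }
  d[length(a) + 1, length(b) + 1]
}
lev("Marta", "Martha")
# 1 (a single insertion of "h")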

Preparation and session set up

This tutorial is based on R. If you have not installed R or are new to it, you will find an introduction to R and more information on how to use it here. For this tutorial, we need to install certain packages so that the scripts shown below run without errors. Before turning to the code, please install the packages by running the code below this paragraph. If you have already installed the packages mentioned below, you can skip this step. To install the necessary packages, simply run the following code; installing all of the packages may take between 1 and 5 minutes, so do not worry if it takes some time.

# set options
options(stringsAsFactors = F)
# install libraries
install.packages("stringdist")
install.packages("hashr")
install.packages("tidyverse")
install.packages("flextable")
# install remotes (required to install klippy from GitHub)
install.packages("remotes")
# install klippy for copy-to-clipboard button in code chunks
remotes::install_github("rlesur/klippy")

Now that we have installed the packages, we activate them as shown below.

# set options
options(stringsAsFactors = F)          # no automatic data transformation
options("scipen" = 100, "digits" = 12) # suppress math annotation
# activate packages
library(stringdist)
library(hashr)
library(tidyverse)
library(flextable)
# activate klippy for copy-to-clipboard button
klippy::klippy()

Once you have installed R and RStudio and initiated the session by executing the code shown above, you are good to go.

Measuring Similarity in R

For evaluating the similarity scores and the edit distances of the methods discussed above, we have installed the stringdist package and will primarily be using three of its functions: stringdist, stringsim, and seq_sim. We also use the hashr package so that the Jaccard and cosine similarities are evaluated word-wise instead of letter-wise: each sentence is tokenised and the resulting list of words is hashed, so that every sentence is transformed into a sequence of integers. For the Jaccard and the cosine similarity we will use the same pair of texts, whereas for the Levenshtein edit distance we will take 3 pairs of words to illustrate the insert, delete, and replace operations.

text1 = "The quick brown fox jumped over the wall"
text2 = "The fast brown fox leaped over the wall"
insert_ex = c("Marta","Martha")
del_ex = c("Genome","Gnome")
rep_ex = c("Tim","Tom")

Jaccard Similarity

# Using the seq_sim function along with the hash function to calculate the Jaccard similarity word-wise
# (q = 2 compares the texts in terms of pairs of adjacent words, i.e. word bigrams;
#  the related seq_dist function would return the corresponding distance, 1 - similarity)
jac_sim_score = seq_sim(hash(strsplit(text1, "\\s+")), hash(strsplit(text2, "\\s+")), method = "jaccard", q = 2)
print(paste0("The Jaccard similarity for the two texts is ",jac_sim_score))
## [1] "The Jaccard similarity for the two texts is 0.272727272727273"

Cosine Similarity

# Using the seq_sim function along with the hash function to calculate the cosine similarity word-wise
cos_sim_score = seq_sim(hash(strsplit(text1, "\\s+")), hash(strsplit(text2, "\\s+")), method = "cosine", q = 2)
print(paste0("The Cosine similarity for the two texts is ",cos_sim_score))
## [1] "The Cosine similarity for the two texts is 0.428571428571428"

Levenshtein distance

# Insert edit
ins_edit = stringdist(insert_ex[1],insert_ex[2],method = "lv")
print(paste0("The insert edit distance for ",insert_ex[1]," and ",insert_ex[2]," is ",ins_edit))
## [1] "The insert edit distance for Marta and Martha is 1"
# Delete edit
del_edit = stringdist(del_ex[1],del_ex[2],method = "lv")
print(paste0("The delete edit distance for ",del_ex[1]," and ",del_ex[2]," is ",del_edit))
## [1] "The delete edit distance for Genome and Gnome is 1"
# Replace edit
rep_edit = stringdist(rep_ex[1],rep_ex[2],method = "lv")
print(paste0("The replace edit distance for ",rep_ex[1]," and ",rep_ex[2]," is ",rep_edit))
## [1] "The replace edit distance for Tim and Tom is 1"

Concluding remarks

As shown above, the Jaccard and cosine similarity scores differ, which is important to keep in mind when choosing a measure of similarity. The differences arise primarily because the Jaccard similarity only considers whether the two texts share word (bigram) types, whereas the cosine similarity also takes the frequency of the terms, and thus the magnitude of the vectors, into consideration. For the Levenshtein edit distance, the examples above show that in the first case we have to insert an extra h, in the second we have to delete an e, and in the last case we need to replace an i with an o. Thus, for all pairs considered here the edit distance is 1.

Citation & Session Info

Majumdar, Dattatreya. 2021. Lexical Text Similarity using R. Brisbane: The University of Queensland. url: https://slcladal.github.io/lexsim.html (Version 2021.10.02).

@manual{Majumdar2021ta,
  author = {Majumdar, Dattatreya},
  title = {Lexical Text Similarity using R},
  note = {https://slcladal.github.io/lexsim.html},
  year = {2021},
  organization = {The University of Queensland, Australia. School of Languages and Cultures},
  address = {Brisbane},
  edition = {2021.10.02}
}
sessionInfo()
## R version 4.1.1 (2021-08-10)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 19043)
## 
## Matrix products: default
## 
## locale:
## [1] LC_COLLATE=German_Germany.1252  LC_CTYPE=German_Germany.1252    LC_MONETARY=German_Germany.1252
## [4] LC_NUMERIC=C                    LC_TIME=German_Germany.1252    
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] hashr_0.1.4          stringdist_0.9.8     koRpus.lang.de_0.1-2 cluster_2.1.2        tm_0.7-8            
##  [6] NLP_0.2-1            coop_0.6-3           hunspell_3.0.1       koRpus.lang.en_0.1-4 koRpus_0.13-8       
## [11] sylly_0.1-6          textdata_0.4.1       tidytext_0.3.1       plyr_1.8.6           flextable_0.6.8     
## [16] forcats_0.5.1        stringr_1.4.0        dplyr_1.0.7          purrr_0.3.4          readr_2.0.1         
## [21] tidyr_1.1.3          tibble_3.1.4         ggplot2_3.3.5        tidyverse_1.3.1      gutenbergr_0.2.1    
## [26] quanteda_3.1.0      
## 
## loaded via a namespace (and not attached):
##  [1] fs_1.5.0           lubridate_1.7.10   bit64_4.0.5        httr_1.4.2         rprojroot_2.0.2    SnowballC_0.7.0   
##  [7] tools_4.1.1        backports_1.2.1    utf8_1.2.2         R6_2.5.1           DBI_1.1.1          lazyeval_0.2.2    
## [13] colorspace_2.0-2   withr_2.4.2        tidyselect_1.1.1   bit_4.0.4          compiler_4.1.1     cli_3.0.1         
## [19] rvest_1.0.1        xml2_1.3.2         officer_0.4.0      slam_0.1-48        scales_1.1.1       rappdirs_0.3.3    
## [25] systemfonts_1.0.2  digest_0.6.27      rmarkdown_2.5      base64enc_0.1-3    pkgconfig_2.0.3    htmltools_0.5.2   
## [31] dbplyr_2.1.1       fastmap_1.1.0      highr_0.9          rlang_0.4.11       readxl_1.3.1       rstudioapi_0.13   
## [37] generics_0.1.0     jsonlite_1.7.2     vroom_1.5.5        zip_2.2.0          tokenizers_0.2.1   magrittr_2.0.1    
## [43] Matrix_1.3-4       Rcpp_1.0.7         munsell_0.5.0      fansi_0.5.0        gdtools_0.2.3      lifecycle_1.0.1   
## [49] stringi_1.7.4      yaml_2.2.1         grid_4.1.1         parallel_4.1.1     crayon_1.4.1       lattice_0.20-44   
## [55] haven_2.4.3        hms_1.1.0          knitr_1.34         klippy_0.0.0.9500  pillar_1.6.3       uuid_0.1-4        
## [61] stopwords_2.2      fastmatch_1.1-3    reprex_2.0.1.9000  glue_1.4.2         evaluate_0.14      data.table_1.14.0 
## [67] RcppParallel_5.1.4 modelr_0.1.8       vctrs_0.3.8        tzdb_0.1.2         cellranger_1.1.0   gtable_0.3.0      
## [73] assertthat_0.2.1   xfun_0.26          sylly.en_0.1-3     broom_0.7.9        janeaustenr_0.1.5  sylly.de_0.1-2    
## [79] ellipsis_0.3.2     here_1.0.1



References

Li, Baoli, and Liping Han. 2013. “Distance Weighted Cosine Similarity Measure for Text Classification.” In International Conference on Intelligent Data Engineering and Automated Learning, 611–18. Springer.

Zahrotun, Lisna. 2016. “Comparison Jaccard Similarity, Cosine Similarity and Combined Both of the Data Clustering with Shared Nearest Neighbor Method.” Computer Engineering and Applications Journal 5 (1): 11–18.