The package FCPS provides standardized access to state-of-the-art clustering algorithms, datasets defining common clustering challenges, and methods for estimating the number of clusters. Additionally, the cluster tendency (clusterability) can be investigated and an appropriate accuracy can be computed for arbitrary labels.
The installation guide can be found in the README.md file.
In the following example, the high-dimensional leukemia dataset is loaded and visualized:
library(FCPS)
data("Leukemia")
Data=Leukemia$DistanceMatrix
Cls=Leukemia$Cls
ClusterPlotMDS(Data,Cls,main = 'Leukemia',Plotter3D = 'plotly')
For datasets with a dimensionality higher than three, the function ClusterPlotMDS used for the figure above provides a 3D projection computed with multidimensional scaling from the R package smacof on CRAN. The user can decide whether the rgl package or the plotly package should be used.
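For instance, switching the Plotter3D argument selects the other backend; a minimal sketch reusing the objects from the code above (assuming 'rgl' is the corresponding argument value):

# Same projection as above, rendered with the rgl package instead of plotly
ClusterPlotMDS(Data, Cls, main = 'Leukemia', Plotter3D = 'rgl')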
In the following code, a high-dimensional dataset is loaded. The leukemia dataset provides a distance matrix instead of a data matrix. Without further adjustments, the function AgglomerativeNestingClustering can be called with the correct number of clusters. The resulting clustering is stored in the list element Cls, which is a named vector. The names are defined by the rows of the distance or data matrix. In the next step, the user can rename the clusters to consecutive numbers from 1 to 6, with 1 being the label of the largest cluster and 6 being the label of the smallest cluster. The names will still match all row names of the data or distance matrix. Besides the Cls element, the output list CA stores the original object of the clustering. In the case of hierarchical algorithms, another list element stores the dendrogram, which can be visualized with ClusterDendrogram, as shown in the next section.
library(FCPS)
data('Leukemia')
set.seed(123)
ClusterNo=length(unique(Leukemia$Cls))
CA=AgglomerativeNestingClustering(Leukemia$DistanceMatrix,ClusterNo)
Cls=ClusterRenameDescendingSize(CA$Cls)
sum(match(names(Cls),rownames(Leukemia$DistanceMatrix),nomatch = 0)==0)
#> [1] 0
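For hierarchical algorithms, the stored dendrogram can then be plotted directly. A minimal sketch, assuming the list element is named Dendrogram as described above (the function ClusterDendrogram itself is shown in the next section):

# Assumption: CA$Dendrogram holds the dendrogram of the AGNES clustering
ClusterDendrogram(CA$Dendrogram, ClusterNo, main = 'AGNES on Leukemia')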
Any clustering challenge listed in Table 2 can be generated with an arbitrary sample size. Here, Chainlink is selected and visualized in the figure below.
set.seed(600)
library(FCPS)
DataList=ClusterChallenge("Chainlink",SampleSize = 750)
Data=DataList$Chainlink
Cls=DataList$Cls
ClusterPlotMDS(Data,Cls,Plotter3D = 'plotly',main = "Chainlink")
Remark: ClusterPlotMDS detects that the dataset has only three dimensions and, instead of projecting the data, visualizes the three given dimensions directly.
The cluster tendency, or so-called clusterability, can be investigated using ggplot2 syntax, as follows for the example of Chainlink:
set.seed(600)
library(FCPS)
DataList=ClusterChallenge("Chainlink",SampleSize = 750)
Data=DataList$Chainlink
Cls=DataList$Cls
library(ggplot2)
ClusterabilityMDplot(Data)+theme_bw()
#> [1] "This MD-plot is typically for several features at once. By calling as.matrix(), it will be now used with one feature."
The figure presents the result for the sample of Chainlink. The MD plot shows multimodality, and the statistical testing agrees with the MD plot (the hypothesis that the data has no cluster tendency is rejected with p < 0.01). Therefore, the sample has a high cluster tendency.
Let us assume that there is no prior knowledge about the Chainlink data available and that the hierarchical algorithm single linkage is selected. Looking at the dendrogram, the highest change in fusion level occurs at two clusters. However, maybe each of the main clusters has two subclusters, as presented in Figure 3. Therefore, the function ClusterNoEstimation is used to investigate this assumption. Figure 4 presents the fan plot of the number of indicators preferring a specific number of clusters for the sample of the Chainlink dataset. The majority vote proposes the cluster number 7 or 3, with the correct cluster number of 2 in second place. The appropriate number of clusters would be two, because neither 7 nor 3 is present in the dendrogram. The following code uses a numerical data matrix for the hierarchical clustering algorithm. If not set otherwise, Euclidean distances are internally computed with the parallelDist package and used. Furthermore, the fastcluster package is used to compute the tree. The dendextend package allows the user to color the branches in a user-specific way if ClusterDendrogram is used. The function ClusterNoEstimation expects a matrix of clusterings, each column containing one "Cls", ordered in the range of cluster numbers of interest. In this example, the range from 2 to 7 is investigated.
library(FCPS)
set.seed(135)
DataList=ClusterChallenge("Chainlink",SampleSize = 900)
Data=DataList$Chainlink
Cls=DataList$Cls
Tree=HierarchicalClustering(Data,1,"SingleL")[[3]]
ClusterDendrogram(Tree,4,main='Single Linkage')
MaximumNumber=7
clsm <- matrix(data = 0, nrow = dim(Data)[1], ncol = MaximumNumber)
for (i in 2:(MaximumNumber+1)) {
  clsm[,i-1] <- cutree(Tree,i)
}
out=ClusterNoEstimation(Data,ClsMatrix = clsm, MaxClusterNo = MaximumNumber,PlotIt = TRUE)
Usually, clustering accuracy either can be computed correctly only for binary classifications or is computed per cluster, as shown below. The latter does not allow for a straightforward comparison between algorithms. Often, a simple approach of computing the overall accuracy is provided in packages, for example, in MLmetrics. The following code outlines why the overall accuracy is not correct if computed in this straightforward way. The solution is provided by the function ClusterAccuracy, which calculates the correct accuracy of a clustering algorithm:
library(FCPS)
data("Leukemia")
Distance=Leukemia$DistanceMatrix
Classification=Leukemia$Cls
Cls=HierarchicalClustering(Distance,6,"SingleL")$Cls
#Usual Computation Accuracy per Class
cm=as.matrix(table(Cls,Classification))
diag(cm)/rowSums(cm)
#> 1 2 3 4 5 6
#> 0.06666667 0.00000000 1.00000000 1.00000000 1.00000000 1.00000000
# Usual overall Accuracy
sum(diag(cm)) / sum(cm)
#> [1] 0.7797834
#e.g.
#MLmetrics::Accuracy(Cls,Classification)
#Correct Computation
ClusterAccuracy(Cls,Classification)
#> [1] 0.9963899
cm
#> Classification
#> Cls 1 2 3 4 5 6
#> 1 1 14 0 0 0 0
#> 2 0 0 108 0 0 0
#> 3 0 0 1 0 0 0
#> 4 0 0 0 266 0 0
#> 5 0 0 0 0 163 0
#> 6 0 0 0 0 0 1
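The confusion matrix above shows why the straightforward overall accuracy fails: cluster labels are arbitrary, so cluster 1 actually corresponds to class 2, and the diagonal is only meaningful after the clusters have been matched to the classes. The following sketch illustrates this idea by searching all label permutations; it is a conceptual illustration only, not the implementation of ClusterAccuracy, and the helper BestPermutationAccuracy is hypothetical:

# Conceptual sketch: evaluate the accuracy for every possible assignment
# of cluster labels to class labels and keep the best one.
# Feasible here because there are only 6 clusters (6! = 720 permutations).
library(combinat)  # assumption: combinat is installed for permn()
BestPermutationAccuracy <- function(Cls, Classification) {
  Labels <- sort(unique(Cls))
  Accuracies <- sapply(combinat::permn(Labels), function(Perm) {
    Relabeled <- Perm[match(Cls, Labels)]  # rename clusters by this permutation
    mean(Relabeled == Classification)      # overall accuracy after renaming
  })
  max(Accuracies)
}
BestPermutationAccuracy(Cls, Classification)

For this example, the best permutation maps cluster 1 to class 2 and yields 552/554, which agrees with the value reported by ClusterAccuracy above.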
This section outlines further functionalities, focusing on the possibilities in cases for which the definitions in Table 3 are not met.
The user can transform factors to numerical vectors using the function ClusterCreateClassification and perform simple cluster-based evaluations per column of data, or use the created Cls otherwise. In the example below, the mean per cluster and feature is computed with ClusterApply:
library(datasets)
library(FCPS)
Iris=datasets::iris
Data=as.matrix(Iris[,1:4])
SomeFactors=Iris$Species
V=ClusterCreateClassification(SomeFactors)
Cls=V$Cls
V$ClusterNames
#> 1 2 3
#> "setosa" "versicolor" "virginica"
ClusterApply(Data,mean,Cls)
#> $UniqueClusters
#> [1] "1" "2" "3"
#>
#> $meanPerCluster
#> Sepal.Length Sepal.Width Petal.Length Petal.Width
#> 1 5.006 3.428 1.462 0.246
#> 2 5.936 2.770 4.260 1.326
#> 3 6.588 2.974 5.552 2.026
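Other per-cluster statistics follow the same pattern; only the applied function changes. For example, the standard deviation per cluster and feature, reusing Data and Cls from the code above:

# Standard deviation per cluster and feature, analogous to the mean above
ClusterApply(Data, sd, Cls)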
The same computations are possible for distance matrices. The function ClusterApply can also be used to transform distances to data. As an example, the tetragonula dataset is loaded from the prabclus package. The dataset consists of a data frame with 236 cases and 13 string features; a brief overview of the data is given in the prabclus documentation. The computed distance matrix can be used in this package directly or, via MDS transformation, as a numerical matrix:
suppressPackageStartupMessages(library('prabclus',quietly = TRUE))
data(tetragonula)
#Generated Specific Distance Matrix
ta <- alleleconvert(strmatrix=as.matrix(tetragonula[1:236,]))
tai <- alleleinit(allelematrix=ta,distance="none")
Distances=alleledist((unbuild.charmatrix(tai$charmatrix,236,13)),236,13)
Cls=rep(1,nrow(Distances))
DataTrans=ClusterApply(Distances,identity,Cls)$identityPerCluster
dim(DataTrans)
#> [1] 236 20
dim(Distances)
#> [1] 236 236
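Analogous to the Leukemia example above, the distance matrix can also be passed directly to the clustering algorithms of FCPS. A minimal sketch; the cluster number 9 is an illustrative assumption, not a validated choice for this dataset:

# Hedged sketch: cluster the allele distances directly with single linkage;
# the cluster number 9 is an arbitrary choice for illustration
Cls2 = HierarchicalClustering(Distances, 9, "SingleL")$Cls
table(Cls2)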
Fifty-four conventional clustering algorithms are provided in the R package FCPS on CRAN with consistent input and output. This enables the user to try out many algorithms swiftly. Additionally, 26 statistical approaches for the estimation of the number of clusters, as well as the mirrored density plot (MD-plot) of clusterability, are provided. Moreover, the fundamental clustering problems suite (FCPS) offers a variety of clustering challenges that any algorithm should be able to handle when facing real-world data.