Performance Evaluation of Some Clustering Algorithms and Validity Indices

Ujjwal Maulik, Member, IEEE, and Sanghamitra Bandyopadhyay, Member, IEEE

U. Maulik is with the Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019. E-mail: maulik@cse.uta.edu. S. Bandyopadhyay is with the Machine Intelligence Unit, Indian Statistical Institute, 203 B.T. Road, Calcutta 700108, India. E-mail: sanghami@isical.ac.in. Manuscript received 27 June 2001; revised 8 Jan. 2002; accepted 9 May 2002. Recommended for acceptance by V. Govindaraju.

Abstract—In this article, we evaluate the performance of three clustering algorithms, hard K-Means, single linkage, and a simulated annealing (SA) based technique, in conjunction with four cluster validity indices, namely the Davies-Bouldin index, Dunn's index, the Calinski-Harabasz index, and a recently developed index I. Based on a relation between the index I and Dunn's index, a lower bound on the value of the former is theoretically estimated in order to get a unique hard K-partition when the data set has distinct substructures. The effectiveness of the different validity indices and clustering methods in automatically evolving the appropriate number of clusters is demonstrated experimentally for both artificial and real-life data sets, with the number of clusters varying from two to ten. Once the appropriate number of clusters is determined, the SA-based clustering technique is used for proper partitioning of the data into the said number of clusters.

Index Terms—Unsupervised classification, Euclidean distance, K-Means algorithm, single linkage algorithm, validity index, simulated annealing.


1 INTRODUCTION

The purpose of any clustering technique [1], [2], [3], [4], [5] is to evolve a $K \times n$ partition matrix $U(X)$ of a data set $X$ ($X = \{x_1, x_2, \ldots, x_n\}$) in $\mathbb{R}^N$, representing its partitioning into a number, say $K$, of clusters ($C_1, C_2, \ldots, C_K$). The partition matrix $U(X)$ may be represented as $U = [u_{kj}]$, $k = 1, \ldots, K$ and $j = 1, \ldots, n$, where $u_{kj}$ is the membership of pattern $x_j$ to cluster $C_k$. In crisp partitioning of the data, the following condition holds: $u_{kj} = 1$ if $x_j \in C_k$; otherwise, $u_{kj} = 0$. Clustering techniques broadly fall into two classes, partitional and hierarchical. K-Means and single linkage [1], [2] are widely used techniques in the domains of partitional and hierarchical clustering, respectively.

The two fundamental questions that need to be addressed in any typical clustering system are: 1) how many clusters are actually present in the data and 2) how real or good is the clustering itself. That is, whatever the clustering method may be, one has to determine the number of clusters and also the goodness or validity of the clusters formed [6]. The measure of validity of the clusters should be such that it is able to impose an ordering of the clusters in terms of their goodness. In other words, if $U_1, U_2, \ldots, U_m$ are $m$ partitions of $X$ and the corresponding values of a validity measure are $V_1, V_2, \ldots, V_m$, then $V_{k_1} \geq V_{k_2} \geq \ldots \geq V_{k_m}$ will indicate that $U_{k_1} \succeq U_{k_2} \succeq \ldots \succeq U_{k_m}$, for some permutation $k_1, k_2, \ldots, k_m$ of $\{1, 2, \ldots, m\}$. Here, "$U_i \succeq U_j$" indicates that partition $U_i$ is a better clustering than $U_j$.

Milligan and Cooper [6] have provided a comparison of several validity indices for data sets containing distinct nonoverlapping clusters while using only hierarchical clustering algorithms. Meilă and Heckerman provide a comparison of some clustering methods and initialization strategies in [7]. Some more clustering algorithms may be found in [8], [9]. In this paper, we aim to evaluate the performance of four validity indices, namely, the Davies-Bouldin index [10], Dunn's index [11], the Calinski-Harabasz index [12], and a recently developed index I, in conjunction with three clustering algorithms, viz., the well-known K-Means and single linkage algorithms [1], [2], as well as a recently developed simulated annealing (SA) [13], [14] based clustering scheme. The number of clusters is varied from $K_{min}$ to $K_{max}$ for the K-Means and simulated annealing based clustering algorithms, while, for the single linkage algorithm (which incorporates automatic variation of the number of clusters), the partitions in this range are considered. As a result, all three clustering algorithms will yield $(K_{max} - K_{min} + 1)$ partitions, $U^*_{K_{min}}, U^*_{K_{min}+1}, \ldots, U^*_{K_{max}}$, with the corresponding validity index values computed as $V_{K_{min}}, V_{K_{min}+1}, \ldots, V_{K_{max}}$. Let $K^* = \arg\max_{i = K_{min}, \ldots, K_{max}} [V_i]$. Therefore, according to index $V$, $K^*$ is the correct number of clusters present in the data. The corresponding $U^*_{K^*}$ may be obtained by using a suitable clustering technique with the number of clusters set to $K^*$. The tuple $(K^*, U^*_{K^*})$ is presented as the solution to the clustering problem.
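In code, the scheme above reduces to a simple model-selection loop: generate a partition for every candidate $K$, score it with a validity index, and report the argmax. The Python sketch below is illustrative only; `cluster_fn` and `index_fn` are hypothetical stand-ins for any of the clustering algorithms of Section 2 and validity indices of Section 3, not names from the paper.

```python
import numpy as np

def select_num_clusters(X, cluster_fn, index_fn, k_min=2, k_max=None):
    """Pick K* = argmax over K of index_fn on the partitions from cluster_fn.

    cluster_fn(X, k) -> hard label array of shape (n,);
    index_fn(X, labels) -> float, larger meaning better.
    """
    k_max = k_max or int(np.sqrt(len(X)))          # the paper's choice of K_max
    scored = {}
    for k in range(k_min, k_max + 1):
        labels = cluster_fn(X, k)
        scored[k] = (index_fn(X, labels), labels)
    k_star = max(scored, key=lambda k: scored[k][0])
    return k_star, scored[k_star][1]               # the tuple (K*, U*_{K*})
```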

2 CLUSTERING ALGORITHMS

The three clustering algorithms considered in this article are the well-known K-Means and single linkage algorithms and a recently developed simulated annealing (SA) based clustering technique that uses probabilistic redistribution of points.

The K-Means algorithm [1], [2] is an iterative scheme that evolves $K$ crisp, compact, and hyperspheroidal clusters in the data such that the measure

$$J = \sum_{j=1}^{n} \sum_{k=1}^{K} u_{kj} \, \|x_j - z_k\|^2 \qquad (1)$$

is minimized. Here, the $K$ cluster centers are initialized to $K$ randomly chosen points from the data, which is then partitioned based on the minimum squared distance criterion. The cluster centers are subsequently updated to the means of the points belonging to them. This process of partitioning followed by updating is repeated until either the cluster centers do not change or there is no significant change in the value of $J$ over two consecutive iterations.
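A minimal NumPy sketch of the scheme just described follows; the iteration cap and convergence tolerance are illustrative choices, not values taken from the paper.

```python
import numpy as np

def k_means(X, k, max_iter=100, tol=1e-6, seed=0):
    """Hard K-Means: alternate nearest-center assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()  # K random points
    prev_j = np.inf
    for _ in range(max_iter):
        # Assign each point to its nearest center (minimum squared distance).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        j = d2[np.arange(len(X)), labels].sum()      # the measure J of (1)
        # Move each center to the mean of the points assigned to it.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
        if abs(prev_j - j) < tol:                    # J essentially unchanged
            break
        prev_j = j
    return labels, centers
```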

The single linkage clustering scheme is a noniterative method based on a local connectivity criterion and is usually regarded as a graph-theoretical model [2]. Instead of an object data set $X$, single linkage processes sets of $n^2$ numerical relationships, say $\{r_{jk}\}$, between pairs of objects represented by the data. The number $r_{jk}$ represents the extent to which objects $j$ and $k$ are related in the sense of some binary relation $\rho$. The algorithm starts by considering each point to be a cluster of its own. It computes the distance between two clusters $S$ and $T$ as

$$SL(S, T) = \min_{x \in S,\, y \in T} \{d(x, y)\}.$$

Based on these distances, it merges the two closest clusters, replacing them by the merged cluster. The distances of the remaining clusters from the merged one are recomputed as above. The process continues until a single cluster, comprising all the points, is formed.
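In practice, this merge loop need not be hand-coded: SciPy's hierarchical clustering implements the same minimum-pairwise-distance rule, and the dendrogram can be cut at any level to recover a partition with a given number of clusters. A brief sketch:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def single_linkage_labels(X, k):
    """Cut a single-linkage dendrogram to obtain exactly k clusters."""
    d = pdist(X)                      # the pairwise distances r_jk
    Z = linkage(d, method='single')   # repeatedly merge the two closest clusters
    return fcluster(Z, t=k, criterion='maxclust')  # labels in 1..k
```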

The third clustering algorithm considered in this article for the purpose of comparison is a simulated annealing (SA) based scheme with probabilistic redistribution of the data points [13], [14]. The SA algorithm starts from a random initial configuration at a high temperature $T_{max}$. A configuration, encoded as a set of cluster centers, represents a partitioning of the data based on the minimum squared distance criterion. The energy function $E$ associated with a configuration $C$ is computed as

$$E = \sum_{i=1}^{K} \sum_{\forall x_j \in C_i} \|x_j - z_i\|^2,$$

where $z_i$ is the center of cluster $C_i$. A new configuration $C'$ with energy $E'$ is generated from the old one, $C$, by redistributing each element $x_i$, $i = 1, 2, \ldots, n_j$, in cluster $C_j$ to cluster $C_k$, $k = 1, 2, \ldots, K$, $j \neq k$, with probability

$$\exp\left( \frac{-\left[ D_{ik} - D_{ij} \right]^{+}}{T_t} \right),$$


where $[x]^{+} = \max(x, 0)$ and $D_{ik} = \|x_i - z_k\|$. $T$ is the temperature schedule, which is a sequence of strictly positive numbers such that $T_1 \geq T_2 \geq \ldots$ and $\lim_{t \to \infty} T_t = 0$. The suffix $t$ of $T$ indicates the number of generations through the annealing process. The new configuration is accepted or rejected according to the probability

$$\frac{1}{1 + \exp\left( \frac{-(E - E')}{T} \right)},$$

which is a function of the current temperature and the energy difference between the two configurations. The temperature is gradually decreased toward a minimum value $T_{min}$ while the system settles down to a stable, low-energy state.

Note that, while the single linkage algorithm precomputes the distances between all pairs of points and subsequently uses them at each level of the hierarchy, both the K-Means and the SA-based algorithms compute the distances from the points to all the cluster centers in each iteration. Therefore, if $n$ is the total number of data points, $N$ is the dimensionality of the data, and $K$ is the number of clusters being considered, then the complexity of the distance computation phase in single linkage is $O(n^2 N)$, i.e., it is linearly dependent on $N$. Again, both the K-Means and the SA-based methods have complexity $O(KnN)$ in each iteration, i.e., linearly dependent on $N$ as well.
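Putting the pieces of the SA-based scheme together, the following is a compressed, illustrative sketch, not the authors' implementation; in particular, the geometric cooling rule and the per-temperature iteration count are assumptions.

```python
import numpy as np

def sa_cluster(X, k, t_max=100.0, t_min=0.001, cool=0.95, n_t=20, seed=0):
    """SA-based clustering with probabilistic redistribution (a sketch)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
    t = t_max
    while t > t_min:
        for _ in range(n_t):
            d = np.linalg.norm(X[:, None] - centers[None], axis=2)  # all D_ij
            new = labels.copy()
            for i in range(len(X)):
                c_new = rng.integers(k)                  # candidate cluster k != j
                if c_new == labels[i]:
                    continue
                gap = max(d[i, c_new] - d[i, labels[i]], 0.0)  # [D_ik - D_ij]^+
                if rng.random() < np.exp(-gap / t):      # redistribution probability
                    new[i] = c_new
            new_centers = np.array([X[new == c].mean(axis=0) if np.any(new == c)
                                    else centers[c] for c in range(k)])
            e = (np.linalg.norm(X - centers[labels], axis=1) ** 2).sum()
            e_new = (np.linalg.norm(X - new_centers[new], axis=1) ** 2).sum()
            if rng.random() < 1.0 / (1.0 + np.exp(-(e - e_new) / t)):  # accept C'
                labels, centers = new, new_centers
        t *= cool                                        # geometric cooling (assumed)
    return labels, centers
```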

3 CLUSTER VALIDITY INDICES

In this section, the four cluster validity indices that have been used in this article to evaluate the partitioning obtained by the above three techniques for different values of $K$ are described in detail.

Davies-Bouldin (DB) Index: This index [10] is a function of the ratio of the sum of within-cluster scatter to between-cluster separation. The scatter within the $i$th cluster, $S_i$, is computed as

$$S_i = \frac{1}{|C_i|} \sum_{x \in C_i} \|x - z_i\|,$$

and the distance between clusters $C_i$ and $C_j$, denoted by $d_{ij}$, is defined as $d_{ij} = \|z_i - z_j\|$. Here, $z_i$ represents the $i$th cluster center. The Davies-Bouldin (DB) index is then defined as

$$DB = \frac{1}{K} \sum_{i=1}^{K} R_{i,qt}, \qquad (2)$$

where $R_{i,qt} = \max_{j, j \neq i} \left\{ \frac{S_{i,q} + S_{j,q}}{d_{ij,t}} \right\}$. The objective is to minimize the DB index in order to achieve proper clustering.
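A direct NumPy transcription of (2) follows, assuming $q = t = 1$ (Euclidean scatter and Euclidean center-to-center distance), which is the common instantiation of the index.

```python
import numpy as np

def davies_bouldin(X, labels):
    """DB index of (2) with q = t = 1; smaller values indicate better clustering."""
    ks = np.unique(labels)
    centers = np.array([X[labels == k].mean(axis=0) for k in ks])
    # S_i: mean distance of the cluster members to their center.
    s = np.array([np.linalg.norm(X[labels == k] - c, axis=1).mean()
                  for k, c in zip(ks, centers)])
    total = 0.0
    for i in range(len(ks)):
        d = np.linalg.norm(centers - centers[i], axis=1)  # d_ij = ||z_i - z_j||
        d[i] = np.inf                                     # exclude j == i
        total += np.max((s + s[i]) / d)                   # R_i = max_j (S_i+S_j)/d_ij
    return total / len(ks)
```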

Dunn's Index: Let $S$ and $T$ be two nonempty subsets of $\mathbb{R}^N$. Then, the diameter $\Delta$ of $S$ is defined as $\Delta(S) = \max_{x, y \in S} \{d(x, y)\}$ and the set distance $\delta$ between $S$ and $T$ is defined as $\delta(S, T) = \min_{x \in S,\, y \in T} \{d(x, y)\}$. Here, $d(x, y)$ indicates the distance between points $x$ and $y$. For any partition, Dunn defined the following index [11]:

$$\nu_D = \min_{1 \leq i \leq K} \left\{ \min_{\substack{1 \leq j \leq K \\ j \neq i}} \left\{ \frac{\delta(C_i, C_j)}{\max_{1 \leq k \leq K} \{\Delta(C_k)\}} \right\} \right\}. \qquad (3)$$

Larger values of $\nu_D$ correspond to good clusters, and the number of clusters that maximizes $\nu_D$ is taken as the optimal number of clusters.
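A brute-force evaluation of (3), shown below, makes the definition concrete; at $O(n^2)$ pairwise distances it is written for clarity rather than efficiency.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dunn_index(X, labels):
    """Dunn's index (3): min inter-cluster set distance over max cluster diameter."""
    ks = np.unique(labels)
    diam = max(cdist(X[labels == k], X[labels == k]).max() for k in ks)
    delta = min(cdist(X[labels == ki], X[labels == kj]).min()
                for ki in ks for kj in ks if ki < kj)
    return delta / diam
```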

Calinski-Harabasz (CH) Index: This index [12] for $n$ data points and $K$ clusters is computed as

$$CH = \frac{[\mathrm{trace}\,B / (K - 1)]}{[\mathrm{trace}\,W / (n - K)]}.$$

Here, $B$ and $W$ are the between- and within-cluster scatter matrices. The maximum hierarchy level is used to indicate the correct number of partitions in the data. The trace of the between-cluster scatter matrix $B$ can be written as

$$\mathrm{trace}\,B = \sum_{k=1}^{K} n_k \|z_k - z\|^2,$$

where $n_k$ is the number of points in cluster $k$ and $z$ is the centroid of the entire data set. The trace of the within-cluster scatter matrix $W$ can be written as

$$\mathrm{trace}\,W = \sum_{k=1}^{K} \sum_{i=1}^{n_k} \|x_i - z_k\|^2.$$

Therefore, the CH index can be written as

$$CH = \left[ \frac{\sum_{k=1}^{K} n_k \|z_k - z\|^2}{K - 1} \right] \Bigg/ \left[ \frac{\sum_{k=1}^{K} \sum_{i=1}^{n_k} \|x_i - z_k\|^2}{n - K} \right]. \qquad (4)$$
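Equation (4) translates directly into a few lines of NumPy; this sketch assumes hard labels, so trace $B$ and trace $W$ are accumulated cluster by cluster.

```python
import numpy as np

def calinski_harabasz(X, labels):
    """CH index of (4): [trace B / (K-1)] / [trace W / (n-K)]."""
    n, ks = len(X), np.unique(labels)
    z = X.mean(axis=0)                               # centroid of the entire data set
    trace_b = trace_w = 0.0
    for k in ks:
        pts = X[labels == k]
        zk = pts.mean(axis=0)
        trace_b += len(pts) * np.sum((zk - z) ** 2)  # n_k ||z_k - z||^2
        trace_w += np.sum((pts - zk) ** 2)           # sum_i ||x_i - z_k||^2
    return (trace_b / (len(ks) - 1)) / (trace_w / (n - len(ks)))
```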

Index I: The index I is defined as follows:

$$I(K) = \left( \frac{1}{K} \times \frac{E_1}{E_K} \times D_K \right)^{p}, \qquad (5)$$

where $K$ is the number of clusters. Here,

$$E_K = \sum_{k=1}^{K} \sum_{j=1}^{n} u_{kj} \|x_j - z_k\|$$

and

$$D_K = \max_{i,j=1}^{K} \|z_i - z_j\|.$$

Here, $n$ is the total number of points in the data set, $U(X) = [u_{kj}]_{K \times n}$ is a partition matrix for the data, and $z_k$ is the center of the $k$th cluster. The value of $K$ for which $I(K)$ is maximized is considered to be the correct number of clusters.
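For a crisp partition, (5) can be computed as in the sketch below; note that $E_1$ is simply $E_K$ evaluated for a single cluster centered at the grand mean.

```python
import numpy as np

def index_i(X, labels, p=2):
    """Index I of (5) for a crisp partition; larger is better (the paper uses p = 2)."""
    ks = np.unique(labels)
    centers = np.array([X[labels == k].mean(axis=0) for k in ks])
    # E_K: sum of (unsquared) distances of points to their own cluster center.
    e_k = sum(np.linalg.norm(X[labels == k] - c, axis=1).sum()
              for k, c in zip(ks, centers))
    e_1 = np.linalg.norm(X - X.mean(axis=0), axis=1).sum()   # E_1: one-cluster case
    # D_K: maximum separation between any two cluster centers.
    d_k = max(np.linalg.norm(ci - cj) for ci in centers for cj in centers)
    return ((1.0 / len(ks)) * (e_1 / e_k) * d_k) ** p
```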

As can be seen from (5), the index I is a composition of three factors, namely, $1/K$, $E_1/E_K$, and $D_K$. The first factor will try to reduce index I as $K$ is increased. The second factor consists of the ratio of $E_1$, which is constant for a given data set, to $E_K$, which decreases with an increase in $K$. Hence, because of this term, index I increases as $E_K$ decreases. This, in turn, indicates that the formation of a larger number of clusters, each compact in nature, would be encouraged. Finally, the third factor, $D_K$ (which measures the maximum separation between two clusters over all possible pairs of clusters), will increase with the value of $K$. However, note that this value is upper bounded by the maximum separation between two points in the data set. Thus, the three factors are found to compete with and balance each other critically. The power $p$ is used to control the contrast between the different cluster configurations. In this article, we have taken $p = 2$.

Xie and Beni defined an index [15] that is a ratio of the compactness $\pi$ of the fuzzy K-partition of a data set to its separation $s$. Mathematically, the Xie-Beni (XB) index may be formulated as:

$$XB = \frac{\sum_{k=1}^{K} \sum_{j=1}^{n} u_{kj}^2 \|x_j - z_k\|^2}{n \min_{i,j} \|z_i - z_j\|^2}. \qquad (6)$$

Here, $u_{kj}$ is the membership of the $j$th point to the $k$th cluster, and the XB index is independent of the algorithm used to obtain it.
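A small sketch of (6), taking the membership matrix $u$ (crisp or fuzzy) and the cluster centers as given:

```python
import numpy as np

def xie_beni(X, u, centers):
    """XB index of (6); u is the K x n membership matrix."""
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)  # ||x_j - z_k||^2
    compactness = (u ** 2 * d2).sum()
    sep = min(((centers[i] - centers[j]) ** 2).sum()
              for i in range(len(centers))
              for j in range(len(centers)) if i != j)
    return compactness / (len(X) * sep)
```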

The XB index has been mathematically justified in [15] via its relationship to a well-defined hard clustering validity function, Dunn's index ($\nu_D$). In this section, we provide a theoretical justification of index I by establishing its relationship to Dunn's index via the XB index, under the assumption that $K \leq \sqrt{n}$, which is practically valid. From (5) and (6), we have

$$XB \times I = \frac{1}{n \times K^2} \times \frac{E_1^2 \times \left( \max_{i,j=1}^{K} \|z_i - z_j\| \right)^2 \times \sum_{k=1}^{K} \sum_{j=1}^{n} u_{kj}^2 \|x_j - z_k\|^2}{\left( \min_{i,j=1}^{K} \|z_i - z_j\| \right)^2 \times \left( \sum_{k=1}^{K} \sum_{j=1}^{n} u_{kj} \|x_j - z_k\| \right)^2}. \qquad (7)$$

Note that

$$\frac{E_1^2}{\left( \sum_{k=1}^{K} \sum_{j=1}^{n} u_{kj} \|x_j - z_k\| \right)^2} \geq 1$$

and

$$\frac{\left( \max_{i,j=1}^{K} \|z_i - z_j\| \right)^2}{\left( \min_{i,j=1}^{K} \|z_i - z_j\| \right)^2} \geq 1.$$

Therefore,

$$XB \times I \geq \frac{1}{n \times K^2} \times \sum_{k=1}^{K} \sum_{j=1}^{n} u_{kj}^2 \|x_j - z_k\|^2. \qquad (8)$$

Since, in most real-life situations, we have $K \leq \sqrt{n}$,

$$XB \times I \geq \frac{\sum_{k=1}^{K} \sum_{j=1}^{n} u_{kj}^2 \|x_j - z_k\|^2}{n^2}. \qquad (9)$$

Let us assume that cluster $k$ has $n_k$ points and that the distances of these $n_k$ points from the cluster center $z_k$ are $d_{k1}, d_{k2}, \ldots, d_{kn_k}$. Note that $\sum_{k=1}^{K} n_k = n$. Let us define $\epsilon_k$ as $\frac{\sum_{i=1}^{n_k} d_{ki}^2}{n_k}$ and $\epsilon_{min}$ as $\min_k(\epsilon_k)$. Here, $\epsilon_{min}$ represents the minimum of the mean squared distances of the points from their respective cluster centers (or the minimum of the mean squared errors of the points in the respective clusters). In (9), $\sum_{k=1}^{K} \sum_{j=1}^{n} u_{kj}^2 \|x_j - z_k\|^2$ can be written as $\sum_{k=1}^{K} \sum_{i=1}^{n_k} d_{ki}^2$. Since $\sum_{k=1}^{K} \sum_{i=1}^{n_k} d_{ki}^2 = \sum_{k=1}^{K} n_k \epsilon_k \geq n \epsilon_{min}$, we get $XB \times I \geq \frac{\epsilon_{min}}{n}$. It is proven in [15] that $XB \leq \frac{1}{\nu_D^2}$. Therefore,

$$I \times \frac{1}{\nu_D^2} \geq \frac{\epsilon_{min}}{n}. \qquad (10)$$

Evidently, index I becomes arbitrarily large as $\nu_D$ grows without bound. It has been proven in [16] that, if $\nu_D > 1$, the hard K-partition is unique. Therefore, if the data set has a distinct substructure and the clustering algorithm finds it, then, since $1/\nu_D^2 < 1$, the corresponding $I \geq \frac{\epsilon_{min}}{n}$.

4 EXPERIMENTAL RESULTS

4.1 Data Sets and Implementation Parameters

The three artificial data sets that have been used in this article are AD_10_2, AD_4_3N, and AD_2_10. AD_10_2 is a two-dimensional overlapping data set with 10 clusters, whereas AD_4_3N is a three-dimensional data set with four clusters. Fig. 1 and Fig. 2 show the data sets AD_10_2 and AD_4_3N, respectively (assuming the numbers to be replaced by data points). AD_2_10 is an overlapping 10-dimensional data set generated using a triangular distribution of the form shown in Fig. 3 for two classes, one and two, both of which have equal a priori probabilities. It has 1,000 data points. The range for class one is $[0,2] \times [0,2] \times \cdots$ (10 times) and that for class two is $[1,3] \times [0,2] \times \cdots \times [0,2]$ (with $[0,2]$ repeated 9 times).

The two real-life data sets considered are Crude_Oil and Cancer. Crude_Oil is an overlapping data set [17] having 56 data points, five features, and three classes. The nine-dimensional Wisconsin breast cancer data set (Cancer) (http://www.ics.uci.edu/mlearn/MLRepository.html) is used for the purpose of demonstrating the effectiveness of the classifier in classifying high-dimensional patterns. It has 683 samples belonging to two classes: Benign (class 1) and Malignant (class 2). Table 1 presents the number of points, the dimension, and the number of clusters in each data set.

In this article, the K-Means algorithm was executed for a maximum of 100 iterations. The simulated annealing algorithm was implemented with the following parameters: $T_{max} = 100$, $T_{min} = 0.001$, $\alpha = 0.05$, and $N_T = 100$. Both the K-Means and the SA-based clustering algorithms were initialized with the same set of cluster centers for each $K$ in order to make the comparison fair. Note that the single linkage clustering algorithm assumes that, initially, each point forms a cluster of its own. The values of $K_{min}$ and $K_{max}$ are chosen as two and $\sqrt{n}$, respectively, for all the algorithms, where $n$ is the number of points in the data set.

Fig. 1. AD_10_2 partitioned into 10 clusters by the SA-based clustering technique.

Fig. 2. AD_4_3N partitioned into four clusters by the SA-based clustering technique.

Fig. 3. Triangular distribution along the X axis for AD_2_10.

4.2 Determining the Number of Clusters

The number of clusters provided by the three clustering algorithms in conjunction with the four validity indices for the different data sets is given in Table 2. As can be seen from the table, the index I is able to indicate the correct number of clusters for all the data sets, irrespective of the underlying clustering technique. For AD_10_2, the values of I were found to be 299.288177, 295.271881, and 300.149780 when the K-Means, single linkage, and SA-based algorithms were used, respectively. In this context, one may note that the SA-based algorithm provides an improvement over K-Means. As is widely known, the K-Means algorithm often gets stuck at suboptimal values, a limitation that the SA-based method can overcome. The index values for the single linkage algorithm may differ from those obtained using the other two clustering methods because of the difference in the underlying clustering principle. It may be noted from Table 2 that, irrespective of the clustering techniques used in this article, none of the DB, $\nu_D$, or CH indices is able to find the appropriate number of clusters for AD_10_2. For AD_4_3N, it was found that both DB and CH (in addition to I) were able to provide the exact number of clusters. On the contrary, $\nu_D$ failed to do so, irrespective of the clustering techniques used here. Like I, the CH index was found to provide the exact number of clusters for AD_2_10 for all three clustering techniques. However, the DB and $\nu_D$ indices failed to do so for this data set. These results are indicated in Table 2.

For Crude_Oil, apart from index I, only the DB index provided the correct number of clusters, and only when the single linkage algorithm was used. Even in this case, the minimum was not attained at a unique value of the number of clusters. As seen from Table 2, the number of clusters indicated in this case is both three and four, where the DB index was found to attain its minimum value. The other two indices were unable to indicate the correct number of clusters. The Cancer data has two classes with only a small amount of overlap. As a result, all three clustering techniques, irrespective of the cluster validity index used, were found to provide the appropriate number of clusters for this data set (Table 2).

4.3 Determining the Appropriate Clustering

The data set is partitioned into the number of clusters ($K^*$) whose value is obtained by noting the optimum of the validity index, as done in the previous section. Note that, for this purpose, we use index I, since it is found to be the most reliable among the indices used. The corresponding $U^*_{K^*}$ is obtained by using this value of $K^*$ in the simulated annealing based clustering technique that uses probabilistic redistribution of points. It is well known that the K-Means method has the limitation of getting stuck at suboptimal configurations, depending on the choice of the initial cluster centers. On the contrary, the simulated annealing based technique can overcome this limitation since it has the power of coming out of local optima. Fig. 1 and Fig. 2 demonstrate the results for AD_10_2 and AD_4_3N, respectively. Since the dimensionality of the other data sets is greater than three, their partitioning could not be demonstrated graphically.

5 DISCUSSION AND CONCLUSIONS

In this article, an extensive comparison of several cluster validity indices has been carried out for both artificial and real-life data sets, where the numbers of clusters and dimensions range from two to ten. In this regard, the performance of three crisp clustering algorithms, hard K-Means, single linkage, and a simulated annealing based technique, has been evaluated.

TABLE 1. Description of the Data Sets.

TABLE 2. Number of Clusters Provided by the Three Clustering Algorithms Using the Four Validity Indices for Different Data Sets.

In this context, a recently developed cluster validity index I is described in this article. This index is found to attain its maximum value when the appropriate number of clusters is achieved. Compared to the other validity indices considered, I is found to be more consistent and reliable in indicating the correct number of clusters. This is experimentally demonstrated for the five data sets, where I attains its maximum value for the correct number of clusters, irrespective of the underlying clustering technique. A lower bound on the value of the index I is also theoretically estimated in order to obtain the unique hard K-partition when the data set has a distinct substructure. This is obtained as a result of establishing a relationship of the index I with the well-known Dunn's index and the Xie-Beni index.

In addition to the experimental evaluation presented in this article, an extensive theoretical analysis comparing the validity indices needs to be performed in the future. Comparisons with respect to convergence speed, as well as the effect of distance metrics other than the Euclidean distance considered here on the performance of the validity indices, should also be investigated. Note that the clustering algorithms and validity indices considered in this article are all crisp in nature. An extensive evaluation of fuzzy algorithms and validity indices needs to be carried out. In this regard, a fuzzy version of index I may also be developed.

ACKNOWLEDGMENTS

The authors sincerely acknowledge the reviewers for their valuable suggestions, which helped in improving the quality of the article.

REFERENCES

[1] J.T. Tou and R.C. Gonzalez, Pattern Recognition Principles. Reading, MA: Addison-Wesley, 1974.
[2] A.K. Jain and R.C. Dubes, Algorithms for Clustering Data. Prentice Hall, 1988.
[3] H. Frigui and R. Krishnapuram, "A Robust Competitive Clustering Algorithm with Application in Computer Vision," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 21, no. 1, pp. 450-465, Jan. 1999.
[4] B.S. Everitt, Cluster Analysis, third ed. Halsted Press, 1993.
[5] U. Maulik and S. Bandyopadhyay, "Genetic Algorithm Based Clustering Technique," Pattern Recognition, vol. 33, pp. 1455-1465, 2000.
[6] G.W. Milligan and C. Cooper, "An Examination of Procedures for Determining the Number of Clusters in a Data Set," Psychometrika, vol. 50, no. 2, pp. 159-179, 1985.
[7] M. Meilă and D. Heckerman, "An Experimental Comparison of Several Clustering and Initialization Methods," Proc. 14th Conf. Uncertainty in Artificial Intelligence, pp. 386-395, 1998.
[8] C. Fraley and A.E. Raftery, "How Many Clusters? Which Clustering Method? Answers Via Model-Based Cluster Analysis," The Computer J., vol. 41, no. 8, pp. 578-588, 1998.
[9] L.O. Hall, I.B. Ozyurt, and J.C. Bezdek, "Clustering with a Genetically Optimized Approach," IEEE Trans. Evolutionary Computation, vol. 3, no. 2, pp. 103-112, 1999.
[10] D.L. Davies and D.W. Bouldin, "A Cluster Separation Measure," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 1, pp. 224-227, 1979.
[11] J.C. Dunn, "A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact Well-Separated Clusters," J. Cybernetics, vol. 3, pp. 32-57, 1973.
[12] R.B. Calinski and J. Harabasz, "A Dendrite Method for Cluster Analysis," Comm. in Statistics, vol. 3, pp. 1-27, 1974.
[13] S. Kirkpatrick, C. Gelatt, and M. Vecchi, "Optimization by Simulated Annealing," Science, vol. 220, pp. 671-680, 1983.
[14] S. Bandyopadhyay, U. Maulik, and M.K. Pakhira, "Clustering Using Simulated Annealing with Probabilistic Redistribution," Int'l J. Pattern Recognition and Artificial Intelligence, vol. 15, no. 2, pp. 269-285, 2001.
[15] X.L. Xie and G. Beni, "A Validity Measure for Fuzzy Clustering," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, pp. 841-847, 1991.
[16] J.C. Dunn, "Well Separated Clusters and Optimal Fuzzy Partitions," J. Cybernetics, vol. 4, pp. 95-104, 1974.
[17] R.A. Johnson and D.W. Wichern, Applied Multivariate Statistical Analysis. Prentice Hall, 1982.
