 Research
 Open access
Novel similarity measure between hesitant fuzzy set and their applications in pattern recognition and clustering analysis
Journal of Engineering and Applied Science volume 71, Article number: 5 (2024)
Abstract
Hesitant fuzzy sets (HFSs) extend classical fuzzy sets by allowing each element to take a set of possible membership values from [0,1]. Similarity and distance measures are useful tools for solving medical diagnosis, clustering, and pattern recognition problems. Most researchers have proposed distance measures for HFSs and derived similarity measures from them, but many of these yield inadequate results. We therefore propose a new similarity measure that resolves these problems and prove that it satisfies the required properties for HFSs. Additionally, numerous HFS examples are considered to compare the performance of existing measures with the proposed measure in different cases. Furthermore, we apply the proposed measure to pattern recognition problems through three different examples and calculate a performance index (degree of confidence, DOC) to explore the behavior of the different measures. Finally, we suggest an MST-based clustering algorithm in the hesitant fuzzy environment and contrast the performance of the proposed measure with existing ones. All these comparisons illustrate that the proposed measure produces efficient and reasonable results, and they verify that it is not restricted to a particular domain but can be applied effectively in diverse fields.
Introduction
Initially, probability was the only approach to estimating ambiguity. Nonetheless, not every kind of unpredictability in daily life can be computed through probability: vague terms such as "very smart", "low price", and "fast speed" cannot be expressed exactly. Thus, Zadeh [1] introduced fuzzy set theory to tackle these uncertainties, and it has been found suitable in many applications such as approximate reasoning, decision-making, and fuzzy control. Yager [2] suggested fuzzy information measures. However, some practical applications are difficult to solve using fuzzy sets alone. Consequently, several extensions of FS were suggested, for instance the intuitionistic fuzzy set (IFS) by Atanassov [3], the interval-valued fuzzy set by Zadeh [4], the type-2 fuzzy set by Dubois [5], and the fuzzy multiset by Yager [6]. These extensions are centered on the hypothesis that it is uncertain how to allocate the membership degree of an element to a fixed set (Torra and Narukawa [7, 8]). Gupta and Kumar [9, 10] applied IFSs to multi-criteria decision-making (MCDM) problems. In all of these extensions, including the fuzzy set (FS), the membership degree takes one specific value. In practice, however, it is not always true that the same membership value will be assigned to an alternative. This situation can be better explained with the following example.
Suppose a company wants to take a decision through its governing council. A governing council has many members with different backgrounds, knowledge, expertise, and qualifications, so for a particular decision it is not necessary that all members assign the same membership value to an alternative. For instance, some decision makers may assign 0.4, some 0.5, and others 0.6 as the membership degree, and it may not be possible for them to reach a consensus. In that case the HFS is more powerful for coping with this problem than all the other extensions of FS. We can represent this situation with the hesitant fuzzy element (HFE) {0.4, 0.5, 0.6}, which expresses the problem more impartially than the interval-valued fuzzy number [0.4, 0.6], the crisp number 0.4 (or 0.5 or 0.6), or the intuitionistic fuzzy number (0.4, 0.6). The idea of HFSs was presented by [7, 8] using a membership function with a set of possible values. Because an HFS can capture human hesitance more equitably than the other extensions of fuzzy sets, it has become an effective concept for tackling unpredictability and unreliability, and in a short span of time it has attracted the interest of many researchers [11,12,13,14] dealing with evaluation and decision-making processes. Chen et al. [15] and Singh and Lalotra [16] explored the correlation coefficient of HFSs and applied it in clustering analysis. Other researchers [17,18,19] suggested HFS-based approaches for decision-making applications. Suo et al. [20] suggested an information measure for HFSs, whereas Singh [21] suggested dual-HFS-based similarity and distance measures and applied them in decision making. A dual hesitant fuzzy set correlation coefficient was suggested by Tyagi [22]. Further, hesitant fuzzy prioritized operators were explored by Wei [23], whereas generalized HF Bonferroni mean operators were suggested by Yu et al. [24].
Several authors introduced concepts based on HFS theory in the field of decision-making [25,26,27]. Using the idea of Archimedean t-conorms and t-norms, dual HF power aggregation operators were suggested by Wang et al. [28] for MAGDM problems. Further, Frank aggregation operators were suggested by Qin et al. [29] and Hamacher aggregation operators by Tan et al. [30] using the notion of HFSs, and both were applied in the field of MCDM.
The two main indexes used in FS theory are similarity and distance measures, which are widely used in fields such as pattern recognition, approximate reasoning, decision-making, machine learning, and market prediction. The concept of a similarity measure was first presented by Wang [31]. Geometric distance and Hausdorff metrics were explored by Zwick et al. [32], who also compared similarity measures for FSs. Thereafter, the fundamental definitions of inclusion measures and similarity measures (SMs) were examined by Zeng and Li [33]. Gupta and Kumar [34, 35] suggested similarity measures for fields such as pattern recognition and clustering. In recent years, various researchers have contributed work on hesitant fuzzy sets: similarity and distance measures for HFSs were suggested by Xu and Xia [36], correlation and distance measures by [37, 38], and entropy measures by Xu and Xia [39], whereas the generalized HFSWDM (hesitant fuzzy synergetic weighted distance measure) was suggested by Peng et al. [40] and used in MCDM problems. Some new similarity and distance measures were suggested by Zhang and Xu [41] for HFSs in clustering applications. Ahmad and Khan [42] identified different themes in the study of mixed-data clustering. Several authors adapted the K-means algorithm to different settings such as distributed-memory multiprocessors [43], edge computing environments [44], and multi-core CPUs [45, 46]. Lapegna and Stranieri [47] suggested a direction-based clustering algorithm. Lapegna et al. [48] suggested an adaptive approach with the K-means clustering algorithm. Laccetti et al. [49, 50] suggested clustering algorithms for edge computing environments. Distance, similarity, and information measures of HFSs were studied by Farhadinia [51, 52], who further extended this work to interval-valued and higher-order HFSs. The idea of the hesitance degree was suggested by Li et al. [53, 54], who gave new formulas for calculating similarity measures. Zeng et al. [55] also suggested similarity measures for hesitant fuzzy sets in pattern recognition. Cosine-based similarity and distance measures were suggested by Liao et al. [56] and used for decision-making problems.
Methods
It should be noted that existing measures compute their values based on distance, and some researchers have converted their distance measures into similarity measures. Some existing measures do not achieve reasonable results in certain cases, while some hesitance-degree-based measures for HFSs also give inadequate results. This encouraged us to come up with a new similarity measure for HFSs. In view of the complexity of practical problems, it is necessary to propose similarity measures that simplify calculation and are more useful for solving problems such as clustering, pattern recognition, and approximate reasoning. Therefore, the main highlights of this paper are:

A novel similarity measure for HFSs is introduced, and its properties for HFSs are proved.

Thereafter, numerical and comparative analyses are performed, including the observation of different cases of HFSs and pattern recognition problems.

Moreover, the DOC is calculated as a performance index with the aid of a numerical illustration.

Finally, a hesitant fuzzy clustering algorithm is developed and, with the help of the proposed measure, applied to a numerical example to compare its potency with existing measures and demonstrate the usefulness and acceptance of the proposed method.
The outline of the paper is as follows: the “Preliminaries” section deals with basic FS, HFS, and their associated properties. Existing distance and similarity measures suggested by distinct authors are covered in the “Existing similarity measures” section. The “Proposed new HF similarity measure” section covers the proposed similarity measure with its validation. The “Numerical and comparative analysis” section covers the numerical experiments for pattern recognition and clustering problems. The last section presents the conclusion and scope for improvement.
Preliminaries
Definition 1
Let \(Y = \{y_1,y_2,...,y_v\}\) be a finite universe of discourse. A fuzzy set (FS) U on Y is defined by Zadeh [1] for \(y_n \in Y\) as:
where \(f_{U}(y_n)\) denotes membership degree such that \(0 \le f_{U}(y_n) \le 1\).
Definition 2
Torra and Narukawa [7, 8]. A hesitant fuzzy set (HFS) B on Y is a function that, when applied to Y, returns a subset of [0,1], described as:
where \(h_{B}(y_n)\) is a set of values in [0,1] denoting the possible membership degrees of the element \(y_n \in Y\) in the set B. Xu and Xia [36] call \(h_B(y_n)\) a hesitant fuzzy element (HFE) for convenience.
The different operations like union, intersection and complement can be represented as defined in subsequent definition.
Definition 3
For hesitant fuzzy elements \(h_{B_1}\), \(h_{B_2}\), and \(h_B\), the following operations were described by Torra and Narukawa [7, 8]:

1.
Lower bound: \(h_B^{-}(y_n) = min\; h_B(y_n)\);

2.
Upper bound: \(h_B^{+}(y_n) = max\; h_B(y_n)\);

3.
\({h^c_B} = \cup _{\alpha \in h_B} \{1-\alpha \}\);

4.
\(h_{B_1} \cup h_{B_2} = \{h_B \in h_{B_1} \cup h_{B_2} \mid h_B \ge max (h^{-}_{B_1}, h^{-}_{B_2})\}\);

5.
\(h_{B_1} \cap h_{B_2} = \{h_B \in h_{B_1} \cup h_{B_2} \mid h_B \le min (h^{+}_{B_1}, h^{+}_{B_2})\}\).
Xu and Xia [36] describe the above two operations in the following form:

6.
\(h_{B_1} \cup h_{B_2} = \cup _{\alpha _1 \in h_{B_1},\alpha _2 \in h_{B_2}}max \{\alpha _1 , \alpha _2\}\);

7.
\(h_{B_1} \cap h_{B_2} = \cup _{\alpha _1 \in h_{B_1},\alpha _2 \in h_{B_2}}min \{\alpha _1 , \alpha _2\}\),
and also described operational laws regarding HFEs \(h_{B_1}\), \(h_{B_2}\) and \(h_B\) as:
Definition 4
Xu and Xia [36]. Consider HFEs \(h_{B_1}\), \(h_{B_2}\), and \(h_B\), and let \(\beta\) be a positive real number; then

(a)
\(h^{\beta }_B = \cup _{\alpha \in h_B} \{\alpha ^\beta \}\);

(b)
\(\beta h_B = \cup _{\alpha \in h_B} \{1-(1-\alpha )^{\beta }\}\);

(c)
\(h_{B_1} \oplus h_{B_2} = \cup _{\alpha _1 \in h_{B_1},\alpha _2 \in h_{B_2}}\{ \alpha _1 +\alpha _2 - \alpha _1 \alpha _2\}\);

(d)
\(h_{B_1} \otimes h_{B_2} = \cup _{\alpha _1 \in h_{B_1},\alpha _2 \in h_{B_2}}\{ \alpha _1 \alpha _2\}\).
Let \(h_{B_k}\;(k = 1,2,...,j)\) be a collection of HFEs; parts (c) and (d) of Definition 4 were generalized by Liao et al. [57] as:

(e)
\(\oplus ^j _{k = 1} h_{B_k} = \cup _{\alpha _k \in h_{B_k}} \left\{ 1 - \prod ^j _{k = 1} (1- \alpha _k) \right\}\);

(f)
\(\otimes _{k = 1}^j h_{B_k} = \cup _{\alpha _k \in h_{B_k}} \left\{ \prod ^j _{k = 1} \alpha _k \right\}\).
For different HFEs the number of values may differ. Let \(\bar{l}_{h_{B}(y_n)}\) be the length of \(h_{B}(y_n)\), and consider \(\bar{l} = max \left\{\bar{l}_{h_{B_1}} , \bar{l}_{h_{B_2}} \right\}\) for two hesitant fuzzy elements \(h_{B_1}\) and \(h_{B_2}\). To compare HFEs of different lengths, Xu and Xia [36] provided a rule based on the hypothesis that decision makers are pessimistic: if \(\bar{l}_{h_{B_1}} < \bar{l}_{h_{B_2}}\), we add the minimum value of \(h_{B_1}\) repeatedly until its length equals that of \(h_{B_2}\); if \(\bar{l}_{h_{B_1}} > \bar{l}_{h_{B_2}}\), we add the minimum value of \(h_{B_2}\) until its length equals that of \(h_{B_1}\).
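This pessimistic extension rule can be sketched in code as follows (a minimal illustration; the function name `extend_hfes` is ours, not from the paper):

```python
def extend_hfes(h1, h2):
    """Pessimistically extend the shorter HFE by repeating its minimum
    value until both HFEs have the same length (rule of Xu and Xia)."""
    h1, h2 = sorted(h1), sorted(h2)
    while len(h1) < len(h2):
        h1.insert(0, min(h1))
    while len(h2) < len(h1):
        h2.insert(0, min(h2))
    return h1, h2

# Example: lengths 3 and 2 -> both become length 3
a, b = extend_hfes([0.3, 0.4, 0.6], [0.1, 0.3])
print(a, b)  # [0.3, 0.4, 0.6] [0.1, 0.1, 0.3]
```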
Liao et al. [57] introduced the following theorem according to above operational laws:
Theorem 1
Liao et al. [57]. Let us consider \(h_{B_1}\) and \(h_{B_2}\) as two HFEs, we can write
This operation also holds having j different HFEs, i.e.,
Example 1
Let \(h_{B_1} = <0.3, \; 0.4, \; 0.6>\) and \(h_{B_2} = <0.1, \; 0.3>\) be two HFEs; according to the operational laws of HFSs defined in Definition 4, we can write
and then, \(\bar{l}_{h_{B_1} \oplus h_{B_2}} = 6 = 3 \times 2 = \bar{l}_{h_{B_1}} \times \bar{l}_{h_{B_2}}, \bar{l}_{h_{B_1} \otimes h_{B_2}}= 6 = 3 \times 2 = \bar{l}_{h_{B_1}} \times \bar{l}_{h_{B_2}}\).
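Example 1 can be checked directly: under Definition 4, \(\oplus\) and \(\otimes\) are unions over all pairs of values, so the derived HFE has \(\bar{l}_{h_{B_1}} \times \bar{l}_{h_{B_2}}\) entries when no duplicates collapse. A sketch (function names are ours):

```python
def hfe_sum(h1, h2):
    # Definition 4(c): union over all pairs {a1 + a2 - a1*a2}
    return sorted(a1 + a2 - a1 * a2 for a1 in h1 for a2 in h2)

def hfe_prod(h1, h2):
    # Definition 4(d): union over all pairs {a1 * a2}
    return sorted(a1 * a2 for a1 in h1 for a2 in h2)

h1, h2 = [0.3, 0.4, 0.6], [0.1, 0.3]
print(len(hfe_sum(h1, h2)), len(hfe_prod(h1, h2)))  # 6 6
```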
We observe that the length of the derived HFE increases after applying the above operations, so the computational complexity also increases. Therefore, Liao et al. [57] suggested a new methodology so that the length of the derived HFE stays small while handling HFEs. The modified operational laws of Definition 4 are as follows:
Definition 5
Liao et al. [57]. Consider a collection of HFEs \(B = \left\{h_{B_1},h_{B_2},...,h_{B_j}\right\}\) and let \(\beta\) be a positive real number; then

1.
\(h^{\beta }_B = \left\{\left(h^{q(s)}_{B}\right)^{\beta } \Bigg| s = 1,2,...,t \right\}\);

2.
\(\beta h_B = \left\{1- \left(1- h^{q(s)}_B\right)^{\beta } \Bigg| s = 1,2,...,t \right\}\);

3.
\(h_{B_1} \oplus h_{B_2} = \left\{ h^{q(s)}_{B_1} + h^{q(s)}_{B_2} - h^{q(s)}_{B_1} h^{q(s)}_{B_2} \Bigg| s = 1,2,...,t \right\}\);

4.
\(h_{B_1} \otimes h_{B_2} = \left\{ h^{q(s)}_{B_1} h^{q(s)}_{B_2} \Bigg| s = 1,2,...,t \right\}\);

5.
\(\oplus ^j _{k = 1} h_{B_k} = \left\{ 1 - \prod ^j _{k = 1} \left(1- h^{q(s)}_{B_k}\right) \Bigg| s = 1,2,...,t \right\}\);

6.
\(\otimes ^j _{k = 1} h_{B_k} = \left\{ \prod ^j _{k = 1} h^{q(s)}_{B_k} \Bigg| s = 1,2,...,t \right\}\),
where \(h^{q(s)}_{B_k}\) is the sth smallest value in \(h_{B_k}\).
Example 2
Let \(h_{B_1} = <0.1, 0.2, 0.4, 0.7>\) and \(h_{B_2} = <0.3, 0.5, 0.6>\) be two HFEs. After implementing the summation and multiplication operations defined in Definition 5, we have
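A sketch of the Definition 5 operations in code (we first apply the pessimistic extension rule of Xu and Xia so that both HFEs have the same length t; the function names are ours):

```python
def prepare(h1, h2):
    # Sort, then pessimistically pad the shorter HFE with its minimum value
    h1, h2 = sorted(h1), sorted(h2)
    while len(h1) < len(h2):
        h1.insert(0, h1[0])
    while len(h2) < len(h1):
        h2.insert(0, h2[0])
    return h1, h2

def hfe_sum5(h1, h2):
    # Definition 5(3): componentwise on the s-th smallest values
    h1, h2 = prepare(h1, h2)
    return [a + b - a * b for a, b in zip(h1, h2)]

def hfe_prod5(h1, h2):
    # Definition 5(4): componentwise product
    h1, h2 = prepare(h1, h2)
    return [a * b for a, b in zip(h1, h2)]

h1, h2 = [0.1, 0.2, 0.4, 0.7], [0.3, 0.5, 0.6]
print([round(x, 2) for x in hfe_sum5(h1, h2)])   # [0.37, 0.44, 0.7, 0.88]
print([round(x, 2) for x in hfe_prod5(h1, h2)])  # [0.03, 0.06, 0.2, 0.42]
```

Note that the derived HFE keeps length 4 (the longer of the two inputs), illustrating the reduced complexity compared with Definition 4.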
The score function of HFE was described by Xia and Xu [58] to obtain the MAX and MIN operators of two HFEs.
Definition 6
Xia and Xu [58]. For a HFE \(h_B\), \(c(h_B) = \frac{1}{\bar{l}_{h_B}}\sum _{\alpha \in h_B} \alpha\) is called the score function of \(h_B\), where \(\bar{l}_{h_B}\) is the number of values in \(h_B\). For HFEs \(h_{B_1}\) and \(h_{B_2}\): if \(c(h_{B_1})>c(h_{B_2})\), then \(h_{B_1} >h_{B_2}\); if \(c(h_{B_1})=c(h_{B_2})\), then \(h_{B_1} = h_{B_2}\).
However, this comparison rule cannot discriminate between two HFEs in some specific cases. To cope with this problem, the variance function of an HFE was suggested by Liao et al. [57], who also introduced a new method for ranking HFEs.
Definition 7
Liao et al. [57]. For a HFE \(h_B\), \(\bar{v}(h_B) = \frac{1}{\bar{l}_{h_B}}\sqrt{\sum _{\alpha _k , \alpha _m \in h_B}(\alpha _k -\alpha _m)^2}\) is called the variance function of \(h_B\), where \(\bar{l}_{h_B}\) is the number of values in \(h_B\) and \(\bar{v}(h_B)\) is the variance degree of \(h_B\). For HFEs \(h_{B_1}\) and \(h_{B_2}\): if \(\bar{v}(h_{B_1})>\bar{v}(h_{B_2})\), then \(h_{B_1} <h_{B_2}\); if \(\bar{v}(h_{B_1})=\bar{v}(h_{B_2})\), then \(h_{B_1} = h_{B_2}\).
The relation between the score function and the variance function is analogous to that of the mean and variance in statistics. To compare two HFEs, a strategy is obtained by taking both the score function \(c(h_B)\) and the variance function \(\bar{v}(h_B)\) into consideration:
Strategy:
if \(c(h_{B_1})<c(h_{B_2})\), then \(h_{B_1} <h_{B_2}\), Max\(\{h_{B_1},h_{B_2}\} = h_{B_2}\), and Min\(\{h_{B_1}, h_{B_2}\} = h_{B_1}\); if \(c(h_{B_1})=c(h_{B_2})\), then
(a) if \(\bar{v}(h_{B_1})<\bar{v}(h_{B_2})\), then \(h_{B_1} >h_{B_2}\), Max\(\{h_{B_1},h_{B_2}\} = h_{B_1}\), and Min\(\{h_{B_1}, h_{B_2}\} = h_{B_2}\);
(b) if \(\bar{v}(h_{B_1})=\bar{v}(h_{B_2})\), then \(h_{B_1} =h_{B_2}\), Max\(\{h_{B_1},h_{B_2}\} = h_{B_1} =\)Min\(\{h_{B_1}, h_{B_2}\} = h_{B_2}\).
Example 3
Let \(h_{B_1} = <0.2, 0.2, 0.5>\) and \(h_{B_2} = <0.3, 0.3>\) be two HFEs. Then by Definition 7, we have
Then \(\bar{v}(h_{B_1})> \bar{v}(h_{B_2})\), i.e., the variance degree of \(h_{B_2}\) is smaller than that of \(h_{B_1}\). Since \(c(h_{B_1})=c(h_{B_2})=0.3\), the strategy gives \(h_{B_1} < h_{B_2}\).
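Example 3 can be reproduced as follows (a sketch; we read the double sum in Definition 7 over unordered pairs \(\alpha_k, \alpha_m\), which does not affect the comparison):

```python
from itertools import combinations
from math import sqrt

def score(h):
    # Definition 6: arithmetic mean of the values
    return sum(h) / len(h)

def variance(h):
    # Definition 7: (1/l) * sqrt(sum over pairs of squared differences)
    return sqrt(sum((a - b) ** 2 for a, b in combinations(h, 2))) / len(h)

h1, h2 = [0.2, 0.2, 0.5], [0.3, 0.3]
print(round(score(h1), 2), round(score(h2), 2))  # 0.3 0.3 (scores tie)
print(variance(h1) > variance(h2))               # True -> by the strategy, h1 < h2
```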
Definition 8
Xu and Xia [36]. For HFSs B, F, and D, a distance measure \(\bar{d}(B, F)\) should satisfy the following conditions:

1.
\(0 \le \bar{d}(B,F) \le 1\).

2.
\(\bar{d}(B,F) = \bar{d}(F,B)\).

3.
\(\bar{d}(B,F) = 0\) iff \(B= F\).

4.
If \(B \subseteq F \subseteq D\), then \(\bar{d}(B,D) \ge \bar{d}(B,F)\) and \(\bar{d}(B,D) \ge \bar{d}(F,D)\).
Definition 9
Xu and Xia [36]. For HFSs B, F, and D, a similarity measure \(\bar{S}(B, F)\) should satisfy the following conditions:

1.
\(0 \le \bar{S}(B,F) \le 1\).

2.
\(\bar{S}(B,F) = \bar{S}(F,B)\).

3.
\(\bar{S}(B,F) = 1\) iff \(B= F\).

4.
If \(B \subseteq F \subseteq D\), then \(\bar{S}(B,D) \le \bar{S}(B,F)\) and \(\bar{S}(B,D) \le \bar{S}(F,D)\).
Remark 1
It is noticed that \(\bar{S}(B,F) = 1 - \bar{d}(B,F)\) is a similarity measure whenever \(\bar{d}(B,F)\) is a distance measure.
Existing similarity measures
Some distinct distance measures were suggested by Xu and Xia [36] for HFEs.
(1). For any two HFEs \(h_{B}\) and \(h_{F}\), the Manhattan distance \(s(h_{B},h_{F})\) given by Xu and Xia [36] is:
where \(h_{B}^{q(k)}(y_n)\) and \(h_{F}^{q(k)}(y_n)\) are the \(k^{th}\) largest values in \(h_{B}(y_n)\) and \(h_{F}(y_n)\), respectively and \(\bar{l}_{y_n} = max\{\bar{l}(h_{B}(y_n)), \bar{l}(h_{F}(y_n))\}\).
(2). Consider two HFEs \(h_{B}\) and \(h_{F}\) on a universal set Y. Xu and Xia [36] suggested the hesitant normalized Hamming distance, the hesitant normalized Euclidean distance, and the generalized hesitant normalized distance using the concepts of Hamming and Euclidean distance, which can be represented as:
where \(h_{B}^{q(k)}(y_n)\) and \(h_{F}^{q(k)}(y_n)\) are the \(k^{th}\) largest values in \(h_{B}(y_n)\) and \(h_{F}(y_n)\), respectively and \(\bar{l}_{y_n} = max\{\bar{l}(h_{B}(y_n)), \bar{l}(h_{F}(y_n))\}\) and \(\gamma >0\).
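These distances can be sketched as follows; HFEs are first sorted and pessimistically padded to a common length, the function names are ours, and \(\gamma = 1\) gives the Hamming-type and \(\gamma = 2\) the Euclidean-type distance (the similarity counterpart is \(\bar{S} = 1 - \bar{d}\)):

```python
def align(h1, h2):
    # Sort and pad the shorter HFE with its minimum (pessimistic rule)
    h1, h2 = sorted(h1), sorted(h2)
    while len(h1) < len(h2):
        h1.insert(0, h1[0])
    while len(h2) < len(h1):
        h2.insert(0, h2[0])
    return h1, h2

def hesitant_distance(B, F, gamma=1):
    """Generalized hesitant normalized distance between HFSs given as
    lists of HFEs (one HFE per element of the universe)."""
    total = 0.0
    for hB, hF in zip(B, F):
        hB, hF = align(hB, hF)
        total += sum(abs(a - b) ** gamma for a, b in zip(hB, hF)) / len(hB)
    return (total / len(B)) ** (1 / gamma)

B = [[0.3, 0.4, 0.6], [0.2, 0.5]]
F = [[0.1, 0.3], [0.4, 0.5, 0.7]]
print(hesitant_distance(B, F, gamma=1))  # Hamming-type distance in [0, 1]
print(hesitant_distance(B, F, gamma=2))  # Euclidean-type distance
```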
(3). The hesitant normalized HammingHausdorff distance, normalized EuclideanHausdorff distance, Hybrid hesitant normalized Hamming distance, and Hybrid hesitant normalized Euclidean distance were also suggested by Xu and Xia [36] which can be represented as:
Now we describe the definition suggested by Zeng [55], as given in the next definition.
Definition 10
Zeng [55]. Consider a HFS B on the universal set \(Y = \{y_1,y_2,...,y_v\}\); for any \(y_n \in Y\), let \(\bar{l}(h_B(y_n))\) be the length of \(h_B(y_n)\). Then \(\rho (h_B(y_n))\) is called the hesitance degree of \(h_B(y_n)\) and is defined as follows:
Various similarity and distance measures were suggested by Zeng [55] using above definition.
(4). For a universal set Y, let us consider B and F as HFSs. The normalized Hamming, Euclidean, and generalized distances including the hesitance degree between B and F were introduced by Zeng [55] as:
Definition 11
K. Rezaei [59] For HFS B, on the universal set Y such that \(y_n \in Y\), \(\varpi (h_B(y_n))\) is called the range of \(h_B(y_n)\) and defined as follows:
where \(h^+_B(y_n) = max\;h_B(y_n)\) and \(h^{-}_B(y_n) = min\; h_B(y_n)\). The range of the hesitant fuzzy set B can be described as:
(5). For a universal set Y, consider B and F as HFSs. K. Rezaei [59] suggested various distance measures between B and F, which can be defined as:
where \(\gamma > 0\).
Proposed new HF similarity measure
Similarity measures for HFSs have been suggested by distinct researchers, as described in the previous section. However, a number of them are incapable of solving decision-making problems effectively: some yield identical values in different cases, while a few give contrary and unreasonable results. Thus, it is necessary to characterize a new similarity measure that overcomes the shortcomings of the existing ones. Therefore, we propose a new similarity measure that is more advantageous for exploring different applications effectively.
Consider a universal set \(Y = \{y_1,y_2,...,y_v\}\) and two HFSs B and F in Y. We propose the following HF similarity measure:
where \(h_{B}^{q(k)}(y_n)\) and \(h_{F}^{q(k)}(y_n)\) are the values of the hesitant fuzzy elements of B and F at \(y_n \in Y\), and \(\bar{l}_{y_{n}}\) is the common length of the HFEs.
Definition 12
For a universal set \(Y = \{y_1,y_2,...,y_v\}\), we consider two HFSs as B and F in Y. After that, we describe Similarity Measure \(\bar{S} : HFS(Y) \times HFS(Y) \rightarrow [0,1]\) as below:
The normalized and generalized similarity measures are given by:
Example 4
For \(Y = \{y_1\}\), we define the two HFSs on Y as \(B = \{\langle y_1, (0.3658, 0.4655, 0.5659)\rangle \}\) and \(F = \{\langle y_1, (0.2758, 0.3955, 0.6559)\rangle \}\). Here, we calculate the value of proposed similarity measure \(\bar{S}(B,F)\) which is as follows:
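As a hedged sketch only: guided by the term \(\frac{1}{1+x^{**}}\) in the proof of Theorem 2, the following code averages \(1/(1+|h_{B}^{q(k)}(y_n)-h_{F}^{q(k)}(y_n)|)\) over all positions. This is our reading of the measure's family, an assumption rather than the exact closed form of Eq. (21):

```python
def proposed_similarity(B, F):
    """Hedged reconstruction of an HF similarity measure of the form used
    in Theorem 2: average of 1/(1 + |difference|) over all values.
    (An assumption, not necessarily the authors' exact Eq. (21).)"""
    total = 0.0
    for hB, hF in zip(B, F):
        hB, hF = sorted(hB), sorted(hF)  # assumes equal lengths after extension
        total += sum(1 / (1 + abs(a - b)) for a, b in zip(hB, hF)) / len(hB)
    return total / len(B)

# Data of Example 4
B = [[0.3658, 0.4655, 0.5659]]
F = [[0.2758, 0.3955, 0.6559]]
s = proposed_similarity(B, F)
print(round(s, 4))  # a value in [0, 1]; equals 1 only when B == F
```

Whatever the exact closed form, this family satisfies properties 1–3 of Theorem 2 by construction: each summand lies in (0, 1] and equals 1 exactly when the corresponding values coincide.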
Example 5
For \(Y = \{y_1,y_2\}\), we define the two HFSs on Y as
and
Here, we calculate the value of proposed similarity measure \(\bar{S}(B,F)\) which is as follows:
Theorem 2
The measure \(\bar{S}(B,F)\) satisfies the following properties.

1.
\(0 \le \bar{S}(B,F) \le 1\)

2.
\(\bar{S}(B,F) = 1\) iff \(B = F\)

3.
\(\bar{S}(B,F) = \bar{S}(F,B)\)

4.
If \(B \subseteq F \subseteq D\), then \(\bar{S}(B,D) \le \bar{S}(B,F)\) and \(\bar{S}(B,D) \le \bar{S}(F,D)\).
Proof
(1) Let \(x^{**} = \Bigg|h_{B}^{q(k)}(y_n) - h_{F}^{q(k)}(y_n) \Bigg|\); then we have \(0 \le \frac{1}{1+x^{**}} \le 1\) for all \(x^{**} \in [0,1]\).
Therefore, \(0 \le \hat{s} (B,F) \le 1\). Using Eq. (21), \(0 \le \bar{S}(B,F) \le 1\).
(2) Let \(B = F\); then \(h_{B}^{q(k)}(y_n) = h_{F}^{q(k)}(y_n)\) for all \(y_n\). So \(\Bigg|h_{B}^{q(k)}(y_n) - h_{F}^{q(k)}(y_n) \Bigg|=0\), which gives \(\bar{S}(B,F) = 1\).
(3) It is clear from Eq. (21).
(4) Let \(B \subseteq F \subseteq D\); then \(h_{B}^{q(k)}(y_n) \le h_{F}^{q(k)}(y_n) \le h_{D}^{q(k)}(y_n)\). So, we have
So, \(\bar{S}(B,D) \le \bar{S}(B,F)\) and \(\bar{S}(B,D) \le \bar{S}(F,D)\).
This proves the Theorem.
The next section presents a numerical analysis that reveals the effectiveness of the proposed measure.
Numerical and comparative analysis
This section comprises numerical experiments on different sets, pattern recognition, and clustering analysis.
Numerical experiment
First, a numerical experiment is employed to differentiate the similarity degrees among HFSs. To perceive the ability of the distinct measures, we consider four different cases. The values of the existing and proposed measures for the different pairs are given in Table 1, where contradictory results are shown in bold. It can easily be observed from Table 1 that all the existing measures give the same values for different HFSs, whereas the proposed measure gives different values. Therefore, the proposed measure gives reasonable and better results.
Let us take another experiment with HFSs of length two to further explore the performance of the different measures. For this purpose we take another three cases; the compiled results are given in Table 2. \(\bar{s}_{man}\) gives similar values in cases 1 and 3, whereas \(\bar{s}_{hnh}\) gives the same result for all three cases. \(\bar{s}_{hne}\) also gives the same value in cases 1 and 2. \(\bar{s}_{hnhh}\) and \(\bar{s}_{hneh}\) both give similar values. In a similar fashion, \(\bar{s}_{hhd}\) gives the same output for all three cases. \(\bar{s}_{ehd}\), \(\bar{s}_{ghd}\; (\gamma =10)\), \(\bar{s}_{nhne}\), and \(\bar{s}_{nghn} \; (\gamma =6)\) also give the same values in cases 1 and 2. Now consider HFSs of length three; the computed values are tabulated in Table 3. \(\bar{s}_{man}\) obtains the same value in the 1st and 3rd cases, whereas \(\bar{s}_{hnh}\) obtains the same result for all three cases. \(\bar{s}_{hnhh}\) and \(\bar{s}_{hneh}\) both obtain similar values with respect to each other. \(\bar{s}_{hne}\) and \(\bar{s}_{ehd}\) both obtain the same results in the 1st and 2nd cases, whereas \(\bar{s}_{nhnh}\) gives the same value in the 1st and 3rd cases. Similarly, \(\bar{s}_{hhd}\) obtains the same value in all three cases. Thus, we observe that some of the existing measures do not give accurate and reasonable results, whereas the proposed measure gives consistent and rational results.
Pattern recognition
To classify an unknown pattern from known patterns in pattern recognition problems, two main indexes are generally used: distance measures and similarity measures. Most existing approaches use distance measures, but in our case a similarity measure is used, which can easily tackle different kinds of pattern recognition and real-life problems. The major requirement for any measure is to support a proper decision. Therefore, it is necessary to formulate the pattern recognition problem in an efficient manner.
Consider \(B_j\;(j = 1,2,...,f)\) as the known patterns and F as the unknown pattern, which has to be assigned to one of the known patterns. The problem can be described as:
Thus, with the assistance of the identification principle, we assign the unknown pattern to the known pattern with which it has the most resemblance. We illustrate the identification principle using the subsequent examples and compare the performance of the distinct measures.
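The identification principle amounts to an argmax over similarity values. A minimal sketch (the similarity function is passed in as a parameter, so any of the measures in this paper could be substituted; the `toy_sim` stand-in below is ours, not the paper's measure):

```python
def classify(known, unknown, similarity):
    """Assign the unknown pattern to the known pattern with the highest
    similarity; returns (label, score). `known` maps labels to HFS data."""
    scores = {j: similarity(Bj, unknown) for j, Bj in known.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy stand-in similarity: 1 - mean absolute difference (not the paper's measure)
def toy_sim(B, F):
    return 1 - sum(abs(a - b) for hB, hF in zip(B, F)
                   for a, b in zip(sorted(hB), sorted(hF))) / sum(len(h) for h in B)

# Data of Example 7: candidates vs. the interviewer's standard F
known = {"B1": [[0.17, 0.20]], "B2": [[0.19, 0.20]], "B3": [[0.18, 0.19]]}
F = [[0.20, 0.20]]
print(classify(known, F, toy_sim))  # ('B2', ...) -- B2 is closest to F
```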
Example 6
Let us consider \(X^* = \{(\alpha , \beta , \gamma )\mid \alpha =\beta =\gamma = 60^{\circ }\}\) as the set of equilateral triangles. For every triangle, B denotes a fuzzy set in \(X^*\) whose membership degree reflects the degree to which the triangle is associated with an equilateral triangle. Take a triangle with angles \((65^{\circ },70^{\circ },45^{\circ })\). Some analysts think it looks like an equilateral triangle, some think it is close to equilateral, while others think it may not look equilateral at all. Thus, the membership value allotted to the fuzzy set may hesitate between different values, and we can represent this triangle with a hesitant fuzzy set.
Let us consider two triangles which are denoted in the form of HFSs
and we have to recognize the triangle \(F = \{\langle y_1, (0.70, 0.75, 0.80)\rangle \}\), which is considered the unknown pattern. The values obtained by the existing and proposed measures are given in Table 4. A pattern recognition problem should allocate the unknown pattern to one of the known patterns according to the identification principle; the classification results calculated using the different measures are also compiled in Table 4.
Example 7
Consider a company that wants to hire an HR manager. The assessments of the candidates in the form of HFSs are \(B_1 = \{\langle y_1, (0.17, 0.20)\rangle \}\), \(B_2 = \{\langle y_1, (0.19, 0.20)\rangle \}\), and \(B_3 = \{\langle y_1, (0.18, 0.19)\rangle \}\). The standard characteristics set by the interviewer, based on the job requirements, are \(F = \{\langle y_1, (0.20, 0.20)\rangle \}\). The problem is to find the best candidate for the vacant position. According to the identification principle, we check the closeness of each candidate's characteristics to the standard characteristics set by the interviewer; the candidate with the higher similarity measure value is the better candidate. For this purpose, we calculate the values of the existing and proposed measures for this example and discriminate their performances. The values obtained are given in Table 5.
Example 8
Now we consider an example of Asian rice (Oryza sativa), which can be divided into three types: Javonica, Japonica, and Indica. A total of 21 samples are taken into consideration, i.e., 7 samples of each type. The parameters/attributes considered for each are milling degree (MD), foreign matter (FM), whiteness (WT), moisture content (MC), and grain shape (GS).
The similarity of Javonica with Japonica and of Javonica with Indica is then evaluated under the different existing measures and the proposed measure for contrast. Further, we calculate another term, the degree of confidence (DOC), to explore the behavior of the different measures. The notion of DOC was initially given by Hatzimichailidis et al. [60] and can be described as:
where X is any HF compatibility measure.
For an HF similarity/correlation measure X, \(Y = Max\{X(Javonica , Japonica), X(Javonica, Indica)\}\), and for an HF dissimilarity/distance measure X, \(Y = Min\{X(Javonica , Japonica), X(Javonica, Indica)\}\). We have taken random data as the parameters of the Asian rice samples, but this data cannot be used directly in the different measures for calculation purposes; it must first be transformed into the HF domain.
Therefore, we established transformation formulas as:
where \(\hat{{mf}_1}(y_{ij})\), \(\hat{{mf}_2}(y_{ij})\), and \(\hat{{mf}_3}(y_{ij})\) are membership functions of the element \(y_{ij}\).
The computed values corresponding to Example 8 are given in Table 6, which outlines the similarity of Javonica with Japonica and of Javonica with Indica, along with their DOC.
Results and discussion
Observing Table 4, it is easily revealed that \(\bar{s}_{hnhh}\) and \(\bar{s}_{hneh}\) do not classify F to any of the known patterns, because \((B_1,F)\) and \((B_2,F)\) obtain the same values, giving an unclassified result. Although \(\bar{s}_{man}\) and \(\bar{s}_{hnh}\) classify F to \(B_1\), they attain the same value in both cases, whereas all the other measures, including the proposed measure, classify F to the class of \(B_1\), i.e., \(B_1\) has the particular shape. Thus, the proposed measure is consistent with the existing measures.
Table 5 reveals that \(\bar{s}_{man}\) and \(\bar{s}_{hnh}\) obtain the same value for \((B_1,F)\) and \((B_3,F)\); thus, they are not able to distinguish between the candidates. \(\bar{s}_{hnhh}\) and \(\bar{s}_{hneh}\) also give unclassified results. Similarly, most of the existing measures give unclassified results, i.e., they cannot distinguish the assessments of the candidates. The proposed measure \((\bar{S})\), however, is able to distinguish among the candidates and selects \(B_2\) as the best candidate, because the similarity measure value of \(B_2\) is larger than those of \(B_1\) and \(B_3\). This shows that the proposed measure gives consistent and better results compared to the others.
We observe that Javonica is more similar to Japonica than to Indica (please refer to Table 6): (Javonica, Japonica) attains a larger value than (Javonica, Indica), as outlined in Fig. 1. Table 6 also shows that the proposed measure has the maximum DOC among the measures (please see Fig. 2). From all these considerations we conclude that the proposed measure is more reliable and efficient than the others.
Clustering analysis
Clustering is a significant modeling technique. Researchers have explored the concept and extended it to IFSs and HFSs. Wang [31] and Yao et al. [61] explored clustering using FSs and employed it in decision-making, production prediction, and assessment processes. An association-coefficient-based IFS clustering algorithm was suggested by Xu et al. [62] and expanded to interval-valued FSs. The correlation coefficient of HFSs was examined by Chen et al. [15] and implemented for clustering. Farhadinia [51] suggested a similarity measure using HFSs and applied it in the clustering process. A clustering algorithm was investigated by Wen et al. [63] using the HF Lukasiewicz implication operator. Yang and Hussian [64] suggested measures for HFSs based on the Hausdorff metric and applied them in MCDM and clustering. Some new similarity and distance measures were suggested by Zhang and Xu [41, 65] using HFSs for clustering applications, and Zhang and Xu also presented a hierarchical clustering algorithm for hesitant fuzzy sets. Below, we suggest an MST-based clustering algorithm for the HF environment.
Algorithm: The steps of the MST-based clustering algorithm for HFSs are:

1. Consider HFSs \((B_1,B_2,...,B_i)\) in B. Compute the values of the proposed similarity measure for these HFSs and construct the hesitant fuzzy similarity matrix.

2. Draw the HF graph by interconnecting every pair of nodes with an edge, and assign each edge its value from the HF similarity matrix.

3. Form the maximum spanning tree (MST) using Kruskal's [66] or Prim's [67] method:

(i) Each edge connects two nodes and carries a weight; first arrange the weights in decreasing order.

(ii) Pick the edge with the highest value.

(iii) Among the remaining edges, pick the one with the next-largest value that does not form a closed circuit with the already selected edges.

(iv) Repeat the previous steps until (n−1) edges have been picked; the resulting graph is the MST.

4. Form distinct clusters by grouping nodes according to a chosen threshold (\(\Psi\)).
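As a sketch, the steps above can be implemented directly. The similarity function below is only a placeholder (a normalized-Hamming-style similarity in which the shorter hesitant element is padded with its maximum value, a common convention in the HFS literature); the paper's proposed measure \(\bar{S}\) can be substituted for it without changing the rest of the algorithm:

```python
from itertools import combinations

def hfs_similarity(h1, h2):
    # Placeholder similarity between two hesitant fuzzy elements
    # (lists of membership values). The shorter element is padded
    # with its maximum value so both have equal length; swap in the
    # paper's proposed measure here.
    n = max(len(h1), len(h2))
    a = sorted(h1) + [max(h1)] * (n - len(h1))
    b = sorted(h2) + [max(h2)] * (n - len(h2))
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / n

def hfs_set_similarity(A, B):
    # Average the element-wise similarities over all attributes.
    return sum(hfs_similarity(a, b) for a, b in zip(A, B)) / len(A)

def mst_clusters(hfss, threshold):
    n = len(hfss)
    # Step 1: hesitant fuzzy similarity matrix, stored as weighted edges.
    edges = [(hfs_set_similarity(hfss[i], hfss[j]), i, j)
             for i, j in combinations(range(n), 2)]

    parent = list(range(n))
    def find(x):                     # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Step 3: Kruskal's method on weights in decreasing order gives a
    # MAXIMUM spanning tree; an edge is skipped if it closes a circuit.
    mst = []
    for w, i, j in sorted(edges, reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))
            if len(mst) == n - 1:
                break

    # Step 4: delete MST edges whose similarity falls below the
    # threshold; the remaining connected components are the clusters.
    parent = list(range(n))
    for w, i, j in mst:
        if w >= threshold:
            parent[find(i)] = find(j)
    groups = {}
    for v in range(n):
        groups.setdefault(find(v), []).append(v)
    return list(groups.values())
```

Raising the threshold cuts more MST edges and yields more, tighter clusters; at \(\Psi = 0\) all nodes fall into a single cluster.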
Example 9
Consider the data of ten ships, each with four attributes, in the form of HFSs. The attributes are cost value (\(\bar{c_1}\)), speed (\(\bar{c_2}\)), design (\(\bar{c_3}\)), and fuel economy (\(\bar{c_4}\)), as shown in Table 7.

1. The HF similarity matrix for the above data is as follows:

2. The hesitant fuzzy graph \(\hat{Z} = (V,E)\) obtained by interconnecting the nodes is shown in Fig. 3.

3. Next, we form the MST of the hesitant fuzzy graph using the following steps.
(i) Arrange the edges according to their values.
(ii) \(e_{1,8}\), connecting nodes 1 and 8, is the edge with the maximum weight.
(iii) The next selected edge is \(e_{5,7}\), which has the next-highest value after \(e_{1,8}\) and does not form a closed circuit with \(e_{1,8}\).
(iv) In this way, we repeat the above steps until 9 edges have been selected, obtaining the MST graph (see Fig. 4).
4. Lastly, a threshold \((\psi)\) is selected to group the nodes into clusters, as listed in Table 8.
Comparison of clustering results: using the above algorithm, the comparative results are shown in Tables 8, 9, and 10.
Comparative analysis
Now, we compare MST-based clustering with the hierarchical hesitant fuzzy k-means clustering algorithm to show the ability of the proposed method.
Example 10
Suppose a company wants to hire a manager. The attributes on which the experts evaluate candidates are communication, leadership, experience, and confidence; the corresponding HF information is shown in Table 11:
Hierarchical k-means clustering method:
First, consider each HFS as an independent cluster: \(\{h_{B_1}\}, \{h_{B_2}\}, \{h_{B_3}\}, \{h_{B_4}\}, \{h_{B_5}\}\). Then calculate the distances between the different HFSs with the help of Eq. (20).
The distance \(d(h_{B_2}, h_{B_3})\) is the minimum, so these clusters are merged, dividing the hesitant fuzzy sets into four clusters: \(\{h_{B_1}\}, \{h_{B_2},h_{B_3}\}, \{h_{B_4}\}, \{h_{B_5}\}\). After taking the average of \(h_{B_2}\) and \(h_{B_3}\) as the new cluster centre, we again calculate the distance between each pair of clusters.
Now \(d(\{h_{B_2}, h_{B_3}\}, h_{B_1})\) is the shortest distance, giving the clusters \(\{h_{B_1},h_{B_2}, h_{B_3}\}\), \(\{h_{B_4}\}\), \(\{h_{B_5}\}\). We merge \(h_{B_1}, h_{B_2}\), and \(h_{B_3}\) and recalculate the distances; \(d(\{h_{B_1},h_{B_2}, h_{B_3}\},h_{B_4})\) is then the minimum distance, and after this merge the final step combines the remaining clusters into the single cluster \(\{h_{B_1},h_{B_2}, h_{B_3},h_{B_4},h_{B_5}\}\).
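The hierarchical procedure just described can be sketched as follows. The distance function here is a hypothetical normalized Hamming distance standing in for Eq. (20), and the cluster centre is taken as the element-wise average of the padded hesitant elements; the function returns the whole merge history, one clustering per step:

```python
def hfe_distance(h1, h2):
    # Hypothetical normalized Hamming distance between two hesitant
    # fuzzy elements, padding the shorter one with its maximum value;
    # it stands in for the distance of Eq. (20).
    n = max(len(h1), len(h2))
    a = sorted(h1) + [max(h1)] * (n - len(h1))
    b = sorted(h2) + [max(h2)] * (n - len(h2))
    return sum(abs(x - y) for x, y in zip(a, b)) / n

def set_distance(A, B):
    # Average the element-wise distances over all attributes.
    return sum(hfe_distance(a, b) for a, b in zip(A, B)) / len(A)

def merge_centres(A, B):
    # Cluster centre after a merge: element-wise average of the
    # padded hesitant elements, attribute by attribute.
    out = []
    for h1, h2 in zip(A, B):
        n = max(len(h1), len(h2))
        a = sorted(h1) + [max(h1)] * (n - len(h1))
        b = sorted(h2) + [max(h2)] * (n - len(h2))
        out.append([(x + y) / 2 for x, y in zip(a, b)])
    return out

def hierarchical(hfss):
    # Start with every HFS as its own cluster, then repeatedly merge
    # the two closest clusters until a single cluster remains.
    clusters = [[i] for i in range(len(hfss))]
    centres = list(hfss)
    history = [[list(c) for c in clusters]]
    while len(clusters) > 1:
        _, i, j = min((set_distance(centres[i], centres[j]), i, j)
                      for i in range(len(clusters))
                      for j in range(i + 1, len(clusters)))
        clusters[i] += clusters[j]
        centres[i] = merge_centres(centres[i], centres[j])
        del clusters[j], centres[j]
        history.append([list(c) for c in clusters])
    return history
```

Each merge requires recomputing a cluster centre and all pairwise distances, which is exactly the extra computation the MST-based method avoids: there the similarity matrix is built once and only the tree is thresholded.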
MST-based method: after applying the steps of the MST-based clustering algorithm as in Example 5, we obtain the clusters tabulated in Table 12.
Results and discussion
Although many researchers have used different techniques to find clusters, such as association-coefficient-based clustering, Hausdorff-metric-based clustering, and hierarchical k-means clustering, we have implemented an MST-based clustering algorithm; the clusters obtained are tabulated in Table 8. We have also computed the clusters for the existing measure \((\bar{s}_{hnh})\) using the same example, with the results given in Tables 9 and 10. These tables show that the clusters are not fixed for the existing measure \((\bar{s}_{hnh})\), as two different clustering results are formed, making it difficult for users to select the right one. We therefore conclude that the existing measures are not reliable and efficient, whereas the proposed measure yields unique, better, and efficient results with the HF-MST algorithm.
To further show the ability of the proposed measure, we compared hierarchical k-means clustering with MST-based clustering on another example; the computed results are given in Table 12. Although the same clusters can be obtained with both methods, hierarchical k-means clustering requires calculating a clustering centre at each step and merging the clusters down to a single cluster. The existing clustering technique therefore requires considerably more computation, whereas the proposed clustering technique is simpler and more effective.
Conclusion and future scope
Similarity and distance measures are two important tools for solving clustering, pattern-recognition, and medical-diagnosis problems, among others. Various authors have suggested such measures, but they are mainly distance measures, with some similarity measures extracted from them for different applications, and it is observed that some of these yield inappropriate, unreasonable, or counter-intuitive results. This motivated us to develop a new similarity measure that tackles these problems while satisfying the required properties for HFSs. Numerical experiments and pattern-recognition problems were then taken into consideration. In the numerical experiments we considered cases with HFSs of different lengths to explore the performance of the measures, which shows that the proposed measure attains consistent and rational results. To verify the validity of the proposed measure for pattern-recognition problems, we considered several examples: in the first two, an unknown pattern was classified as one of the known patterns, and in the third we additionally calculated the DOC (Degree of Confidence) using an example of Asian rice. Furthermore, a clustering algorithm based on the maximum spanning tree (MST) was suggested for the HF environment and compared with existing measures. From this comparative study we determined that the proposed measure is reliable and yields superior outcomes. For future scope, this work can be expanded to interval-valued HFSs (IVHFSs), hesitant intuitionistic fuzzy sets (HIFSs), hesitant picture fuzzy sets (HPFSs), hesitant spherical fuzzy sets, and hesitant q-rung orthopair fuzzy sets, and can also be extended with adaptive k-means and hybrid clustering algorithms.
Availability of data and materials
All data generated or analysed during this study are included in this article.
Abbreviations
HFSs: Hesitant fuzzy sets
MST: Maximum spanning tree
DOC: Degree of Confidence
MCDM: Multi-criteria decision making
HFE: Hesitant fuzzy element
FS: Fuzzy set
References
Zadeh LA (1965) Fuzzy sets. Inf Control 8:338–356
Yager RR (1979) On measures of fuzziness and negation, part I: membership in the unit interval. Int J Gen Syst 5:221–229
Atanassov KT (1986) Intuitionistic fuzzy sets. Fuzzy Sets Syst 20:87–96
Zadeh LA (1975) The concept of a linguistic variable and its application to approximate reasoning (I), (II), (III). Inf Sci 8:199–249, 8:301–357, 9:43–80
Dubois D, Prade H (1980) Fuzzy Sets and Systems: Theory and Applications. Academic Press, New York
Yager RR (1986) On the theory of bags. Int J Gen Syst 13:23–37
Torra V, Narukawa Y (2009) On hesitant fuzzy sets and decision. The 18th IEEE International Conference on Fuzzy Systems, Jeju Island, pp 1378–1382
Torra V (2010) Hesitant fuzzy sets. Int J Intell Syst 25:529–539
Gupta R, Kumar S (2021) Intuitionistic fuzzy scaleinvariant entropy with correlation coefficientsbased VIKOR approach for multicriteria decisionmaking. Granul Comput 7:77–93. https://doi.org/10.1007/s41066020002520
Gupta R, Kumar S (2022) Correlation Coefficient Based Extended VIKOR Approach Under Intuitionistic Fuzzy Environment. Int J Inf Manag Sci 33(1):35–54
Wei C, Yan F, Rodriguez RM (2016) Entropy measures for hesitant fuzzy sets and their application relations and fuzzy in multicriteria decisionmaking. J Intell Fuzzy Syst 31(1):673–685
Wang YM, Que CP, Lan YX (2017) Hesitant fuzzy TOPSIS multiattribute decision method based on prospect theory. Control Decis (Chin) 32(5):864–870
Karaaslan F, Karamaz F (2021) Hesitant fuzzy parameterized hesitant fuzzy soft sets and their applications in decision making. Int J Comput Math. Taylor and Francis. https://doi.org/10.1080/00207160.2021.2019715
Chen X, Suo C, Li Y (2021) Distance measures on intuitionistic hesitant fuzzy set and its application in decisionmaking. Comput Appl Math 40(3). https://doi.org/10.1007/s40314021014787
Chen N, Xu Z, Xia M (2013) Correlation coefficients of hesitant fuzzy sets and their applications to clustering analysis. Appl Math Model 37(4):2197–2211
Singh S, Lalotra S (2019) On generalized correlation coefficients of the hesitant fuzzy sets with their application to clustering analysis. Comput Appl Math. https://doi.org/10.1007/s4031401907650
Liao HC, Xu ZS, Zeng XJ (2015) Novel correlation coefficients between hesitant fuzzy sets and their application in decision making. KnowlBased Syst 82:115–127
Joshi R, Kumar S (2019) A new approach in multiple attribute decision making using exponential hesitant fuzzy entropy. Int J Inf Manag Sci 30:305–322
Lalotra S, Singh S (2020) Knowledge measure of hesitant fuzzy set and its application in multiattribute decisionmaking. Comput Appl Math 39(2). https://doi.org/10.1007/s403140201095y
Suo CF, Li YM, Li ZH (2020) An (R, S)norm information measure for hesitant fuzzy sets and its application in decisionmaking. Comput Appl Math. https://doi.org/10.1007/s40314020013399
Singh P (2017) Distance and similarity measures for multipleattribute decisionmaking with dual hesitant fuzzy sets. Comput Appl Math 36(1):111–126
Tyagi SK (2015) Correlation coefficient of dual hesitant fuzzy sets and its applications. Appl Math Model 39:7082–7092
Wei GW (2012) Hesitant fuzzy prioritized operators and their application to multiple attribute decision making. KnowlBased Syst 31:176–182
Yu DJ, Wu YY, Zhou W (2012) Generalized hesitant fuzzy Bonferroni mean and its application in multicriteria group decision making. J Inf Comput Sci 9:267–274
Rodriguez RM, Martinez L, Herrera F (2012) Hesitant fuzzy linguistic term sets for decision making. IEEE Trans Fuzzy Syst 20:109–119
Xu ZS (2015) Hesitant Fuzzy Sets Theory. SpringerVerlag, Berlin
Xia MM, Xu ZS, Chen N (2013) Some hesitant fuzzy aggregation operators with their application in group decision making. Group Decis Negot. 22:259–279
Wang L, Shen Q, Zhu L (2015) Dual hesitant fuzzy power aggregation operators based on Archimedean tconorm and tnorm and their application to multiple attribute group decision making. Appl Soft Comput 38:23–50
Qin J, Liu X, Pedrycz W (2016) Frank aggregation operators and their application to hesitant fuzzy multiple attribute decision making. Appl Soft Comput 41:428–452
Tan CQ, Yi WT, Chen XH (2015) Hesitant fuzzy Hamacher aggregation operators for multicriteria decision making. Appl Soft Comput 26:325–349
Wang PZ (1983) Fuzzy Sets and Its Applications. Shanghai Science and Technology Press, Shanghai (in Chinese)
Zwick R, Carlstein E, Budescu DV (1987) Measures of similarity among fuzzy concepts: a comparative analysis. Int J Approx Reason 1:221–242
Zeng WY, Li HX (2006) Inclusion measure, similarity measure and the fuzziness of fuzzy sets and their relations. Int J Intell Syst 21:639–653
Gupta R, Kumar S (2021) A new similarity measure between picture fuzzy sets with applications to pattern recognition and clustering problems. Granul Comput. https://doi.org/10.1007/s41066021002831
Gupta R, Kumar S (2022) Intuitionistic Fuzzy SimilarityBased Information Measure in the Application of Pattern Recognition and Clustering. Int J Fuzzy Syst. https://doi.org/10.1007/s40815022012725
Xu ZS, Xia MM (2011) Distance and similarity measures for hesitant fuzzy sets. Inf Sci 181:2128–2138
Xu ZS, Xia MM (2011) On distance and correlation measures of hesitant fuzzy information. Int J Intell Syst 26:410–425
Peng D, Peng B, Wang T (2020) Reconfiguring IVHFTOPSIS decision making method with parameterized reference solutions and a novel distance for corporate carbon performance evaluation. J Ambient Intell Human Comput 11:3811–3832. https://doi.org/10.1007/s12652019016039
Xu ZS, Xia MM (2012) Hesitant fuzzy entropy and crossentropy and their use in multiattribute decisionmaking. Int J Intell Syst 27(9):799–822
Peng DH, Gao CY, Gao ZF (2013) Generalized hesitant fuzzy synergetic weighted distance measures and their application to multiple criteria decisionmaking. Appl Math Model 37:5837–5850
Zhang X, Xu Z (2015) Novel distance and similarity measures on hesitant fuzzy sets with applications to clustering analysis. J Intell Fuzzy Syst 28(5):2279–2296
Ahmad A, Khan SS (2019) Survey of stateoftheart mixed data clustering algorithms. IEEE Access 7:31883–31902
Dhillon IS, Modha DS (2002) A DataClustering Algorithm on Distributed Memory Multiprocessors. In: Zaki MJ, Ho CT (eds) LargeScale Parallel Data Mining, vol 1759. Springer, Berlin/Heidelberg, pp 245–260. Lecture Notes in Computer Science
Lapegna M, Balzano W, Meyer N, Romano D (2021) Clustering Algorithms on Low-Power and High-Performance Devices for Edge Computing Environments. Sensors 21:5395. https://doi.org/10.3390/s21165395
Laccetti G et al.(2020) A high performance modified Kmeans algorithm for dynamic data clustering in multicore CPUs based environments. In: Montella R, et al. (eds) Internet and Distributed Computing Systems, IDCS19, vol 11874. Springer, Cham. Lecture Notes in Computer Science. https://doi.org/10.1007/97830303491419
Laccetti G et al (2020) Performance enhancement of a dynamic Kmeans algorithm through a parallel adaptive strategy on multicore CPUs. J Parallel Distrib Comput 145:34–41
Lapegna M, Stranieri S (2022) A DirectionBased Clustering Algorithm for VANETs Management. In: Barolli L, Yim K, Chen HC. (eds) Innovative Mobile and Internet Services in Ubiquitous Computing. Lecture Notes in Networks and Systems. https://doi.org/10.1007/9783030797287
Lapegna M, Mele V, Romano D (2020) An Adaptive Strategy for Dynamic Data Clustering with the KMeans Algorithm. In: Wyrzykowski R, Deelman E, Dongarra J, Karczewski K (eds) Parallel Processing and Applied Mathematics PPAM, vol 12044. Springer, Cham, pp 101–110. Lecture Notes in Computer Science
Laccetti G, Lapegna M, Romano D (2022) A hybrid clustering algorithm for highperformance edge computing devices [Short]. In: 21st International Symposium on Parallel and Distributed Computing (ISPDC), Basel, Switzerland, pp 78–82. https://doi.org/10.1109/ISPDC55340.2022.00020
Laccetti G, Lapegna M, Montella R (2022) Toward a highperformance clustering algorithm for securing edge computing environments. In: 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid), Taormina, Italy, pp 820–825. https://doi.org/10.1109/CCGrid54584.2022.00097
Farhadinia B (2013) Information measures for hesitant fuzzy sets and interval valued hesitant fuzzy sets. Inf Sci 240:129–144
Farhadinia B (2014) Distance and similarity measures for higher order hesitant fuzzy sets. KnowlBased Syst 55:43–48
Li DQ, Zeng WY, Zhao YB (2015) Note on distance measure of hesitant fuzzy sets. Inf Sci 321:103–115
Li DQ, Zeng WY, Li JH (2015) New distance and similarity measures on hesitant Fuzzy sets and their applications in multiple criteria decision making. Eng Appl Artif Intell 40:11–16
Zeng W, Li D, Yin Q (2016) Distance and similarity measures between hesitant fuzzy sets and their application in pattern recognition. Pattern Recogn Lett 84:267–271
Liao HC, Xu ZS (2015) Approaches to manage hesitant fuzzy linguistic information based on the cosine distance and similarity measures for HFLTSs and their application in qualitative decision making. Expert Syst Appl 42:5328–5336
Liao HC, Xu ZS, Xia MM (2013) Multiplicative consistency of hesitant fuzzy preference relation and its application in group decision making. Technical report
Xia MM, Xu ZS (2011) Hesitant fuzzy information aggregation in decision making. Int J Approx Reason 52:395–407
Rezaei K, Rezaei H (2020) New distance and similarity measures for hesitant fuzzy sets and their application. J Intell Fuzzy Syst 20. https://doi.org/10.3233/JIFS200364
Hatzimichailidis AG, Papakosta GA, Kaburlasos VG (2012) A novel distance measure of intuitionistic fuzzy sets and its application to pattern recognition problems. Int J Intell Syst. 27:396–409
Yao J, Dash M (2000) Fuzzy clustering and fuzzy modeling. Fuzzy Sets Syst 113:381–388
Xu Z, Chen J, Wu J (2008) Clustering algorithm for intuitionistic fuzzy sets. Inf Sci 178:3775–3790
Wen M, Zhao H, Xu Z (2019) Hesitant fuzzy Lukasiewicz implication operation and its application to alternatives’ sorting and clustering analysis. Soft Comput 23:393–405
Yang MS, Hussain Z (2019) Distance and similarity measures of hesitant fuzzy sets based on Hausdorff metric with applications to multicriteria decision making and clustering. Soft Comput 23(14):5835–5848
Zhang X, Xu Z (2015) Hesitant fuzzy agglomerative hierarchical clustering algorithms. Int J Syst Sci 46(3):562–576
Kruskal JB (1956) On the shortest spanning subtree of a graph and the travelling salesman problem. Proc Am Math Soc 7:48–50
Prim RC (1957) Shortest connection networks and some generalizations. Bell Syst Tech J 36:1389–1401
Acknowledgements
Not applicable.
Funding
No funds, grants or other support was received.
Author information
Contributions
Both the authors contributed to the design and implementation of the research, to the analysis of the results and to the writing of the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Gupta, R., Kumar, S. Novel similarity measure between hesitant fuzzy set and their applications in pattern recognition and clustering analysis. J. Eng. Appl. Sci. 71, 5 (2024). https://doi.org/10.1186/s4414702300329y