Novel similarity measure between hesitant fuzzy sets and its applications in pattern recognition and clustering analysis

Hesitant fuzzy sets (HFSs) extend classical fuzzy sets by allowing each element a set of possible membership values from [0, 1]. Similarity and distance measures are useful tools for solving medical, clustering, and pattern-recognition problems. Most researchers have built their HFS similarity measures by transforming distance measures, but many of the resulting measures give inadequate results. We therefore propose a new similarity measure for HFSs and prove that it satisfies the required properties. Additionally, numerous HFS examples are considered, and the performance of existing measures is compared with the proposed measure across different cases. Furthermore, we apply the proposed measure to pattern-recognition problems through three different examples and calculate a performance index, the degree of confidence (DOC), to explore the behavior of the different measures. Finally, we suggest an MST-based clustering algorithm for the hesitant fuzzy environment and contrast the performance of the proposed measure with existing ones. All these comparisons illustrate that the proposed measure achieves efficient and reasonable results, and they verify that the proposed measure is not restricted to a particular domain but can be applied effectively across diverse fields.


Introduction
Initially, the only approach to estimating ambiguity was probability. However, not every kind of uncertainty in daily life can be computed through probability: vague notions such as "very smart," "low price," and "fast speed" cannot be expressed in exact terms. Thus, Zadeh [1] introduced fuzzy set theory to tackle these uncertainties, and it has proved suitable in many applications such as approximate reasoning, decision-making, and fuzzy control. Yager [2] suggested fuzzy information measures. However, some practical applications remain difficult to solve using fuzzy sets alone. Consequently, several new extensions of the fuzzy set (FS) were proposed, for instance the intuitionistic fuzzy set (IFS) by Atanassov [3], the interval-valued fuzzy set by Zadeh [4], the type-2 fuzzy set by Dubois [5], and the fuzzy multiset by Yager [6]. These extensions are centered on the hypothesis that it is uncertain to allocate the membership degree of an element to a fixed set (Torra and Narukawa [7, 8]). Gupta and Kumar [9, 10] suggested IFS-based approaches to multi-criteria decision-making (MCDM) problems. In all of these extensions, including the FS itself, the membership degree takes one specific value. In practice, however, it is not always true that the same membership value will be assigned to an alternative. The following example explains this situation.
Suppose a company wants to take a decision through its governing council. A governing council has many members with different backgrounds, knowledge, expertise, and qualifications. Thus, for a particular decision, it is not necessary that all members assign the same membership value to an alternative. For instance, some decision makers may assign 0.4, some 0.5, and others 0.6 as the membership degree, and it may not be possible for them to reconcile with one another. In that case, the HFS copes with this problem more powerfully than all the other extensions of the FS: we can represent the assessment with the hesitant fuzzy element (HFE) {0.4, 0.5, 0.6}, which expresses the problem more impartially than the interval-valued fuzzy number [0.4, 0.6], the crisp number 0.4 (or 0.5 or 0.6), or the intuitionistic fuzzy number (0.4, 0.6). The idea of HFSs, whose membership function returns a set of possible values, was presented by Torra and Narukawa [7, 8]. Because the HFS considers human hesitance more equitably than the other extensions of the fuzzy set, it has become an effective concept for tackling unpredictability and unreliability, and in a short span of time it has attracted the curiosity of many researchers [11-14] working on evaluation and decision-making processes. Chen et al. [15] and Singh and Lalotra [16] explored correlation coefficients of HFSs and applied them in clustering analysis. Other researchers [17-19] also suggested HFS-based approaches to decision-making applications. Suo et al. [20] suggested an information measure for HFSs, whereas Singh [21] proposed dual-HFS-based similarity and distance measures and applied them in decision making. A correlation coefficient for dual hesitant fuzzy sets was suggested by Tyagi [22]. Further, a hesitant fuzzy prioritized operator was explored by Wei [23], and a generalized hesitant fuzzy Bonferroni mean operator was suggested by Yu et al. [24]. Several authors introduced HFS-based concepts in the field of decision-making [25-27]. Using the idea of Archimedean t-conorms and t-norms, a dual hesitant fuzzy power aggregation operator was suggested by Wang et al. [28] for MAGDM problems. Further, a Frank aggregation operator was suggested by Qin et al. [29] and a Hamacher aggregation operator by Tan et al. [30] using the notion of HFSs, both applied in the field of MCDM.
The two main indexes used in FS theory are similarity and distance measures, used considerably in different fields such as pattern recognition, approximate reasoning, decision-making, machine learning, and market prediction. The concept of a similarity measure was first presented by Wang [31]. Geometric distance and Hausdorff metrics were explored by Zwick et al. [32], who also gave a comparison of similarity measures for FSs. Thereafter, the fundamental definitions of the inclusion measure and the similarity measure (SM) were examined by Zeng and Li [33]. Gupta and Kumar [34, 35] suggested similarity measures for fields such as pattern recognition and clustering. Over the last few years, distinct researchers have contributed work on hesitant fuzzy sets: similarity and distance measures for HFSs were suggested by Xu and Xia [36], correlation and distance measures by [37, 38], and entropy measures by Xu and Xia [39], whereas a generalized hesitant fuzzy synergetic weighted distance measure (HFSWDM) was suggested by Peng et al. [40] and used in MCDM problems. Some new similarity and distance measures were suggested by Zhang and Xu [41] for HFSs in clustering applications. Ahmad and Khan [42] identified different themes for the study of mixed-data clustering. Different authors have applied the K-means algorithm in applications such as distributed-memory multiprocessors [43], edge-computing environments [44], and multi-core CPUs [45, 46]. Lapegna and Stranieri [47] suggested a direction-based clustering algorithm. Lapegna et al. [48] suggested an adaptive approach with the K-means clustering algorithm. Laccetti et al. [49, 50] suggested different clustering algorithms for edge-computing environments. Distance, similarity, and information measures for HFSs were studied by Farhadinia [51, 52], who further extended this work to interval-valued and higher-order HFSs. The idea of the hesitance degree was suggested by Li et al. [53, 54], who gave new formulas for the calculation of similarity measures. Zeng et al. [55] also suggested similarity measures for hesitant fuzzy sets in pattern recognition. Cosine-based similarity and distance measures were suggested by Liao et al. [56] and used for decision-making problems.

Methods
It should be noted that existing measures calculate their values based on distance, and some researchers have converted their distance measures into similarity measures. Some of the existing measures fail to achieve reasonable results in certain cases, while some hesitance-degree-based measures for HFSs also give inadequate results. This encouraged us to come up with a new similarity measure for HFSs. In view of the complexity of practical problems, it is necessary to propose similarity measures that make calculation simpler and that are more useful for solving problems such as clustering, pattern recognition, and approximate reasoning. The main highlights of this paper are:
• A novel similarity measure for HFSs is introduced, and its properties for HFSs are proved.
• Numerical and comparative analyses are performed, covering different cases of HFSs and pattern-recognition problems.
• The DOC is calculated as a performance index with the aid of a numerical illustration.
• A hesitant fuzzy clustering algorithm is developed and applied, with the proposed measure, to a numerical example to compare its potency with existing measures and demonstrate the usefulness and acceptance of the proposed method.
The outline of the paper is as follows: the "Preliminaries" section deals with basic FS and HFS notions and their associated properties. Existing distance and similarity measures suggested by distinct authors are covered in the "Existing similarity measures" section. The "Proposed new HF-similarity measure" section covers the proposed similarity measure with its validation. The "Numerical and comparative analysis" section covers the numerical experiments for pattern-recognition and clustering problems. The last section presents the conclusion and the scope for future work.

Preliminaries
Definition 1 Let Y = {y_1, y_2, ..., y_v} be a finite universe of discourse. A fuzzy set (FS) U on Y was defined by Zadeh [1] as U = {⟨y_n, μ_U(y_n)⟩ | y_n ∈ Y}, where μ_U(y_n) ∈ [0, 1] is the membership degree of y_n in U.

Definition 2 Torra and Narukawa [7, 8]. A hesitant fuzzy set (HFS) B on Y is a function that, when applied to Y, returns a subset of [0, 1]; it can be described as B = {⟨y_n, h_B(y_n)⟩ | y_n ∈ Y}, where h_B(y_n) is a set of values in [0, 1] denoting the possible membership degrees of the element y_n ∈ Y in the set B. Xu and Xia [36] call h_B(y_n) a hesitant fuzzy element (HFE) for convenience.
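For concreteness, an HFS can be represented programmatically as a mapping from each element of Y to its set of possible membership values. The following Python sketch is our own illustrative helper (names and structure are assumptions, not part of the cited works), shown here on the governing-council example:

```python
def make_hfs(assignments):
    """Build an HFS as a dict mapping element -> sorted tuple of memberships."""
    hfs = {}
    for element, memberships in assignments.items():
        if not all(0.0 <= m <= 1.0 for m in memberships):
            raise ValueError("membership values must lie in [0, 1]")
        # an HFE is a set of possible values, so duplicates are collapsed
        hfs[element] = tuple(sorted(set(memberships)))
    return hfs

# Governing-council example: three experts hesitate among 0.4, 0.5, and 0.6
B = make_hfs({"y1": [0.4, 0.5, 0.6]})
```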
The different operations, such as union, intersection, and complement, are given in the subsequent definitions.

Definition 3
For hesitant fuzzy elements h_B1, h_B2, and h_B, Torra and Narukawa [7, 8] described the following operations:
(a) complement: h_B^c = ∪_{γ ∈ h_B} {1 − γ};
(b) union: h_B1 ∪ h_B2 = ∪_{γ1 ∈ h_B1, γ2 ∈ h_B2} max{γ1, γ2};
(c) intersection: h_B1 ∩ h_B2 = ∪_{γ1 ∈ h_B1, γ2 ∈ h_B2} min{γ1, γ2}.

Definition 4 Xu and Xia [36] describe the sum and product of HFEs in the following form:
(c) h_B1 ⊕ h_B2 = ∪_{γ1 ∈ h_B1, γ2 ∈ h_B2} {γ1 + γ2 − γ1γ2};
(d) h_B1 ⊗ h_B2 = ∪_{γ1 ∈ h_B1, γ2 ∈ h_B2} {γ1γ2}.
For a collection of HFEs h_Bn (n = 1, 2, ..., j), operations (c) and (d) of Definition 4 were summarized by Liao et al. [57]. For different HFEs, the number of values may differ. Let l_{h_B(y_n)} denote the length of h_B(y_n), and consider l = max{l_{h_B1}, l_{h_B2}} for two hesitant fuzzy elements h_B1 and h_B2. So that the measures work properly, Xu and Xia [36] provided a rule based on the hypothesis that the decision makers are pessimistic: if l_{h_B1} < l_{h_B2}, we add the minimum value of h_B1 to h_B1 until its length equals that of h_B2; if l_{h_B1} > l_{h_B2}, we add the minimum value of h_B2 to h_B2 until its length equals that of h_B1.
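The pessimistic length-equalization rule of Xu and Xia [36] can be sketched as follows (an illustrative Python helper of our own; the function name is an assumption):

```python
def equalize(h1, h2):
    """Pad the shorter HFE with its own minimum value until lengths match."""
    h1, h2 = sorted(h1), sorted(h2)
    while len(h1) < len(h2):
        h1.insert(0, min(h1))   # pessimistic hypothesis: repeat the minimum
    while len(h2) < len(h1):
        h2.insert(0, min(h2))
    return h1, h2

# Example 3's HFEs: {0.2, 0.2, 0.5} has length 3, {0.3, 0.3} is padded to length 3
a, b = equalize([0.2, 0.2, 0.5], [0.3, 0.3])
```

After equalization the two HFEs can be compared value by value, which is what the distance and similarity measures in the following sections assume.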
Liao et al. [57] introduced a theorem according to the above operational laws. We observe that the length of the derived HFE increases after applying the above-mentioned operations, so the complexity of the calculations also increases. Therefore, they suggested a new methodology so that the length of the derived HFE decreases while handling HFEs. The modified operational laws of Definition 4 are as follows:

Definition 5 Liao et al. [57]. Consider a collection of HFEs B = {h_B1, h_B2, ..., h_Bj} and let β be a positive real number; the modified laws apply the operations componentwise to the ordered values of the HFEs, so that the length of the derived HFE does not grow.

However, this comparison rule cannot discriminate between two HFEs in some specific cases. Therefore, to cope with this problem, the variance function of an HFE was suggested by Liao et al. [57], who also introduced a new method for ranking HFEs.
The connection between the score function and the variance function is the same as that between the mean and variance in statistics. To compare two HFEs, a strategy can be obtained simply by taking the score function c(h_B) and the variance function v(h_B) into consideration: if the scores differ, the HFE with the larger score is ranked higher; if the scores are equal, the HFE with the smaller variance is ranked higher.

Example 3 Let h_B1 = ⟨0.2, 0.2, 0.5⟩ and h_B2 = ⟨0.3, 0.3⟩ be two HFEs. By Definition 7, both have the score 0.3, so the comparison is decided by their variance functions.

Definition 8 Xu and Xia [36]. For HFSs B, F, and D, a distance measure d(B, F) should satisfy the following conditions: (1) 0 ≤ d(B, F) ≤ 1; (2) d(B, F) = 0 if and only if B = F; (3) d(B, F) = d(F, B).

Definition 9 Xu and Xia [36]. For HFSs B, F, and D, a similarity measure S(B, F) should satisfy the following conditions: (1) 0 ≤ S(B, F) ≤ 1; (2) S(B, F) = 1 if and only if B = F; (3) S(B, F) = S(F, B).
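The score-then-variance ranking strategy can be sketched in Python as follows. The score is the mean of the HFE's values; the pairwise-deviation form of the variance used here is an assumption for illustration and may differ in detail from the formula of Liao et al. [57]:

```python
from itertools import combinations
from math import sqrt

def score(h):
    """Score function c(h): the mean of the HFE's values."""
    return sum(h) / len(h)

def variance(h):
    """Assumed variance form: root of summed pairwise squared deviations, / length."""
    return sqrt(sum((a - b) ** 2 for a, b in combinations(h, 2))) / len(h)

def rank(h1, h2, tol=1e-9):
    """Return the preferred HFE: higher score wins; ties go to lower variance."""
    s1, s2 = score(h1), score(h2)
    if abs(s1 - s2) > tol:
        return h1 if s1 > s2 else h2
    return h1 if variance(h1) < variance(h2) else h2
```

On Example 3's data the scores tie at 0.3, so the variance decides: ⟨0.3, 0.3⟩ has zero spread and is ranked higher than ⟨0.2, 0.2, 0.5⟩.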

Existing similarity measures
Xu and Xia [36] suggested several distinct distance measures for HFEs.
(1) For any two HFEs h_B^{q(k)} and h_F^{q(k)}, Xu and Xia [36] gave the Manhattan distance d(h_B, h_F).
(2) For two HFEs h_B^{q(k)} and h_F^{q(k)} over a universal set, Xu and Xia [36] suggested the hesitant normalized Hamming distance, the hesitant normalized Euclidean distance, and the generalized hesitant normalized distance using the concepts of the Hamming and Euclidean distances.
(3) The hesitant normalized Hamming-Hausdorff distance, the hesitant normalized Euclidean-Hausdorff distance, the hybrid hesitant normalized Hamming distance, and the hybrid hesitant normalized Euclidean distance were also suggested by Xu and Xia [36].
Next we describe the definitions suggested by Zeng [55].

Definition 10 Zeng [55]. Consider an HFS B on the universal set Y = {y_1, y_2, ..., y_v}. For any y_n ∈ Y, let l(h_B(y_n)) be the length of h_B(y_n); then ρ(h_B(y_n)) is considered the hesitance degree of h_B(y_n). Various similarity and distance measures were suggested by Zeng [55] using this definition.
(4) For a universal set Y, let B and F be HFSs. The normalized Hamming, Euclidean, and generalized distances including the hesitance degree between B and F were introduced by Zeng [55].

Definition 11 Rezaei [59]. For an HFS B on the universal set Y with y_n ∈ Y, ϖ(h_B(y_n)) is called the range of h_B(y_n), defined as the difference between the maximum and minimum values of h_B(y_n); the range of the hesitant fuzzy set B is described accordingly.
(5) For a universal set Y, consider B and F as HFSs. Rezaei [59] suggested various distance measures between B and F with parameter γ > 0.
Gupta and Kumar, Journal of Engineering and Applied Science (2024) 71:5
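As a concrete instance of these distance-based constructions, the hesitant normalized Hamming distance of Xu and Xia [36], and the similarity 1 − d commonly derived from it, can be sketched as follows (HFSs are represented as dicts of element → HFE, assumed already length-equalized):

```python
def hamming_distance(B, F):
    """Hesitant normalized Hamming distance over a shared universe."""
    total = 0.0
    for y in B:                          # iterate over the universe Y
        hb, hf = sorted(B[y]), sorted(F[y])
        l = len(hb)                      # equal lengths assumed (pre-equalized)
        total += sum(abs(a - b) for a, b in zip(hb, hf)) / l
    return total / len(B)                # normalize by |Y|

def similarity(B, F):
    """Distance-derived similarity: s = 1 - d."""
    return 1.0 - hamming_distance(B, F)
```

This is exactly the pattern the paper critiques: the similarity is extracted directly from a distance, which is why different HFS pairs can collapse to identical values.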

Proposed new HF-similarity measure
Similarity measures for HFSs have been suggested by distinct researchers, as described in the previous section. However, a number of them are incapable of solving decision-making problems effectively: some yield identical values in different cases, while a few give contrary and unreasonable results. Thus, it is necessary to characterize a new similarity measure that can overcome the shortcomings of the existing ones. We therefore propose a new similarity measure that is more advantageous for exploring different applications effectively. Consider a universal set Y = {y_1, y_2, ..., y_v} and two HFSs B and F in Y. We propose the following HF-similarity measure, where h_B^{q(k)}(y_n) and h_F^{q(k)}(y_n) are the hesitant fuzzy elements of the hesitant fuzzy sets B and F on Y, and l_{y_n} is the length of the HFEs.

Definition 12
For a universal set Y = {y_1, y_2, ..., y_v}, consider two HFSs B and F in Y. We define the similarity measure S : HFS(Y) × HFS(Y) → [0, 1] as below, together with its normalized and generalized forms (20).
Example 4 For Y = {y_1}, define two HFSs on Y as B = {⟨y_1, (0.3658, 0.4655, 0.5659)⟩} and F = {⟨y_1, (0.2758, 0.3955, 0.6559)⟩}. We calculate the value of the proposed similarity measure S(B, F) for this pair.

Example 5 For Y = {y_1, y_2}, we define two HFSs B and F on Y and again calculate the value of the proposed similarity measure S(B, F).

Theorem 2 The measure S(B, F) satisfies the similarity-measure properties of Definition 9.
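As an illustration of evaluating a similarity measure on Example 4's data, the following snippet uses a normalized-Hamming-based similarity as a stand-in (an assumption of ours; it is not the proposed measure S, whose value may differ):

```python
# Example 4's HFEs on the singleton universe {y1}
hB = sorted([0.3658, 0.4655, 0.5659])
hF = sorted([0.2758, 0.3955, 0.6559])

# stand-in measure: 1 minus the mean absolute difference of ordered values
d = sum(abs(a - b) for a, b in zip(hB, hF)) / len(hB)
s = 1.0 - d
print(round(s, 4))   # 0.9167 under this stand-in measure
```

Any candidate similarity measure should stay in [0, 1] and reach 1 only when the two HFSs coincide, as required by Definition 9.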

Numerical and comparative analysis
This section comprises numerical experiments on different sets, pattern recognition, and clustering analysis.

Numerical experiment
First, a numerical experiment is employed to differentiate the degrees of similarity among HFSs. To assess the ability of the distinct measures, we consider four different cases. The values of the existing and proposed measures calculated at the different pairs are given in Table 1, where contradictory results are shown in bold. It can easily be observed from Table 1 that all the existing measures give the same values for different HFSs, whereas the proposed measure gives different values. Therefore, the proposed measure achieves more reasonable and better results.
Let us take another experiment with HFSs of length two to further explore the performance of the different measures. For this purpose, we consider another three cases; the compiled results are given in Table 2. s_man gives similar values in cases 1 and 3, whereas s_hnh gives the same result for all three cases. s_hne also gives the same value in cases 1 and 2. s_hnhh and s_hneh give similar values to each other. In similar fashion, s_hhd gives the same output for all three cases. s_ehd, s_ghd (γ = 10), s_nhne, and s_nghn (γ = 6) also give the same values in cases 1 and 2. Now consider HFSs of length three; the computed values are tabulated in Table 3. s_man obtains similar values in the 1st and 3rd cases, whereas s_hnh obtains the same result for all three cases. s_hnhh and s_hneh obtain similar values with respect to each other. s_hne and s_ehd both obtain similar results in the 1st and 2nd cases, whereas s_nhnh gives similar values in the 1st and 3rd cases. In similar fashion, s_hhd obtains the same value in all three cases. Thus, we observe that some of the existing measures cannot discriminate between these cases, whereas the proposed measure can.

Pattern recognition
To classify an unknown pattern against known patterns in pattern-recognition problems, two main indexes are generally used: the distance measure and the similarity measure. Most of the existing works use distance measures, but in our case a similarity measure is used, which can easily tackle different kinds of pattern-recognition and real-life problems. The major requirement for any measure is to support a proper decision, so it is necessary to formulate the pattern-recognition problem in an efficient manner. Consider B_j (j = 1, 2, ..., f) as the known patterns and F as the unknown pattern, which we have to classify as one of the known patterns. By the identification principle, the unknown pattern is assigned to the known pattern with which it has the greatest resemblance. We illustrate the identification principle with the subsequent examples and compare the performance of the distinct measures.
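The identification principle can be sketched as follows. The Hamming-based similarity used here is a stand-in for whichever HFS similarity measure is adopted (not the paper's proposed S); the data in the usage note follow Example 7's candidate assessments:

```python
def hamming_similarity(B, F):
    """Stand-in HFS similarity: 1 minus the normalized Hamming distance."""
    return 1.0 - sum(
        sum(abs(a - b) for a, b in zip(sorted(B[y]), sorted(F[y]))) / len(B[y])
        for y in B
    ) / len(B)

def classify(known, F, sim=hamming_similarity):
    """Identification principle: assign F to the known pattern of max similarity."""
    return max(known, key=lambda label: sim(known[label], F))
```

Run on Example 7's assessments B_1 = {0.17, 0.20}, B_2 = {0.19, 0.20}, B_3 = {0.18, 0.19} against the standard F = {0.20, 0.20}, this stand-in also selects B_2, consistent with the conclusion reached later with the proposed measure.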

Example 6
Let us consider X* = {(α, β, γ) | α = β = γ = 60°} as the set of equilateral triangles. B denotes a fuzzy set on X* for every triangle, and its membership degree reflects the degree to which the triangle is associated with an equilateral triangle. Take, for example, a triangle with the three angles (65°, 70°, 45°). Some analysts think it looks like an equilateral triangle, some think it is close to equilateral, while others think it does not look equilateral at all. Thus, the membership value allotted to the fuzzy set can hesitate between different values, and the triangle can be represented with the help of a hesitant fuzzy set.
Let us consider triangles denoted in the form of HFSs, among which we have to recognize the triangle F = {⟨y_1, (0.70, 0.75, 0.80)⟩}, considered as the unknown pattern. The values obtained by the existing and proposed measures are given in Table 4. A pattern-recognition procedure should allocate the unknown pattern to one of the known patterns according to the identification principle; the classification results calculated using the different measures are also compiled in Table 4.

Example 7
Consider a company that wants to hire an HR manager. The assessments of the candidates in the form of HFSs are B_1 = {⟨y_1, (0.17, 0.20)⟩}, B_2 = {⟨y_1, (0.19, 0.20)⟩}, and B_3 = {⟨y_1, (0.18, 0.19)⟩}. The standard characteristics set by the interviewer, based on the job requirements, are F = {⟨y_1, (0.20, 0.20)⟩}.

Example 8 Now consider an example of Asian rice (Oryza sativa), which can be divided into three types: Javanica, Japonica, and Indica. A total of 21 samples are taken into consideration, i.e., 7 samples of each type. The parameters/attributes considered for each are milling degree (MD), foreign matter (FM), whiteness (WT), moisture content (MC), and grain shape (GS).
Thereafter, the similarity of Javanica with Japonica and of Javanica with Indica is evaluated using the different existing measures and the proposed measure for comparison purposes. Further, we calculate another term, the degree of confidence (DOC), to explore the behavior of the different measures. The notion of the DOC was initially given by Hatzimichailidis et al. [60] and can be described as
DoC(X) = X(Javanica, Japonica) + X(Javanica, Indica) − Y,   (24)
where X is any HF-compatibility measure.
We have taken random data for the different parameters of the Asian rice. However, this data cannot be used directly with the different measures for calculation purposes; it must first be transformed into the HF domain.
Therefore, we established transformation formulas in which mf_1(y_ij), mf_2(y_ij), and mf_3(y_ij) are membership functions of the element y_ij. The computed values corresponding to Example 8 are given in Table 6, which outlines the similarity of Javanica with Japonica and of Javanica with Indica, along with the corresponding DOC values.

Results and discussion
Observing Table 4, it is easily revealed that s_hnhh and s_hneh do not classify F to any of the known patterns, because (B_1, F) and (B_2, F) attain the same values, giving an unclassified result. Although s_man and s_hnh classify B_1 as the known pattern, both attain the same value in both cases, whereas all the other measures, including the proposed measure, classify B_1 as the known pattern; that is, F is classified into the class of B_1, meaning B_1 has the particular shape. Thus, the proposed measure gives results consistent with the existing measures. Table 5 reveals that s_man and s_hnh attain the same value for (B_1, F) and (B_3, F), so they are not able to distinguish between the candidates. s_hnhh and s_hneh also attain unclassified results. In similar fashion, most of the existing measures give unclassified results, meaning they cannot distinguish the assessments of the candidates. The proposed measure S, however, is capable of distinguishing among the candidates and selects B_2 as the best candidate, because the similarity value of B_2 is larger than those of B_1 and B_3. This shows that the proposed measure gives consistent and better results than the others. We also observe that Javanica is more similar to Japonica than to Indica (please refer to Table 6): (Javanica, Japonica) attains a greater value than (Javanica, Indica), as outlined in Fig. 1. From Table 6 it can also be perceived that the proposed measure has the maximum DOC among the measures (please see Fig. 2). From all these considerations, we conclude that the proposed measure is more reliable and efficient than the others.

Clustering analysis
Clustering is a significant modeling technique, and several researchers have explored the concept and extended it to IFSs and HFSs. Wang [31] and Yao et al. [61] explored the concept of clustering using FSs and employed it in decision-making, production prediction, and assessment processes. An association-coefficient-based IFS clustering algorithm was suggested by Xu et al. [62] and expanded to interval-valued FSs. The correlation coefficient of HFSs was examined by Chen et al. [15] and implemented for clustering. Farhadinia [51] suggested a similarity measure for HFSs and applied it in the clustering process. A clustering algorithm was investigated by Wen et al. [63] using the hesitant fuzzy Lukasiewicz implication operator. Yang and Hussain [64] suggested measures for HFSs based on the Hausdorff metric and applied them in the fields of MCDM and clustering. Some new similarity and distance measures were suggested by Zhang and Xu [41, 65] for HFSs in clustering applications, and Zhang and Xu also presented a hierarchical clustering algorithm for hesitant fuzzy environments. We now suggest a clustering algorithm for the HF environment based on the MST.
Algorithm: The steps of the MST-based clustering algorithm using HFSs are:
1. Consider HFSs (B_1, B_2, ..., B_i) in B. Calculate the values of the proposed similarity measure for these HFSs and construct the hesitant fuzzy similarity matrix.
2. Draw the HF graph by interconnecting each pair of nodes with an edge, and assign to each edge the corresponding value from the HF similarity matrix.
3. Form the maximum spanning tree (MST) using Kruskal's [66] or Prim's [67] method.
(i) An edge connects two nodes, and each edge has a weight; first, arrange the weights in decreasing order.
(ii) Then pick the edge having the highest value.
Applying the algorithm to the example data:
1. The HF similarity matrix is constructed from the data as in step 1.
2. The hesitant fuzzy graph Ẑ = (V, E) obtained by interconnecting the nodes is shown in Fig. 3.
3. Next, we form the MST of the hesitant fuzzy graph using the following steps.
(i) Arrange the edges according to their values.
(ii) e_{1,8} is the edge having the maximum weight; it connects nodes 1 and 8.
(iii) The next selected edge is e_{5,7}, which has the next-highest value after e_{1,8} and does not form a closed circuit with e_{1,8}.
(iv) In this way, the above steps are repeated until 9 edges have been selected, yielding the MST graph (see Fig. 4).
4. Lastly, a threshold (ψ) is selected to partition the tree into clusters, as given in Table 8.
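The steps above can be sketched as follows: a maximum spanning tree is built with Kruskal's method (edges taken in decreasing order of similarity, skipping any edge that closes a circuit), and clusters are then obtained by cutting MST edges whose similarity falls below the threshold ψ. The node indices and the small example graph in the usage note are our own illustration, not the paper's data:

```python
def mst_clusters(n, edges, psi):
    """MST-based clustering. edges: list of (weight, i, j), nodes 0..n-1."""
    parent = list(range(n))

    def find(x):                    # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Kruskal for a MAXIMUM spanning tree: decreasing weight, no closed circuits
    mst = []
    for w, i, j in sorted(edges, reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))

    # cut MST edges weaker than the threshold psi and regroup the nodes
    parent = list(range(n))
    for w, i, j in mst:
        if w >= psi:
            parent[find(i)] = find(j)
    groups = {}
    for v in range(n):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())
```

For instance, with similarity edges (0.9, 0, 1), (0.8, 2, 3), (0.3, 1, 2), (0.2, 0, 3) on four nodes, ψ = 0.5 cuts the weak 0.3 MST edge and yields the clusters {0, 1} and {2, 3}, while a lower ψ merges all nodes into one cluster.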
Comparison of clustering results: According to the above algorithm, the comparative results are shown in Tables 8, 9, and 10. We have also computed the clusters for the existing measure s_hnh using the same example; the clusterings obtained are given in Tables 9 and 10. Tables 9 and 10 show that the clusters are not fixed for the existing measure s_hnh, as two different clustering results are formed. Therefore, it is difficult for users to select the right clustering result. Thus, we conclude that the existing measures are not reliable and efficient, whereas the proposed measure gives unique, better, and more efficient results using the HF-MST algorithm technique.
To further show the ability of the proposed measure, we have compared hierarchical K-means clustering with MST-based clustering using another example. The computed results are given in Table 12. Although the same results can be obtained using both methods, in hierarchical K-means clustering the clustering center must be calculated at each step and the clusters merged into a single cluster. Thus, the existing clustering technique requires a lot of computation, whereas the proposed clustering technique is simpler and more effective compared with the other techniques.

Conclusion and future scope
Similarity and distance measures are two important tools for solving clustering, pattern-recognition, medical-diagnosis, and related problems. Although various authors have suggested measures, these are mainly distance measures, from which some have extracted similarity measures applicable to different applications. However, it is observed that some of them do not give appropriate results, yielding unreasonable or counter-intuitive outcomes. This motivated us to develop a new similarity measure that can tackle these problems and that satisfies the required properties for HFSs. Further, numerical experiments and pattern-recognition problems were taken into consideration. In the numerical experiments, we considered cases using HFSs of different lengths to explore the performance of the different measures, which showed that the proposed measure attains consistent and rational results. To verify the validity of the proposed measure for pattern-recognition problems, we considered different examples: in the first two, an unknown pattern was classified to one of the known patterns, and in the third we calculated the degree of confidence (DOC) for an example on Asian rice. Furthermore, a clustering algorithm using the maximum spanning tree (MST) was suggested for the HF environment, and a comparison was performed with existing measures. From all these comparative studies, we determined that the proposed measure is reliable and gives superior outcomes. As future scope, this work can be extended to interval-valued HFSs (IV-HFSs), hesitant intuitionistic fuzzy sets (HIFSs), hesitant picture fuzzy sets (HPFSs), hesitant spherical fuzzy sets, and hesitant q-rung orthopair fuzzy sets; we can also extend our work using adaptive K-means and hybrid clustering algorithms.

Definition 6
The score function of an HFE was described by Xia and Xu [58] to obtain the MAX and MIN operators of two HFEs. Xia and Xu [58]: for a HFE h_B, c(h_B) = (1/l_{h_B}) Σ_{α ∈ h_B} α is known as the score function of h_B, where l_{h_B} is the number of values in h_B.

Example 2 Let h_B1 = ⟨0.2, 0.4, 0.7⟩ and h_B2 = ⟨0.3, 0.5, 0.6⟩ be two HFEs; the summation and multiplication operations defined in Definition 5 can be applied to them.

Theorem 1
Liao et al. [57]. Let h_B1 and h_B2 be two HFEs; according to the operational laws of HFSs defined in Definition 4, the derived HFEs h_B1 ⊕ h_B2 and h_B1 ⊗ h_B2 can be written out, and the length l_{h_B1 ⊕ h_B2} of the result can then be computed.

Table 1
Calculated values at different pairs

Table 2
Calculated values at different pairs

Table 6
Different values corresponding to Example 8