Comparison by Correlation Metric of the TOPSIS and ELECTRE II Multi-Criteria Decision Aid Methods: Application to Environmental Preservation in the European Union Countries

Article history: Received: 02 September, 2020; Accepted: 23 September, 2020; Online: 20 October, 2020

Abstract: This article belongs to the field of Multi-Criteria Decision Aid (MCDA), in which several criteria must be considered in decision making. The criteria are generally as varied as possible and express different dimensions and aspects of the decision problem posed. For more than four decades, many MCDA methods have emerged and have been successfully applied to a large number of multi-criteria decision problems. Several studies have tried to compare these methods directly with one another. Since each method has its own advantages and disadvantages, a direct comparison between two methods is rarely meaningful and quickly becomes subjective. In this article, we propose a rational and objective approach for comparing such methods. It consists of using the well-known correlation measure to evaluate the quality of the results obtained by different MCDA approaches. To demonstrate the effectiveness of the proposed approach, experimental examples as well as a real case study are presented. A set of indicators, known as the Europe 2020 indicators, has been defined by the European Commission (EC) to monitor the smart, sustainable and inclusive growth performance of the European Union (EU) countries. In the proposed real study, a subset of these indicators is used to compare the environmental preservation and protection performance of the EU states. To this end, the two renowned MCDA methods ELECTRE II and TOPSIS are used to rank the EU countries from best to worst with regard to environmental preservation. The experimental results show that the proposed ranking quality measure is significant, and the case study shows that the ELECTRE II method produces a better ranking than the TOPSIS method.


Introduction
This article is an extension of the paper published at the international conference IRASET'20 [1], in which we showed the importance of the correlation metric for evaluating the quality of the ranking results of MCDA methods. In the present paper, we suggest an extension of this quality measure, this time taking into account the relative importance of the selected criteria. Indeed, in numerous multi-criteria decision problems, the decision-makers (DMs) do not share the same vision or assign the same levels of importance to the criteria, since their priorities are not always equal and are sometimes even conflicting.
To take the importance of the criteria into consideration, MCDA methods use a weighting system, represented by a set W, in which the highest weight is assigned to the most important criterion and the lowest weight to the least important one. The difference between MCDA methods lies in the approach used to aggregate the criteria with their weights in order to select the best choice with regard to the criteria considered.
Nowadays, the MCDA field has seen a remarkable abundance of methods, which have emerged and been applied to several areas [2], [3], such as human resources, health, industry and logistics management, economy management, energy management, water resources management and environment management; recently, some methods have even been used in applications against Covid-19 [4]. Generally, an MCDA problem is defined by considering a finite set A of n alternatives, where each alternative is described by a family F of m criteria. In the MCDA discipline, three classical problematics are possible. The first ranks the set A from the best to the worst alternative and is known as the ranking problematic. The second consists in classifying the set A into predefined classes and is called the sorting problematic. In the third decision problem, the aim is to select the best alternative; this is known as the choice problematic. In this article, we address the ranking problematic; applying the results of this paper to the other two problematics is a possible prospect.
For the same ranking problematic, many MCDA methods have been proposed in the multi-criteria analysis literature, each with its own resolution process as well as its advantages and disadvantages [5]. Thus, for a given multi-criteria ranking problem, the DM obtains several candidate ranking solutions, and it is no longer obvious how to opt objectively for a single one.
The approach proposed in this work remedies this embarrassment of choice among ranking solutions. Indeed, a metric is defined to evaluate the quality of each ranking obtained; the ranking with the best quality is then retained. The first version of the proposed metric [1] made no reference to the importance of the criteria: all criteria were assumed to be treated with the same importance, i.e. no criterion was considered more interesting than the others. In this paper, we extend this ranking quality metric to the general case where the criteria do not necessarily have the same importance.
The proposed correlation metric can be used not only to distinguish the best ranking among several results of MCDA methods, but also to guide and help the DM perform a robustness analysis. The latter is a primordial and highly recommended activity in multi-criteria analysis [6]. Indeed, the primary motivation for this activity is that the data provided by the DMs are often subject to uncertainty and imprecision; in particular, the choice of the parameters required by specific MCDA methods, such as the criteria weights, is not always obvious for DMs [7]. Uncertain and imprecise parameter choices inevitably have repercussions on the quality of the final result. Robustness analysis then consists in verifying the stability of the results by testing a set of slightly different parameter values. The metric proposed here can help compare objectively all the results obtained during the robustness analysis.
To prove and illustrate the significance and importance of the ranking quality measure, we propose a real case study that aims to rank the European Union countries according to their level of environmental preservation. A set of indicators, known as the Europe 2020 indicators, has been defined and is monitored by the European Commission to compare and control the smart, sustainable and inclusive growth performance of all the EU countries (http://epp.eurostat.ec.europa.eu/portal/page/portal/europe_2020_indicators/headline_indicators). In the proposed case study, a subset of these indicators, all related to environmental performance, is selected as criteria. Examples of these indicators include: "Waste generated except main mineral waste", "Recycling rate of e-waste", "Exposure to PM10 pollution", "Exposure of the urban population to air pollution by fine particles", "Final energy consumption", "Greenhouse gas emissions", "Share of renewable energies", etc. A total of 11 indicators are selected. In this first exploitation, these indicators are used by the two popular MCDA methods ELECTRE II and TOPSIS to rank and evaluate the environmental performance of the EU countries. All the results obtained are compared on the basis of the quality measurement.
The case study remains valid and open to all other MCDA ranking methods. The choice of methods used in this paper is only illustrative.
The rest of the article is structured as follows. In the second section, a brief reminder of MCDA methods is given. In section III, a reminder of the ELECTRE II and TOPSIS methods is presented. Section IV presents the case study, which ranks the EU countries according to their environmental preservation performance. In section V, we present the extension of the ranking quality measurement approach. In section VI, all the numerical experiments for the test example and the case study are detailed. Lastly, the paper is concluded with new and possible research axes.

Background
Decision-making is certainly often multi-criteria: several criteria are considered to find a solution, such as a best choice, a ranking or a sorting, according to the problematics mentioned above. The criteria adopted are often contradictory, insofar as a better choice with respect to one criterion is not necessarily better for another criterion; price and quality, for instance, are two contradictory criteria. In addition, the criteria are not always expressed on the same measurement scale and can represent different points of view [8]: political, military, economic, comfort, social, education, investment cost, environmental impact, etc.
In some MCDA methods, such as the Weighted Sum method [9] and the TOPSIS method [3], all criteria are normalized and aggregated into a single criterion, called the synthesis criterion, on the basis of which the final decision is made. Note that transforming the criteria by normalization is not innocuous and influences the final solution. Indeed, the final solution may depend on the normalization operation used, so these methods are to be used with caution [10].
Nowadays, the MCDA field has experienced great progress both in theory and in application [11]. Many methods have emerged, each has its own approach to aggregate criteria, and each has its advantages and disadvantages. There are currently two main resolution processes [12].
The first process is known as the synthesis criterion approach. The principle of the methods of this approach is to transform the multi-criteria problem into a simple mono-criterion problem, first by normalizing all the criteria and then by aggregating all the normalized criteria into a single decision criterion. Examples of these methods include the weighted sum method (WSM) [7], [9], goal programming [13], the TOPSIS method [3], and many others. In this paper, the TOPSIS method is used. The second resolution process is known as the outranking approach. Its main idea is to develop a relation, built by comparing the alternatives two by two, named the outranking relation and denoted S. This relation S is then used in a second step of the process to find the compromise solution according to the problematic to be solved: choice, ranking or sorting. Numerous methods are based on the principle of this approach, among which we cite the two popular families: the PROMETHEE methods (Preference Ranking Organization METHod for Enrichment of Evaluations) [14] and the ELECTRE methods (ELimination And Choice Translating REality) [15], [16]. In this paper, the ELECTRE II method is used and compared to the TOPSIS method.
The principal objective of this paper is to propose a rational tool to compare MCDA methods objectively. Several authors have tackled this question, but most of them have tried to compare the methods directly according to their resolution processes; see for example the works [17], [18]. A direct comparison between methods, for example based on their own characteristics and the approach to which they belong, is inevitably devoid of objectivity, as each method has its own limitations and advantages. We therefore propose to use the correlation metric as a tool to compare the results obtained by the ranking methods instead of comparing the methods directly.

The data necessary for an MCDA method
The input data of an MCDA problem comprise at least the set A of n alternatives, which contains all the possible solutions, and a set F of m criteria, which are the dimensions along which the alternatives are evaluated.
The following Table 1, called the performance matrix M [16], summarizes all the data needed in a decision problem.
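As an illustration, the performance matrix of Table 1 can be represented in a program as a simple list of rows, one per alternative, with one column per criterion. The alternative names, criterion names and values below are purely hypothetical and are not taken from the case study.

```python
# Toy performance matrix M: one row per alternative, one column per criterion.
# All names and values are illustrative only.
alternatives = ["X1", "X2", "X3"]
criteria = ["g1", "g2"]
M = [
    [4.0, 7.0],  # performances g1(X1), g2(X1)
    [6.0, 3.0],  # performances g1(X2), g2(X2)
    [5.0, 5.0],  # performances g1(X3), g2(X3)
]
W = [1.0, 1.0]  # criteria weights (all equal here, as in the case study)
```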

A reminder of the MCDA ELECTRE II and TOPSIS methods
The ELECTRE II and TOPSIS methods are considered among the most widely used methods in the MCDA field. Several research works and real applications have successfully deployed these two methods [8], [19]. However, the two methods proceed differently: the ELECTRE II method is based on the outranking approach, while the TOPSIS method is part of the unique synthesis criterion approach. Their common point is that both are able to rank the alternatives of the set A from the best alternative(s) to the worst alternative(s); moreover, both take as input the decision matrix M and a set W of criteria weights.
In this section, we present the algorithms of the two methods, which we will need for the case study.

The TOPSIS method
The TOPSIS method (Technique for Order Preference by Similarity to Ideal Solution) [3] was developed to rank all the alternatives of the set A from the best alternative(s) to the worst alternative(s). As shown in Figure 1, the TOPSIS method starts with a normalization of the decision matrix M; it then calculates the Euclidean distance between each alternative and two reference solutions Ab and Aw, respectively called the Ideal Solution and the Anti-Ideal Solution. Next, the similarity Swi, called the relative closeness, is calculated between each alternative Xi and the two solutions Ab and Aw. Lastly, the alternatives are ranked according to the similarities Swi thus calculated.

▪ Step 1: Construction of the decision matrix M
In the first step, we build the decision matrix M, which is composed of m criteria and n alternatives, as shown in Table 1.

▪ Step 2: Normalization of the performance matrix
To compare the performances of the alternatives by the Euclidean distance, one of the conditions imposed by the TOPSIS method is that all the performances must be expressed on the same measurement scale. A normalization is therefore calculated in this step. It consists of replacing each performance gj(Xi) by an equivalent normalized performance rij calculated by the following equation 1:

rij = gj(Xi) / √( Σi=1..n gj(Xi)² )    (1)

For the rest of the decision process, the decision matrix M is therefore replaced by the new normalized matrix R = (rij)n×m.

▪ Step 3: Construction of weighted normalized decision matrix T.
To take into account the importance wj of the criteria in the decision-making process, the matrix R is again replaced by a new matrix T = (tij)n×m, obtained by the following equation 2:

tij = wj · rij    (2)

In equation 2, the performance of alternative Xi on criterion gj is reinforced by the weight of criterion gj: the normalized performance is multiplied by the importance of the criterion.

▪ Step 4: Calculation of the Ideal solution Ab and the anti-ideal solution Aw
In step 4, we determine the best solution Ab and the worst solution Aw. For each criterion gj (assumed here to be maximized), we calculate the performances Abj and Awj by the following equations 3 and 4:

Abj = Maxi tij    (3)

The worst solution Aw is calculated in the opposite way to the best solution Ab; it is obtained by the following formula 4:

Awj = Mini tij    (4)
▪ Step 5: Calculation of the Euclidean distance between each action Xi and the solutions Ab and Aw
We calculate by equations (5) and (6) the Euclidean distance between each alternative Xi and the solutions Ab and Aw:

dib = √( Σj=1..m (tij − Abj)² )    (5)

diw = √( Σj=1..m (tij − Awj)² )    (6)

▪ Step 6: Calculation of the similarity coefficient Swi to Aw and Ab
For each action Xi, the similarity Swi, called the relative closeness, is calculated by equation 7 between the action Xi and the Ideal solution Ab and the Anti-Ideal solution Aw:

Swi = diw / (diw + dib)    (7)

This similarity is the Euclidean distance between the action Xi and the Anti-Ideal solution Aw, attenuated by the sum of the two distances from Xi to the solutions Aw and Ab. An alternative therefore obtains a better ranking when its similarity is higher.
▪ Step 7: Rank the actions in descending order of the similarity coefficient
Lastly, the actions are ranked from the best to the worst according to the similarities Swi (i = 1, …, n) thus calculated.
In summary, the main idea of the TOPSIS method is that an alternative is better when it is closer to the Ideal solution Ab and further from the Anti-Ideal solution Aw.
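Steps 1 to 7 above can be sketched in a few lines of Python. The code below is only a minimal illustration, under the simplifying assumptions that every criterion is to be maximized and that the weights are given; it is not the implementation used in the case study, and the example matrix is invented.

```python
import math

def topsis_rank(M, weights):
    """Rank alternatives by TOPSIS relative closeness (steps 2-7).

    M is the performance matrix (rows = alternatives, columns = criteria).
    Simplifying assumption: every criterion is to be maximized.
    Returns (ranking, sw): alternative indices from best to worst, and
    the relative-closeness scores Swi.
    """
    n, m = len(M), len(M[0])
    # Step 2: vector normalization rij = gj(Xi) / sqrt(sum_i gj(Xi)^2)
    norms = [math.sqrt(sum(M[i][j] ** 2 for i in range(n))) for j in range(m)]
    R = [[M[i][j] / norms[j] for j in range(m)] for i in range(n)]
    # Step 3: weighted normalized matrix tij = wj * rij
    T = [[weights[j] * R[i][j] for j in range(m)] for i in range(n)]
    # Step 4: ideal solution Ab and anti-ideal solution Aw (criteria maximized)
    Ab = [max(T[i][j] for i in range(n)) for j in range(m)]
    Aw = [min(T[i][j] for i in range(n)) for j in range(m)]
    # Steps 5-6: Euclidean distances to Ab and Aw, then relative closeness Swi
    sw = []
    for i in range(n):
        d_b = math.sqrt(sum((T[i][j] - Ab[j]) ** 2 for j in range(m)))
        d_w = math.sqrt(sum((T[i][j] - Aw[j]) ** 2 for j in range(m)))
        sw.append(d_w / (d_w + d_b))
    # Step 7: rank in descending order of similarity Swi
    ranking = sorted(range(n), key=lambda i: sw[i], reverse=True)
    return ranking, sw

# Three fictitious alternatives evaluated on two criteria, equal weights
ranking, sw = topsis_rank([[7.0, 9.0], [8.0, 6.0], [3.0, 2.0]], [1.0, 1.0])
```

In this toy example the third alternative is dominated on both criteria, so it receives the lowest similarity and the last rank.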

ELECTRE II method
The ELECTRE family of methods currently has six different methods: ELECTRE I, IS, II, III, IV, and ELECTRE TRI [8], [10]. The six versions share the same principle of constructing an outranking relation in the first step of the method and then exploiting it in the second step. However, the six ELECTRE methods are distinguished by the problematic addressed (choice, sorting or ranking) and by whether the DM may hesitate to prefer an alternative x to another alternative y when the two alternatives have very similar performances.
In this article, we compare the ELECTRE II method and the TOPSIS method on the basis of the real case study on environmental preservation and the ranking quality measurement.
The ELECTRE II method [10], [12], like all the other versions, proceeds in two phases. In the first phase, two outranking relations S1 and S2 are developed. In the second phase, the relations S1 and S2 are exploited to rank the alternatives.
In the approach used to develop the outranking relation, pairwise comparisons between pairs of alternatives (x, y) are performed. For all ELECTRE versions, the outranking relation S is developed as follows: xSy if two conditions are satisfied:
• x is better than y for most criteria (majority principle);
• there is no criterion for which y is very strongly preferred to x (minority principle).
The majority and minority conditions are known respectively as the concordance condition and the non-discordance condition.

▪ Step 1: Construction of the relations S1 and S2
In the ELECTRE II method, we construct two relations S1 and S2 such that S1 is included in S2, i.e. if xS1y then xS2y. To do this, we require two concordance thresholds c1 and c2 and two discordance thresholds d1 and d2, which verify c1 > c2 and d1 < d2.
The relation S1 is therefore more stringent than the relation S2, because the majority of criteria required to satisfy the concordance condition in the relation S1 is larger than that required for the relation S2 (c1 > c2). In addition, the maximum difference accepted before the discordance effect rejects the assertion is smaller in S1 than in S2 (d1 < d2).
The S1 and S2 outranking relations are called respectively the "strong outranking" relation and the "weak outranking" relation. The concordance threshold defines the minimum majority of criteria required to support the assertion of the outranking relation. As an example, a concordance threshold of 0.6 requires a majority of more than 60% of the criteria to accept the concordance test. The discordance threshold, in turn, defines the maximum difference tolerated between the performances of two alternatives on a given criterion in order to accept the second test, the discordance test.
The construction of the two outranking relations S1 and S2 is formulated by the following equation 8:

xS1y ⇔ C(x, y) ≥ c1 and gj(y) − gj(x) ≤ d1j for every criterion gj;
xS2y ⇔ C(x, y) ≥ c2 and gj(y) − gj(x) ≤ d2j for every criterion gj    (8)

where C(x, y) denotes the concordance index, i.e. the weighted proportion of criteria for which x is at least as good as y. After the construction of the two relations S1 and S2 in the first step, we calculate two reverse pre-orders. The first, named P1, is obtained by exploring the graph corresponding to the relation S1 from the root to the leaves. A second pre-order, named P2, is obtained by exploring the graph in the reverse direction, this time from the leaves towards the root. The two pre-orders P1 and P2 are then combined to give a final median pre-order P of the form P = (P1 + P2)/2. Lastly, the alternatives having obtained the same rank in the median ranking P are separated according to the second relation S2.
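As a hedged illustration of the concordance and non-discordance tests of step 1, the sketch below implements a simplified pairwise outranking check. The weights, thresholds and performance values are invented for the example; real ELECTRE II implementations refine these tests.

```python
def outranks(x, y, weights, c_thr, d_thr):
    """Simplified test of the assertion "x outranks y" (xSy).

    x and y are performance vectors (every criterion assumed maximized),
    c_thr is a concordance threshold, and d_thr holds one discordance
    threshold per criterion. Hypothetical values are used below.
    """
    total_w = sum(weights)
    # Concordance test: weighted majority of criteria where x is at least as good as y
    concordance = sum(w for gx, gy, w in zip(x, y, weights) if gx >= gy) / total_w
    # Non-discordance test: no criterion where y beats x by more than its threshold
    non_discordance = all(gy - gx <= dj for gx, gy, dj in zip(x, y, d_thr))
    return concordance >= c_thr and non_discordance

x, y = [8.0, 7.0, 6.0], [5.0, 6.0, 9.0]
w = [1.0, 1.0, 1.0]
# Strong relation S1 (stringent: c1 > c2, d1 < d2) and weak relation S2
s1 = outranks(x, y, w, c_thr=0.8, d_thr=[2.0, 2.0, 2.0])
s2 = outranks(x, y, w, c_thr=0.6, d_thr=[4.0, 4.0, 4.0])
```

Here x is at least as good as y on only two of the three criteria (a 2/3 majority), so the strong assertion xS1y fails while the weak assertion xS2y holds, which illustrates the inclusion of S1 in S2.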

A real case application
The Europe 2020 indicators (http://epp.eurostat.ec.europa.eu/portal/page/portal/europe_2020_indicators/headline_indicators) are set up and deployed by the European Commission (EC) in order to monitor the objectives of the strategy set out for smart, sustainable and inclusive growth in the member states of the European Union (EU). The objectives of sustainable growth aim for a more resource-efficient, greener, and more competitive economy. It was decided to achieve a 20% reduction in greenhouse gas emissions from 1990 levels, in addition to an increase in the share of renewable energy sources in energy consumption. All of the above objectives must be measurable and comparable. This is why the main indicators have been defined by the EC to facilitate the monitoring of progress in each member state, among which we cite:
▪ Greenhouse gas emissions;
▪ Share of renewable energies in gross final energy consumption;
▪ Contributions to eco-innovation;
▪ Waste management and recycling;
▪ Water management and production;
▪ Energy intensity of the economy;
▪ Employment rate by sex;
▪ Early leavers from education and training;
▪ The population at risk of poverty or exclusion;
▪ Integration rate of emigrants;
▪ Etc.
The different indicators reflect the diversity of performance in each country. They also measure the level of progress towards the goals over time and can, therefore, be used for comparison purposes at the European and international level.
In the case study presented in this paper, the study focuses on the level of ecological conservation and environmental preservation performance in the EU. For this, a subset of the Europe 2020 indicators is used. More precisely, all the indicators having a direct or indirect relationship with the environmental dimension are retained. The list of selected indicators is not exhaustive and remains a first exploitation of the institutional database developed and put online by Eurostat (https://ec.europa.eu/eurostat/).
The annual values recorded for the indicators cover several years, from 1990 to 2018. For the indicators selected for the evaluation of environmental performance, we used the latest data available for each indicator and each country. Some countries are excluded from the study because they lack information on certain indicators, such as Switzerland.
As shown below, the indicators are numerous, varied, conflicting, and not necessarily reducible to a single indicator. A multi-criteria approach is, therefore, essential to compare and rank the European countries according to the different indicators. The European countries represent the set A of alternatives, and the indicators constitute the set F of criteria. The proposed problematic consists of ranking the EU member states according to environmental performance. The decision matrix M is shown in Table 2, and a brief description of the criteria is given below. In this first analysis of the indicators, we consider that no criterion is privileged over the others; in other words, all the criteria have a weight equal to 1.

• Criterion 1: Waste generated except main mineral waste
This indicator is defined as all hazardous and non-hazardous waste produced in a country per year and per capita. It is measured as the total annual number of kilograms of waste produced per person.

• Criterion 2: Recycling of electronic waste (e-waste)
This criterion is a rate estimated by multiplying the "collection rate" by the "reuse and recycling rate".
The indicator is expressed as a percentage (%).

• Criterion 3: Exposure to PM10 pollution
This criterion expresses the percentage of citizens living in urban areas who are exposed to concentrations of particles smaller than 10 µm (PM10) exceeding the daily limit value (50 µg/m³).
The European Environment Agency collects air quality data on an annual basis.
• Criterion 4: Exposure of urban citizens to atmospheric pollution by fine particles
This criterion expresses the concentration of suspended particles PM10 and PM2.5, weighted according to the urban population potentially exposed to air pollution.
The particles PM10 and PM2.5 are harmful, and they can cause serious lung inflammation.

• Criterion 5: Agricultural area covered by organic farming
The criterion is expressed as the share of the agricultural area using only organic farming. It is a criterion that we choose to maximize in the ranking.

• Criterion 6: Final energy consumption
By "final energy consumption" we mean the sum of the energy consumption of transport, industry, the residential sector, services, and agriculture. This quantity is relevant for measuring the energy consumed at the final stage of energy use and for comparing it with the objectives of the Europe 2020 strategy. More information can be found in the energy savings statistics on Statistics Explained.

This indicator is measured in millions of tonnes of oil equivalent (TOE).
• Criterion 7: Primary energy consumption
By "primary energy consumption" is meant gross domestic consumption excluding any non-energy use of energy products (e.g. natural gas used not for combustion but for the production of chemicals). This quantity is relevant for measuring actual energy consumption and for comparing it with the Europe 2020 targets.

• Criterion 8: Greenhouse gas emissions
This indicator shows trends in total anthropogenic greenhouse gas emissions in the "Kyoto basket". It presents the total annual emissions compared to 1990 emissions. The "Kyoto basket" includes the following greenhouse gases: carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), and the so-called fluorinated gases (hydrofluorocarbons, perfluorocarbons, nitrogen trifluoride (NF3) and sulfur hexafluoride (SF6)). These gases are aggregated into a single unit using specific factors corresponding to their global warming potential (GWP). Aggregate greenhouse gas emissions are expressed in CO2 equivalent units.
The EU as a whole is committed to reducing its greenhouse gas emissions by at least 20% by 2020 compared to 1990.
• Criterion 10: Use of renewable energies
This criterion expresses the level of use of renewable energies.
This indicator is then to be maximized in the ranking.

• Criterion 11: The eco-innovation index
The criterion is calculated on the basis of 16 sub-indicators from 8 data sources in 5 thematic areas: contributions to eco-innovation, eco-innovation activities, consequences of eco-innovation, results in terms of efficient use of resources, and socio-economic results.
The overall index of an EU country is evaluated as the average of these 16 sub-indicators; it shows how each country practices eco-innovation compared to the EU average. This indicator is to be maximized in the ranking.

Process of the extension approach
To extend the quality measure to any ranking P, we propose to compare this ranking P to all the rankings induced by the criteria. Indeed, it is easy to rank the alternatives on each criterion gk; we denote this ranking Pk. The quality measurement then consists in measuring the correlations between the ranking P and the various rankings Pk.
In practice, the rankings Pk and P are replaced by the comparison matrices Rk and R given by equations 9 and 10. In order to take into account the differences in importance between the criteria, the final result of the comparisons is aggregated by a weighted average of the correlations between R and the Rk.
The approach proceeds in three steps. In the first step, the comparison matrices Rk induced by the different criteria gk are evaluated. In the second step, the comparison matrix R induced by the ranking P resulting from the MCDA method is evaluated. In the last step, all the matrices Rk are compared to the matrix R, and the results of the comparison are aggregated with the weighted average. The approach is presented as follows.

▪ Step 1: Compute the comparison matrices Rk induced by the criteria gk
The matrix Rk is calculated by formula 9:

Rk(i, j) = 1 if the alternative Xi is preferred to the alternative Xj according to the criterion gk, and 0 otherwise.    (9)

The matrix Rk therefore contains only the numbers 0 and 1; the value 1 means that the alternative i is preferred to the alternative j according to the criterion gk.
▪ Step 2: Compute the comparison matrix R induced by the ranking P
The matrix R is calculated by formula 10:

R(i, j) = 1 if the action Xi is ranked before the action Xj in the ranking P of the MCDA method used, and 0 otherwise.    (10)

▪ Step 3: Evaluate the quality of the ranking P
As the example E1 of the experiments section shows, a ranking P is better when it follows the same direction as all the rankings Pk of the criteria gk. This amounts to measuring the dependence between the matrix R and each matrix Rk. The dependence between the matrices is measured mathematically by the correlation coefficient [1]. To take into account all the correlations thus calculated as well as the relative importance W of the criteria, the quality measure Q(P) is calculated as a weighted average, given by the following formula 11:

Q(P) = ( Σk=1..m wk · corr(R, Rk) ) / ( Σk=1..m wk )    (11)
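The three steps above can be sketched as follows. This is a minimal, assumption-laden illustration: the correlation used here is the standard Pearson coefficient computed on the flattened 0/1 comparison matrices, so the exact values it produces may differ slightly from the variant used in the paper, and the toy matrix is invented.

```python
import math

def comparison_matrix(scores):
    """R(i, j) = 1 if alternative i is ranked strictly before j, else 0."""
    n = len(scores)
    return [[1 if scores[i] > scores[j] else 0 for j in range(n)] for i in range(n)]

def correlation(R1, R2):
    """Pearson correlation between two flattened comparison matrices."""
    a = [v for row in R1 for v in row]
    b = [v for row in R2 for v in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

def quality(ranking_scores, M, weights):
    """Q(P): weighted average of the correlations between R and each Rk.

    ranking_scores gives higher values to better-ranked alternatives;
    M is the performance matrix (criteria assumed to be maximized).
    """
    R = comparison_matrix(ranking_scores)
    m = len(M[0])
    corrs = [correlation(R, comparison_matrix([row[k] for row in M]))
             for k in range(m)]
    return sum(w * c for w, c in zip(weights, corrs)) / sum(weights)

# Toy matrix where both criteria induce the ranking X1 > X2 > X3 > X4
M = [[4.0, 5.0], [3.0, 4.0], [2.0, 3.0], [1.0, 2.0]]
q_best = quality([4, 3, 2, 1], M, [1.0, 1.0])   # ranking agrees with all criteria
q_worst = quality([1, 2, 3, 4], M, [1.0, 1.0])  # ranking reverses all criteria
```

As expected, a ranking that agrees with every criterion reaches the maximum quality of 1, while the fully reversed ranking obtains a negative quality.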

Property: Equivalence between P and R
Let P be a ranking and R the comparison matrix deduced from P, as given by equations (9) or (10), and let Xi be any alternative of A.
The rank of Xi in P can be deduced from the matrix R, and conversely: the vector P and the matrix R are equivalent. Indeed, by definition, from the ranking P we can build the matrix R. Conversely, suppose that we only have the matrix R. For the alternative Xi, we compute the sum of all the values on its row of the matrix R: L(Xi) = Σj=1..n R(i, j).
The value L(Xi) gives the number of alternatives that are ranked behind Xi. The ranking P is then obtained by sorting the alternatives Xi in decreasing order of the values L(Xi).
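The reconstruction of P from R described above can be checked with a short sketch; the 3-alternative matrix below is invented for the illustration.

```python
def ranking_from_matrix(R):
    """Recover the ranking P from a comparison matrix R via the row sums L(Xi)."""
    # L(Xi) counts the alternatives ranked behind Xi
    L = [sum(row) for row in R]
    # The best alternative has the largest row sum
    return sorted(range(len(R)), key=lambda i: L[i], reverse=True)

# Hypothetical matrix R encoding the ranking X3 > X1 > X2 (indices 2, 0, 1)
R = [
    [0, 1, 0],  # X1 is ranked before X2 only -> L(X1) = 1
    [0, 0, 0],  # X2 is ranked before nobody  -> L(X2) = 0
    [1, 1, 0],  # X3 is ranked before X1, X2  -> L(X3) = 2
]
```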
An essential consequence of this property is that the comparison between any two comparison matrices R1 and R2 gives the same result as the direct comparison of the associated rankings P1 and P2, because, as we have just shown, the comparison matrices and the rankings are equivalent.
We will exploit this equivalence in the discussion paragraph, on the basis of the numerical results obtained in the case study.

Numerical results of the experimentation example
To show that the proposed correlation metric gives a significant result for measuring the quality of rankings, we propose a sample of 15 varied rankings. In this example, we consider an MCDA problem with 3 criteria, F = {g1, g2, g3}, and a set of four alternatives A = {A1, A2, A3, A4}. To simplify the example, we assume that the three criteria give the same ranking, A1 > A2 > A3 > A4, as shown in Table 3 below. This ranking expresses that the alternatives A1, A2, A3 and A4 are respectively in ranks 1, 2, 3 and 4.
The 15 rankings are chosen as an experiment to determine the significance of the proposed correlation metric. We have carefully chosen these rankings in order to cover almost all possible cases and to show how the quality varies with the ranking chosen. Table 4 gives all the rankings selected for the test.
In total, we propose two borderline rankings together with other intermediate cases. The first limit ranking is E1: X1 > X2 > X3 > X4, which is the same as the rankings given by the criteria g1, g2 and g3. The second limit ranking is E15: X4 > X3 > X2 > X1, which is the opposite of the three rankings given by the three criteria. The 13 other cases are intermediate rankings in which the alternatives permute their ranks between the cases E1 and E15.
In this experiment, we also consider the case where rankings can contain alternatives with the same rank; this is the case of the rankings E11, E12, E13 and E14. Table 5 summarizes the calculated quality results for the 15 selected rankings; an interpretation of the results is given in the following discussion section. The graph illustrated in Figure 2 represents the variation of the quality Q(P) over the 15 selected test rankings. The graph shows clearly that the quality measure Q(P) is significant. Indeed, the ideal ranking E1: X1 > X2 > X3 > X4, which coincides with the three rankings induced by the three assumed criteria g1, g2 and g3, gives the maximum quality, Q(E1) = 1. Furthermore, the ranking E15: X4 > X3 > X2 > X1, which is opposed to the three rankings induced by the criteria, gives the lowest quality, Q(E15) = -0.63. For all the other rankings, even those with equal ranks, the quality varies between 1 and -0.63, depending on the ranks of the alternatives. In summary, the quality remains close to the maximum value when the alternatives keep almost the same ranks as in the rankings induced by the criteria, and it becomes poorer as the alternatives move further from those ranks.

Numerical results of the case study
To compare and rank the European countries according to their environmental-preservation performance, we apply the two methods TOPSIS and ELECTRE II to the decision matrix M shown in Table 2. The rankings obtained by the two methods are given in Table 8.

Ranking of countries by the TOPSIS method
We calculate the similarity score swi, given by equation 7, for each country, then rank the countries in descending order of swi. The ranking obtained by the TOPSIS method is given in Table 8.
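The TOPSIS step can be sketched as follows. This is the standard TOPSIS procedure (vector normalization, weighted ideal and anti-ideal points, closeness coefficient), under the assumption that the paper's similarity swi of equation 7 is the usual closeness coefficient; the matrix `M`, the weights and the cost/benefit flags below are toy placeholders, not the Table 2 data.

```python
import numpy as np

def topsis_rank(M, weights, benefit):
    """Standard TOPSIS sketch. benefit[j] is True when criterion j
    is to be maximized, False when it is a cost to be minimized."""
    M = np.asarray(M, dtype=float)
    W = np.asarray(weights, dtype=float) / np.sum(weights)
    # vector normalization of each column, then weighting
    V = W * M / np.linalg.norm(M, axis=0)
    # ideal and anti-ideal reference points
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus  = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    sw = d_minus / (d_plus + d_minus)   # closeness score (assumed swi)
    return sw, np.argsort(-sw)          # descending order of swi

# toy data: two cost criteria (e.g. emissions, energy use)
M = [[0.2, 30], [0.5, 10], [0.4, 20]]
sw, order = topsis_rank(M, [0.6, 0.4], benefit=[False, False])
```

Sorting alternatives by decreasing `sw` reproduces the descending-order ranking described above.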

Ranking of countries by the ELECTRE II method
The first step of the ELECTRE II method consists of calculating the matrix of concordance indices C(Xi, Xk), given by equation 8, for all pairs (Xi, Xk) of countries. In the second step, we build the two outranking relations S1 and S2, as described previously in the method reminder paragraph.
For the concordance thresholds, we choose c1=0.8 and c2=0.6, which are the default values in several MCDA software packages. For the discordance thresholds, we choose d1=60% of the extent of each criterion and d2=80% of the extent of each criterion; see Table 6.
The extent of a criterion gj is given by extent(gj) = Max(gj(a)) - Min(gj(a)) over all a in A.
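The concordance computation and the extent-based discordance thresholds can be sketched as follows. The concordance definition used here (weighted share of criteria on which Xi is at least as good as Xk) is a common ELECTRE formulation assumed to match equation 8, and the `outranks` test for the S1/S2 relations is a simplified sketch; the data, weights and cost/benefit flags are hypothetical.

```python
import numpy as np

def concordance_matrix(M, weights, benefit):
    """C(Xi, Xk): weighted share of criteria on which Xi is at least
    as good as Xk (common ELECTRE definition, assumed for equation 8)."""
    M = np.asarray(M, dtype=float)
    W = np.asarray(weights, dtype=float) / np.sum(weights)
    G = np.where(benefit, M, -M)          # orient so larger is better
    n = len(M)
    C = np.zeros((n, n))
    for i in range(n):
        for k in range(n):
            if i != k:
                C[i, k] = W[G[i] >= G[k]].sum()
    return C

def discordance_thresholds(M, fractions=(0.6, 0.8)):
    """d1, d2 as fractions of each criterion's extent,
    extent(gj) = max(gj) - min(gj), as in the text."""
    extent = np.ptp(np.asarray(M, dtype=float), axis=0)
    return {f: f * extent for f in fractions}

def outranks(i, k, C, M, benefit, c_thr, d_thr):
    """Simplified outranking test: Xi S Xk when concordance reaches
    c_thr and no criterion favours Xk by more than its threshold."""
    G = np.where(benefit, np.asarray(M, dtype=float),
                 -np.asarray(M, dtype=float))
    return bool(C[i, k] >= c_thr and np.all(G[k] - G[i] <= d_thr))

M = [[0.2, 30], [0.5, 10], [0.4, 20]]                 # toy decision matrix
C = concordance_matrix(M, [0.6, 0.4], benefit=[False, False])
d = discordance_thresholds(M)                          # d[0.6] and d[0.8]
strong = outranks(1, 0, C, M, [False, False], 0.8, d[0.6])  # S1-style test
```

The strong relation S1 would use the stricter pair (c1, d1) and the weak relation S2 the looser pair (c2, d2), in line with the thresholds chosen above.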
In MCDA, for the ELECTRE family of methods, it is strongly recommended to carry out a robustness analysis [6], which shows the stability of the results. This analysis consists of testing several values for the parameters required by the method, such as the concordance and discordance thresholds, and observing how the results change with the parameter values used. For this reason, we test a second set of discordance thresholds, while keeping the same concordance thresholds, since we obtained almost the same rankings for different concordance values. The retained values are the thresholds that spread the ranking of the countries as widely as possible.

Comparison of the ELECTRE II method to the TOPSIS method
All this work aims to show how the quality measure can serve as a rational tool to compare objectively the results obtained by several methods. The same tool can be used to compare the results of a robustness analysis, as is the case for the ELECTRE II V1 and ELECTRE II V2 versions. To choose the best ranking, we use the ranking quality measure Q(P) given by equation 11. Table 9 gives the results of the comparison between the ELECTRE II versions V1 and V2 and the TOPSIS method.
Here R is the comparison matrix of the ranking P produced by the method, and Rk is the comparison matrix induced by the criterion gk, with 1 ≤ k ≤ 11.
According to Table 9, we can confirm that the ranking obtained by the ELECTRE II V2 method is the best ranking to prescribe and recommend to the decision-maker.
According to the rankings produced by the three methods ELECTRE II V1, ELECTRE II V2 and TOPSIS, almost all the most industrialized countries, such as Germany, France and Italy, are placed at the bottom of the rankings, though not with exactly the same ranks in each. This result is well justified by the fact that most industrial and developed countries consume a great deal of energy and have high rates of carbon dioxide (CO2) emissions. The exceptions are Sweden, Denmark and Austria, which are industrial countries yet, according to all three rankings, are among the top five most environmentally conservative countries in Europe.
The numerical results also show that robustness analysis is very useful in the MCDA context, where the parameters are sometimes vague and uncertain for the decision-maker. For example, the countries that obtained equal ranks in the ELECTRE II V1 version were separated into distinct ranks in the ELECTRE II V2 version, which gave the best ranking in terms of the quality measure Q(P).

Conclusions
In summary, this article has made the following contributions.
On the one hand, we proposed a quality measure based on a correlation metric between comparison matrices that takes the relative importance of the criteria into account. This quality measure gives the decision-maker a rational tool for comparing the rankings produced by the several MCDA methods adopted for the decision problem at hand.
On the other hand, to establish the significance of the proposed quality measure, an experimental test was conducted that clearly showed the relevance of the measure. In addition, a real application on the preservation of the environment in the European Union countries was studied. This case study practically illustrates the meaning and relevance of the quality measure for comparing MCDA methods. Two popular methods, ELECTRE II and TOPSIS, were applied and compared on the basis of the quality measure; it turns out that the ELECTRE II method gives the better ranking.
Besides, it was shown that the quality measure Q(P) can be very useful in supporting the decision-maker during robustness analysis. An illustrative example of robustness analysis was carried out on the ELECTRE II method.
The results obtained in this article apply to ranking methods. In future work, we intend to use the metric to evaluate ranking quality for the sorting and choice problematics.