Illustrative example for weighting methods
This example demonstrates the usage of pyrepo-mcda, a Python 3 library that provides methods for multi-criteria decision analysis using objective criteria weighting. The library contains the module weighting_methods
with the following weighting methods:
Equal: equal_weighting
Entropy: entropy_weighting
Standard deviation: std_weighting
CRITIC: critic_weighting
Gini coefficient-based: gini_weighting
MEREC: merec_weighting
Statistical variance: stat_var_weighting
CILOS: cilos_weighting
IDOCRIW: idocriw_weighting
Angle: angle_weighting
Coefficient of variation: coeff_var_weighting
In addition to the weighting methods, the library also provides other methods necessary for multi-criteria decision analysis, as follows:

The VIKOR method for multi-criteria decision analysis: VIKOR in module mcda_methods

Normalization techniques:
Linear: linear_normalization
Minimum-Maximum: minmax_normalization
Maximum: max_normalization
Sum: sum_normalization
Vector: vector_normalization

Correlation coefficients:
Spearman rank correlation coefficient rs: spearman
Weighted Spearman rank correlation coefficient rw: weighted_spearman
Pearson correlation coefficient: pearson_coeff
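For intuition, the weighted Spearman coefficient rw penalizes rank disagreements near the top of the rankings more heavily than those near the bottom. A minimal sketch of the formula (an illustrative reimplementation, not the library source) could look like:

```python
import numpy as np

def weighted_spearman_sketch(x, y):
    """Weighted Spearman rank correlation rw: a reference sketch,
    not the pyrepo_mcda implementation."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    # squared rank differences, weighted so that top ranks matter more
    num = 6 * np.sum((x - y) ** 2 * ((n - x + 1) + (n - y + 1)))
    return 1 - num / (n**4 + n**3 - n**2 - n)

print(weighted_spearman_sketch([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
print(weighted_spearman_sketch([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0
```

Like the ordinary Spearman coefficient, rw equals 1 for identical rankings and -1 for fully reversed ones.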
Import the necessary Python modules.
[1]:
import copy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
Import the necessary modules and methods from the pyrepo_mcda package.
[2]:
from pyrepo_mcda.mcda_methods import VIKOR
from pyrepo_mcda.mcda_methods import VIKOR_SMAA
from pyrepo_mcda.additions import rank_preferences
from pyrepo_mcda import correlations as corrs
from pyrepo_mcda import normalizations as norm_methods
from pyrepo_mcda import weighting_methods as mcda_weights
Functions for results visualization.
[3]:
# Functions for visualizations
def plot_barplot(df_plot, x_name, y_name, title):
    """
    Display a stacked column chart of criteria weights for `x_name == 'Weighting methods'`
    and a column chart of alternatives' ranks for `x_name == 'Alternatives'`

    Parameters
    ----------
    df_plot : DataFrame
        dataframe with criteria weights calculated with different weighting methods
        or with alternatives' rankings for different weighting methods
    x_name : str
        name of the x axis, 'Alternatives' or 'Weighting methods'
    y_name : str
        name of the y axis, 'Ranks' or 'Weight values'
    title : str
        chart title, 'Weighting methods' or 'Criteria'

    Examples
    ----------
    >>> plot_barplot(df_plot, x_name, y_name, title)
    """
    list_rank = np.arange(1, len(df_plot) + 1, 1)
    stacked = True
    width = 0.5
    if x_name == 'Alternatives':
        stacked = False
        width = 0.8
    elif x_name == 'Alternative':
        pass
    else:
        df_plot = df_plot.T
    ax = df_plot.plot(kind='bar', width = width, stacked=stacked, edgecolor = 'black', figsize = (9,4))
    ax.set_xlabel(x_name, fontsize = 12)
    ax.set_ylabel(y_name, fontsize = 12)
    if x_name == 'Alternatives':
        ax.set_yticks(list_rank)
    ax.set_xticklabels(df_plot.index, rotation = 'horizontal')
    ax.tick_params(axis = 'both', labelsize = 12)
    plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc='lower left',
               ncol=4, mode="expand", borderaxespad=0., edgecolor = 'black', title = title, fontsize = 11)
    ax.grid(True, linestyle = '--')
    ax.set_axisbelow(True)
    plt.tight_layout()
    plt.savefig('results/bar_chart_weights_' + x_name + '.pdf')
    plt.savefig('results/bar_chart_weights_' + x_name + '.eps')
    plt.show()
def draw_heatmap(data, title):
    """
    Display heatmap with correlations of compared rankings generated using different methods

    Parameters
    ----------
    data : DataFrame
        dataframe with correlation values between compared rankings
    title : str
        title of chart containing name of used correlation coefficient

    Examples
    ----------
    >>> draw_heatmap(data, title)
    """
    plt.figure(figsize = (6, 4))
    sns.set(font_scale=1.0)
    heatmap = sns.heatmap(data, annot=True, fmt=".2f", cmap="RdYlBu",
                          linewidth=0.5, linecolor='w')
    plt.yticks(va="center")
    plt.xlabel('Weighting methods')
    title = title.replace("$", "")
    title = title.replace("{", "")
    title = title.replace("}", "")
    plt.title('Correlation coefficient: ' + title)
    plt.tight_layout()
    plt.savefig('results/heatmap_weights.pdf')
    plt.savefig('results/heatmap_weights.eps')
    plt.show()
def draw_heatmap_smaa(data, title):
    """
    Display heatmap with correlations of compared rankings generated using different methods

    Parameters
    ----------
    data : DataFrame
        dataframe with correlation values between compared rankings
    title : str
        title of chart containing name of used correlation coefficient

    Examples
    ----------
    >>> draw_heatmap_smaa(data, title)
    """
    sns.set(font_scale=1.0)
    heatmap = sns.heatmap(data, annot=True, fmt=".2f", cmap="RdYlBu_r",
                          linewidth=0.05, linecolor='w')
    plt.yticks(rotation=0)
    plt.ylabel('Alternatives')
    plt.tick_params(labelbottom=False, labeltop=True)
    plt.title(title)
    plt.tight_layout()
    plt.savefig('results/heatmap_smaa.pdf')
    plt.savefig('results/heatmap_smaa.eps')
    plt.show()
def plot_boxplot(data):
    """
    Display boxplot showing distribution of criteria weights determined with different methods.

    Parameters
    ----------
    data : DataFrame
        dataframe with criteria weights determined with different weighting methods

    Examples
    ---------
    >>> plot_boxplot(data)
    """
    df_melted = pd.melt(data)
    plt.figure(figsize = (7, 4))
    ax = sns.boxplot(x = 'variable', y = 'value', data = df_melted, width = 0.6)
    ax.grid(True, linestyle = '--')
    ax.set_axisbelow(True)
    ax.set_xlabel('Criterion', fontsize = 12)
    ax.set_ylabel('Different weights distribution', fontsize = 12)
    plt.tight_layout()
    plt.savefig('results/boxplot_weights.pdf')
    plt.savefig('results/boxplot_weights.eps')
    plt.show()
# Create dictionary class
class Create_dictionary(dict):

    # __init__ function
    def __init__(self):
        self = dict()

    # Function to add key:value pairs
    def add(self, key, value):
        self[key] = value
As an illustrative example, we use a dataset containing the performances of the twelve best-selling electric cars in 2021, according to the ranking available at https://www.caranddriver.com/features/g36278968/best-selling-evs-of-2021/. The dataset is displayed below. Rows \(A_1\)-\(A_{12}\) are the individual alternatives, columns \(C_1\)-\(C_{11}\) denote the criteria, and the Type row contains the criteria types, where 1 indicates a profit criterion (stimulant) and -1 a cost criterion (destimulant). The evaluation criteria for the electric cars evaluated in this research are listed below.
[4]:
criteria_presentation = pd.read_csv('criteria_electric_cars.csv', index_col = 'Cj')
criteria_presentation
[4]:
Cj | Name | Unit | Type
---|---|---|---
C1 | Max speed | mph | 1 |
C2 | Battery capacity | kWh | 1 |
C3 | Electric motor | kW | 1 |
C4 | Maximum torque | Nm | 1 |
C5 | Horsepower | hp | 1 |
C6 | EPA Fuel Economy Combined | MPGe | 1 |
C7 | EPA Fuel Economy City | MPGe | 1 |
C8 | EPA Fuel Economy Highway | MPGe | 1 |
C9 | EPA range | miles | 1 |
C10 | Turning Diameter / Radius, curb to curb | feet | -1 |
C11 | Base price | USD | -1 |
[5]:
data_presentation = pd.read_csv('electric_cars_2021.csv', index_col = 'Ai')
data_presentation
[5]:
Ai | Name | C1 Max speed [mph] | C2 Battery [kWh] | C3 Electric motor [kW] Front | C4 Torque [Nm] Front | C5 Mechanical horsepower [hp] | C6 EPA Fuel Economy Combined [MPGe] | C7 EPA Fuel Economy City [MPGe] | C8 EPA Fuel Economy Highway [MPGe] | C9 EPA range [miles] | C10 Turning Diameter / Radius, curb to curb [feet] | C11 Base price [$]
---|---|---|---|---|---|---|---|---|---|---|---|---
A1 | Tesla Model Y | 155.3 | 74.0 | 340 | 673 | 456.0 | 111 | 115 | 106 | 244 | 39.8 | 65440 |
A2 | Tesla Model 3 | 162.2 | 79.5 | 247 | 639 | 283.0 | 113 | 118 | 107 | 263 | 38.8 | 60440 |
A3 | Ford Mustang Mach-E | 112.5 | 68.0 | 198 | 430 | 266.0 | 98 | 105 | 91 | 230 | 38.1 | 56575 |
A4 | Chevrolet Bolt EV and EUV | 90.1 | 66.0 | 150 | 360 | 201.2 | 120 | 131 | 109 | 259 | 34.8 | 32495 |
A5 | Volkswagen ID.4 | 99.4 | 77.0 | 150 | 310 | 201.2 | 97 | 102 | 90 | 260 | 36.4 | 45635 |
A6 | Nissan Leaf | 89.5 | 40.0 | 110 | 320 | 147.5 | 111 | 123 | 99 | 226 | 34.8 | 28425 |
A7 | Audi e-tron and e-tron Sportback | 124.3 | 95.0 | 125 | 247 | 187.7 | 78 | 78 | 77 | 222 | 40.0 | 84595 |
A8 | Porsche Taycan | 155.3 | 79.2 | 160 | 300 | 214.6 | 79 | 79 | 80 | 227 | 38.4 | 105150 |
A9 | Tesla Model S | 162.2 | 100.0 | 205 | 420 | 502.9 | 120 | 124 | 115 | 402 | 40.3 | 96440 |
A10 | Hyundai Kona Electric | 96.3 | 39.2 | 100 | 395 | 134.1 | 120 | 132 | 108 | 258 | 34.8 | 35245 |
A11 | Tesla Model X | 162.2 | 100.0 | 205 | 420 | 502.9 | 98 | 103 | 93 | 371 | 40.8 | 127940 |
A12 | Hyundai Ioniq Electric | 102.5 | 38.3 | 101 | 295 | 136.1 | 133 | 145 | 121 | 170 | 34.8 | 34250 |
Type | NaN | 1.0 | 1.0 | 1 | 1 | 1.0 | 1 | 1 | 1 | 1 | -1.0 | -1 |
Load a decision matrix containing only the performance values of the alternatives against the criteria and the criteria type in the last row, as shown below. Transform the decision matrix and criteria type from dataframe to NumPy array.
[6]:
# Load data from CSV
filename = 'dataset_cars.csv'
data = pd.read_csv(filename, index_col = 'Ai')
# Load decision matrix from CSV
df_data = data.iloc[:len(data) - 1, :]
# Criteria types are in the last row of CSV
types = data.iloc[len(data) - 1, :].to_numpy()
# Convert decision matrix from dataframe to numpy ndarray type for faster calculations.
matrix = df_data.to_numpy()
# Symbols for alternatives Ai
list_alt_names = [r'$A_{' + str(i) + '}$' for i in range(1, df_data.shape[0] + 1)]
# Symbols for columns Cj
cols = [r'$C_{' + str(j) + '}$' for j in range(1, data.shape[1] + 1)]
print('Decision matrix')
df_data
Decision matrix
[6]:
Ai | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 | C11
---|---|---|---|---|---|---|---|---|---|---|---
A1 | 155.3 | 74.0 | 340 | 673 | 456.0 | 111 | 115 | 106 | 244 | 39.8 | 65440 |
A2 | 162.2 | 79.5 | 247 | 639 | 283.0 | 113 | 118 | 107 | 263 | 38.8 | 60440 |
A3 | 112.5 | 68.0 | 198 | 430 | 266.0 | 98 | 105 | 91 | 230 | 38.1 | 56575 |
A4 | 90.1 | 66.0 | 150 | 360 | 201.2 | 120 | 131 | 109 | 259 | 34.8 | 32495 |
A5 | 99.4 | 77.0 | 150 | 310 | 201.2 | 97 | 102 | 90 | 260 | 36.4 | 45635 |
A6 | 89.5 | 40.0 | 110 | 320 | 147.5 | 111 | 123 | 99 | 226 | 34.8 | 28425 |
A7 | 124.3 | 95.0 | 125 | 247 | 187.7 | 78 | 78 | 77 | 222 | 40.0 | 84595 |
A8 | 155.3 | 79.2 | 160 | 300 | 214.6 | 79 | 79 | 80 | 227 | 38.4 | 105150 |
A9 | 162.2 | 100.0 | 205 | 420 | 502.9 | 120 | 124 | 115 | 402 | 40.3 | 96440 |
A10 | 96.3 | 39.2 | 100 | 395 | 134.1 | 120 | 132 | 108 | 258 | 34.8 | 35245 |
A11 | 162.2 | 100.0 | 205 | 420 | 502.9 | 98 | 103 | 93 | 371 | 40.8 | 127940 |
A12 | 102.5 | 38.3 | 101 | 295 | 136.1 | 133 | 145 | 121 | 170 | 34.8 | 34250 |
[7]:
print('Criteria types')
types
Criteria types
[7]:
array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., -1., -1.])
Objective weighting methods
Calculate the weights with the selected weighting method. In this case, the Entropy weighting method (entropy_weighting) is selected.
[8]:
weights = mcda_weights.entropy_weighting(matrix)
df_weights = pd.DataFrame(weights.reshape(1, -1), index = ['Weights'], columns = cols)
df_weights
[8]:
 | $C_{1}$ | $C_{2}$ | $C_{3}$ | $C_{4}$ | $C_{5}$ | $C_{6}$ | $C_{7}$ | $C_{8}$ | $C_{9}$ | $C_{10}$ | $C_{11}$
---|---|---|---|---|---|---|---|---|---|---|---
Weights | 0.057741 | 0.099843 | 0.142673 | 0.096488 | 0.236087 | 0.024544 | 0.032432 | 0.018126 | 0.053958 | 0.003863 | 0.234244 |
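For intuition, the entropy method derives weights from the dispersion of the sum-normalized values in each criterion column: the lower a criterion's entropy, the more it differentiates the alternatives and the larger its weight. A minimal sketch of this idea (an illustrative reimplementation assuming a strictly positive matrix, not the library source):

```python
import numpy as np

def entropy_weighting_sketch(matrix):
    """Entropy-based objective weights for a strictly positive
    decision matrix (rows: alternatives, columns: criteria)."""
    m, n = matrix.shape
    # sum-normalize each criterion column to obtain probabilities
    p = matrix / matrix.sum(axis=0)
    # Shannon entropy per criterion, scaled to [0, 1] by ln(m)
    entropy = -np.sum(p * np.log(p), axis=0) / np.log(m)
    # degree of divergence; weights are its normalized values
    d = 1 - entropy
    return d / d.sum()

# criterion 1 varies a lot, criterion 2 hardly at all
matrix = np.array([[7.0, 3.0], [5.0, 3.1], [9.0, 2.9]])
w = entropy_weighting_sketch(matrix)
print(w)  # the dispersed first criterion receives far more weight
```

The weights always sum to 1, and near-constant criteria (entropy close to 1) receive weights close to 0.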
Use the VIKOR method to determine the preference function values (pref) and the ranking of alternatives (rank). The VIKOR method ranks alternatives in ascending order of preference function values, so the reverse parameter of the rank_preferences
method is set to False.
[9]:
# Create the VIKOR method object
vikor = VIKOR(normalization_method=norm_methods.minmax_normalization)
# Calculate alternatives preference function values with VIKOR method
pref = vikor(matrix, weights, types)
# rank alternatives according to preference values
rank = rank_preferences(pref, reverse = False)
df_results = pd.DataFrame(index = list_alt_names)
df_results['Pref'] = pref
df_results['Rank'] = rank
df_results
[9]:
 | Pref | Rank
---|---|---
$A_{1}$ | 0.000000 | 1 |
$A_{2}$ | 0.325154 | 2 |
$A_{3}$ | 0.531050 | 4 |
$A_{4}$ | 0.682258 | 5 |
$A_{5}$ | 0.734162 | 7 |
$A_{6}$ | 0.922091 | 10 |
$A_{7}$ | 0.884828 | 9 |
$A_{8}$ | 0.821773 | 8 |
$A_{9}$ | 0.332600 | 3 |
$A_{10}$ | 0.940460 | 11 |
$A_{11}$ | 0.696434 | 6 |
$A_{12}$ | 0.954832 | 12 |
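The ascending ranking assigns rank 1 to the alternative with the lowest preference value. This convention can be sketched with NumPy's argsort (a hypothetical helper for illustration, not the library's rank_preferences, which may treat ties differently):

```python
import numpy as np

def rank_ascending(pref):
    """Assign rank 1 to the lowest preference value (the VIKOR convention)."""
    order = np.argsort(pref)            # indices from best (lowest) to worst
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(pref) + 1)
    return ranks

print(rank_ascending(np.array([0.33, 0.0, 0.95])))  # [2 1 3]
```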
The second part of this manual contains code for benchmarking several different criteria weighting methods. List the weighting methods you wish to explore.
[10]:
# Create a list with weighting methods that you want to explore
weighting_methods_set = [
mcda_weights.equal_weighting,
mcda_weights.entropy_weighting,
#mcda_weights.std_weighting,
mcda_weights.critic_weighting,
mcda_weights.gini_weighting,
mcda_weights.merec_weighting,
mcda_weights.stat_var_weighting,
#mcda_weights.cilos_weighting,
mcda_weights.idocriw_weighting,
mcda_weights.angle_weighting,
mcda_weights.coeff_var_weighting
]
Below is a loop that collects results for each weighting technique. The results, namely weights, preference function values, and rankings, are then displayed.
[11]:
# Create dataframes for weights, preference function values and rankings determined using different weighting methods
df_weights = pd.DataFrame(index = cols)
df_preferences = pd.DataFrame(index = list_alt_names)
df_rankings = pd.DataFrame(index = list_alt_names)
# Create the VIKOR method object
vikor = VIKOR()
for weight_type in weighting_methods_set:
    if weight_type.__name__ in ["cilos_weighting", "idocriw_weighting", "angle_weighting", "merec_weighting"]:
        weights = weight_type(matrix, types)
    else:
        weights = weight_type(matrix)
    df_weights[weight_type.__name__[:-10].upper().replace('_', ' ')] = weights
    pref = vikor(matrix, weights, types)
    rank = rank_preferences(pref, reverse = False)
    df_preferences[weight_type.__name__[:-10].upper().replace('_', ' ')] = pref
    df_rankings[weight_type.__name__[:-10].upper().replace('_', ' ')] = rank
[12]:
df_weights
[12]:
 | EQUAL | ENTROPY | CRITIC | GINI | MEREC | STAT VAR | IDOCRIW | ANGLE | COEFF VAR
---|---|---|---|---|---|---|---|---|---
$C_{1}$ | 0.090909 | 0.057741 | 0.093960 | 0.080882 | 0.067363 | 0.143855 | 0.089362 | 0.081732 | 0.079378 |
$C_{2}$ | 0.090909 | 0.099843 | 0.099277 | 0.103800 | 0.125195 | 0.103976 | 0.076405 | 0.103002 | 0.101129 |
$C_{3}$ | 0.090909 | 0.142673 | 0.066132 | 0.128202 | 0.103489 | 0.067308 | 0.094271 | 0.129702 | 0.129595 |
$C_{4}$ | 0.090909 | 0.096488 | 0.075874 | 0.103200 | 0.093050 | 0.076665 | 0.079572 | 0.108379 | 0.106746 |
$C_{5}$ | 0.090909 | 0.236087 | 0.071195 | 0.163513 | 0.124581 | 0.112880 | 0.154235 | 0.162354 | 0.166788 |
$C_{6}$ | 0.090909 | 0.024544 | 0.112865 | 0.052308 | 0.064886 | 0.074361 | 0.071876 | 0.053145 | 0.051074 |
$C_{7}$ | 0.090909 | 0.032432 | 0.120602 | 0.060388 | 0.077107 | 0.073925 | 0.076822 | 0.060739 | 0.058510 |
$C_{8}$ | 0.090909 | 0.018126 | 0.103536 | 0.046188 | 0.053708 | 0.076150 | 0.069418 | 0.046061 | 0.044183 |
$C_{9}$ | 0.090909 | 0.053958 | 0.065514 | 0.073099 | 0.087109 | 0.060565 | 0.039702 | 0.081691 | 0.079337 |
$C_{10}$ | 0.090909 | 0.003863 | 0.098432 | 0.021151 | 0.018566 | 0.126025 | 0.017062 | 0.021711 | 0.020518 |
$C_{11}$ | 0.090909 | 0.234244 | 0.092612 | 0.167270 | 0.184947 | 0.084289 | 0.231276 | 0.151484 | 0.162742 |
[13]:
df_preferences
[13]:
 | EQUAL | ENTROPY | CRITIC | GINI | MEREC | STAT VAR | IDOCRIW | ANGLE | COEFF VAR
---|---|---|---|---|---|---|---|---|---
$A_{1}$ | 0.276946 | 0.000000 | 0.193324 | 0.000000 | 0.000000 | 0.210477 | 0.000000 | 0.000000 | 0.000000 |
$A_{2}$ | 0.061114 | 0.325154 | 0.053863 | 0.267784 | 0.096602 | 0.062729 | 0.100057 | 0.290131 | 0.285029 |
$A_{3}$ | 0.410427 | 0.531050 | 0.351973 | 0.519285 | 0.353853 | 0.442186 | 0.332813 | 0.544429 | 0.535374 |
$A_{4}$ | 0.665445 | 0.682258 | 0.384420 | 0.629619 | 0.376115 | 0.705929 | 0.353278 | 0.656196 | 0.650874 |
$A_{5}$ | 0.618993 | 0.734162 | 0.449121 | 0.713059 | 0.485436 | 0.680768 | 0.473333 | 0.737880 | 0.731601 |
$A_{6}$ | 0.819258 | 0.922091 | 0.558323 | 0.879933 | 0.619888 | 0.856815 | 0.549559 | 0.905704 | 0.901084 |
$A_{7}$ | 1.000000 | 0.884828 | 1.000000 | 0.869011 | 0.662208 | 0.710609 | 0.657640 | 0.888708 | 0.885934 |
$A_{8}$ | 0.909559 | 0.821773 | 0.920743 | 0.786866 | 0.809377 | 0.435339 | 0.798193 | 0.797143 | 0.796411 |
$A_{9}$ | 0.375000 | 0.332600 | 0.223787 | 0.289556 | 0.255499 | 0.263261 | 0.301515 | 0.256822 | 0.278596 |
$A_{10}$ | 0.745923 | 0.940460 | 0.490234 | 0.868050 | 0.580755 | 0.677000 | 0.528558 | 0.890187 | 0.889102 |
$A_{11}$ | 0.670693 | 0.696434 | 0.493401 | 0.676774 | 0.682902 | 0.506772 | 0.732254 | 0.613263 | 0.652189 |
$A_{12}$ | 0.726178 | 0.954832 | 0.453666 | 0.869930 | 0.585575 | 0.544842 | 0.495033 | 0.896613 | 0.895689 |
[14]:
df_rankings
[14]:
 | EQUAL | ENTROPY | CRITIC | GINI | MEREC | STAT VAR | IDOCRIW | ANGLE | COEFF VAR
---|---|---|---|---|---|---|---|---|---
$A_{1}$ | 2 | 1 | 2 | 1 | 1 | 2 | 1 | 1 | 1 |
$A_{2}$ | 1 | 2 | 1 | 2 | 2 | 1 | 2 | 3 | 3 |
$A_{3}$ | 4 | 4 | 4 | 4 | 4 | 5 | 4 | 4 | 4 |
$A_{4}$ | 6 | 5 | 5 | 5 | 5 | 10 | 5 | 6 | 5 |
$A_{5}$ | 5 | 7 | 6 | 7 | 6 | 9 | 6 | 7 | 7 |
$A_{6}$ | 10 | 10 | 10 | 12 | 9 | 12 | 9 | 12 | 12 |
$A_{7}$ | 12 | 9 | 12 | 10 | 10 | 11 | 10 | 9 | 9 |
$A_{8}$ | 11 | 8 | 11 | 8 | 12 | 4 | 12 | 8 | 8 |
$A_{9}$ | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 2 | 2 |
$A_{10}$ | 9 | 11 | 8 | 9 | 7 | 8 | 8 | 10 | 10 |
$A_{11}$ | 7 | 6 | 9 | 6 | 11 | 6 | 11 | 5 | 6 |
$A_{12}$ | 8 | 12 | 7 | 11 | 8 | 7 | 7 | 11 | 11 |
Visualize the results as column graphs of weights, rankings, and correlations.
[15]:
plot_barplot(df_weights, 'Weighting methods', 'Weight value', 'Criteria')
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.

[16]:
plot_boxplot(df_weights.T)

[17]:
plot_barplot(df_rankings, 'Alternatives', 'Rank', 'Weighting methods')
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.

[18]:
results = copy.deepcopy(df_rankings)
method_types = list(results.columns)
dict_new_heatmap_rw = Create_dictionary()
for el in method_types:
    dict_new_heatmap_rw.add(el, [])

# heatmaps for correlation coefficients
for i, j in [(i, j) for i in method_types[::-1] for j in method_types]:
    dict_new_heatmap_rw[j].append(corrs.weighted_spearman(results[i], results[j]))
df_new_heatmap_rw = pd.DataFrame(dict_new_heatmap_rw, index = method_types[::-1])
df_new_heatmap_rw.columns = method_types
# correlation matrix with rw coefficient
draw_heatmap(df_new_heatmap_rw, r'$r_w$')

The Stochastic Multicriteria Acceptability Analysis (SMAA) method
[19]:
cols_ai = [str(el) for el in range(1, matrix.shape[0] + 1)]
[20]:
# criteria number
n = matrix.shape[1]
# number of SMAA iterations
iterations = 10000
[21]:
# create the VIKOR_SMAA method object
vikor_smaa = VIKOR_SMAA()
# generate multiple weight vectors in matrix
weight_vectors = vikor_smaa._generate_weights(n, iterations)
[22]:
# Calculate the rank acceptability index, central weight vector and final ranking
rank_acceptability_index, central_weight_vector, rank_scores = vikor_smaa(matrix, weight_vectors, types)
[23]:
acc_in_df = pd.DataFrame(rank_acceptability_index, index = list_alt_names, columns = cols_ai)
acc_in_df.to_csv('results_smaa/ai.csv')
Rank acceptability indexes
This dataframe contains the rank acceptability indexes of each alternative for each rank. The rank acceptability index shows the share of sampled weight vectors that place an alternative at a given rank.
[24]:
acc_in_df
[24]:
 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
---|---|---|---|---|---|---|---|---|---|---|---|---
$A_{1}$ | 0.2361 | 0.2458 | 0.1879 | 0.1354 | 0.0507 | 0.0546 | 0.0227 | 0.0480 | 0.0151 | 0.0037 | 0.0000 | 0.0000 |
$A_{2}$ | 0.2208 | 0.3555 | 0.2194 | 0.1165 | 0.0455 | 0.0345 | 0.0078 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
$A_{3}$ | 0.0001 | 0.0111 | 0.0229 | 0.0725 | 0.2870 | 0.1485 | 0.1467 | 0.1366 | 0.1656 | 0.0090 | 0.0000 | 0.0000 |
$A_{4}$ | 0.1136 | 0.0670 | 0.0717 | 0.1356 | 0.1719 | 0.2304 | 0.0778 | 0.0375 | 0.0305 | 0.0322 | 0.0318 | 0.0000 |
$A_{5}$ | 0.0003 | 0.0123 | 0.0129 | 0.0217 | 0.0780 | 0.0999 | 0.2542 | 0.1427 | 0.1256 | 0.2322 | 0.0202 | 0.0000 |
$A_{6}$ | 0.0000 | 0.0007 | 0.0070 | 0.0511 | 0.0251 | 0.0369 | 0.1353 | 0.1146 | 0.1594 | 0.1655 | 0.1277 | 0.1767 |
$A_{7}$ | 0.0000 | 0.0000 | 0.0011 | 0.0012 | 0.0062 | 0.0298 | 0.0306 | 0.0327 | 0.0822 | 0.0739 | 0.0924 | 0.6499 |
$A_{8}$ | 0.0000 | 0.0011 | 0.0025 | 0.0050 | 0.0626 | 0.0392 | 0.0569 | 0.1442 | 0.0743 | 0.1210 | 0.4544 | 0.0388 |
$A_{9}$ | 0.3802 | 0.1025 | 0.2888 | 0.0389 | 0.0271 | 0.0282 | 0.0239 | 0.0177 | 0.0680 | 0.0247 | 0.0000 | 0.0000 |
$A_{10}$ | 0.0106 | 0.0425 | 0.0703 | 0.0684 | 0.0860 | 0.1690 | 0.0987 | 0.0715 | 0.1419 | 0.1403 | 0.0911 | 0.0097 |
$A_{11}$ | 0.0000 | 0.1083 | 0.0779 | 0.2967 | 0.0668 | 0.0490 | 0.0606 | 0.0944 | 0.0243 | 0.0794 | 0.1010 | 0.0416 |
$A_{12}$ | 0.0383 | 0.0532 | 0.0376 | 0.0570 | 0.0931 | 0.0800 | 0.0848 | 0.1601 | 0.1131 | 0.1181 | 0.0814 | 0.0833 |
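Conceptually, the rank acceptability index is estimated by Monte Carlo sampling: draw many random weight vectors, rank the alternatives for each, and count how often each alternative lands at each rank. The following simplified sketch illustrates the bookkeeping with a plain weighted-sum score instead of VIKOR (the data, score model, and all names here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
m, n, iters = 4, 3, 2000
matrix = rng.random((m, n))               # toy decision matrix, all profit criteria

acceptability = np.zeros((m, m))          # rows: alternatives, columns: ranks
for _ in range(iters):
    w = rng.random(n)
    w /= w.sum()                          # random weight vector on the simplex
    scores = matrix @ w                   # simple weighted-sum score (illustrative)
    order = np.argsort(-scores)           # best (highest score) first
    for rank, alt in enumerate(order):
        acceptability[alt, rank] += 1
acceptability /= iters

print(acceptability.sum(axis=1))          # each row sums to 1
```

Since every alternative receives exactly one rank per iteration, each row of the index matrix sums to 1, as in the dataframe above.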
Rank acceptability indexes displayed in the form of a stacked bar chart.
[25]:
matplotlib.rcdefaults()
plot_barplot(acc_in_df, 'Alternative', 'Rank acceptability index', 'Rank')
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.

Rank acceptability indexes displayed in the form of a heatmap.
[26]:
draw_heatmap_smaa(acc_in_df, 'Rank acceptability indexes')

Central weight vector
The central weight vector describes the preferences of a typical decision-maker who supports a given alternative under the assumed preference model. It allows the decision-maker to see which criteria preferences result in the best evaluation of a given alternative. Rows containing only zeros mean that the given alternative never reaches first place.
[27]:
central_weights_df = pd.DataFrame(central_weight_vector, index = list_alt_names, columns = cols)
central_weights_df.to_csv('results_smaa/cw.csv')
[28]:
central_weights_df
[28]:
 | $C_{1}$ | $C_{2}$ | $C_{3}$ | $C_{4}$ | $C_{5}$ | $C_{6}$ | $C_{7}$ | $C_{8}$ | $C_{9}$ | $C_{10}$ | $C_{11}$
---|---|---|---|---|---|---|---|---|---|---|---
$A_{1}$ | 0.080044 | 0.065913 | 0.166905 | 0.126321 | 0.122206 | 0.077438 | 0.071596 | 0.080231 | 0.056639 | 0.054395 | 0.098312 |
$A_{2}$ | 0.117195 | 0.089724 | 0.076283 | 0.128039 | 0.056262 | 0.080681 | 0.078959 | 0.081181 | 0.071432 | 0.110424 | 0.109820 |
$A_{3}$ | 0.003336 | 0.023275 | 0.030771 | 0.102729 | 0.283001 | 0.043135 | 0.002438 | 0.036464 | 0.007505 | 0.311859 | 0.155486 |
$A_{4}$ | 0.044721 | 0.084670 | 0.065501 | 0.058350 | 0.066241 | 0.089237 | 0.090514 | 0.078441 | 0.080131 | 0.214907 | 0.127288 |
$A_{5}$ | 0.054054 | 0.277008 | 0.068419 | 0.020685 | 0.040810 | 0.035974 | 0.048625 | 0.025737 | 0.096084 | 0.255233 | 0.077371 |
$A_{6}$ | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
$A_{7}$ | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
$A_{8}$ | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
$A_{9}$ | 0.096100 | 0.116361 | 0.066750 | 0.061925 | 0.105006 | 0.105905 | 0.100644 | 0.103442 | 0.130803 | 0.054416 | 0.058647 |
$A_{10}$ | 0.052457 | 0.032178 | 0.039803 | 0.150401 | 0.038353 | 0.099733 | 0.111932 | 0.081925 | 0.068672 | 0.230646 | 0.093900 |
$A_{11}$ | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
$A_{12}$ | 0.075694 | 0.037325 | 0.044251 | 0.046534 | 0.046289 | 0.143603 | 0.161465 | 0.138131 | 0.039332 | 0.150218 | 0.117158 |
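The central weight vector of an alternative can be approximated as the mean of the sampled weight vectors under which that alternative takes first place, and a zero vector if it never wins. A sketch under a simplified weighted-sum scoring model (illustrative, not VIKOR; all names here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, iters = 4, 3, 2000
matrix = rng.random((m, n))           # toy decision matrix, all profit criteria

winners = {i: [] for i in range(m)}   # weight vectors under which alternative i wins
for _ in range(iters):
    w = rng.random(n)
    w /= w.sum()                      # random weight vector on the simplex
    scores = matrix @ w               # simple weighted-sum score (illustrative)
    winners[int(np.argmax(scores))].append(w)

# mean winning weight vector per alternative; zeros if it never wins
central = np.array([np.mean(winners[i], axis=0) if winners[i] else np.zeros(n)
                    for i in range(m)])
print(central.sum(axis=1))            # ~1 for winners, 0 for alternatives that never win
```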
Rank scores
Rank scores provide the final SMAA ranking of the alternatives.
[29]:
rank_scores_df = pd.DataFrame(rank_scores, index = list_alt_names, columns = ['Rank'])
rank_scores_df.to_csv('results_smaa/fr.csv')
[30]:
rank_scores_df
[30]:
 | Rank
---|---
$A_{1}$ | 3 |
$A_{2}$ | 1 |
$A_{3}$ | 6 |
$A_{4}$ | 4 |
$A_{5}$ | 9 |
$A_{6}$ | 10 |
$A_{7}$ | 12 |
$A_{8}$ | 11 |
$A_{9}$ | 2 |
$A_{10}$ | 7 |
$A_{11}$ | 5 |
$A_{12}$ | 8 |