Usage

Installation

To use pyrepo-mcda, first install it using pip:

pip install pyrepo-mcda

Usage examples

The TOPSIS method

The TOPSIS method calculates the preference values of the evaluated alternatives. When creating the TOPSIS object, you can provide normalization_method (minmax_normalization by default) and distance_metric (euclidean by default). The method requires the decision matrix matrix, a vector with criteria weights weights, and a vector with criteria types types, and returns a vector with preference values pref. To generate the TOPSIS ranking of alternatives, pref has to be sorted in descending order. The ranking is generated by rank_preferences, providing pref as the argument and setting the parameter reverse to True because we need to sort the preferences descendingly.

import numpy as np
from pyrepo_mcda.mcda_methods import TOPSIS
from pyrepo_mcda import normalizations as norms
from pyrepo_mcda import distance_metrics as dists
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[256, 8, 41, 1.6, 1.77, 7347.16],
[256, 8, 32, 1.0, 1.8, 6919.99],
[256, 8, 53, 1.6, 1.9, 8400],
[256, 8, 41, 1.0, 1.75, 6808.9],
[512, 8, 35, 1.6, 1.7, 8479.99],
[256, 4, 35, 1.6, 1.7, 7499.99]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.405, 0.221, 0.134, 0.199, 0.007, 0.034])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([1, 1, 1, 1, -1, -1])

# Create the TOPSIS method object providing normalization method and distance metric.
topsis = TOPSIS(normalization_method = norms.minmax_normalization, distance_metric = dists.euclidean)

# Calculate the TOPSIS preference values of alternatives
pref = topsis(matrix, weights, types)

# Generate the ranking of alternatives by sorting preference values descendingly (reverse = True means sorting in descending order)
rank = rank_preferences(pref, reverse = True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [0.4242 0.3217 0.4453 0.3353 0.8076 0.2971]
Ranking:  [3 5 2 4 1 6]
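The ranking above can be reproduced with plain NumPy. The sketch below mirrors what rank_preferences does for reverse = True (a simplified reconstruction that ignores tie handling, not the library's own code):

```python
import numpy as np

# Preference values printed by TOPSIS above
pref = np.array([0.4242, 0.3217, 0.4453, 0.3353, 0.8076, 0.2971])

# Sort descendingly (reverse = True): the highest preference value gets rank 1
order = np.argsort(-pref)
rank = np.zeros(len(pref), dtype=int)
rank[order] = np.arange(1, len(pref) + 1)

print(rank)  # [3 5 2 4 1 6]
```

For reverse = False (ascending, as VIKOR and SPOTIS require), sort pref instead of -pref.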

The VIKOR method

The VIKOR method calculates the preference values of the evaluated alternatives. When creating the VIKOR object, you can provide normalization_method (None by default) and the v parameter. The method requires the decision matrix matrix, a vector with criteria weights weights, and a vector with criteria types types, and returns a vector with preference values pref. To generate the VIKOR ranking of alternatives, pref has to be sorted in ascending order. The ranking is generated by rank_preferences, providing pref as the argument and setting the parameter reverse to False because we need to sort the preferences ascendingly.

import numpy as np
from pyrepo_mcda.mcda_methods import VIKOR
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[8, 7, 2, 1],
[5, 3, 7, 5],
[7, 5, 6, 4],
[9, 9, 7, 3],
[11, 10, 3, 7],
[6, 9, 5, 4]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.4, 0.3, 0.1, 0.2])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([1, 1, 1, 1])

# Create the VIKOR method object providing the chosen normalization method ``normalization_method`` (if you do not want to use normalization, set ``normalization_method`` to None, which is the default) and the v parameter. The default v is 0.5, so if you do not provide it, v will be equal to 0.5.
vikor = VIKOR(normalization_method = None, v = 0.625)

# Calculate the VIKOR preference values of alternatives
pref = vikor(matrix, weights, types)

# Generate the ranking of alternatives by sorting preference values ascendingly (reverse = False means sorting in ascending order)
rank = rank_preferences(pref, reverse = False)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [0.6399 1.     0.6929 0.2714 0.     0.6939]
Ranking:  [3 6 4 2 1 5]
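For intuition, the preference values above can be reproduced with the classic VIKOR formulas in plain NumPy under the settings used here (no normalization, all-profit criteria, v = 0.625). This is a simplified textbook reconstruction, not the library's own code:

```python
import numpy as np

matrix = np.array([[8, 7, 2, 1], [5, 3, 7, 5], [7, 5, 6, 4],
                   [9, 9, 7, 3], [11, 10, 3, 7], [6, 9, 5, 4]], dtype=float)
weights = np.array([0.4, 0.3, 0.1, 0.2])
v = 0.625

# Best and worst performance per (profit) criterion
f_best, f_worst = matrix.max(axis=0), matrix.min(axis=0)

# Weighted distances from the best values; S is group utility, R is individual regret
d = weights * (f_best - matrix) / (f_best - f_worst)
S, R = d.sum(axis=1), d.max(axis=1)

# Compromise measure Q; a lower Q indicates a better alternative
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())
print(np.round(Q, 4))  # [0.6399 1.     0.6929 0.2714 0.     0.6939]
```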

The SPOTIS method

The SPOTIS method calculates the preference values of the evaluated alternatives. It requires the decision matrix matrix, a vector with criteria weights weights, a vector with criteria types types, and the minimum and maximum bounds of the alternatives' performance values for each criterion. The method returns a vector with preference values pref. To generate the SPOTIS ranking of alternatives, pref has to be sorted in ascending order. The ranking is generated by rank_preferences, providing pref as the argument and setting the parameter reverse to False because we need to sort the preferences ascendingly.

import numpy as np
from pyrepo_mcda.mcda_methods import SPOTIS
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[15000, 4.3, 99, 42, 737],
        [15290, 5.0, 116, 42, 892],
        [15350, 5.0, 114, 45, 952],
        [15490, 5.3, 123, 45, 1120]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.2941, 0.2353, 0.2353, 0.0588, 0.1765])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([-1, -1, -1, 1, 1])

# Determine minimum bounds of performance values for each criterion in decision matrix
bounds_min = np.array([14000, 3, 80, 35, 650])

# Determine maximum bounds of performance values for each criterion in decision matrix
bounds_max = np.array([16000, 8, 140, 60, 1300])

# Stack minimum and maximum bounds vertically using vstack. You will get a matrix that has two rows and a number of columns equal to the number of criteria
bounds = np.vstack((bounds_min, bounds_max))

# Create the SPOTIS method object
spotis = SPOTIS()

# Calculate the SPOTIS preference values of alternatives
pref = spotis(matrix, weights, types, bounds)

# Generate the ranking of alternatives by sorting preference values ascendingly (reverse = False means sorting in ascending order)
rank = rank_preferences(pref, reverse = False)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [0.478  0.5781 0.5557 0.5801]
Ranking:  [1 3 2 4]
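SPOTIS scores each alternative by its weighted, range-normalized distance to the ideal solution defined by the bounds. The sketch below reconstructs the preference values from the textbook SPOTIS formula (a simplified reconstruction, not the library's own code):

```python
import numpy as np

matrix = np.array([[15000, 4.3, 99, 42, 737],
                   [15290, 5.0, 116, 42, 892],
                   [15350, 5.0, 114, 45, 952],
                   [15490, 5.3, 123, 45, 1120]], dtype=float)
weights = np.array([0.2941, 0.2353, 0.2353, 0.0588, 0.1765])
types = np.array([-1, -1, -1, 1, 1])
bounds_min = np.array([14000, 3, 80, 35, 650], dtype=float)
bounds_max = np.array([16000, 8, 140, 60, 1300], dtype=float)

# Ideal solution point: max bound for profit criteria, min bound for cost criteria
isp = np.where(types == 1, bounds_max, bounds_min)

# Distance of each performance value from the ideal point, normalized by the bound range
d = np.abs(matrix - isp) / (bounds_max - bounds_min)

# Weighted average distance; lower values indicate better alternatives
pref = (d * weights).sum(axis=1)
print(np.round(pref, 4))  # [0.478  0.5781 0.5557 0.5801]
```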

The CODAS method

The CODAS method calculates the preference values of the evaluated alternatives. When creating the CODAS object, you can provide normalization_method (linear_normalization by default) and distance_metric (euclidean by default). The method requires the decision matrix matrix, a vector with criteria weights weights, and a vector with criteria types types, and returns a vector with preference values pref. To generate the CODAS ranking of alternatives, pref has to be sorted in descending order. The ranking is generated by the rank_preferences method, providing pref as the argument and setting the parameter reverse to True because we need to sort the preferences descendingly.

import numpy as np
from pyrepo_mcda.mcda_methods import CODAS
from pyrepo_mcda import normalizations as norms
from pyrepo_mcda import distance_metrics as dists
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[45, 3600, 45, 0.9],
[25, 3800, 60, 0.8],
[23, 3100, 35, 0.9],
[14, 3400, 50, 0.7],
[15, 3300, 40, 0.8],
[28, 3000, 30, 0.6]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.2857, 0.3036, 0.2321, 0.1786])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([1, -1, 1, 1])

# Create the CODAS method object providing the normalization method (``linear_normalization`` by default in CODAS), the distance metric, and the tau parameter, which is equal to 0.02 by default. tau must be in the range from 0.01 to 0.05.
codas = CODAS(normalization_method = norms.linear_normalization, distance_metric = dists.euclidean, tau = 0.02)

# Calculate the CODAS preference values of alternatives
pref = codas(matrix, weights, types)

# Generate the ranking of alternatives by sorting preference values descendingly (reverse = True means sorting in descending order)
rank = rank_preferences(pref, reverse = True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [ 1.3914  0.3411 -0.217  -0.5381 -0.7292 -0.2481]
Ranking:  [1 2 3 5 6 4]

The WASPAS method

The WASPAS method calculates the preference values of the evaluated alternatives. When creating the WASPAS object, you can provide normalization_method (linear_normalization by default) and lambda_param (equal to 0.5 by default). The method requires the decision matrix matrix, a vector with criteria weights weights, and a vector with criteria types types, and returns a vector with preference values pref. To generate the WASPAS ranking of alternatives, pref has to be sorted in descending order. The ranking is generated by the rank_preferences method, providing pref as the argument and setting the parameter reverse to True because we need to sort the preferences descendingly.

import numpy as np
from pyrepo_mcda.mcda_methods import WASPAS
from pyrepo_mcda import normalizations as norms
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[5000, 3, 3, 4, 3, 2],
[680, 5, 3, 2, 2, 1],
[2000, 3, 2, 3, 4, 3],
[600, 4, 3, 1, 2, 2],
[800, 2, 4, 3, 3, 4]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.157, 0.249, 0.168, 0.121, 0.154, 0.151])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([-1, 1, 1, 1, 1, 1])

# Create the WASPAS method object providing the normalization method (linear_normalization by default in WASPAS) and the lambda parameter, which is equal to 0.5 by default. lambda_param must be in the range from 0 to 1.
waspas = WASPAS(normalization_method=norms.linear_normalization, lambda_param=0.5)

# Calculate the WASPAS preference values of alternatives
pref = waspas(matrix, weights, types)

# Generate the ranking of alternatives by sorting preference values descendingly (reverse = True means sorting in descending order)
rank = rank_preferences(pref, reverse = True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [0.5622 0.6575 0.6192 0.6409 0.7228]
Ranking:  [5 2 4 3 1]
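WASPAS blends the Weighted Sum Model (WSM) and the Weighted Product Model (WPM) with lambda_param. The sketch below reconstructs the preference values above using linear normalization (a simplified textbook reconstruction, not the library's own code):

```python
import numpy as np

matrix = np.array([[5000, 3, 3, 4, 3, 2],
                   [680, 5, 3, 2, 2, 1],
                   [2000, 3, 2, 3, 4, 3],
                   [600, 4, 3, 1, 2, 2],
                   [800, 2, 4, 3, 3, 4]], dtype=float)
weights = np.array([0.157, 0.249, 0.168, 0.121, 0.154, 0.151])
types = np.array([-1, 1, 1, 1, 1, 1])
lam = 0.5  # lambda_param

# Linear normalization: x / max for profit criteria, min / x for cost criteria
norm = np.where(types == 1, matrix / matrix.max(axis=0), matrix.min(axis=0) / matrix)

# WSM: weighted sum; WPM: weighted product
wsm = (norm * weights).sum(axis=1)
wpm = np.prod(norm ** weights, axis=1)

# WASPAS aggregate: lambda-weighted mix of the two models
pref = lam * wsm + (1 - lam) * wpm
print(np.round(pref, 4))  # [0.5622 0.6575 0.6192 0.6409 0.7228]
```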

The EDAS method

The EDAS method calculates the preference values of the evaluated alternatives. It requires the decision matrix matrix, a vector with criteria weights weights, and a vector with criteria types types, and returns a vector with preference values pref. To generate the EDAS ranking of alternatives, pref has to be sorted in descending order. The ranking is generated by the rank_preferences method, providing pref as the argument and setting the parameter reverse to True because we need to sort the preferences descendingly.

import numpy as np
from pyrepo_mcda.mcda_methods import EDAS
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[256, 8, 41, 1.6, 1.77, 7347.16],
[256, 8, 32, 1.0, 1.8, 6919.99],
[256, 8, 53, 1.6, 1.9, 8400],
[256, 8, 41, 1.0, 1.75, 6808.9],
[512, 8, 35, 1.6, 1.7, 8479.99],
[256, 4, 35, 1.6, 1.7, 7499.99]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.405, 0.221, 0.134, 0.199, 0.007, 0.034])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([1, 1, 1, 1, -1, -1])

# Create the EDAS method object.
edas = EDAS()

# Calculate the EDAS preference values of alternatives
pref = edas(matrix, weights, types)

# Generate the ranking of alternatives by sorting preference values descendingly (reverse = True means sorting in descending order)
rank = rank_preferences(pref, reverse = True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [0.4141 0.13   0.4607 0.212  0.9443 0.043 ]
Ranking:  [3 5 2 4 1 6]
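EDAS evaluates alternatives by their positive and negative distances from the average solution. The sketch below reconstructs the appraisal scores above from the textbook EDAS steps (a simplified reconstruction, not the library's own code):

```python
import numpy as np

matrix = np.array([[256, 8, 41, 1.6, 1.77, 7347.16],
                   [256, 8, 32, 1.0, 1.8, 6919.99],
                   [256, 8, 53, 1.6, 1.9, 8400],
                   [256, 8, 41, 1.0, 1.75, 6808.9],
                   [512, 8, 35, 1.6, 1.7, 8479.99],
                   [256, 4, 35, 1.6, 1.7, 7499.99]])
weights = np.array([0.405, 0.221, 0.134, 0.199, 0.007, 0.034])
types = np.array([1, 1, 1, 1, -1, -1])

# Average solution per criterion
av = matrix.mean(axis=0)

# Positive/negative distances from the average, with the sign flipped for cost criteria
diff = np.where(types == 1, matrix - av, av - matrix)
pda = np.maximum(diff, 0) / av
nda = np.maximum(-diff, 0) / av

# Weighted distance sums, normalized by their maxima and averaged into the appraisal score
sp, sn = (pda * weights).sum(axis=1), (nda * weights).sum(axis=1)
pref = (sp / sp.max() + 1 - sn / sn.max()) / 2
print(np.round(pref, 4))  # [0.4141 0.13   0.4607 0.212  0.9443 0.043 ]
```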

The MABAC method

The MABAC method calculates the preference values of the evaluated alternatives. When creating the MABAC object, you can provide normalization_method (minmax_normalization by default). The method requires the decision matrix matrix, a vector with criteria weights weights, and a vector with criteria types types, and returns a vector with preference values pref. To generate the MABAC ranking of alternatives, pref has to be sorted in descending order. The ranking is generated by the rank_preferences method, providing pref as the argument and setting the parameter reverse to True because we need to sort the preferences descendingly.

import numpy as np
from pyrepo_mcda.mcda_methods import MABAC
from pyrepo_mcda import normalizations as norms
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[2.937588, 2.762986, 3.233723, 2.881315, 3.015289, 3.313491],
[2.978555, 3.012820, 2.929487, 3.096154, 3.012820, 3.593939],
[3.286673, 3.464600, 3.746009, 3.715632, 3.703427, 4.133620],
[3.322037, 3.098638, 3.262154, 3.147851, 3.206675, 3.798684],
[3.354866, 3.270945, 3.221880, 3.213207, 3.670508, 3.785941],
[2.796570, 2.983000, 2.744904, 2.692550, 2.787563, 2.878851],
[2.846491, 2.729618, 2.789990, 2.955624, 3.123323, 3.646595],
[3.253458, 3.208902, 3.678499, 3.580044, 3.505663, 3.954262],
[2.580718, 2.906903, 3.176497, 3.073653, 3.264727, 3.681887],
[2.789011, 3.000000, 3.101099, 3.139194, 2.985348, 3.139194],
[3.418681, 3.261905, 3.187912, 3.052381, 3.266667, 3.695238]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.171761, 0.105975, 0.191793, 0.168824, 0.161768, 0.199880])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([1, 1, 1, 1, 1, 1])

# Create the MABAC method object providing normalization method. In MABAC it is minmax_normalization by default.
mabac = MABAC(normalization_method=norms.minmax_normalization)

# Calculate the MABAC preference values of alternatives
pref = mabac(matrix, weights, types)

# Generate the ranking of alternatives by sorting preference values descendingly (reverse = True means sorting in descending order)
rank = rank_preferences(pref, reverse = True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [-0.1553 -0.0895  0.5054  0.1324  0.2469 -0.3868 -0.1794  0.3629 -0.0842
 -0.1675  0.1399]
Ranking:  [ 8  7  1  5  3 11 10  2  6  9  4]

The MULTIMOORA method

The MULTIMOORA method calculates the ranking of alternatives directly. When creating the MULTIMOORA object, you can provide compromise_rank_method (dominance_directed_graph by default) because MULTIMOORA builds its final ranking from three subordinate rankings generated by three approaches: Ratio System (RS), Reference Point (RP), and Full Multiplicative Form (FMF). The method requires the decision matrix matrix, a vector with criteria weights weights, and a vector with criteria types types, and returns a vector with the ranking rank.

import numpy as np
from pyrepo_mcda.mcda_methods import MULTIMOORA
from pyrepo_mcda import compromise_rankings as compromises

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[4, 3, 3, 4, 3, 2, 4],
[3, 3, 4, 3, 5, 4, 4],
[5, 4, 4, 5, 5, 5, 4]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.215, 0.215, 0.159, 0.133, 0.102, 0.102, 0.073])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([1, 1, 1, 1, 1, 1, 1])

# Create the MULTIMOORA method object providing compromise_rank_method. In MULTIMOORA it is dominance_directed_graph by default.
multimoora = MULTIMOORA(compromise_rank_method = compromises.dominance_directed_graph)

# Calculate the MULTIMOORA ranking of alternatives
rank = multimoora(matrix, weights, types)

print('Ranking: ', rank)

Output

Ranking:  [3 2 1]

The MOORA method

The MOORA method obtains the preference values of alternatives, which then have to be sorted in descending order. MOORA can be applied using the MULTIMOORA_RS class, the Ratio System approach of MULTIMOORA. It requires the decision matrix matrix, a vector with criteria weights weights (all weights must sum to 1), and a vector with criteria types types, which are equal to 1 for profit criteria and -1 for cost criteria.

import numpy as np
from pyrepo_mcda.mcda_methods import MULTIMOORA_RS as MOORA
from pyrepo_mcda.additions import rank_preferences

matrix = np.array([[4, 3, 3, 4, 3, 2, 4],
[3, 3, 4, 3, 5, 4, 4],
[5, 4, 4, 5, 5, 5, 4]])

weights = np.array([0.215, 0.215, 0.159, 0.133, 0.102, 0.102, 0.073])
types = np.array([1, 1, 1, 1, 1, 1, 1])

moora = MOORA()
pref = moora(matrix, weights, types)
rank = rank_preferences(pref, reverse = True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [0.4944 0.527  0.6775]
Ranking:  [3 2 1]
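The Ratio System core of MOORA can be sketched in plain NumPy: vector-normalize each criterion, weight it, and sum profit contributions minus cost contributions (a simplified textbook reconstruction, not the library's own code):

```python
import numpy as np

matrix = np.array([[4, 3, 3, 4, 3, 2, 4],
                   [3, 3, 4, 3, 5, 4, 4],
                   [5, 4, 4, 5, 5, 5, 4]], dtype=float)
weights = np.array([0.215, 0.215, 0.159, 0.133, 0.102, 0.102, 0.073])
types = np.array([1, 1, 1, 1, 1, 1, 1])

# Vector normalization: divide each column by its Euclidean norm
norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))

# Ratio System score: weighted profit contributions minus weighted cost contributions
pref = (norm * weights * types).sum(axis=1)
print(np.round(pref, 4))  # [0.4944 0.527  0.6775]
```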

The ARAS method

The ARAS method obtains utility function values for the alternatives, which then have to be ranked in descending order. You can select the normalization method of the decision matrix when initializing the ARAS object; if you do not provide one, it defaults to sum_normalization. The ranking is generated by the rank_preferences method from the additions submodule, providing the utility function values pref as the argument and setting the parameter reverse to True because we need to sort the preferences descendingly.

import numpy as np
from pyrepo_mcda.mcda_methods import ARAS
from pyrepo_mcda import normalizations as norms
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[80, 16, 2, 5],
[110, 32, 2, 9],
[130, 64, 4, 9],
[185, 64, 4, 1],
[135, 64, 3, 4],
[140, 32, 3, 5],
[185, 64, 6, 7],
[110, 16, 3, 3],
[120, 16, 4, 3],
[340, 128, 6, 5]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.60338, 0.13639, 0.19567, 0.06456])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([-1, 1, 1, 1])

# Create the ARAS method object providing the normalization method. In ARAS, it is ``sum_normalization`` by default, so if you do not provide a normalization method, it will be set as ``sum_normalization``.
aras = ARAS(normalization_method=norms.sum_normalization)

# Calculate the ARAS preference values of alternatives
pref = aras(matrix, weights, types)

# Generate the ranking of alternatives by sorting preference values descendingly (reverse = True means sorting in descending order)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [0.6891 0.5852 0.6279 0.4667 0.5492 0.498  0.5696 0.5495 0.5451 0.5355]
Ranking:  [ 1  3  2 10  6  9  4  5  7  8]

The COPRAS method

The COPRAS method obtains utility function values for the alternatives, which then have to be ranked in descending order. You can select the normalization method of the decision matrix when initializing the COPRAS object; if you do not provide one, it defaults to sum_normalization. In COPRAS, following the algorithm of this method, the normalization is automatically applied as for profit criteria. The ranking is generated by the rank_preferences method from the additions submodule, providing the utility function values pref as the argument and setting the parameter reverse to True because we need to sort the preferences descendingly.

import numpy as np
from pyrepo_mcda.mcda_methods import COPRAS
from pyrepo_mcda import normalizations as norms
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[80, 16, 2, 5],
[110, 32, 2, 9],
[130, 64, 4, 9],
[185, 64, 4, 1],
[135, 64, 3, 4],
[140, 32, 3, 5],
[185, 64, 6, 7],
[110, 16, 3, 3],
[120, 16, 4, 3],
[340, 128, 6, 5]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.60338, 0.13639, 0.19567, 0.06456])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([-1, 1, 1, 1])

# Create the COPRAS method object providing the normalization method. In COPRAS, it is ``sum_normalization`` by default, so if you do not provide a normalization method, it will be set as ``sum_normalization``.
copras = COPRAS(normalization_method=norms.sum_normalization)

# Calculate the COPRAS preference values of alternatives
pref = copras(matrix, weights, types)

# Generate the ranking of alternatives by sorting preference values descendingly (reverse = True means sorting in descending order)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [1.     0.8526 0.9193 0.6852 0.8052 0.7259 0.8344 0.7976 0.791  0.7953]
Ranking:  [ 1  3  2 10  5  9  4  6  8  7]

The MARCOS method

The MARCOS method obtains utility function values for the alternatives, which then have to be ranked in descending order. MARCOS does not require the decision-maker to select a normalization method because it uses its own normalization of the decision matrix. The ranking is generated by the rank_preferences method from the additions submodule, providing the utility function values pref as the argument and setting the parameter reverse to True because we need to sort the preferences descendingly.

import numpy as np
from pyrepo_mcda.mcda_methods import MARCOS
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[6.257, 4.217, 4.217, 6.257, 3.000, 4.217, 5.000, 3.557, 3.557, 3.557, 3.000, 5.000, 4.718, 3.557, 3.557, 2.080, 3.557, 3.000, 4.718, 3.557, 2.080],
[4.217, 6.804, 7.000, 5.000, 7.000, 6.804, 5.593, 5.593, 6.804, 7.000, 5.000, 7.612, 5.593, 6.257, 5.000, 7.000, 5.593, 5.593, 6.804, 5.593, 5.593],
[4.718, 5.593, 6.257, 5.000, 4.718, 5.000, 5.593, 4.718, 5.593, 5.000, 3.557, 6.257, 5.000, 4.718, 4.718, 5.000, 3.557, 5.593, 5.593, 3.557, 4.217],
[5.000, 6.804, 5.000, 3.000, 5.000, 6.257, 7.612, 3.557, 5.000, 6.257, 6.257, 5.593, 6.257, 5.000, 5.593, 7.000, 5.000, 6.257, 5.000, 3.557, 4.217],
[3.557, 5.593, 6.804, 3.000, 5.000, 7.000, 5.593, 5.000, 6.257, 7.000, 5.593, 7.612, 6.257, 6.257, 5.000, 6.257, 5.593, 7.000, 5.000, 4.718, 5.000],
[6.257, 3.000, 4.217, 5.000, 3.557, 3.000, 4.217, 3.000, 4.217, 3.000, 2.466, 3.000, 4.217, 3.557, 5.000, 3.000, 4.217, 3.557, 2.080, 5.000, 3.000],
[4.217, 5.000, 6.257, 5.593, 3.557, 5.593, 4.217, 5.593, 5.000, 6.257, 3.557, 5.000, 6.257, 5.593, 5.593, 7.000, 6.257, 5.000, 6.257, 5.593, 5.000],
[7.612, 1.442, 3.000, 9.000, 2.080, 3.000, 1.442, 2.080, 1.442, 3.000, 1.000, 1.442, 2.080, 3.000, 3.000, 1.000, 1.442, 1.442, 3.000, 2.080, 1.000]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.127, 0.159, 0.060, 0.075, 0.043, 0.051, 0.075, 0.061, 0.053, 0.020, 0.039, 0.022, 0.017, 0.027, 0.022, 0.039, 0.017, 0.035, 0.015, 0.024, 0.016])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([-1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])

# Create the MARCOS method object. MARCOS does not require the decision-maker to select the normalization method because it has its own decision matrix normalization method.
marcos = MARCOS()

# Calculate the MARCOS preference values of alternatives
pref = marcos(matrix, weights, types)

# Generate the ranking of alternatives by sorting preference values descendingly (reverse = True means sorting in descending order)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [0.5244 0.8457 0.7035 0.7963 0.8432 0.499  0.722  0.2906]
Ranking:  [6 1 5 3 2 7 4 8]

The CRADIS method

The CRADIS method obtains utility function values for the alternatives, which then have to be ranked in descending order. You can select the normalization method of the decision matrix when initializing the CRADIS object; if you do not provide one, it defaults to linear_normalization. The ranking is generated by the rank_preferences method from the additions submodule, providing the utility function values pref as the argument and setting the parameter reverse to True because we need to sort the preferences descendingly.

import numpy as np
from pyrepo_mcda.mcda_methods import CRADIS
from pyrepo_mcda import normalizations as norms
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[1.82, 1.59, 2.62, 2.62, 4.31, 3.3, 2.29, 3.3, 4.31, 5.31, 2.29, 1.26, 0.36, 30, 10, 5.02],
[1.82, 1.59, 2.62, 2.62, 3.63, 3.3, 2.29, 3.3, 4.31, 6., 2.29, 1.26, 0.54, 40., 11.5, 6.26],
[2.88, 2.62, 3.3, 3., 4.64, 3.91, 2.52, 3.91, 3.3, 6., 3.3, 1.44, 0.75, 50., 12.5, 8.97],
[1.82, 1.59, 2.62, 3.17, 3.63, 3.3, 2.29, 3.3, 4.31, 6., 3.3, 2., 0.57, 65., 17.5, 8.79],
[3.11, 3., 3.91, 4., 5., 4.58, 3.3, 4., 2.29, 5., 3.91, 2.88, 1.35, 100., 16.5, 11.68],
[2.88, 2.29, 3.63, 3.63, 5., 4.31, 3.3, 4.31, 2.88, 6., 4.31, 2.29, 1.2, 100., 15.5, 12.9]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.07, 0.05, 0.05, 0.06, 0.09, 0.06, 0.06, 0.06, 0.05, 0.07, 0.05, 0.05, 0.09, 0.06, 0.07, 0.06])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([-1., -1., -1., -1., -1., -1., -1., -1.,  1.,  1., -1., -1.,  1.,  1., -1., -1.])

# Create the CRADIS method object. The default normalization for CRADIS is ``linear_normalization`` but you can select others.
cradis = CRADIS(normalization_method=norms.linear_normalization)

# Calculate the CRADIS preference values of alternatives
pref = cradis(matrix, weights, types)

# Generate the ranking of alternatives by sorting preference values descendingly (reverse = True means sorting in descending order)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [0.7943 0.8213 0.6423 0.7375 0.5803 0.6125]
Ranking:  [2 1 4 3 6 5]

The PROMETHEE II method

The PROMETHEE II method does not require the decision-maker to provide a normalization method. It requires a list with a preference function for each criterion, selected from the six available preference functions: Type 1 _usual_function, Type 2 _ushape_function, Type 3 _vshape_function, Type 4 _level_function, Type 5 _linear_function, and Type 6 _gaussian_function.

If the decision-maker does not provide the preference_functions list, it is generated automatically, and every preference function in this list is set to the default Type 1 _usual_function.

Depending on the chosen preference function, PROMETHEE II requires the p parameter, the q parameter, or both. Type 1 _usual_function requires neither p nor q. Type 2 _ushape_function requires q. Type 3 _vshape_function requires p. Type 4 _level_function, Type 5 _linear_function, and Type 6 _gaussian_function require both p and q.

p is a vector with the absolute preference threshold for each criterion, and q is a vector with the indifference threshold for each criterion.

If the decision-maker does not provide p or q parameters, they are set automatically.
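As an illustration, the Type 5 _linear_function corresponds to the standard PROMETHEE V-shape-with-indifference criterion: no preference for differences below q, full preference above p, and linear interpolation in between. The sketch below follows the textbook definition; the library's internal implementation may differ in detail:

```python
import numpy as np

def linear_preference(d, q, p):
    """Type 5 (linear) preference: 0 below q, 1 above p, linear between."""
    return np.clip((d - q) / (p - q), 0.0, 1.0)

# Difference d between two alternatives on one criterion, with q = 1 and p = 2
for d in [0.5, 1.0, 1.5, 2.0, 3.0]:
    print(d, linear_preference(d, q=1.0, p=2.0))
```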

The ranking is generated by the rank_preferences method from the additions submodule, providing the preference values pref as the argument and setting the parameter reverse to True because we need to sort the preferences descendingly.

import numpy as np
from pyrepo_mcda.mcda_methods import PROMETHEE_II
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[8, 7, 2, 1],
[5, 3, 7, 5],
[7, 5, 6, 4],
[9, 9, 7, 3],
[11, 10, 3, 7],
[6, 9, 5, 4]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.4, 0.3, 0.1, 0.2])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([1, 1, 1, 1])

# Create the PROMETHEE II method object. PROMETHEE II does not require normalization method.
promethee_II = PROMETHEE_II()

# provide a preference function for each criterion, selected from the six preference functions available for PROMETHEE II
preference_functions = [promethee_II._linear_function for pf in range(len(weights))]

# provide p, q, or both p and q parameters, depending on the chosen preference function
p = 2 * np.ones(len(weights))
q = 1 * np.ones(len(weights))

# Calculate the PROMETHEE II preference values of alternatives
pref = promethee_II(matrix, weights, types, preference_functions = preference_functions, p = p, q = q)

# Generate the ranking of alternatives by sorting preference values descendingly (reverse = True means sorting in descending order)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [-0.26 -0.52 -0.22  0.36  0.7  -0.06]
Ranking:  [5 6 4 2 1 3]

Usage examples for the other preference functions, using the matrix, weights, types, p, and q provided above for PROMETHEE II

Usual

promethee_II = PROMETHEE_II()
preference_functions = [promethee_II._usual_function for pf in range(len(weights))]
pref = promethee_II(matrix, weights, types, preference_functions = preference_functions)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [-0.28 -0.5  -0.24  0.32  0.84 -0.14]
Ranking:  [5 6 4 2 1 3]

U-shape

promethee_II = PROMETHEE_II()
preference_functions = [promethee_II._ushape_function for pf in range(len(weights))]
pref = promethee_II(matrix, weights, types, preference_functions = preference_functions, q = q)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [-0.26 -0.52 -0.22  0.36  0.7  -0.06]
Ranking:  [5 6 4 2 1 3]

V-shape

promethee_II = PROMETHEE_II()
preference_functions = [promethee_II._vshape_function for pf in range(len(weights))]
pref = promethee_II(matrix, weights, types, preference_functions = preference_functions, p = p)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [-0.27 -0.51 -0.23  0.34  0.77 -0.1 ]
Ranking:  [5 6 4 2 1 3]

Level

promethee_II = PROMETHEE_II()
preference_functions = [promethee_II._level_function for pf in range(len(weights))]
pref = promethee_II(matrix, weights, types, preference_functions = preference_functions, p = p, q = q)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [-0.25 -0.46 -0.22  0.32  0.65 -0.04]
Ranking:  [5 6 4 2 1 3]

Linear

promethee_II = PROMETHEE_II()
preference_functions = [promethee_II._linear_function for pf in range(len(weights))]
pref = promethee_II(matrix, weights, types, preference_functions = preference_functions, p = p, q = q)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [-0.26 -0.52 -0.22  0.36  0.7  -0.06]
Ranking:  [5 6 4 2 1 3]
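For reference, the Type 5 linear preference function used above can be sketched in plain numpy, assuming the standard PROMETHEE definition with indifference threshold q and preference threshold p; the library's ``_linear_function`` is the reference implementation:

```python
import numpy as np

def linear_preference(d, q, p):
    # Standard PROMETHEE Type 5 (linear) preference function applied to
    # a criterion difference d: 0 below q, 1 above p, linear in between
    return np.clip((d - q) / (p - q), 0, 1)

# With q = 1 and p = 2, as in the examples above
vals = linear_preference(np.array([0.5, 1.5, 3.0]), q=1, p=2)
print(vals)
```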

Gaussian

promethee_II = PROMETHEE_II()
preference_functions = [promethee_II._gaussian_function for pf in range(len(weights))]
pref = promethee_II(matrix, weights, types, preference_functions = preference_functions, p = p, q = q)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [-0.2339 -0.4536 -0.2213  0.3048  0.6569 -0.0528]
Ranking:  [5 6 4 2 1 3]

The PROSA-C method

The PROSA-C method is based on the PROMETHEE II method and is therefore used similarly. It does not require the decision-maker to provide a normalization method. It requires a list with a preference function for each criterion, selected from the six available preference functions: Type 1 _usual_function, Type 2 _ushape_function, Type 3 _vshape_function, Type 4 _level_function, Type 5 _linear_function, Type 6 _gaussian_function.

If the decision-maker does not provide the preference_functions list, it is generated automatically with every entry set to the default preference function, Type 1 _usual_function.

Depending on the chosen preference function, PROSA-C requires the p parameter, the q parameter, or both. Type 1 _usual_function requires neither p nor q. Type 2 _ushape_function requires the q parameter. Type 3 _vshape_function requires the p parameter. Type 4 _level_function, Type 5 _linear_function, and Type 6 _gaussian_function require both the p and q parameters.

p is a vector with the absolute preference threshold for each criterion. q is a vector with the indifference threshold for each criterion.

If the decision-maker does not provide p or q parameters, they are set automatically.

PROSA-C enables reducing criteria compensation using the sustainability coefficient s, an additional parameter of this method compared to PROMETHEE II.

The s parameter is a vector with a sustainability coefficient value for each criterion. It is recommended to set s in the range from 0 to 0.5. If the decision-maker does not provide the s parameter, it is set automatically to the default value of 0.3 for each criterion.

The ranking is generated using the rank_preferences method from the additions submodule, providing the utility function values pref as the argument and setting the reverse parameter to True to sort preferences in descending order.

import numpy as np
from pyrepo_mcda.mcda_methods import PROSA_C
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[38723, 34913, 25596, 34842, 22570, 39773, 19500, 34525, 16486],
[33207, 32085, 2123, 32095, 1445, 17485, 868, 16958, 958],
[0, 0.2, 5, 0.2, 0.2, 0.2, 99, 99, 99],
[3375, 3127, 3547, 3115, 3090, 4135, 3160, 4295, 3653],
[11.36, 12.78, 12.78, 12.86, 12.86, 17, 12.86, 17, 12.86],
[-320.9, -148.4, -148.4, -9.9, -9.9, 0, -9.9, 0, -9.9],
[203.7, 463, 356.2, 552.5, 295, 383, 264, 352, 264],
[0, 11.7, 44.8, 11.7, 95.9, 95.9, 116.8, 116.8, 164.9],
[0, 4.9, 10.7, 5.4, 11.2, 11.2, 11.2, 11.2, 11.2],
[1, 1, 1, 3.5, 4, 4, 4, 4, 4],
[21.5, 47.9, 27.7, 39.7, 1.5, 22.7, 2.7, 23.9, 1],
[0, 3.7, 4.5, 10.3, 11.5, 11.5, 11.3, 11.3, 11.4]])

matrix = matrix.T

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.3333, 0.1667, 0.1667, 0.3333, 0.25, 0.75, 1, 1, 0.4, 0.20, 0.40, 1])
weights = weights / np.sum(weights)

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1, and cost criteria by -1.
types = np.array([-1, -1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1])

# Create the PROSA-C method object. PROSA-C does not require a normalization method.
prosa_c = PROSA_C()

# provide a preference function for each criterion, selected from the six preference functions available in PROSA-C
preference_functions = [prosa_c._linear_function for pf in range(len(weights))]

# provide p, q, or both p and q parameters, depending on the chosen preference function, and the s parameter
p = np.array([2100, 5000, 50, 200, 5, 20, 100, 80, 4, 2, 23, 3])
q = np.array([420, 1000, 10, 40, 1, 7, 50, 20, 1, 1, 4.6, 1])
s = np.array([0.4, 0.5, 0.3, 0.4, 0.3, 0.4, 0.3, 0.3, 0.2, 0.4, 0.4, 0.2])

# Calculate the PROSA-C preference values of alternatives
pref = prosa_c(matrix, weights, types, preference_functions = preference_functions, p = p, q = q, s = s)

# Generate ranking of alternatives by sorting preference values in descending order (reverse = True)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [-0.5921 -0.6014 -0.324  -0.4381  0.2791 -0.0703  0.3739  0.0451  0.3592]
Ranking:  [8 9 6 7 3 5 1 4 2]

The SAW method

The SAW method is used to obtain utility function values for alternatives, which are then ranked in descending order. You can select the normalization method of the decision matrix during SAW method object initialization; if you do not provide one, it defaults to linear_normalization. The ranking is generated using the rank_preferences method from the additions submodule, providing the utility function values pref as the argument and setting the reverse parameter to True to sort preferences in descending order.

import numpy as np
from pyrepo_mcda.mcda_methods import SAW
from pyrepo_mcda import normalizations as norms
from pyrepo_mcda.additions import rank_preferences


# provide the decision matrix as a numpy.ndarray
matrix = np.array([[0.75, 0.50, 0.75, 0, 0, 0, 1],
[0.75, 1, 0.75, 0, 0, 0, 0.75],
[0.75, 0.75, 0.75, 0, 0.50, 0.25, 1],
[0.50, 0.50, 0.75, 1, 0.50, 0, 0.75]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.1, 0.1, 0.1, 0.15, 0.2, 0.25, 0.1])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1, and cost criteria by -1.
types = np.array([1, 1, 1, 1, 1, 1, 1])

# Create the SAW method object. The default normalization for SAW is ``linear_normalization`` but you can select others.
saw = SAW(normalization_method=norms.linear_normalization)

# Calculate the SAW preference values of alternatives
pref = saw(matrix, weights, types)

# Generate ranking of alternatives by sorting preference values in descending order (reverse = True)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [0.35   0.375  0.825  0.6417]
Ranking:  [4 3 1 2]
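Because all criteria in this example are profit criteria, the SAW computation can be cross-checked with a few lines of plain numpy: linear normalization divides each column by its maximum, and the utility is the weighted sum. This is a sketch only; cost criteria would be normalized differently, and the library's SAW is the reference implementation:

```python
import numpy as np

matrix = np.array([[0.75, 0.50, 0.75, 0, 0, 0, 1],
                   [0.75, 1, 0.75, 0, 0, 0, 0.75],
                   [0.75, 0.75, 0.75, 0, 0.50, 0.25, 1],
                   [0.50, 0.50, 0.75, 1, 0.50, 0, 0.75]])
weights = np.array([0.1, 0.1, 0.1, 0.15, 0.2, 0.25, 0.1])

# Linear normalization for profit criteria: divide each column by its maximum
norm = matrix / matrix.max(axis=0)

# SAW utility function values: weighted sum of normalized performances
pref = norm @ weights
print(np.round(pref, 4))  # [0.35   0.375  0.825  0.6417]
```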

The AHP method

The first step of the classical AHP method requires a matrix of significance comparisons of criteria. Next, the consistency of the criteria comparison matrix is checked. Criteria weights are calculated from the criteria comparison matrix using one of three methods: _calculate_eigenvector, _normalized_column_sum or _geometric_mean. Then a comparison matrix of the alternatives is provided for each criterion. The AHP utility function values are calculated by the _classic_ahp function. The ranking is generated using the rank_preferences method from the additions submodule, providing the utility function values pref as the argument and setting the reverse parameter to True to sort preferences in descending order.

Classical AHP

import numpy as np
from pyrepo_mcda.mcda_methods import AHP
from pyrepo_mcda.additions import rank_preferences

# Step 1 - provide matrix for criteria comparisons
PCcriteria = np.array([[1, 1, 5, 3], [1, 1, 5, 3],
[1/5, 1/5, 1, 1/3], [1/3, 1/3, 3, 1]])

# Create the object of the AHP method
ahp = AHP()

# Step 2 - check consistency of matrix with criteria comparison
ahp._check_consistency(PCcriteria)

# Step 3 - compute priority vector of criteria (weights)
weights = ahp._calculate_eigenvector(PCcriteria)

# Step 4 - provide pairwise comparison matrices of the alternatives for each criterion
PCM1 = np.array([[1, 5, 1, 1, 1/3, 3],
[1/5, 1, 1/3, 1/5, 1/7, 1],
[1, 3, 1, 1/3, 1/5, 1],
[1, 5, 3, 1, 1/3, 3],
[3, 7, 5, 3, 1, 7],
[1/3, 1, 1, 1/3, 1/7, 1]])
PCM2 = np.array([[1, 7, 3, 1/3, 1/3, 1/3],
[1/7, 1, 1/3, 1/7, 1/9, 1/7],
[1/3, 3, 1, 1/5, 1/5, 1/5],
[3, 7, 5, 1, 1, 1],
[3, 9, 5, 1, 1, 1],
[3, 7, 5, 1, 1, 1]])
PCM3 = np.array([[1, 1/9, 1/7, 1/9, 1, 1/5],
[9, 1, 1, 1, 5, 3],
[7, 1, 1, 1, 5, 1],
[9, 1, 1, 1, 7, 3],
[1, 1/5, 1/5, 1/7, 1, 1/3],
[5, 1/3, 1, 1/3, 3, 1]])
PCM4 = np.array([[1, 1/5, 1/5, 1/3, 1/7, 1/5],
[5, 1, 1, 3, 1/3, 1],
[5, 1, 1, 1, 1/3, 1],
[3, 1/3, 1, 1, 1/7, 1],
[7, 3, 3, 7, 1, 5],
[5, 1, 1, 1, 1/5, 1]])

# Form pairwise comparison matrices of the alternatives for each criterion
alt_matrices = []
alt_matrices.append(PCM1)
alt_matrices.append(PCM2)
alt_matrices.append(PCM3)
alt_matrices.append(PCM4)

# Step 5 - Consistency check of pairwise comparison matrices of the alternatives

# Compute local priority vectors of alternatives
# select the method to calculate priority vector
# the default method to calculate priority vector is ahp._calculate_eigenvector
pref = ahp._classic_ahp(alt_matrices, weights, calculate_priority_vector_method = ahp._calculate_eigenvector)

# Step 6 - Generate ranking of alternatives by sorting preference values in descending order (reverse = True)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [0.1174 0.0713 0.0947 0.2116 0.3501 0.1548]
Ranking:  [4 6 5 2 1 3]
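The priority vector computed in Step 3 is the normalized principal eigenvector of the pairwise comparison matrix. A minimal numpy sketch of this idea (``ahp._calculate_eigenvector`` is the reference implementation):

```python
import numpy as np

PCcriteria = np.array([[1, 1, 5, 3], [1, 1, 5, 3],
                       [1/5, 1/5, 1, 1/3], [1/3, 1/3, 3, 1]])

# Eigenvector belonging to the largest (principal) eigenvalue
eigvals, eigvecs = np.linalg.eig(PCcriteria)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])

# Normalize so the weights sum to 1 (this also fixes the sign)
weights = principal / principal.sum()
print(np.round(weights, 4))
```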

Another usage of AHP - for numerical values of performances and weights

If you have a decision matrix with numerical performance values, a vector with numerical criteria weights, and determined criteria types (profit or cost), you can use the AHP method like other MCDA methods (for example, SAW):

import numpy as np
from pyrepo_mcda.mcda_methods import AHP
from pyrepo_mcda import normalizations as norms
from pyrepo_mcda.additions import rank_preferences

# provide the decision matrix as a numpy.ndarray
matrix = np.array([[0.75, 0.50, 0.75, 0, 0, 0, 1],
[0.75, 1, 0.75, 0, 0, 0, 0.75],
[0.75, 0.75, 0.75, 0, 0.50, 0.25, 1],
[0.50, 0.50, 0.75, 1, 0.50, 0, 0.75]])

# provide criteria weights as a numpy.ndarray. All weights must sum to 1.
weights = np.array([0.1, 0.1, 0.1, 0.15, 0.2, 0.25, 0.1])

# provide criteria types as a numpy.ndarray. Profit criteria are represented by 1, and cost criteria by -1.
types = np.array([1, 1, 1, 1, 1, 1, 1])

# Create the AHP method object. The default normalization for AHP is ``sum_normalization`` but you can select others.
ahp = AHP(normalization_method=norms.sum_normalization)

# Calculate the AHP preference values of alternatives
pref = ahp(matrix, weights, types)

# Generate ranking of alternatives by sorting preference values in descending order (reverse = True)
rank = rank_preferences(pref, reverse=True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [0.099  0.1101 0.4581 0.3328]
Ranking:  [4 3 1 2]

The COCOSO method

import numpy as np
from pyrepo_mcda.mcda_methods import COCOSO
from pyrepo_mcda.additions import rank_preferences

# Provide decision matrix
matrix = np.array([[60, 0.4, 2540, 500, 990],
[6.35, 0.15, 1016, 3000, 1041],
[6.8, 0.1, 1727.2, 1500, 1676],
[10, 0.2, 1000, 2000, 965],
[2.5, 0.1, 560, 500, 915],
[4.5, 0.08, 1016, 350, 508],
[3, 0.1, 1778, 1000, 920]])

# Provide criteria weights
weights = np.array([0.036, 0.192, 0.326, 0.326, 0.12])

# Provide criteria types
types = np.array([1, -1, 1, 1, 1])

# Initialize the COCOSO method object
cocoso = COCOSO()

# Calculate preference values
pref = cocoso(matrix, weights, types)

# Rank alternatives according to preference values. Best scored alternative has the highest preference value
rank = rank_preferences(pref, reverse = True)

print('Preference values: ', np.round(pref, 4))
print('Ranking: ', rank)

Output

Preference values:  [2.0413 2.788  2.8823 2.416  1.2987 1.4431 2.5191]
Ranking:  [5 2 1 4 7 6 3]

The VMCM method

Application of the VMCM method requires providing a decision matrix, criteria weights, criteria types, a pattern, and an anti-pattern. It is recommended to eliminate criteria with low values of the significance coefficient calculated with the vmcm._elimination function. Criteria weights can be determined using the vmcm._weighting function. The pattern and anti-pattern can be calculated using the vmcm._pattern_determination function. Classes are assigned to evaluated objects with the vmcm._classification function.

import numpy as np
from pyrepo_mcda.mcda_methods import VMCM
from pyrepo_mcda.additions import rank_preferences

# Provide decision matrix
matrix = np.array([[60, 0.4, 2540, 500, 990],
[6.35, 0.15, 1016, 3000, 1041],
[6.8, 0.1, 1727.2, 1500, 1676],
[10, 0.2, 1000, 2000, 965],
[2.5, 0.1, 560, 500, 915],
[4.5, 0.08, 1016, 350, 508],
[3, 0.1, 1778, 1000, 920]])

# Initialize the VMCM method object
vmcm = VMCM()

# Print the criteria to be eliminated
vmcm._elimination(matrix)

# Provide criteria types
types = np.array([1, -1, 1, 1, 1])

# Determine criteria weights
weights = vmcm._weighting(matrix)

# Determine pattern and anti-pattern
pattern, antipattern = vmcm._pattern_determination(matrix, types)

# Calculate value of the synthetic measure for each object
pref = vmcm(matrix, weights, types, pattern, antipattern)

# Classify evaluated objects according to synthetic measure values
classes = vmcm._classification(pref)

# Rank evaluated objects according to synthetic measure values. Best scored alternative has the highest preference value
rank = rank_preferences(pref, reverse = True)

print('Preference values: ', np.round(pref, 4))
print('Classes: ', classes)
print('Ranking: ', rank)

Output

Elimination of variables stage (significance coefficient of features):
C1 = 1.5590
C2 = 0.6994
C3 = 0.4878
C4 = 0.7698
C5 = 0.3446
Criteria to eliminate:
None
Preference values:  [ 0.5427  1.04    1.0445  0.558  -0.091   0.0133  0.6857]
Classes:  [2. 1. 1. 2. 4. 4. 2.]
Ranking:  [5 2 1 4 7 6 3]

Methods for determining compromise rankings

The Copeland Method for compromise ranking

This method is used to generate a compromise ranking based on several rankings provided by different MCDA methods. The copeland method requires a two-dimensional matrix matrix with the different rankings in its columns. copeland returns a vector with the compromise ranking.

import numpy as np
from pyrepo_mcda import compromise_rankings as compromises

# Provide matrix with different rankings given by different MCDA methods in columns
matrix = np.array([[7, 8, 7, 6, 7, 7],
[4, 7, 5, 7, 5, 4],
[8, 9, 8, 8, 9, 8],
[1, 4, 1, 1, 1, 1],
[2, 2, 2, 4, 3, 2],
[3, 1, 4, 3, 2, 3],
[10, 5, 10, 9, 8, 10],
[6, 3, 6, 5, 4, 6],
[9, 10, 9, 10, 10, 9],
[5, 6, 3, 2, 6, 5]])

# Calculate the compromise ranking using ``copeland`` method
result = compromises.copeland(matrix)

print('Copeland compromise ranking: ', result)

Output

Copeland compromise ranking:  [ 7  6  8  1  2  3  9  5 10  4]
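The idea behind the Copeland method can be sketched in plain numpy: for every pair of alternatives, count across the input rankings how often one is ranked better than the other, score each alternative with wins minus losses, and rank by score. This sketch assumes simple majority counting; the library's ``copeland`` is the reference implementation and may handle ties differently:

```python
import numpy as np

def copeland_sketch(matrix):
    # matrix holds rankings in columns; rows are alternatives
    m = matrix.shape[0]
    scores = np.zeros(m)
    for i in range(m):
        for k in range(m):
            if i == k:
                continue
            # alternative i "wins" against k if more rankings place it better
            wins = np.sum(matrix[i] < matrix[k])
            losses = np.sum(matrix[i] > matrix[k])
            scores[i] += np.sign(wins - losses)
    # the highest score receives rank 1
    return np.argsort(np.argsort(-scores)) + 1

ranks = np.array([[3, 2, 3],
                  [2, 3, 2],
                  [1, 1, 1]])
print(copeland_sketch(ranks))  # [3 2 1]
```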

The Dominance Directed Graph

This method is used to generate a compromise ranking based on several rankings provided by different MCDA methods. The dominance_directed_graph method requires a two-dimensional matrix matrix with the different rankings in its columns. dominance_directed_graph returns a vector with the compromise ranking.

import numpy as np
from pyrepo_mcda import compromise_rankings as compromises

# Provide matrix with different rankings given by different MCDA methods in columns
matrix = np.array([[3, 2, 3],
[2, 3, 2],
[1, 1, 1]])

# Calculate the compromise ranking using ``dominance_directed_graph`` method
result = compromises.dominance_directed_graph(matrix)

print('Dominance directed graph compromise ranking: ', result)

Output

Dominance directed graph compromise ranking:  [3 2 1]

The Rank Position compromise ranking method

This method is used to generate a compromise ranking based on several rankings provided by different MCDA methods. The rank_position_method method requires a two-dimensional matrix matrix with the different rankings in its columns. rank_position_method returns a vector with the compromise ranking.

import numpy as np
from pyrepo_mcda import compromise_rankings as compromises

# Provide matrix with different rankings given by different MCDA methods in columns
matrix = np.array([[3, 2, 3],
[2, 3, 2],
[1, 1, 1]])

# Calculate the compromise ranking using ``rank_position_method`` method
result = compromises.rank_position_method(matrix)

print('Rank position compromise ranking: ', result)

Output

Rank position compromise ranking:  [3 2 1]

The Improved Borda Rule compromise ranking method for MULTIMOORA

This method is used to generate a compromise ranking based on the three rankings provided by the particular approaches (RS, RP and FMF) of the MULTIMOORA method. The improved_borda_rule method requires a two-dimensional matrix prefs with the preference values of the three approaches in its columns and a two-dimensional matrix ranks with the three rankings in its columns. improved_borda_rule returns a vector with the compromise ranking.

import numpy as np
from pyrepo_mcda import compromise_rankings as compromises

# Provide matrix with different preference values given by different MCDA methods in columns
prefs = np.array([[4.94364901e-01, 4.56157867e-02, 3.85006756e-09],
[5.26950959e-01, 6.08111832e-02, 9.62516889e-09],
[6.77457681e-01, 0.00000000e+00, 4.45609671e-08]])

# Provide matrix with different rankings given by different MCDA methods in columns
ranks = np.array([[3, 2, 3],
[2, 3, 2],
[1, 1, 1]])

# Calculate the compromise ranking using ``improved_borda_rule`` method
result = compromises.improved_borda_rule(prefs, ranks)

print('Improved Borda Rule compromise ranking: ', result)

Output

Improved Borda Rule compromise ranking:  [2 3 1]

Correlation coefficients

Spearman correlation coefficient

This method is used to calculate the correlation between two different rankings. It requires two vectors R and Q with rankings of the same size. It returns the correlation value.

import numpy as np
from pyrepo_mcda import correlations as corrs

# Provide two vectors with rankings obtained with different MCDA methods
R = np.array([1, 2, 3, 4, 5])
Q = np.array([1, 3, 2, 4, 5])

# Calculate the correlation using ``spearman`` coefficient
coeff = corrs.spearman(R, Q)
print('Spearman coeff: ', np.round(coeff, 4))

Output

Spearman coeff:  0.9
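The value can be verified by hand from the standard Spearman formula, rs = 1 - 6·Σd² / (n(n² - 1)), where d is the difference between the two ranks of each alternative:

```python
import numpy as np

R = np.array([1, 2, 3, 4, 5])
Q = np.array([1, 3, 2, 4, 5])

n = len(R)
d = R - Q
rs = 1 - (6 * np.sum(d**2)) / (n * (n**2 - 1))
print(rs)  # 0.9
```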

Weighted Spearman correlation coefficient

This method is used to calculate the correlation between two different rankings. It requires two vectors R and Q with rankings of the same size. It returns the correlation value.

import numpy as np
from pyrepo_mcda import correlations as corrs

# Provide two vectors with rankings obtained with different MCDA methods
R = np.array([1, 2, 3, 4, 5])
Q = np.array([1, 3, 2, 4, 5])

# Calculate the correlation using ``weighted_spearman`` coefficient
coeff = corrs.weighted_spearman(R, Q)
print('Weighted Spearman coeff: ', np.round(coeff, 4))

Output

Weighted Spearman coeff:  0.8833
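The weighted Spearman coefficient additionally weights each rank difference by the positions involved, so disagreement near the top of the rankings lowers the value more. A sketch of the standard formula, rw = 1 - 6·Σ[d²((n - Ri + 1) + (n - Qi + 1))] / (n⁴ + n³ - n² - n):

```python
import numpy as np

R = np.array([1, 2, 3, 4, 5])
Q = np.array([1, 3, 2, 4, 5])

n = len(R)
num = 6 * np.sum((R - Q)**2 * ((n - R + 1) + (n - Q + 1)))
rw = 1 - num / (n**4 + n**3 - n**2 - n)
print(np.round(rw, 4))  # 0.8833
```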

Similarity rank coefficient WS

This method is used to calculate the similarity between two different rankings. It requires two vectors R and Q with rankings of the same size. It returns the similarity value.

import numpy as np
from pyrepo_mcda import correlations as corrs

# Provide two vectors with rankings obtained with different MCDA methods
R = np.array([1, 2, 3, 4, 5])
Q = np.array([1, 3, 2, 4, 5])

# Calculate the similarity using ``WS_coeff`` coefficient
coeff = corrs.WS_coeff(R, Q)
print('WS coeff: ', np.round(coeff, 4))

Output

WS coeff:  0.8542
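The WS coefficient is asymmetric with respect to its arguments: differences are weighted by 2^(-Ri), so a swap at the top of the reference ranking R costs more than the same swap at the bottom. A sketch assuming the published formula WS = 1 - Σ 2^(-Ri)·|Ri - Qi| / max(|Ri - 1|, |Ri - n|):

```python
import numpy as np

R = np.array([1, 2, 3, 4, 5])
Q = np.array([1, 3, 2, 4, 5])

n = len(R)
ws = 1 - np.sum(2.0**(-R) * np.abs(R - Q)
                / np.maximum(np.abs(R - 1), np.abs(R - n)))
print(np.round(ws, 4))  # 0.8542
```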

Pearson correlation coefficient

This method is used to calculate the correlation between two different rankings. It requires two vectors R and Q with rankings of the same size. It returns the correlation value.

import numpy as np
from pyrepo_mcda import correlations as corrs

# Provide two vectors with rankings obtained with different MCDA methods
R = np.array([1, 2, 3, 4, 5])
Q = np.array([1, 3, 2, 4, 5])

# Calculate the correlation using ``pearson_coeff`` coefficient
coeff = corrs.pearson_coeff(R, Q)
print('Pearson coeff: ', np.round(coeff, 4))

Output

Pearson coeff:  0.9
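As a cross-check, the Pearson coefficient is also available directly in numpy via ``np.corrcoef``, which returns the full correlation matrix; the off-diagonal entry is the coefficient:

```python
import numpy as np

R = np.array([1, 2, 3, 4, 5])
Q = np.array([1, 3, 2, 4, 5])

coeff = np.corrcoef(R, Q)[0, 1]
print(np.round(coeff, 4))  # 0.9
```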

Methods for criteria weights determination

Entropy weighting method

This method is used to calculate criteria weights based on the performance values of alternatives provided in the decision matrix. It requires a two-dimensional decision matrix matrix with the performance values of alternatives in rows and criteria in columns. It returns a vector weights with criteria weights; all values in weights sum to 1.

import numpy as np
from pyrepo_mcda import weighting_methods as mcda_weights

matrix = np.array([[30, 30, 38, 29],
[19, 54, 86, 29],
[19, 15, 85, 28.9],
[68, 70, 60, 29]])

weights = mcda_weights.entropy_weighting(matrix)

print('Entropy weights: ', np.round(weights, 4))

Output

Entropy weights:  [0.463  0.3992 0.1378 0.    ]
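The entropy weighting idea can be sketched in plain numpy: normalize each column to a probability distribution, compute its Shannon entropy, and weight each criterion by its degree of diversification 1 - E. This is a sketch of the common formulation; the library's entropy_weighting is the reference implementation and may handle zero entries differently:

```python
import numpy as np

matrix = np.array([[30, 30, 38, 29],
                   [19, 54, 86, 29],
                   [19, 15, 85, 28.9],
                   [68, 70, 60, 29]])

m = matrix.shape[0]
# Normalize each column to a probability distribution
p = matrix / matrix.sum(axis=0)
# Shannon entropy per criterion, scaled to [0, 1] by ln(m)
E = -np.sum(p * np.log(p), axis=0) / np.log(m)
# Degree of diversification: more varied criteria get larger weights
d = 1 - E
weights = d / d.sum()
print(np.round(weights, 4))
```

Note that the nearly constant fourth criterion receives a weight close to zero, matching the library output above.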

CRITIC weighting method

This method is used to calculate criteria weights based on the performance values of alternatives provided in the decision matrix. It requires a two-dimensional decision matrix matrix with the performance values of alternatives in rows and criteria in columns. It returns a vector weights with criteria weights; all values in weights sum to 1.

import numpy as np
from pyrepo_mcda import weighting_methods as mcda_weights

matrix = np.array([[5000, 3, 3, 4, 3, 2],
[680, 5, 3, 2, 2, 1],
[2000, 3, 2, 3, 4, 3],
[600, 4, 3, 1, 2, 2],
[800, 2, 4, 3, 3, 4]])

weights = mcda_weights.critic_weighting(matrix)

print('CRITIC weights: ', np.round(weights, 4))

Output

CRITIC weights:  [0.157  0.2495 0.1677 0.1211 0.1541 0.1506]

Standard deviation weighting method

This method is used to calculate criteria weights based on the performance values of alternatives provided in the decision matrix. It requires a two-dimensional decision matrix matrix with the performance values of alternatives in rows and criteria in columns. It returns a vector weights with criteria weights; all values in weights sum to 1.

import numpy as np
from pyrepo_mcda import weighting_methods as mcda_weights

matrix = np.array([[0.619, 0.449, 0.447],
[0.862, 0.466, 0.006],
[0.458, 0.698, 0.771],
[0.777, 0.631, 0.491],
[0.567, 0.992, 0.968]])

weights = mcda_weights.std_weighting(matrix)

print('Standard deviation weights: ', np.round(weights, 4))

Output

Standard deviation weights:  [0.2173 0.2945 0.4882]
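Standard deviation weighting is simple enough to reproduce by hand: each criterion's weight is its column standard deviation divided by the sum of all columns' standard deviations (a sketch assuming the population standard deviation, which matches the output above):

```python
import numpy as np

matrix = np.array([[0.619, 0.449, 0.447],
                   [0.862, 0.466, 0.006],
                   [0.458, 0.698, 0.771],
                   [0.777, 0.631, 0.491],
                   [0.567, 0.992, 0.968]])

# Population standard deviation per criterion (numpy default, ddof=0)
std = matrix.std(axis=0)
weights = std / std.sum()
print(np.round(weights, 4))  # [0.2173 0.2945 0.4882]
```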

Equal weighting method

import numpy as np
from pyrepo_mcda import weighting_methods as mcda_weights

matrix = np.array([[0.619, 0.449, 0.447],
[0.862, 0.466, 0.006],
[0.458, 0.698, 0.771],
[0.777, 0.631, 0.491],
[0.567, 0.992, 0.968]])

weights = mcda_weights.equal_weighting(matrix)
print('Equal weights: ', np.round(weights, 3))

Output

Equal weights:  [0.333 0.333 0.333]

Gini coefficient-based weighting method

import numpy as np
from pyrepo_mcda import weighting_methods as mcda_weights

matrix = np.array([[29.4, 83, 47, 114, 12, 30, 120, 240, 170, 90, 1717.75],
[30, 38.1, 124.7, 117, 16, 60, 60, 60, 93, 70, 2389],
[29.28, 59.27, 41.13, 58, 16, 30, 60, 120, 170, 78, 239.99],
[33.6, 71, 55, 159, 23.6, 60, 240, 240, 132, 140, 2099],
[21, 59, 41, 66, 16, 24, 60, 120, 170, 70, 439],
[35, 65, 42, 134, 12, 60, 240, 240, 145, 60, 1087],
[47, 79, 54, 158, 19, 60, 120, 120, 360, 72, 2499],
[28.3, 62.3, 44.9, 116, 12, 30, 60, 60, 130, 90, 999.99],
[36.9, 28.6, 121.6, 130, 12, 60, 120, 120, 80, 80, 1099],
[32, 59, 41, 60, 16, 30, 120, 120, 170, 60, 302.96],
[28.4, 66.3, 48.6, 126, 12, 60, 240, 240, 132, 135, 1629],
[29.8, 46, 113, 47, 18, 50, 50, 50, 360, 72, 2099],
[20.2, 64, 80, 70, 8, 24, 60, 120, 166, 480, 699.99],
[33, 60, 44, 59, 12, 30, 60, 120, 170, 90, 388],
[29, 59, 41, 55, 16, 30, 60, 120, 170, 120, 299],
[29, 59, 41, 182, 12, 30, 30, 60, 94, 140, 249],
[29.8, 59.2, 41, 65, 16, 30, 60, 120, 160, 90, 219.99],
[28.8, 62.5, 41, 70, 12, 60, 120, 120, 170, 138, 1399.99],
[24, 40, 59, 60, 12, 10, 30, 30, 140, 78, 269.99],
[30, 60, 45, 201, 16, 30, 30, 30, 170, 90, 199.99]])

weights = mcda_weights.gini_weighting(matrix)
print('Gini coefficient-based weights: ', np.round(weights, 4))

Output

Gini coefficient-based weights:  [0.0362 0.0437 0.0848 0.0984 0.048  0.0842 0.1379 0.1125 0.0745 0.1107 0.169 ]

MEREC weighting method

import numpy as np
from pyrepo_mcda import weighting_methods as mcda_weights

matrix = np.array([[450, 8000, 54, 145],
[10, 9100, 2, 160],
[100, 8200, 31, 153],
[220, 9300, 1, 162],
[5, 8400, 23, 158]])

types = np.array([1, 1, -1, -1])

weights = mcda_weights.merec_weighting(matrix, types)
print('MEREC weights: ', np.round(weights, 4))

Output

MEREC weights:  [0.5752 0.0141 0.4016 0.0091]

Statistical variance weighting method

import numpy as np
from pyrepo_mcda import weighting_methods as mcda_weights

matrix = np.array([[0.619, 0.449, 0.447],
[0.862, 0.466, 0.006],
[0.458, 0.698, 0.771],
[0.777, 0.631, 0.491],
[0.567, 0.992, 0.968]])

weights = mcda_weights.stat_var_weighting(matrix)
print('Statistical variance weights: ', np.round(weights, 4))

Output

Statistical variance weights:  [0.3441 0.3497 0.3062]

CILOS weighting method

import numpy as np
from pyrepo_mcda import weighting_methods as mcda_weights

matrix = np.array([[3, 100, 10, 7],
[2.500, 80, 8, 5],
[1.800, 50, 20, 11],
[2.200, 70, 12, 9]])

types = np.array([-1, 1, -1, 1])

weights = mcda_weights.cilos_weighting(matrix, types)
print('CILOS weights: ', np.round(weights, 3))

Output

CILOS weights:  [0.334 0.22  0.196 0.25 ]

IDOCRIW weighting method

import numpy as np
from pyrepo_mcda import weighting_methods as mcda_weights

matrix = np.array([[3.0, 100, 10, 7],
[2.5, 80, 8, 5],
[1.8, 50, 20, 11],
[2.2, 70, 12, 9]])

types = np.array([-1, 1, -1, 1])

weights = mcda_weights.idocriw_weighting(matrix, types)
print('IDOCRIW weights: ', np.round(weights, 3))

Output

IDOCRIW weights:  [0.166 0.189 0.355 0.291]

Angle weighting method

import numpy as np
from pyrepo_mcda import weighting_methods as mcda_weights

matrix = np.array([[30, 30, 38, 29],
[19, 54, 86, 29],
[19, 15, 85, 28.9],
[68, 70, 60, 29]])

types = np.array([1, 1, 1, 1])

weights = mcda_weights.angle_weighting(matrix, types)
print('Angle weights: ', np.round(weights, 4))

Output

Angle weights:  [0.415  0.3612 0.2227 0.0012]

Coefficient of variation weighting method

import numpy as np
from pyrepo_mcda import weighting_methods as mcda_weights

matrix = np.array([[30, 30, 38, 29],
[19, 54, 86, 29],
[19, 15, 85, 28.9],
[68, 70, 60, 29]])

weights = mcda_weights.coeff_var_weighting(matrix)
print('Coefficient of variation weights: ', np.round(weights, 4))

Output

Coefficient of variation weights:  [0.4258 0.361  0.2121 0.0011]

Stochastic Multicriteria Acceptability Analysis Method - SMAA (VIKOR_SMAA)

import numpy as np
from pyrepo_mcda.mcda_methods import VIKOR_SMAA

# Provide decision matrix (here the same one as in the COCOSO example above)
matrix = np.array([[60, 0.4, 2540, 500, 990],
[6.35, 0.15, 1016, 3000, 1041],
[6.8, 0.1, 1727.2, 1500, 1676],
[10, 0.2, 1000, 2000, 965],
[2.5, 0.1, 560, 500, 915],
[4.5, 0.08, 1016, 350, 508],
[3, 0.1, 1778, 1000, 920]])

# Provide criteria types
types = np.array([1, -1, 1, 1, 1])

# Criteria number
n = matrix.shape[1]
# Number of weight vectors to generate for SMAA
iterations = 10000

# Create the object of the ``VIKOR_SMAA`` method
vikor_smaa = VIKOR_SMAA()
# Generate weight vectors for SMAA. Number of weight vectors is equal to ``iterations`` number. Vectors include ``n`` values.
weight_vectors = vikor_smaa._generate_weights(n, iterations)

# Calculate Rank acceptability index, Central weight vector and final ranking based on SMAA method combined with VIKOR
rank_acceptability_index, central_weight_vector, rank_scores = vikor_smaa(matrix, weight_vectors, types)

Distance metrics

Here are two examples of using distance metrics: the Euclidean distance euclidean and the Manhattan distance manhattan. Usage of the other distance metrics provided in the distance_metrics module is analogous.

Euclidean distance

This method is used to calculate the Euclidean distance between two vectors A and B containing real values. The sizes of A and B must be the same. This method returns the value of the Euclidean distance between vectors A and B.

import numpy as np
from pyrepo_mcda import distance_metrics as dists

A = np.array([0.165, 0.113, 0.015, 0.019])
B = np.array([0.227, 0.161, 0.053, 0.130])

dist = dists.euclidean(A, B)
print('Distance: ', np.round(dist, 4))

Output

Distance:  0.1411

Manhattan distance

This method is used to calculate the Manhattan distance between two vectors A and B containing real values. The sizes of A and B must be the same. This method returns the value of the Manhattan distance between vectors A and B.

import numpy as np
from pyrepo_mcda import distance_metrics as dists

A = np.array([0.165, 0.113, 0.015, 0.019])
B = np.array([0.227, 0.161, 0.053, 0.130])

dist = dists.manhattan(A, B)
print('Distance: ', np.round(dist, 4))

Output

Distance:  0.259
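Both metrics reduce to one-line numpy expressions, which can serve as a cross-check of the library results:

```python
import numpy as np

A = np.array([0.165, 0.113, 0.015, 0.019])
B = np.array([0.227, 0.161, 0.053, 0.130])

# Euclidean: square root of the sum of squared differences
euclidean = np.sqrt(np.sum((A - B)**2))
# Manhattan: sum of absolute differences
manhattan = np.sum(np.abs(A - B))
print(np.round(euclidean, 4), np.round(manhattan, 4))  # 0.1411 0.259
```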

Normalization methods

Here is an example of vector normalization usage. The other normalizations provided in the normalizations module, namely minmax_normalization, max_normalization, sum_normalization, linear_normalization and multimoora_normalization, are used in an analogous way.

Vector normalization

This method is used to normalize the decision matrix matrix. It requires a decision matrix matrix with the performance values of alternatives in rows and criteria in columns, and a vector with criteria types types. This method returns the normalized matrix.

import numpy as np
from pyrepo_mcda import normalizations as norms

matrix = np.array([[8, 7, 2, 1],
[5, 3, 7, 5],
[7, 5, 6, 4],
[9, 9, 7, 3],
[11, 10, 3, 7],
[6, 9, 5, 4]])

types = np.array([1, 1, 1, 1])

norm_matrix = norms.vector_normalization(matrix, types)
print('Normalized matrix: ', np.round(norm_matrix, 4))

Output

Normalized matrix:  [[0.4126 0.3769 0.1525 0.0928]
[0.2579 0.1615 0.5337 0.4642]
[0.361  0.2692 0.4575 0.3714]
[0.4641 0.4845 0.5337 0.2785]
[0.5673 0.5384 0.2287 0.6499]
[0.3094 0.4845 0.3812 0.3714]]
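For the profit criteria used here, vector normalization is a one-liner: each column is divided by its Euclidean norm. This sketch covers profit criteria only; cost criteria are typically handled as 1 minus the normalized value, and the library's vector_normalization is the reference implementation:

```python
import numpy as np

matrix = np.array([[8, 7, 2, 1],
                   [5, 3, 7, 5],
                   [7, 5, 6, 4],
                   [9, 9, 7, 3],
                   [11, 10, 3, 7],
                   [6, 9, 5, 4]], dtype=float)

# Divide each column by its Euclidean norm
norm_matrix = matrix / np.sqrt(np.sum(matrix**2, axis=0))
print(np.round(norm_matrix, 4))
```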

Methods for sensitivity analysis considering criteria weights modification

Sensitivity_analysis_weights_percentages

This method is used to perform a sensitivity analysis considering percentage modification of the weight value of a chosen criterion. It requires a two-dimensional decision matrix matrix, a vector with criteria weights weights, a vector with criteria types types, a vector percentages with the weight modification values in percentages (provided in the range from 0 to 1), an initialized object of the chosen MCDA method method, the index j of the decision-matrix column of the chosen criterion, and a list dir with the directions of weight modification. dir can be set in three ways: [1] to only increase the weight value, [-1] to only decrease it, or [-1, 1] to both decrease and increase it. dir is set to [1] by default.

import numpy as np

from pyrepo_mcda.sensitivity_analysis_weights_percentages import Sensitivity_analysis_weights_percentages
from pyrepo_mcda.mcda_methods import TOPSIS
from pyrepo_mcda import normalizations as norms
from pyrepo_mcda import distance_metrics as dists

# provide decision matrix in array numpy.ndarray
matrix = np.array([[45, 3600, 45, 0.9],
                   [25, 3800, 60, 0.8],
                   [23, 3100, 35, 0.9],
                   [14, 3400, 50, 0.7],
                   [15, 3300, 40, 0.8],
                   [28, 3000, 30, 0.6]])

# provide criteria weights in array numpy.ndarray. All weights must sum to 1.
weights = np.array([0.2857, 0.3036, 0.2321, 0.1786])

# provide criteria types in array numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([1, -1, 1, 1])

# provide vector with percentage values of chosen criterion weight modification
percentages = np.arange(0.05, 0.5, 0.1)

# Create the chosen MCDA method object
method = TOPSIS(normalization_method=norms.minmax_normalization, distance_metric=dists.euclidean)

# provide index of j-th chosen criterion whose weight will be modified in sensitivity analysis, for example j = 1 for criterion in the second column
j = 1

# Create the Sensitivity_analysis_weights_percentages object
sensitivity_analysis = Sensitivity_analysis_weights_percentages()

# Generate DataFrame with rankings for different modifications of the weight of the chosen criterion.
# Provide the decision matrix ``matrix``, vector with criteria weights ``weights``, criteria types ``types``, initialized object of the chosen
# MCDA method ``method``, index of the chosen criterion whose weight will be modified, and list with directions of weight modification.
data_sens = sensitivity_analysis(matrix, weights, types, percentages, method, j, [1])
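Conceptually, for each percentage p and direction d, the weight of criterion j is scaled to w_j * (1 + d * p) and the remaining weights are rescaled so the vector still sums to 1, after which the MCDA method is re-run and the resulting rankings are collected. A hand-rolled sketch of that weight update (an illustration of the idea, not the library's internal code):

```python
import numpy as np

def modify_weight(weights, j, p, d):
    """Scale w_j by (1 + d * p) and rescale the other weights so the sum stays 1."""
    new_weights = weights.copy()
    new_weights[j] = weights[j] * (1 + d * p)
    others = np.delete(np.arange(len(weights)), j)
    # Distribute the remaining mass proportionally among the other criteria.
    new_weights[others] = weights[others] * (1 - new_weights[j]) / weights[others].sum()
    return new_weights

weights = np.array([0.2857, 0.3036, 0.2321, 0.1786])
# Increase the weight of criterion j = 1 by 5%.
w_mod = modify_weight(weights, j=1, p=0.05, d=1)
print(np.round(w_mod, 4), np.round(w_mod.sum(), 4))
```

The modified vector still sums to 1, so it can be passed directly to any MCDA method object.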

Sensitivity_analysis_weights_values

This method is used to perform a sensitivity analysis in which a chosen value is set as the weight of the selected criterion. This method requires providing the two-dimensional decision matrix matrix, a vector of values weight_values to be set as the selected criterion's weight, the vector with criteria types types, an initialized object of the chosen MCDA method method, and the index of the column in the decision matrix for the chosen criterion j.

import numpy as np

from pyrepo_mcda.sensitivity_analysis_weights_values import Sensitivity_analysis_weights_values
from pyrepo_mcda.mcda_methods import TOPSIS
from pyrepo_mcda import normalizations as norms
from pyrepo_mcda import distance_metrics as dists

# provide decision matrix in array numpy.ndarray
matrix = np.array([[45, 3600, 45, 0.9],
                   [25, 3800, 60, 0.8],
                   [23, 3100, 35, 0.9],
                   [14, 3400, 50, 0.7],
                   [15, 3300, 40, 0.8],
                   [28, 3000, 30, 0.6]])

# provide criteria weights in array numpy.ndarray. All weights must sum to 1.
weights = np.array([0.2857, 0.3036, 0.2321, 0.1786])

# provide criteria types in array numpy.ndarray. Profit criteria are represented by 1 and cost criteria by -1.
types = np.array([1, -1, 1, 1])

# provide vector with values to be set as weight of selected criterion.
weight_values = np.arange(0.05, 0.95, 0.1)

# Create the chosen MCDA method object
method = TOPSIS(normalization_method=norms.minmax_normalization, distance_metric=dists.euclidean)

# provide index of j-th chosen criterion whose weight will be modified in sensitivity analysis, for example j = 1 for criterion in the second column
j = 1

# Create the Sensitivity_analysis_weights_values object
sensitivity_analysis = Sensitivity_analysis_weights_values()

# Generate DataFrame with rankings for different values set as the weight of the chosen criterion.
# Provide the decision matrix ``matrix``, vector with values ``weight_values`` to be set as the weight of the selected criterion, criteria
# types ``types``, initialized object of the chosen MCDA method ``method``, and index of the chosen criterion whose weight will be modified.
data_sens = sensitivity_analysis(matrix, weight_values, types, method, j)
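In this variant each chosen value v directly replaces w_j, and the remaining weights are rescaled proportionally to share the remaining mass 1 - v. A minimal sketch of that update (an illustration of the idea, not the library's internal code):

```python
import numpy as np

def set_weight(weights, j, v):
    """Set w_j to v and rescale the other weights so the vector sums to 1."""
    new_weights = weights.copy()
    new_weights[j] = v
    others = np.delete(np.arange(len(weights)), j)
    new_weights[others] = weights[others] * (1 - v) / weights[others].sum()
    return new_weights

weights = np.array([0.2857, 0.3036, 0.2321, 0.1786])
# Set the weight of criterion j = 1 to 0.45.
w_mod = set_weight(weights, j=1, v=0.45)
print(np.round(w_mod, 4), np.round(w_mod.sum(), 4))
```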