Preference Learning: outranking methods for preference-ordered decision classes
MARITAN, ALBERTO
2022/2023
Abstract
This thesis explores the domain of Preference Learning, which involves deriving preferences from observed data, enabling the development of personalized recommendation and decision-support systems. We focus specifically on outranking methods in the context of preference-ordered decision classes. The proposed methods are compared in terms of accuracy, computation time, and tolerance to inconsistencies. The study begins with a survey of the literature. Subsequently, we implement the inv-MR-Sort and U-NCS algorithms for Preference Learning. To address scalability issues, we propose two greedy algorithms and study a metaheuristic designed to handle large training sets efficiently. While demonstrating faster computation times, these alternative approaches exhibit reduced accuracy. Overall, the U-NCS algorithm outperforms the other methods, achieving the fastest learning time and the highest accuracy. Nonetheless, all of these solutions face computational challenges when the learning set contains inconsistencies. To address this issue, we conclude by introducing an algorithm based on rule induction: while it yields satisfactory results, this method comes at the cost of reduced explanatory expressiveness in the resulting model.
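To illustrate the outranking-based sorting model underlying the inv-MR-Sort approach mentioned above, the following is a minimal sketch of the MR-Sort (majority rule sorting) assignment rule. It is not taken from the thesis: the function and variable names are hypothetical, and it assumes criteria to be maximized, boundary profiles ordered from worst to best, and normalized weights.

```python
# Minimal sketch of the MR-Sort assignment rule (illustrative, not the thesis implementation).
# Assumptions: criteria are to be maximized, boundary profiles are ordered from
# worst to best, and the per-criterion weights sum to 1.

def mr_sort_assign(alternative, profiles, weights, lmbda):
    """Return the index (0 = worst class) of the class assigned to `alternative`.

    alternative: criterion values of the alternative to sort
    profiles:    boundary profiles; profiles[h] separates class h from class h + 1
    weights:     per-criterion importance weights, summing to 1
    lmbda:       majority threshold, typically in (0.5, 1]
    """
    category = 0
    for profile in profiles:
        # Weight of the coalition of criteria on which the alternative is
        # at least as good as the boundary profile.
        support = sum(w for a_j, b_j, w in zip(alternative, profile, weights)
                      if a_j >= b_j)
        if support >= lmbda:
            category += 1  # the alternative outranks this boundary: move up one class
        else:
            break
    return category

# Toy example: 3 criteria, 2 boundaries (hence 3 classes), equal weights, 60% majority.
profiles = [[0.3, 0.3, 0.3], [0.8, 0.8, 0.8]]
weights = [1 / 3, 1 / 3, 1 / 3]
print(mr_sort_assign([0.7, 0.4, 0.9], profiles, weights, 0.6))  # -> 1 (middle class)
```

The inverse problem addressed by inv-MR-Sort consists of learning the boundary profiles, weights, and majority threshold of such a model from a set of assignment examples.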
| File | Access | Size | Format |
|---|---|---|---|
| Maritan_Alberto.pdf | Restricted access | 689.12 kB | Adobe PDF |
https://hdl.handle.net/20.500.12608/58741