Subgradient methods for the efficient learning of minimax risk classifiers

BERISTAIN CORTES, MAIALEN
2023/2024

Abstract

Machine learning techniques are ubiquitous today. The most common setting is supervised learning, in which a classification task is defined so that new data can be labeled. A natural way to define a classifier is to seek the one that fails least often, which is equivalent to minimizing an error function. Several techniques and algorithms exist for solving such problems, but those suited to nondifferentiable functions typically incur a high computational cost. For a particular family of classifiers, Minimax Risk Classifiers, the resulting error function is not differentiable, so we build on the theory of subgradient methods to develop an algorithm tailored to this case. The presented techniques allow the optimization problem to be solved efficiently.
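
To illustrate the kind of technique the abstract refers to, the following is a minimal sketch of the subgradient method in Python, applied to a generic nonsmooth convex objective (a regularized pointwise maximum of affine functions). This is a hypothetical example, not the thesis's actual Minimax Risk Classifier objective or algorithm; all names and constants are illustrative.

    import numpy as np

    def subgradient_method(f, subgrad, x0, n_iter=1000):
        # Subgradient method with diminishing step sizes a_k = 1/sqrt(k).
        # It is not a descent method, so we track the best iterate seen.
        x = np.asarray(x0, dtype=float)
        best_x, best_f = x.copy(), f(x)
        for k in range(1, n_iter + 1):
            g = subgrad(x)              # any subgradient of f at x
            x = x - g / np.sqrt(k)      # diminishing, nonsummable steps
            fx = f(x)
            if fx < best_f:             # remember the best point so far
                best_x, best_f = x.copy(), fx
        return best_x, best_f

    # Illustrative objective: f(x) = max_i (a_i^T x + b_i) + 0.5*||x||^2,
    # nonsmooth wherever the max is attained by more than one affine piece.
    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(5, 3)), rng.normal(size=5)
    f = lambda x: np.max(A @ x + b) + 0.5 * (x @ x)
    subgrad = lambda x: A[np.argmax(A @ x + b)] + x   # valid subgradient

    x_best, f_best = subgradient_method(f, subgrad, x0=np.zeros(3))
    print(x_best, f_best)

With diminishing, nonsummable step sizes such as 1/sqrt(k), the best objective value found converges to the minimum for convex functions; this convergence theory is the basis on which subgradient methods rest.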
Keywords: Optimization, Subgradient method, Supervised learning
Files in this item:
Tesi di Magistrale. Maialen Beristain Cortes.pdf (open access, 1.2 MB, Adobe PDF)

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license; metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/71084