### A Mathematical Approach to Neural Networks Optimization

#### Abstract

The main aim of this thesis is to follow a path toward an optimal Neural Network model. At each step we give great importance to the formalization of the mathematical aspects: the goal, in fact, is also to study the problems we encounter mathematically and, again using mathematics, to try to solve them. The first part (the first two chapters) is dedicated to introducing Neural Networks and the broader fields that contain them. Here we also see what led to their invention and how essential optimization methods are in Machine Learning: learning, in its essence, means optimizing an error functional. The earlier optimization algorithms are studied in the second part, where we mainly discuss variations of the Gradient Descent Method and exhibit some of their limits and strengths. In the last part we present some results comparing the previously shown algorithms. We end the thesis by showing a modern method that somehow inverts our process: rather than trying to solve a problem, we try to understand why a given solution actually works.
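The idea that "learning means optimizing an error functional" can be made concrete with a minimal sketch, not taken from the thesis itself: plain gradient descent on a toy error functional E(w) = (w − 3)², whose gradient is 2(w − 3). The function name, learning rate, and step count below are illustrative assumptions.

```python
# Minimal illustration (not the thesis's code): gradient descent on the
# toy error functional E(w) = (w - 3)^2, with gradient E'(w) = 2*(w - 3).
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Iterate the basic update w <- w - lr * grad(w)."""
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

w_star = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0)
print(w_star)  # approaches the minimizer w = 3
```

The variations of this method that the thesis studies (momentum, adaptive step sizes, stochastic estimates of the gradient) all modify this same one-line update rule.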
Year: 2023
Keywords: Neural Networks, Optimization, Machine Learning
File: `Tosi_Andrea_2029002.pdf` (open access, 2.24 MB)
Use this identifier to cite or link to this document: `https://hdl.handle.net/20.500.12608/61995`