
Biological networks as defense against adversarial attacks.

ZANOLA, ANDREA
2021/2022

Abstract

In recent years, increasing importance has been given to interpretability in the machine learning field. The best-known area in which the interpretability of a neural network is needed is cyber-security. The first paper to expose this potential issue was "Intriguing properties of neural networks" (Szegedy et al., 2014, with Ian Goodfellow among the authors), which showed how an image, if altered in the right way, can be completely misclassified by a network trained to classify images. In this thesis I propose a new method based on a hybrid network, i.e., half biological and half artificial, in order to develop a neural network capable of resisting a variety of adversarial attacks. The biological part is based on Hebbian/anti-Hebbian neural dynamics, while the artificial one is based on probability theory and Boltzmann machines.
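The "image altered in the right way" idea can be sketched with the fast gradient sign method (FGSM), a standard adversarial attack. This is a minimal illustration on a hypothetical linear scorer, not the model or attack used in the thesis:

```python
import numpy as np

# Toy linear "classifier": score(x) = w . x  (a stand-in, not the thesis model).
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # weights of the toy classifier
x = rng.normal(size=64)   # a clean "image", flattened

# For a linear model the gradient of the score w.r.t. the input is just w.
# FGSM shifts every pixel by epsilon in the sign of the gradient, here in
# the direction that lowers the correct-class score.
epsilon = 0.25
x_adv = x - epsilon * np.sign(w)

clean_score = float(w @ x)
adv_score = float(w @ x_adv)
assert adv_score < clean_score  # a tiny per-pixel change degrades the score
```

Each pixel moves by at most `epsilon`, so the perturbed input stays visually close to the original while the classifier's output changes markedly.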
Keywords: Biological Networks, Adversarial Attacks, MNIST, Hebbian Rule, RBM
Files in this item:
Master_Thesis_Andrea_Zanola.pdf (open access, 10.46 MB, Adobe PDF)

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/33213