Enforcing fairness in graph representation learning

CALDART, FEDERICO
2021/2022

Abstract

As the representations output by Graph Neural Networks are increasingly employed in real-world applications, it becomes important to ensure that these representations are fair: graph embeddings can in fact encode potentially harmful social biases, such as the information that women are more likely to be nurses and men more likely to be bankers. This work explores and applies state-of-the-art methods to mitigate this bias and produce fairer representations of real-world graph data while maintaining good classification accuracy.
Keywords: Graph Neural Networks; Deep Learning; Fairness; Machine Learning
Files in this record:
Caldart_Federico.pdf (Adobe PDF, 2.1 MB, open access)

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/32821