Addressing State Representation in Deep Reinforcement Learning: a critical analysis of state-of-the-art methods

Cannavò, Fiammetta
Academic Year 2021/2022

Abstract

Deep Reinforcement Learning (DeepRL) models use a Deep Neural Network to approximate the Q-function, avoiding some of the computational and memory issues associated with the Q-table in classic Reinforcement Learning. However, DeepRL models suffer from high sample complexity. To address this problem, it has recently been proposed to exploit the Markov Decision Process underlying DeepRL models, which makes it possible to use Graph Representation Learning (GRL) approaches to obtain efficient state representations to feed to the Deep Neural Network. Following this idea, we focus on random-walk-based GRL: the main methods in this category are analyzed and compared, and the most suitable one is used for graph representation in a DeepRL model, whose results are then discussed and compared with those of a DeepRL model that does not use GRL techniques.
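The abstract does not name the specific random-walk methods it compares; DeepWalk and node2vec are the main representatives of this GRL category. The sketch below illustrates the pipeline the abstract describes under explicitly hypothetical choices: a toy gridworld MDP, DeepWalk-style uniform walks, a 16-dimensional embedding, and a linear Q-function approximator standing in for the Deep Neural Network to keep the example short. None of these choices reflect the thesis's actual experimental setup.

    import random
    import numpy as np
    import networkx as nx
    from gensim.models import Word2Vec

    # Hypothetical toy MDP as a graph: nodes are states of a 5x5 gridworld,
    # edges are reachable transitions (not the thesis's environment).
    G = nx.convert_node_labels_to_integers(nx.grid_2d_graph(5, 5))

    def random_walk(graph, start, length=20):
        # Uniform random walk, as in DeepWalk; node2vec would bias this
        # step with its return/in-out parameters p and q.
        walk = [start]
        for _ in range(length - 1):
            walk.append(random.choice(list(graph.neighbors(walk[-1]))))
        return [str(n) for n in walk]  # gensim expects string tokens

    # Step 1: sample walks and train a skip-gram model, so that states which
    # co-occur on walks (i.e., are close in the MDP graph) get similar vectors.
    walks = [random_walk(G, n) for n in G.nodes for _ in range(10)]
    emb = Word2Vec(walks, vector_size=16, window=5, min_count=0, sg=1, epochs=5).wv

    def phi(s):
        # State representation fed to the value function instead of a raw state id.
        return emb[str(s)]

    # Step 2: use the embedding as input to the Q-function. A linear approximator
    # Q(s, a) = w[a] . phi(s) stands in here for the Deep Neural Network.
    n_actions, alpha, gamma = 4, 0.1, 0.99
    w = np.zeros((n_actions, 16))

    def q_update(s, a, r, s_next):
        # One TD(0) update of the parameters for action a.
        td_target = r + gamma * max(w[b] @ phi(s_next) for b in range(n_actions))
        w[a] += alpha * (td_target - w[a] @ phi(s)) * phi(s)

The design point mirrored here is that the representation is learned from the MDP's graph structure before (or alongside) value learning: nearby states share features, so a value update for one state generalizes to its graph neighbors, which is what reduces sample complexity.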
Keywords

Graph Representation
Random Walks
Deep Learning
DeepRL
Markov Decision Process
Files in this item:

Cannavo_Fiammetta.pdf (open access, Adobe PDF, 6.33 MB)


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/42139