Using Deep Reinforcement Learning in order to solve algebraic problems through simple representation tools

CHEN, LING XUAN
2021/2022

Abstract

Numerical reasoning is a remarkable human ability that has been extensively investigated by many research fields, including psychology, neuroscience, anthropology and cognitive science. This work expands that investigation using a computational approach, in an attempt to create deep learning models that simulate sophisticated aspects of human cognition related to the manipulation of symbolic numbers. In particular, under specific circumstances, we were able to train an artificial agent within a model-free deep reinforcement learning framework, with the learning goal of successfully solving long sequences of multi-digit additions and subtractions. We investigated what kind of guidance and processing constraints are needed to successfully train such an agent, in terms of both reward shaping and manipulation biases in the available representational tools. While the agent usually learns by trial and error, in our final scenario we show that learning can be more effective when guided by explicit human supervision. In that case, the agent successfully learned to interact with a virtual abacus and use it as a representational tool to record the information needed to solve arithmetic problems.
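As a purely illustrative sketch (not the thesis code), the trial-and-error setting described in the abstract can be reduced to a toy example: a tabular, model-free agent that learns single-digit addition from a sparse reward given only for a correct answer. All function names and hyperparameters below are assumptions chosen for the illustration.

```python
import random
from collections import defaultdict

# Illustrative sketch only: a model-free agent learns single-digit
# addition by trial and error, receiving reward 1 only when its
# answer is exactly correct (a minimal analogue of the sparse-reward
# setting described in the abstract).

def train(episodes=100_000, alpha=0.1, epsilon=0.5, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)  # Q[((a, b), guess)] = estimated value

    for _ in range(episodes):
        # A "problem" is a random pair of digits; the "action" is the
        # agent's guessed sum (0..18).
        a, b = rng.randrange(10), rng.randrange(10)
        if rng.random() < epsilon:
            guess = rng.randrange(19)           # explore
        else:
            guess = max(range(19), key=lambda g: Q[((a, b), g)])  # exploit
        reward = 1.0 if guess == a + b else 0.0
        # Simple incremental (bandit-style) value update.
        Q[((a, b), guess)] += alpha * (reward - Q[((a, b), guess)])
    return Q

def answer(Q, a, b):
    """Greedy answer for the problem a + b."""
    return max(range(19), key=lambda g: Q[((a, b), g)])
```

With enough episodes the positive reward is the only signal that ever raises a value estimate, so the greedy policy converges on the correct sums purely by trial and error, with no built-in notion of arithmetic.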
Keywords: reinforcement learning, cognitive agent, virtual abacus
Files in this record:
Chen_Ling Xuan.pdf — restricted access — 4.46 MB, Adobe PDF

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/42064