Generative Models as Associative Memories

SPINELLO, VINCENZO
2024/2025

Abstract

Associative memory, a fundamental property of the human brain, allows the retrieval of stored information from partial or noisy external inputs. Artificial systems inspired by this cognitive ability aim to replicate this behavior by learning associations. In the brain, not only the input itself but also the surrounding context can influence the retrieval of the correct pattern. For example, humans are likely to recall a specific event when exposed to a related context, highlighting the graph-like structure of memory organization. In a similar way, this work addresses relational tasks by demonstrating how generative models can construct internal representations of relational graphs, where nodes correspond to stored patterns and edges represent associations among them. This observation motivates the central theme of this work: modeling associative memories through generative architectures. We explore the use of generative models, specifically Variational Autoencoders (VAEs) and Denoising Diffusion Models (DDMs), as modern implementations of associative memory systems. A notable property of these generative approaches is their ability to generalize across related inputs. The MNIST dataset is used as a starting point to train models that output a digit given a different, related one, and the task difficulty is gradually increased. The setup is later extended to a customized version of MNIST consisting of concatenated pairs of digits (from 0 to 99), establishing relationships such as predecessors and successors. While the VAE is simpler to train and more computationally efficient, it does not perform as well as diffusion models in relational recall tasks. We present and analyze the results of this approach, followed by a comprehensive comparison of VAEs and DDMs in terms of their performance on relational memory tasks.
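As an illustration of the relational dataset described above, the following minimal sketch (in Python with PyTorch/torchvision, assumed to match the thesis's tooling) builds two-digit images in the range 00-99 by concatenating MNIST digits and pairs each number with its successor as the retrieval target. The function names, the modulo-100 wraparound, and the choice of the successor relation as the example are illustrative assumptions, not the thesis's exact construction.

```python
# Hypothetical sketch: a relational "concatenated MNIST" dataset where each
# two-digit input (00-99) is associated with the image of its successor.
import random
import torch
from torchvision import datasets, transforms

mnist = datasets.MNIST(root="data", train=True, download=True,
                       transform=transforms.ToTensor())

# Index MNIST images by digit label for quick sampling.
by_digit = {d: [] for d in range(10)}
for img, label in mnist:
    by_digit[label].append(img)

def render_number(n):
    """Render n in [0, 99] as a 28x56 image of two concatenated digits."""
    tens, units = divmod(n, 10)
    left = random.choice(by_digit[tens])
    right = random.choice(by_digit[units])
    return torch.cat([left, right], dim=-1)  # concatenate along width

def successor_pair(n):
    """Return (input image, target image) where the target shows n+1 mod 100."""
    return render_number(n), render_number((n + 1) % 100)

# Example: an input showing "41" and its associated target showing "42".
x, y = successor_pair(41)
print(x.shape, y.shape)  # torch.Size([1, 28, 56]) for both
```

The same pairing function can be swapped for a predecessor relation (or any other edge of the relational graph) to generate input-target pairs for training either the VAE or the diffusion model.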
Keywords: Associative Memories, Diffusion Models, Variational Autoencoders

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/91858