Neural Relational Associative Memories
LOLLATO, FRANCESCO
2025/2026
Abstract
Associative memory refers to the human brain's ability to retrieve stored information when presented with related or partial cues. While this cognitive property has long inspired artificial associative memory systems, traditional approaches have focused primarily on the reconstruction or exact retrieval of stored patterns from corrupted or incomplete inputs. Human recall, however, is not a simple pattern-matching mechanism but a relational and constructive process. Recent studies have revealed deep connections between classical associative memory networks and modern generative models, showing that memory retrieval can be interpreted as a form of generative inference. Motivated by this perspective, this thesis investigates whether generative models, specifically Variational Autoencoders and Diffusion Models, can serve as relational associative memory systems capable of supporting relational generation and structured memorization within a unified probabilistic framework. To this end, we first introduce a relational extension of the MNIST dataset in which digits (concepts) are linked through arithmetic relations. Experiments in this setting show that generative models can produce relation-consistent outputs conditioned on input instances, capturing multi-valued mappings and generalizing to unseen concept-relation pairs. We then extend our analysis to structured graph data using the Cora citation network, where these models act as generative graph memories capable of recursively reconstructing the graph's structure. We further evaluate them in a continual learning scenario, since memory networks inherently require the ability to accumulate information over time while preserving prior knowledge, and show that generative replay enables this behavior, mitigating catastrophic forgetting.
Overall, our findings support a unified view of generative models as flexible memory systems that bridge relational generation and structural memorization within a single generative framework.

| File | Size | Format |
|---|---|---|
| Lollato_Francesco.pdf (restricted access) | 2.45 MB | Adobe PDF |
The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are under a CC0 license.
https://hdl.handle.net/20.500.12608/108230