Two-Stage Pattern Learning Using Spiking Neural Networks and Competitive STDP

Farajinegarestan, Mehran
2024/2025

Abstract

Spiking Neural Networks (SNNs), inspired by the computational principles of the brain, offer a powerful framework for processing temporal information. A key learning mechanism in these networks is Spike-Timing-Dependent Plasticity (STDP), a biologically inspired rule that adjusts synaptic weights based on the precise timing of neural spikes. Foundational work has shown that SNNs equipped with STDP can self-organize to perform unsupervised detection of repeating spatio-temporal patterns embedded in noisy spike trains. While this paradigm provides a robust basis for pattern learning, several critical questions remain underexplored. First, the robustness of the learning process to different initial synaptic weight configurations has not been systematically investigated. Second, a quantitative understanding of how network performance scales with the number of available neurons relative to the number of patterns is lacking, leaving a gap in practical design principles. Most significantly, existing models are typically trained on a static set of patterns and cannot learn new information without catastrophically forgetting previously acquired knowledge.

This thesis addresses these three challenges through a series of computational experiments on a single-layer SNN trained with an STDP rule on synthetic spike-train data. The primary contribution is the proposal and validation of a novel framework for continual, unsupervised learning, in which the network acquires new patterns by dynamically adding new, randomly initialized neurons that are integrated into the network through an asymmetric inhibitory scheme.

Our results demonstrate that the STDP-driven learning process is remarkably robust, reliably converging to a bimodal, selective weight state from uniform, Gaussian, and constant initial weight distributions. This shows that the underlying learning mechanism is stable and does not require a carefully chosen initialization scheme. We also provide a quantitative characterization of detector saturation, showing that the number of successful pattern detectors saturates at approximately $1.65{-}1.90$ neurons per pattern. This finding yields a practical guideline for resource allocation, suggesting a network size of roughly two to three neurons per pattern. Most importantly, we show that our continual learning framework successfully learns new patterns without supervision: established neurons exhibit no memory degradation, while the newly added neurons specialize on the novel patterns.

In conclusion, this thesis advances the understanding of unsupervised learning in SNNs by confirming the robustness of STDP, providing practical guidelines for network design, and offering a novel, effective solution to the challenge of continual learning.
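To make the STDP mechanism referenced above concrete, the sketch below implements a standard exponential, pair-based STDP update with hard weight bounds. The abstract does not specify the thesis's exact rule or parameters, so the function and all constants here (A_PLUS, TAU_PLUS, and so on) are illustrative assumptions, not the thesis's actual settings.

```python
import numpy as np

# Illustrative constants -- assumed for the sketch, not taken from the thesis.
A_PLUS = 0.03125          # potentiation amplitude
A_MINUS = 0.027           # depression amplitude
TAU_PLUS = 16.8           # LTP time constant (ms)
TAU_MINUS = 33.7          # LTD time constant (ms)
W_MIN, W_MAX = 0.0, 1.0   # hard weight bounds

def stdp_update(w: float, dt: float) -> float:
    """Return the updated weight for one pre/post spike pair.

    dt = t_post - t_pre in milliseconds: a presynaptic spike that
    precedes the postsynaptic spike (dt > 0) potentiates the synapse;
    the reverse ordering depresses it.
    """
    if dt > 0:
        w += A_PLUS * np.exp(-dt / TAU_PLUS)    # causal pairing -> LTP
    else:
        w -= A_MINUS * np.exp(dt / TAU_MINUS)   # anti-causal pairing -> LTD
    return float(np.clip(w, W_MIN, W_MAX))      # clipping pushes weights toward the bounds
```

Under an additive rule with hard bounds like this one, repeated exposure to a pattern drives most weights to one bound or the other, which is consistent with the bimodal, selective end state the abstract reports for uniform, Gaussian, and constant initializations alike.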
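The continual-learning framework can likewise be sketched in outline. The abstract states only that new, randomly initialized neurons are added and integrated through an asymmetric inhibitory scheme; the class below is one plausible reading of that scheme, in which established neurons inhibit newcomers but not vice versa, steering new detectors toward patterns the old population does not already cover. All names (GrowingSNN, inhibition_targets) and sizes are hypothetical, invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class GrowingSNN:
    """Hypothetical sketch of neuron addition with asymmetric inhibition."""

    def __init__(self, n_inputs: int, n_neurons: int):
        self.n_inputs = n_inputs
        self.w = rng.uniform(0.0, 1.0, (n_neurons, n_inputs))  # afferent weights
        self.is_new = np.zeros(n_neurons, dtype=bool)          # newcomer flags

    def add_neurons(self, n_new: int) -> None:
        """Grow the layer with randomly initialized detectors."""
        self.is_new[:] = False  # everything trained so far counts as established
        fresh = rng.uniform(0.0, 1.0, (n_new, self.n_inputs))
        self.w = np.vstack([self.w, fresh])
        self.is_new = np.concatenate([self.is_new, np.ones(n_new, dtype=bool)])

    def inhibition_targets(self, i: int) -> np.ndarray:
        """Indices silenced when neuron i fires (asymmetric rule, assumed)."""
        idx = np.arange(len(self.is_new))
        if self.is_new[i]:
            targets = idx[self.is_new]  # newcomers compete only among themselves
        else:
            targets = idx               # established detectors inhibit everyone
        return targets[targets != i]

# Stage two of a two-stage run (sizes are illustrative assumptions):
net = GrowingSNN(n_inputs=2000, n_neurons=9)
net.add_neurons(3)
```

Because inhibition flows only from established neurons to newcomers, the trained detectors are never suppressed while the new ones learn, which is consistent with the abstract's observation that established neurons show no memory degradation as newly added neurons specialize on the novel patterns.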
Keywords: SNNs, Pattern Learning, STDP

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/91826