Analyzing the Effectiveness of Vision Mamba in Continual Learning Settings

ZEINALABEDIN ZADEGAN, PEDRAM
2025/2026

Abstract

Continual learning (CL) studies how models can acquire new knowledge sequentially while retaining performance on previously learned tasks, a setting in which catastrophic forgetting remains a major challenge. State space models (SSMs) have recently emerged as competitive alternatives to Transformers, and Vision Mamba adapts selective SSMs to computer vision, raising the question of how such architectures behave under continual learning protocols. This thesis analyzes the effectiveness of Vision Mamba in incremental learning settings by combining (i) a structured review of CL scenarios, evaluation metrics, and strategy families (regularization, replay, and architectural isolation) with (ii) an architecture-focused analysis of stability-plasticity trade-offs in selective SSMs, drawing on the results reported in Mamba-CL. The thesis summarizes current evidence, highlights failure modes and practical considerations specific to Vision Mamba, and outlines promising directions for continual learning with SSM-based vision models.
Keywords: Mamba, Continual Learning, State Space Models, AI
Files in this item:
Zeinalabedin_Zadegan_Pedram.pdf (Adobe PDF, 2.24 MB, open access)
The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/106022