A Study on Prototype-Based Online Continual Learning

MANSOORI, SEJAL
2024/2025

Abstract

Continual learning (CL) seeks to overcome catastrophic forgetting: the tendency of machine learning models to lose previously acquired knowledge when trained on new data. Existing CL approaches include regularization, replay, and parameter-isolation strategies. We focus on replay-based methods, which rely on fixed-size exemplar memory buffers to capture evolving class representations alongside historical ones. Prototype-based methods enhance this approach by representing each class with a mean embedding (prototype), which serves as a compact exemplar for both classification and memory management. In this thesis, we investigate the importance of prototype selection in an online setting, where the data stream is processed only once. We integrate Adaptive Prototype Feedback (APF), a sampling-based mix-up strategy that estimates misclassification probabilities from the distance between each pair of prototypes in the memory buffer, into replay-based prototype continual learning frameworks. We first conducted an extensive survey of online continual learning techniques and identified replay methods with fixed-size memory buffers as our foundation. We then extended these baselines with APF, which continuously refines the sampling of class prototypes according to their misclassification probabilities, shifting the focus towards reinforcing decision boundaries between "confused" classes. Experiments on standard benchmarks show that APF-integrated replay consistently improves baseline performance by 3-5%. These results establish the importance of adaptive prototype sampling for more robust and scalable continual learning in real-world streaming scenarios.
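As a rough illustration of the APF mechanism described above, the sketch below derives pair-sampling probabilities from pairwise prototype distances in the buffer and draws one mix-up replay sample. The function names, the softmax weighting over negative distances, and the Beta(alpha, alpha) mixing coefficient are illustrative assumptions, not the exact implementation from the thesis.

    # Hedged sketch of APF-style sampling: prototypes are per-class mean
    # embeddings; closer prototype pairs are assumed more easily confused,
    # so they are sampled more often for mix-up replay. Assumes at least
    # two classes are present in the buffer.
    import numpy as np

    def class_prototypes(features, labels):
        """Mean embedding per class (the prototype) from buffered features."""
        classes = np.unique(labels)
        protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
        return classes, protos

    def confusion_probabilities(prototypes):
        """Turn pairwise prototype distances into pair-sampling probabilities.

        A pair's probability decays with distance: softmax over negative
        distances (an assumed weighting, not the thesis formula).
        """
        d = np.linalg.norm(prototypes[:, None, :] - prototypes[None, :, :], axis=-1)
        rows, cols = np.triu_indices(len(prototypes), k=1)  # unordered class pairs
        logits = -d[rows, cols]
        p = np.exp(logits - logits.max())
        return (rows, cols), p / p.sum()

    def sample_mixup_pair(features, labels, alpha=0.2, rng=np.random):
        """Sample one 'confused' class pair and return a mixed-up replay sample."""
        classes, protos = class_prototypes(features, labels)
        (rows, cols), probs = confusion_probabilities(protos)
        k = rng.choice(len(probs), p=probs)
        ca, cb = classes[rows[k]], classes[cols[k]]
        xa = features[labels == ca][rng.randint((labels == ca).sum())]
        xb = features[labels == cb][rng.randint((labels == cb).sum())]
        lam = rng.beta(alpha, alpha)  # standard mix-up coefficient
        return lam * xa + (1 - lam) * xb, (ca, cb, lam)

In an online loop, a routine like this would be invoked alongside each incoming batch so that replay concentrates on the currently most confusable class pairs, reinforcing their decision boundaries.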
Keywords: Continual Learning, Replay Sampling, APF, Prototype

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/91853