Continual Learning Strategies for Anomaly Detection
RAHIMI, MAHAN
2024/2025
Abstract
This thesis investigates the application of continual learning to anomaly detection, aiming to develop models that can adapt to new data while preserving previously acquired knowledge. A major challenge is catastrophic forgetting, in which a model loses information from earlier training phases, together with the related risk of learning anomalies as normal patterns if updates are not properly managed. Through a review of recent research, this work examines state-of-the-art continual learning strategies, including replay, generative replay, compressed replay, and regularisation, and analyses how they balance adaptability and stability. The findings show that replay remains the most effective mechanism for retaining knowledge, while regularisation improves stability and, since it avoids storing raw samples, privacy. Compressing replayed samples further improves scalability, making continual learning feasible for real-world applications.
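The replay mechanism that the abstract identifies as most effective can be illustrated with a minimal sketch: a fixed-size memory of past normal samples is mixed into each new training batch so the detector does not drift away from earlier data. The buffer design, the reservoir-sampling policy, and the `detector.fit` interface below are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of experience replay for continual anomaly detection.
# Assumes a detector object exposing fit(batch), which (re)trains the
# model of "normal" behaviour on a list of samples; this interface is
# hypothetical, not from the thesis.
import random


class ReplayBuffer:
    """Fixed-size memory of past samples, filled by reservoir sampling."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.memory = []
        self.seen = 0  # total samples observed so far

    def add(self, sample):
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(sample)
        else:
            # Replace a random slot with probability capacity / seen,
            # so every sample ever seen is retained with equal probability.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.memory[idx] = sample

    def sample(self, k: int):
        return random.sample(self.memory, min(k, len(self.memory)))


def continual_update(detector, buffer, new_batch, replay_ratio=1.0):
    """Retrain on new data mixed with replayed old data to limit forgetting."""
    replayed = buffer.sample(int(len(new_batch) * replay_ratio))
    detector.fit(list(new_batch) + replayed)  # mixed batch counters forgetting
    for x in new_batch:                       # only then admit new data
        buffer.add(x)
```

A regularisation-based alternative, such as an elastic-weight-consolidation penalty on parameter drift, would avoid storing any raw samples at all, which is the privacy advantage the abstract attributes to regularisation; compressed replay instead keeps the buffer but stores encoded summaries of the samples to improve scalability.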
| File | Size | Format |
|---|---|---|
| RAHIMI_MAHAN.pdf (open access) | 464.97 kB | Adobe PDF |
https://hdl.handle.net/20.500.12608/97710