Abstract:
Deep learning models are typically trained on a fixed dataset and cannot adapt their behavior over time once training is complete; when a trained model is further trained on new data, it suffers from catastrophic forgetting. Continual learning is a machine learning paradigm that alleviates the catastrophic forgetting of deep learning models: it aims to continuously extend a model's adaptability so that the model can acquire knowledge of different tasks at different times. Current continual learning algorithms can be divided into four categories: regularization-based, parameter-isolation-based, replay-based, and synthesis-based methods. This paper systematically reviews and analyzes the research progress of these four categories of methods, surveys the existing evaluation protocols, and finally discusses emerging research trends in continual learning.