LIN Yu-yao, DU Fei, YANG Yun. Continual learning: A research review[J]. Journal of Yunnan University: Natural Sciences Edition, 2023, 45(2): 284-297. DOI: 10.7540/j.ynu.20220312

Continual learning: A research review


    Abstract: Deep learning models are usually trained on a fixed dataset and cannot extend their behavior over time once training is complete. Training an already-trained model on new data leads to catastrophic forgetting. Continual learning is a machine learning approach that alleviates catastrophic forgetting in deep learning models: it aims to continually expand a model's adaptability so that the model can learn the knowledge of different tasks at different times. Current continual learning algorithms fall into four main categories: regularization-based methods, memory replay methods, parameter-isolation methods, and hybrid methods. This paper systematically summarizes and analyzes the research progress of these four categories, reviews the evaluation methods used to measure the performance of continual learning algorithms, and discusses emerging research trends in continual learning.
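As an illustration of the memory replay family mentioned in the abstract, here is a minimal sketch of a reservoir-sampling replay buffer, a common building block in replay-based continual learning. The class name and API are hypothetical and not from the paper; this is only one possible realization of the idea of retaining a small sample of past data to mix into training on new tasks.

```python
import random


class ReplayBuffer:
    """Fixed-size memory of past examples, filled by reservoir sampling
    so that every example seen so far has an equal chance of being kept."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.storage = []
        self.seen = 0                      # total examples observed so far
        self.rng = random.Random(seed)

    def add(self, example):
        """Observe one training example from the current task."""
        self.seen += 1
        if len(self.storage) < self.capacity:
            self.storage.append(example)
        else:
            # Replace a stored example with probability capacity / seen,
            # which keeps the buffer a uniform sample of the whole stream.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.storage[j] = example

    def sample(self, batch_size):
        """Draw old examples to mix into each new-task training batch."""
        k = min(batch_size, len(self.storage))
        return self.rng.sample(self.storage, k)
```

In use, the buffer is filled while training on earlier tasks; during each optimization step on a later task, a few examples from `sample()` are appended to the current batch so the model keeps rehearsing past knowledge and forgetting is reduced.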

