Representational Continuity for Unsupervised Continual Learning (ICLR 2022)
https://arxiv.org/pdf/2110.06976
Madaan, Divyam, et al. "Representational continuity for unsupervised continual learning." ICLR 2022
Abstract
Most continual learning work assumes the supervised setting.
$\rightarrow$ This paper studies the unsupervised setting: UCL (Unsupervised Continual Learning).
1. Lifelong Unsupervised Mixup (LUMP)
One-line summary: data augmentation via Mixup, interpolating between
- (1) samples from the current task, and
- (2) samples from past tasks (kept in a replay buffer); see the sketch after the equations below.

For reference, the standard (supervised) Mixup objective trains on convex combinations of inputs and labels:
$\mathcal{L}^{\text{Mixup}}(\tilde{x}, \tilde{y}) = \mathrm{CE}(h_\psi(f_\Theta(\tilde{x})), \tilde{y})$.
- $\tilde{x} = \lambda \cdot x_i + (1 - \lambda) \cdot x_j$.
- $\tilde{y} = \lambda \cdot y_i + (1 - \lambda) \cdot y_j$.
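In the unsupervised setting there are no labels, so only the inputs are mixed ($x_i$ from the current task, $x_j$ replayed from a past task) and the self-supervised loss is computed on $\tilde{x}$. Below is a minimal PyTorch sketch of a LUMP-style training step under these assumptions; `ssl_loss` (e.g., a SimSiam or Barlow Twins objective) and the `buffer` object with `sample`/`add` methods are hypothetical names, not the paper's reference implementation.

```python
import torch

def lump_step(model, ssl_loss, x_current, buffer, alpha=0.1):
    """One LUMP-style update: mix current-task inputs with replayed past-task
    inputs, then compute the self-supervised loss on the interpolated batch."""
    # Hypothetical buffer API: draw a replayed batch the same size as x_current.
    x_past = buffer.sample(x_current.size(0))

    # lambda ~ Beta(alpha, alpha), as in standard Mixup (alpha is illustrative).
    lam = torch.distributions.Beta(alpha, alpha).sample().to(x_current.device)

    # Instance-level Mixup on inputs only (no labels in the unsupervised setting):
    # tilde_x = lam * x_i + (1 - lam) * x_j
    x_mixed = lam * x_current + (1 - lam) * x_past

    # Self-supervised objective (e.g., SimSiam / Barlow Twins) on the mixed batch.
    loss = ssl_loss(model, x_mixed)

    # Store current-task samples so later tasks can replay them (hypothetical API).
    buffer.add(x_current)
    return loss
```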
2. Experiments
After pretraining with the method above, evaluation uses a KNN classifier on the frozen representations.
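A minimal sketch of such a KNN evaluation, assuming a frozen `encoder` that maps images to feature vectors and labeled train/test loaders; this version uses a cosine-similarity majority vote, whereas the paper's exact protocol (e.g., a weighted KNN or a different k) may differ.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def knn_evaluate(encoder, train_loader, test_loader, k=200, device="cuda"):
    """Evaluate frozen representations with a simple KNN classifier."""
    encoder.eval()

    # Build a feature bank from the labeled training set.
    feats, labels = [], []
    for x, y in train_loader:
        feats.append(F.normalize(encoder(x.to(device)), dim=1))
        labels.append(y.to(device))
    feats = torch.cat(feats)      # (N, D), L2-normalized
    labels = torch.cat(labels)    # (N,)

    correct = total = 0
    for x, y in test_loader:
        q = F.normalize(encoder(x.to(device)), dim=1)   # (B, D)
        sim = q @ feats.T                               # cosine similarities
        _, idx = sim.topk(k, dim=1)                     # indices of k nearest neighbors
        neighbor_labels = labels[idx]                   # (B, k)
        pred = neighbor_labels.mode(dim=1).values       # majority vote
        correct += (pred == y.to(device)).sum().item()
        total += y.size(0)
    return correct / total
```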