[Paper Review] 31. Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs
Contents
- Abstract
- Previous Methods
- Proposed Methods, FreezeD
1. Abstract
- GAN : heavy computational cost!
- solution : transfer learning!
  - but prone to overfitting & limited to learning small distribution shifts
- Proposal : FreezeD
  \(\rightarrow\) **simple fine-tuning (freeze some parts of the Discriminator)!**
2. Previous Methods
- Fine-tuning
  - (traditional) fine-tune both G & D
    \(\rightarrow\) but…suffers from overfitting!
- Scale/Shift
  - since naive fine-tuning is prone to overfitting,
    Scale/Shift suggests updating ONLY the normalization layers (scale & shift parameters)
    \(\rightarrow\) poor results due to this restriction! ( sketch after this list )
- GLO (Generative Latent Optimization)
  - fine-tune G with supervised learning
  - loss : sum of L1 loss & perceptual loss ( sketch after this list )
- MineGAN
  - fix G and modify the latent codes ( sketch below )
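Below is a minimal PyTorch sketch of the Scale/Shift idea (not code from the paper): freeze every pre-trained weight and fine-tune only the affine scale/shift parameters of the normalization layers. `pretrained_G` and the optimizer settings are illustrative assumptions.

```python
import torch
from torch import nn

# Scale/Shift sketch: only the affine (scale & shift) parameters of
# normalization layers are trained; everything else stays frozen.
def scale_shift_params(model: nn.Module):
    norm_types = (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm, nn.LayerNorm)
    for module in model.modules():
        if isinstance(module, norm_types):
            yield from module.parameters()

def freeze_all_but_scale_shift(model: nn.Module):
    for p in model.parameters():
        p.requires_grad = False          # freeze everything ...
    for p in scale_shift_params(model):
        p.requires_grad = True           # ... except the scale/shift parameters

# usage (hypothetical pre-trained generator `pretrained_G`):
# freeze_all_but_scale_shift(pretrained_G)
# optimizer = torch.optim.Adam(scale_shift_params(pretrained_G), lr=2e-4)
```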
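And a sketch of the GLO-style objective, assuming `G` is the generator, `z` are learnable latent codes paired with real images `x`, and `feat` is some fixed feature extractor (e.g. a pre-trained VGG) used for the perceptual term; these names are mine, not the paper's.

```python
import torch.nn.functional as F

# GLO-style supervised loss: pixel-wise L1 + perceptual (feature-space) term.
# `lam` weights the perceptual term and is an illustrative choice.
def glo_loss(G, feat, z, x, lam=1.0):
    x_hat = G(z)                                   # reconstruction from latent code
    l1 = F.l1_loss(x_hat, x)                       # L1 loss in pixel space
    perceptual = F.l1_loss(feat(x_hat), feat(x))   # perceptual loss in feature space
    return l1 + lam * perceptual
```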
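For MineGAN, the rough idea (as I understand it) is that G stays fixed and only a small "miner" network, which re-maps input noise into G's latent space, is trained; the sizes below are illustrative guesses, not the paper's architecture.

```python
import torch
from torch import nn

# MineGAN-style sketch: G is frozen, and a small miner M adapts the latent codes.
class Miner(nn.Module):
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, z_dim), nn.ReLU(),
            nn.Linear(z_dim, z_dim),
        )

    def forward(self, z):
        return self.net(z)

# usage (hypothetical `pretrained_G`):
# for p in pretrained_G.parameters():
#     p.requires_grad = False                        # G stays fixed
# miner = Miner()
# fake = pretrained_G(miner(torch.randn(16, 128)))   # modified latent codes
```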
3. Proposed Methods, FreezeD
FreezeD outperforms the methods above!
- FreezeD
  - freeze the lower layers of D
    ( just fine-tune the upper layers )
  - simple & effective baseline! ( sketch at the end of this section )
- L2-SP
  - effective for classifiers
  - regularizes the target model not to move far from the source model ( sketch at the end of this section )
- Feature distillation
  - distill the activations of the source & target models ( sketch below )
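A minimal sketch of FreezeD, assuming the discriminator exposes its layers as an ordered `D.blocks` (the real StyleGAN / SNGAN discriminators name their blocks differently): the lowest `num_frozen` blocks are frozen and only the upper blocks are fine-tuned, while G is trained as usual.

```python
import torch
from torch import nn

def apply_freeze_d(D: nn.Module, num_frozen: int):
    """Freeze the first `num_frozen` (lower) blocks of D; upper blocks stay trainable."""
    for i, block in enumerate(D.blocks):     # `D.blocks` is an assumed layout
        trainable = i >= num_frozen
        for p in block.parameters():
            p.requires_grad = trainable

# usage (hypothetical): fine-tune G as usual, but give D's optimizer
# only the parameters that are still trainable
# apply_freeze_d(D, num_frozen=4)
# d_opt = torch.optim.Adam([p for p in D.parameters() if p.requires_grad], lr=2e-4)
```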
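A sketch of the L2-SP regularizer: keep a copy of the source (pre-trained) weights and penalize the squared distance of the fine-tuned weights from that starting point; `alpha` is a hypothetical weight.

```python
import torch

def l2_sp_penalty(model, source_state, alpha=0.01):
    # sum of squared distances between current and source parameters
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + ((p - source_state[name]) ** 2).sum()
    return alpha * penalty

# usage (hypothetical):
# source_state = {k: v.detach().clone() for k, v in model.named_parameters()}
# loss = adversarial_loss + l2_sp_penalty(model, source_state)
```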
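And a sketch of feature distillation between the source and target discriminators; `features()` is a hypothetical method returning intermediate activations (real code would typically collect them with forward hooks).

```python
import torch
import torch.nn.functional as F

def feature_distillation_loss(source_D, target_D, x, lam=1.0):
    with torch.no_grad():
        source_feat = source_D.features(x)   # activations of the frozen source model
    target_feat = target_D.features(x)       # activations of the model being fine-tuned
    return lam * F.mse_loss(target_feat, source_feat)
```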