[Paper Review] 31. Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs


Contents

  1. Abstract
  2. Previous Methods
  3. Proposed Methods, FreezeD


1. Abstract

GANs : heavy computational cost to train from scratch!

  • solution : transfer learning!

    but prone to overfitting & limited to learning small distribution shifts


Proposal : FreezeD

\(\rightarrow\) **simple fine-tuning (freeze some parts of the Discriminator)!**


2. Previous Methods

  1. Fine tuning

    • (traditional) fine-tune both G & D

      \(\rightarrow\) but… suffers from overfitting!

  2. Scale/Shift

    • since naive fine-tuning is prone to overfitting…

      Scale/Shift updates ONLY the normalization layers' scale & shift parameters ( first sketch after this list )

      \(\rightarrow\) poor results due to this restriction!

  3. GLO (Generative Latent Optimization)

    • fine-tune G with supervised learning ( second sketch below )
    • loss : sum of L1 loss & perceptual loss
  4. MineGAN

    • fix G and modify the latent codes with a small "miner" network ( third sketch below )
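
A minimal PyTorch sketch of the Scale/Shift idea: freeze everything, then re-enable only the affine (scale & shift) parameters of the normalization layers. The set of layer types checked below is illustrative, not from the paper.

```python
import torch
import torch.nn as nn

def keep_only_scale_shift(model: nn.Module):
    """Freeze all parameters except the scale (weight) and
    shift (bias) of normalization layers."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d,
                          nn.InstanceNorm2d, nn.LayerNorm)):
            if m.weight is not None:  # InstanceNorm may have affine=False
                m.weight.requires_grad = True
            if m.bias is not None:
                m.bias.requires_grad = True

# pass only the still-trainable parameters to the optimizer:
# opt = torch.optim.Adam((p for p in G.parameters() if p.requires_grad), lr=2e-4)
```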
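
Second, a rough sketch of the GLO-style supervised loss, assuming a frozen VGG16 slice as the perceptual feature extractor ( the extractor choice and the `features[:16]` cut are assumptions; ImageNet input normalization is omitted for brevity ):

```python
import torch.nn.functional as F
from torchvision.models import vgg16

# frozen feature extractor for the perceptual term (illustrative choice)
vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg.parameters():
    p.requires_grad = False

def glo_loss(fake, real, lam=1.0):
    """GLO-style loss : L1 reconstruction + perceptual (VGG-feature) L1."""
    return F.l1_loss(fake, real) + lam * F.l1_loss(vgg(fake), vgg(real))
```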
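
Third, a sketch of the MineGAN idea : G stays fixed, and a small "miner" network reshapes the latent codes before they enter G ( architecture and sizes below are illustrative ):

```python
import torch
import torch.nn as nn

class Miner(nn.Module):
    """Small MLP that transforms latent codes so that a *fixed* G
    can cover the target distribution."""
    def __init__(self, z_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, z_dim), nn.ReLU(),
            nn.Linear(z_dim, z_dim),
        )

    def forward(self, z):
        return self.net(z)

# z = torch.randn(batch_size, 128)
# fake = G(miner(z))   # only the miner (and D) receive gradient updates
```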


3. Proposed Methods, FreezeD

FreezeD outperforms the methods above!

  1. FreezeD

    • freeze the lower layers of D

      ( just fine-tune the upper layers; first sketch after this list )

    • simple & effective baseline!

  2. L2-SP

    • effective for classifiers
    • regularizes the target model not to move far from the source model ( second sketch below )
  3. Feature distillation

    • distills the activations of the source & target models ( third sketch below )
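
A minimal sketch of FreezeD, assuming D exposes its feature blocks as an `nn.ModuleList` named `D.blocks` ( this attribute name is an assumption for illustration; the paper applies the idea to StyleGAN / SNGAN-projection discriminators ):

```python
import torch

def freeze_d(D, n_freeze: int):
    """FreezeD : freeze the first `n_freeze` (lower) blocks of D;
    the remaining upper layers are fine-tuned as usual."""
    for block in D.blocks[:n_freeze]:
        for p in block.parameters():
            p.requires_grad = False

# freeze_d(D, n_freeze=4)
# opt_D = torch.optim.Adam((p for p in D.parameters() if p.requires_grad), lr=2e-4)
```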
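
Second, a sketch of the L2-SP penalty \(\frac{\alpha}{2} \lVert \theta - \theta_0 \rVert_2^2\), which anchors the fine-tuned weights \(\theta\) to the pre-trained source weights \(\theta_0\) instead of to zero:

```python
def l2_sp_penalty(model, source_params, alpha=0.01):
    """Penalize distance from the source (pre-trained) weights."""
    penalty = sum((p - source_params[n]).pow(2).sum()
                  for n, p in model.named_parameters())
    return 0.5 * alpha * penalty

# snapshot the source weights once, before fine-tuning:
# source_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# loss = task_loss + l2_sp_penalty(model, source_params)
```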
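
Third, a sketch of feature distillation between a frozen source D and the target D being fine-tuned; `extract_features` is a hypothetical hook for grabbing intermediate activations:

```python
import torch.nn.functional as F

def feature_distill_loss(feat_target, feat_source):
    """Match the target model's activations to the frozen
    source model's activations (simple L2 distillation)."""
    return F.mse_loss(feat_target, feat_source.detach())

# feat_source = D_source.extract_features(x)  # frozen pre-trained copy
# feat_target = D_target.extract_features(x)  # model being fine-tuned
# loss = gan_loss + lam * feature_distill_loss(feat_target, feat_source)
```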
