Revisiting Self-Supervised Visual Representation Learning


Contents

  0. Abstract
  1. Introduction


0. Abstract

Previous works:

  • mostly focus on designing pretext tasks
  • but not on the choice of CNN architecture

\(\rightarrow\) this paper revisits the previously proposed models!


1. Introduction

4 main contributions:

  1. best architecture design: the best design for FULLY-supervised learning \(\neq\) the best for SELF-supervised learning
  2. with the ResNet architecture (unlike AlexNet)
    • learned representations do not degrade toward the end of the model
  3. increasing the model complexity (width) of the CNN

    \(\rightarrow\) increases the quality of the learned visual representations (see the widening sketch after this list)

  4. (in the evaluation procedure) the linear model's accuracy is sensitive to the lr (see the evaluation sketch below)
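To make contribution 3 concrete, here is a minimal sketch (not from the paper) of widening a CNN: every layer's channel count is scaled by a common widening factor, analogous to how the paper widens its ResNet-style models. `make_encoder` and `width_factor` are illustrative names, not the paper's code.

```python
import torch
import torch.nn as nn

def make_encoder(width_factor: int = 1) -> nn.Sequential:
    """Toy CNN whose channel counts all scale with `width_factor`,
    mimicking how the paper widens its ResNet-style models."""
    w = 16 * width_factor
    return nn.Sequential(
        nn.Conv2d(3, w, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(w, 2 * w, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # global pool -> (B, 2*w) features
    )

x = torch.randn(2, 3, 32, 32)
print(make_encoder(width_factor=1)(x).shape)  # torch.Size([2, 32])
print(make_encoder(width_factor=4)(x).shape)  # torch.Size([2, 128]) — wider representation
```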
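For contribution 4, a hedged sketch of the standard linear evaluation protocol: freeze the pre-trained backbone, train only a logistic-regression head on top of its features, and sweep the learning rate, since the resulting accuracy is sensitive to it. The backbone choice, feature dimension, lr grid, and dummy batch below are assumptions for illustration, not values from the paper.

```python
import torch
import torch.nn as nn
import torchvision

# Frozen backbone standing in for a self-supervised pre-trained model.
backbone = torchvision.models.resnet50(weights=None)
backbone.fc = nn.Identity()             # expose the 2048-d pre-logit features
for p in backbone.parameters():
    p.requires_grad = False             # freeze: only the linear head is trained
backbone.eval()

head = nn.Linear(2048, 1000)            # logistic-regression evaluation head
criterion = nn.CrossEntropyLoss()

# The paper's observation: the head's final accuracy depends strongly on
# the learning rate, so lr must be tuned per model (this grid is illustrative).
for lr in (0.01, 0.1, 1.0):
    optimizer = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)
    with torch.no_grad():
        feats = backbone(torch.randn(2, 3, 224, 224))   # dummy batch
    loss = criterion(head(feats), torch.randint(0, 1000, (2,)))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    # ... in practice: full training runs per lr, keep the best on validation
```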


Figure 2 from the paper.
