Data augmentation techniques for small image datasets?

16,122

Solution 1

A good recap can be found here, section 1 on Data Augmentation: namely flips, random crops, color jittering, and lighting noise:

Krizhevsky et al. proposed fancy PCA when training the famous AlexNet in 2012. Fancy PCA alters the intensities of the RGB channels along the principal components of the pixel values in the training set.
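To make the idea concrete, here is a minimal numpy sketch of fancy PCA (the function name is mine; the 0.1 scale for the Gaussian draws follows the AlexNet paper, which samples the alphas once per image):

```python
import numpy as np

def fancy_pca(image, alpha_std=0.1, rng=None):
    """Fancy PCA color augmentation (Krizhevsky et al., 2012).

    Adds random multiples of the principal components of the RGB
    pixel distribution to every pixel. `image` is an HxWx3 float
    array in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    pixels = image.reshape(-1, 3)
    centered = pixels - pixels.mean(axis=0)
    cov = np.cov(centered, rowvar=False)        # 3x3 RGB covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # principal components
    alphas = rng.normal(0.0, alpha_std, size=3) # one draw per component
    shift = eigvecs @ (alphas * eigvals)        # per-channel offset
    return np.clip(image + shift, 0.0, 1.0)
```

The shift is constant across the image but different for every training pass, so object identity is preserved while the lighting/color statistics vary.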

Alternatively, you can have a look at the Kaggle Galaxy Zoo challenge: the winners wrote a very detailed blog post covering the same kind of techniques:

  • rotation,
  • translation,
  • zoom,
  • flips,
  • color perturbation.

As stated, they also do it "in realtime, i.e. during training".

For example, here is a practical Torch implementation by Facebook (for ResNet training).
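A rough numpy sketch of applying such transforms on the fly, per batch, might look like this (illustrative only, not the Torch code linked above; the ranges are arbitrary):

```python
import numpy as np

def augment_batch(batch, rng=None):
    """On-the-fly augmentation: random horizontal flip, small
    translation, and brightness perturbation for each image.
    `batch` is an NxHxWxC float array in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    out = batch.copy()
    for i in range(out.shape[0]):
        img = out[i]
        if rng.random() < 0.5:                    # horizontal flip
            img = img[:, ::-1]
        dy, dx = rng.integers(-4, 5, size=2)      # shift up to 4 px
        img = np.roll(img, (dy, dx), axis=(0, 1))
        img = np.clip(img * rng.uniform(0.9, 1.1), 0.0, 1.0)  # brightness
        out[i] = img
    return out
```

Because the transforms are sampled fresh every epoch, the network effectively never sees the exact same image twice, which is the point of doing it during training rather than precomputing an enlarged dataset.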

Solution 2

I've collected a couple of augmentation techniques in my master's thesis, page 80. It includes:

  • zoom,
  • crop,
  • flip (horizontal/vertical),
  • rotation,
  • scaling,
  • shearing,
  • channel shifts (RGB, HSV),
  • contrast,
  • noise,
  • vignetting.
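A few of the color-space entries from that list can be sketched in numpy as follows (function names and parameter ranges are illustrative, not taken from the thesis):

```python
import numpy as np

def channel_shift(image, max_shift=0.05, rng=None):
    """Shift each channel by an independent random offset.
    `image` is an HxWxC float array in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    shifts = rng.uniform(-max_shift, max_shift, size=image.shape[-1])
    return np.clip(image + shifts, 0.0, 1.0)

def adjust_contrast(image, factor):
    """Scale contrast around the per-image mean intensity."""
    mean = image.mean()
    return np.clip(mean + factor * (image - mean), 0.0, 1.0)

def add_gaussian_noise(image, sigma=0.02, rng=None):
    """Add zero-mean Gaussian noise to every pixel."""
    rng = np.random.default_rng() if rng is None else rng
    return np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)
```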
Author by whitewalker

Updated on June 03, 2022

Comments

  • whitewalker
    whitewalker almost 2 years

    Currently I am training deep CNNs on small logo datasets similar to FlickrLogos-32. Training larger networks requires more data, so I am using augmentation. The best I'm doing right now is affine transformations (featurewise normalization, featurewise centering, rotation, width/height shift, horizontal/vertical flip). But for bigger networks I need more augmentation. I tried searching the forum of Kaggle's National Data Science Bowl but couldn't find much help. There is code for some methods given here, but I'm not sure which would be useful. What are some other (or better) image data augmentation techniques that could be applied to this kind of dataset (or to any image dataset in general), besides affine transformations?
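    For reference, the affine transformations mentioned above (rotation plus translation) can be sketched with plain numpy using inverse-mapped nearest-neighbour sampling; the function name and ranges here are hypothetical:

```python
import numpy as np

def random_affine(image, max_angle=15.0, max_shift=0.1, rng=None):
    """Random rotation about the image centre plus a random
    translation, applied by inverse-mapping each output pixel to a
    source pixel (nearest neighbour). `image` is HxW or HxWxC."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    angle = np.deg2rad(rng.uniform(-max_angle, max_angle))
    ty, tx = rng.uniform(-max_shift, max_shift, 2) * (h, w)
    c, s = np.cos(angle), np.sin(angle)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    # map output coordinates back into the source image
    y0, x0 = ys - cy - ty, xs - cx - tx
    src_y = np.clip(np.round(c * y0 + s * x0 + cy), 0, h - 1).astype(int)
    src_x = np.clip(np.round(-s * y0 + c * x0 + cx), 0, w - 1).astype(int)
    return image[src_y, src_x]
```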