Abstract

The aim of this work is to study the reproducibility of the paper Latent Space Smoothing for Individually Fair Representations (LASSI) by Peychev et al. In doing so, we aim to verify the claims that the authors of the original paper make about their approach to incorporating individual fairness into deep learning models.

Specifically, the authors claim that:

  • (i) LASSI enforces individual fairness by defining image similarity with respect to a generative model via attribute manipulation
  • (ii) LASSI can extend individual fairness to classification tasks with multiple sensitive attributes
  • (iii) LASSI can learn fair and transferable representations which are useful for unseen downstream tasks
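To make claim (i) concrete, the sketch below illustrates the underlying idea of similarity via attribute manipulation: two inputs are considered to represent the same individual if their latent codes differ only along a sensitive-attribute direction in a generative model's latent space. All names here (`decoder`, `attr_dir`, `perturb`) are hypothetical, and a toy linear map stands in for the real generative model; this is a minimal illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, image_dim = 8, 16

# Stand-in for a pretrained generative model's decoder (assumption:
# a simple linear map instead of a real GAN/flow generator).
decoder = rng.standard_normal((image_dim, latent_dim))

z = rng.standard_normal(latent_dim)  # latent code of one individual

# Assumed sensitive-attribute direction in latent space (hypothetical:
# in practice this direction is estimated from labeled attribute data).
attr_dir = np.zeros(latent_dim)
attr_dir[0] = 1.0

def perturb(z, direction, t):
    """Latent code of a counterfactual, attribute-manipulated individual."""
    return z + t * direction

# Decoding both latents yields two images that differ only in the
# sensitive attribute; a fair classifier should treat them identically.
z_cf = perturb(z, attr_dir, 0.5)
x, x_cf = decoder @ z, decoder @ z_cf

# The latents agree in every coordinate except the attribute direction.
print(np.allclose(z[1:], z_cf[1:]))
```

Enforcing individual fairness then amounts to requiring the learned representation to be (certifiably) stable over the whole perturbation range of `t`.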

LASSI Experiments