Y. Bengio, A. Courville, and P. Vincent, Representation learning: A review and new perspectives, 2013.
DOI : 10.1109/tpami.2013.50

URL : http://www.cs.princeton.edu/courses/archive/spring13/cos598C/Representation Learning - A Review and New Perspectives.pdf

S. Bickel, M. Brückner, and T. Scheffer, Discriminative learning for differing training and test distributions, ICML, 2007.
DOI : 10.1145/1273496.1273507

URL : http://imls.engr.oregonstate.edu/www/htdocs/proceedings/icml2007/papers/303.pdf

D. Bouchacourt, R. Tomioka, and S. Nowozin, Multi-level variational autoencoder: Learning disentangled representations from grouped observations, AAAI Conference on Artificial Intelligence, 2018.

T. Q. Chen, X. Li, R. Grosse, and D. Duvenaud, Isolating sources of disentanglement in variational autoencoders, NeurIPS, 2018.

X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever et al., InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets, NIPS, 2016.

E. L. Denton, Unsupervised learning of disentangled representations from video, NIPS, 2017.

G. Desjardins, A. Courville, and Y. Bengio, Disentangling factors of variation via generative entangling, ICML, 2012.

C. Donahue, Z. C. Lipton, A. Balsubramani, and J. McAuley, Semantically decomposing the latent spaces of generative adversarial networks, 2018.

J. Donahue, P. Krähenbühl, and T. Darrell, Adversarial feature learning, 2017.

V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb et al., Adversarially learned inference, 2017.

P. Ekman and E. L. Rosenberg, What the face reveals: Basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS), 1997.

Z. Feng, X. Wang, C. Ke, A. Zeng, D. Tao et al., Dual swap disentangling, NeurIPS, pp. 5898-5908, 2018.

A. Gonzalez-Garcia, J. van de Weijer, and Y. Bengio, Image-to-image translation for cross-domain disentanglement, 2018.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley et al., Generative adversarial nets, NIPS, 2014.

I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot et al., beta-VAE: Learning basic visual concepts with a constrained variational framework, 2017.

Generate a random integer in the range {1, ..., 10} using a uniform distribution. Apply a dilation operation over the image using a square kernel with pixel size equal to the generated number (see the sketch after these steps).

Normalize the resulting vector as ĉ = c / ||c||₁. Multiply the RGB components of all the pixels in the image by ĉ.
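
The two steps above can be implemented with standard image-processing tools. Below is a minimal Python sketch assuming OpenCV and NumPy; the function names and the uniform draw of the colour vector c are assumptions, as the text does not specify how c is generated or which library is used.

# Minimal sketch of the two transformations described above, assuming
# OpenCV (cv2) and NumPy. The function names and the uniform draw of the
# colour vector c are assumptions; the text does not specify how c is
# generated or which image library is used.
import cv2
import numpy as np

rng = np.random.default_rng()

def random_dilation(image):
    """Dilate the image with a square kernel whose pixel size is a random
    integer drawn uniformly from {1, ..., 10}."""
    k = int(rng.integers(1, 11))              # uniform over {1, ..., 10}
    kernel = np.ones((k, k), dtype=np.uint8)  # square structuring element
    return cv2.dilate(image, kernel, iterations=1)

def random_color_scaling(image):
    """Draw a 3-vector c (assumed uniform in [0, 1]^3 here), L1-normalize it
    to c_hat = c / ||c||_1, and multiply the RGB channels by c_hat."""
    c = rng.uniform(size=3)
    c_hat = c / np.abs(c).sum()                # L1 normalization
    scaled = image.astype(np.float32) * c_hat  # per-channel scaling
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Example usage on an H x W x 3 uint8 RGB image:
# img = cv2.cvtColor(cv2.imread("face.png"), cv2.COLOR_BGR2RGB)
# img = random_color_scaling(random_dilation(img))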

… (Mollahosseini et al., 2017) are not accurate. As detailed in the original paper, the inter-observer agreement is significantly low for neutral images. In contrast, in our reference set each image was annotated as "neutral" / "non-neutral" by two different annotators.