S. Li, W. Deng, and J. Du, Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild, CVPR, pp.2584-2593, 2017.

Y. Zhou, H. Xue, and X. Geng, Emotion distribution recognition from facial expressions, ACM Multimedia (MM), 2015.

A. Dantcheva, P. Bilinski, H. T. Nguyen, J. Broutart, and F. Bremond, Expression recognition for severely demented patients in music reminiscence-therapy, 2017.
URL : https://hal.archives-ouvertes.fr/hal-01543231

H. Jung, S. Lee, J. Yim, S. Park, and J. Kim, Joint fine-tuning in deep neural networks for facial expression recognition, ICCV, pp.2983-2991, 2015.

X. Zhao, X. Liang, L. Liu, T. Li, Y. Han et al., Peak-piloted deep network for facial expression recognition, ECCV, pp.425-442, 2016.

H. Ding, S. K. Zhou, and R. Chellappa, FaceNet2ExpNet: Regularizing a deep face recognition net for expression recognition, Int. Conf. on Automatic Face & Gesture Recognition (FG), pp.118-126, 2017.

Z. Meng, P. Liu, J. Cai, S. Han, and Y. Tong, Identity-aware convolutional neural network for facial expression recognition, Int. Conf. on Automatic Face & Gesture Recognition (FG), pp.558-565, 2017.

H. Ng, V. D. Nguyen, V. Vonikakis, and S. Winkler, Deep learning for emotion recognition on small datasets using transfer learning, ACM Int. Conf. on Multimodal Interaction, pp.443-449, 2015.

S. Happy, A. Dantcheva, and F. Bremond, A weakly supervised learning technique for classifying facial expressions, Pattern Recognition Letters, vol.128, pp.162-168, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02381439

Z. Zhang, F. Ringeval, B. Dong, E. Coutinho, E. Marchi et al., Enhanced semi-supervised learning for multimodal emotion recognition, ICASSP, pp.5185-5189, 2016.

C. Rosenberg, M. Hebert, and H. Schneiderman, Semi-supervised self-training of object detection models, WACV Workshops, 2005.

C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, Rethinking the inception architecture for computer vision, CVPR, pp.2818-2826, 2016.

T. Wang, J. Huan, and B. Li, Data dropout: Optimizing training data for convolutional neural networks, Int. Conf. on Tools with Artificial Intelligence (ICTAI), pp.39-46, 2018.

T. Wang, J. Huan, and M. Zhu, Instance-based deep transfer learning, pp.367-375, 2019.

L. Jiang, Z. Zhou, T. Leung, L. Li, and L. Fei-Fei, MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels, ICML, pp.2309-2318, 2018.

D. Lee, Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks, Workshop on Challenges in Representation Learning, ICML, vol.3, p.2, 2013.

P. Liu, S. Han, Z. Meng, and Y. Tong, Facial expression recognition via a boosted deep belief network, CVPR, pp.1805-1812, 2014.

A. T. Lopes, E. de Aguiar, A. F. De Souza, and T. Oliveira-Santos, Facial expression recognition with convolutional neural networks: coping with few data and the training sample order, Pattern Recog. (PR), vol.61, pp.610-628, 2017.

J. Zeng, S. Shan, and X. Chen, Facial expression recognition with inconsistently annotated datasets, ECCV, pp.222-259, 2018.

M. Oquab, L. Bottou, I. Laptev, and J. Sivic, Learning and transferring mid-level image representations using convolutional neural networks, CVPR, pp.1717-1724, 2014.
URL : https://hal.archives-ouvertes.fr/hal-00911179

O. M. Parkhi, A. Vedaldi, and A. Zisserman, Deep face recognition, BMVC, vol.1, p.6, 2015.

A. Coates and A. Y. Ng, Learning feature representations with k-means, Neural Networks: Tricks of the Trade, pp.561-580, 2012.

A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko, Semi-supervised learning with ladder networks, NIPS, pp.3546-3554, 2015.

A. Radford, L. Metz, and S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, 2016.

A. Veit, N. Alldrin, G. Chechik, I. Krasin, A. Gupta et al., Learning from noisy large-scale datasets with minimal supervision, pp.839-847, 2017.

Y. Li, J. Yang, Y. Song, L. Cao, J. Luo et al., Learning from noisy labels with distillation, pp.1910-1918, 2017.

P. W. Koh and P. Liang, Understanding black-box predictions via influence functions, pp.1885-1894, 2017.

P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar et al., The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression, CVPR Workshops, pp.94-101, 2010.

O. Langner, R. Dotsch, G. Bijlstra, D. H. Wigboldus, S. T. Hawk et al., Presentation and validation of the Radboud Faces Database, Cognition and Emotion, vol.24, issue.8, pp.1377-1388, 2010.

A. Mollahosseini, B. Hasani, and M. H. Mahoor, AffectNet: A database for facial expression, valence, and arousal computing in the wild, IEEE Trans. on Affective Computing, 2017.

Y. Li, J. Zeng, S. Shan, and X. Chen, Patch-gated CNN for occlusion-aware facial expression recognition, 2018.

C. Kervadec, V. Vielzeuf, S. Pateux, A. Lechervy, and F. Jurie, CAKE: Compact and accurate k-dimensional representation of emotion, BMVC Workshop, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01849908

K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, Joint face detection and alignment using multitask cascaded convolutional networks, IEEE Signal Processing Letters, vol.23, issue.10, pp.1499-1503, 2016.

K. Sikka, G. Sharma, and M. Bartlett, LOMo: Latent ordinal model for facial analysis in videos, CVPR, pp.5580-5589, 2016.

W. Sun, H. Zhao, and Z. Jin, An efficient unconstrained facial expression recognition algorithm based on stack binarized auto-encoders and binarized neural networks, Neurocomputing, vol.267, pp.385-395, 2017.

Y. Zhou and B. E. Shi, Action unit selective feature maps in deep networks for facial expression recognition, pp.2031-2038, 2017.

P. Carcagnì, M. Del Coco, M. Leo, and C. Distante, Facial expression recognition and histograms of oriented gradients: a comprehensive study, SpringerPlus, vol.4, issue.1, p.645, 2015.

V. Vielzeuf, C. Kervadec, S. Pateux, A. Lechervy, and F. Jurie, An Occam's razor view on learning audiovisual emotion recognition with small training sets, ICMI, pp.589-593, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01854019