D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua, StyleBank: an explicit representation for neural image style transfer, Proc. Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

Y. Chen, J. Mairal, and Z. Harchaoui, Fast and robust archetypal analysis for representation learning, Proc. Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
URL: https://hal.archives-ouvertes.fr/hal-00995911

A. Cutler and L. Breiman, Archetypal analysis, Technometrics, vol.36, issue.4, pp.338-347, 1994.

V. Dumoulin, J. Shlens, and M. Kudlur, A learned representation for artistic style, Proc. International Conference on Learning Representations (ICLR), 2017.

L. A. Gatys, A. S. Ecker, and M. Bethge, A neural algorithm of artistic style, 2015.

L. A. Gatys, A. S. Ecker, and M. Bethge, Texture synthesis using convolutional neural networks, Adv. Neural Information Processing Systems (NIPS), 2015.

G. Ghiasi, H. Lee, M. Kudlur, V. Dumoulin, and J. Shlens, Exploring the structure of a real-time, arbitrary neural artistic stylization network, Proc. British Machine Vision Conference (BMVC), 2017.

D. J. Heeger and J. R. Bergen, Pyramid-based texture analysis/synthesis, Proc. 22nd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 1995.

X. Huang and S. Belongie, Arbitrary style transfer in real-time with adaptive instance normalization, Proc. International Conference on Computer Vision (ICCV), 2017.

J. Johnson, A. Alahi, and L. Fei-Fei, Perceptual losses for real-time style transfer and super-resolution, Proc. European Conference on Computer Vision (ECCV), 2016.

Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu et al., Universal style transfer via feature transforms, Adv. Neural Information Processing Systems (NIPS), 2017.

F. Luan, S. Paris, E. Shechtman, and K. Bala, Deep photo style transfer, Proc. Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

J. Mairal, F. Bach, and J. Ponce, Sparse modeling for image and vision processing, Foundations and Trends in Computer Graphics and Vision, vol.8, issue.2-3, pp.85-283, 2014.
URL: https://hal.archives-ouvertes.fr/hal-01081139

J. Mairal, F. Bach, J. Ponce, and G. Sapiro, Online learning for matrix factorization and sparse coding, Journal of Machine Learning Research, vol.11, pp.19-60, 2010.
URL: https://hal.archives-ouvertes.fr/inria-00408716

B. A. Olshausen and D. J. Field, Emergence of simple-cell receptive field properties by learning a sparse code for natural images, Nature, vol.381, pp.607-609, 1996.

P. Paatero and U. Tapper, Positive matrix factorization: a non-negative factor model with optimal utilization of error estimates of data values, Environmetrics, vol.5, issue.2, pp.111-126, 1994.

A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang et al., Automatic differentiation in PyTorch, 2017.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion et al., Scikit-learn: machine learning in Python, Journal of Machine Learning Research (JMLR), vol.12, pp.2825-2830, 2011.
URL: https://hal.archives-ouvertes.fr/hal-00650905

J. Portilla and E. P. Simoncelli, A parametric texture model based on joint statistics of complex wavelet coefficients, International Journal of Computer Vision, vol.40, issue.1, pp.49-70, 2000.

M. Ruder, A. Dosovitskiy, and T. Brox, Artistic style transfer for videos and spherical images, International Journal of Computer Vision, 2018.

K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, 2015.

D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky, Texture networks: feed-forward synthesis of textures and stylized images, Proc. International Conference on Machine Learning (ICML), 2016.

L. van der Maaten and G. Hinton, Visualizing data using t-SNE, Journal of Machine Learning Research, vol.9, pp.2579-2605, 2008.

M. Yeh, S. Tang, A. Bhattad, and D. A. Forsyth, Quantitative evaluation of style transfer, 2018.