N. Esnault, Production automatique d'interfaces graphiques 3D pour le Web, PhD thesis, 2011.
URL : https://hal.archives-ouvertes.fr/tel-00574772

N. Esnault, J. Royan, R. Cozot, and C. Bouville, Métaphores de présentation de l'information basées sur des méthodes procédurales, Association Française d'Informatique Graphique (AFIG), 2009.

N. Esnault, J. Royan, R. Cozot, and C. Bouville, A flexible framework to personalize 3D web users experience, 15th International Conference on Web 3D Technology, pp. 35-44, 2010.

N. Esnault, J. Royan, R. Cozot, and C. Bouville, A style-sheet technique for creating 3D web contents, Web3DW '10: Proceedings of the International Conference Web Virtual Reality and Three-Dimensional Worlds, IADIS, 2010.

N. Esnault, J. Royan, R. Cozot, and C. Bouville, A style-sheet technique for creating interactive 3D web contents, IADIS International Journal on WWW/Internet, vol. 9, issue 1, 2011.

C. Collin, Carte de photons progressive dans des scènes contenant des milieux participatifs, 2011.

C. Collin, M. Ribardière, R. Cozot, and K. Bouatouch, Progressive volume photon tracing, ACM SIGGRAPH 2012 Posters, SIGGRAPH '12, 2012.
DOI : 10.1145/2343045.2343086

URL : https://hal.archives-ouvertes.fr/hal-01143410

C. Collin, M. Ribardière, A. Gruson, R. Cozot, S. N. Pattanaik, and K. Bouatouch, Visibility-driven progressive volume photon tracing, The Visual Computer, vol. 30, issue 2, pp. 849-859, 2013.
DOI : 10.1007/s00371-013-0845-x

URL : https://hal.archives-ouvertes.fr/hal-01143410

A. Gruson, A. Hakke Patil, R. Cozot, K. Bouatouch, and S. N. Pattanaik, Light propagation maps on parallel graphics architectures, in Hank Childs, Torsten Kuhlen, and Fabio Marton, editors: Eurographics Symposium on Parallel Graphics and Visualization (EGPGV), pp. 81-88, 2012.

S. Hillaire, Contribution to the Study of Visual Attention Models and Gaze Tracking to Improve Visual Feedback in 3D Interactive Applications, PhD thesis, INSA de Rennes, 2010.

S. Hillaire, A. Lécuyer, R. Cozot, and G. Casiez, Using an eye-tracking system to improve camera motions and depth-of-field blur effects in virtual environments, IEEE Virtual Reality Conference, pp. 47-50, 2008.

S. Hillaire, A. Lécuyer, T. Regia-corte, R. Cozot, J. Royan et al., A real-time visual attention model for predicting gaze point during first-person exploration of virtual environments, Proceedings of the 17th ACM Symposium on Virtual Reality Software and Technology, VRST '10, pp.191-198, 2010.
DOI : 10.1145/1889863.1889907

URL : https://hal.archives-ouvertes.fr/inria-00540987

S. Hillaire, G. Breton, N. Ouarti, R. Cozot, and A. Lécuyer, Using a Visual Attention Model to Improve Gaze Tracking Systems in Interactive 3D Applications, Computer Graphics Forum, vol. 29, issue 6, pp. 1830-1841, 2010.
DOI : 10.1111/j.1467-8659.2010.01651.x

URL : https://hal.archives-ouvertes.fr/inria-00471151

S. Hillaire, A. Lécuyer, T. Regia-corte, R. Cozot, J. Royan et al., Design and Application of Real-Time Visual Attention Model for the Exploration of 3D Virtual Environments, IEEE Transactions on Visualization and Computer Graphics, vol.18, issue.3, pp.356-368, 2012.
DOI : 10.1109/TVCG.2011.154

URL : https://hal.archives-ouvertes.fr/hal-01142540

S. Hillaire, A. Lécuyer, R. Cozot, and G. Casiez, Effets de flou visuel pour la navigation en environnements virtuels en vue à la première personne, Association Française d'Informatique Graphique (AFIG), pp. 83-90, 2007.

S. Hillaire, A. Lécuyer, R. Cozot, and G. Casiez, Depth-of-field blur effects for first-person navigation in virtual environments, Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST), pp. 203-206, 2007.
URL : https://hal.archives-ouvertes.fr/inria-00471162

S. Hillaire, A. Lécuyer, R. Cozot, and G. Casiez, Depth-of-Field Blur Effects for First-Person Navigation in Virtual Environments, IEEE Computer Graphics and Applications, vol. 28, issue 6, pp. 47-55, 2008.
DOI : 10.1109/MCG.2008.113

URL : https://hal.archives-ouvertes.fr/inria-00471162

E. Reinhard, T. Kunkel, Y. Marion, J. Brouillat, R. Cozot, et al., Image display algorithms for high- and low-dynamic-range display devices, Journal of the Society for Information Display, vol. 15, issue 12, pp. 997-1014, 2007.
DOI : 10.1889/1.2825110

R. Boitard, D. Thoreau, R. Cozot, and K. Bouatouch, Temporal coherency in video tone mapping, a survey, HDRi2013 -First International Conference and SME Workshop on HDR imaging, 2013.
URL : https://hal.archives-ouvertes.fr/hal-01142308

R. Boitard, D. Thoreau, R. Cozot, and K. Bouatouch, Survey of temporal artifacts in video tone mapping, HDRi2014 -Second International Conference and SME Workshop on HDR imaging, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01142294

R. Boitard, K. Bouatouch, R. Cozot, D. Thoreau, and A. Gruson, Temporal coherency for video tone mapping, Applications of Digital Image Processing XXXV, 2012.
DOI : 10.1117/12.929600

URL : https://hal.archives-ouvertes.fr/hal-01142313

R. Boitard, D. Thoreau, R. Cozot, and K. Bouatouch, Impact of temporal coherence-based tone mapping on video compression, 21st European Signal Processing Conference, 2013.
URL : https://hal.archives-ouvertes.fr/hal-01142291

R. Boitard, D. Thoreau, R. Cozot, and K. Bouatouch, Motion-guided quantization for video tone mapping, 2014 IEEE International Conference on Multimedia and Expo (ICME), 2014.
DOI : 10.1109/ICME.2014.6890191

URL : https://hal.archives-ouvertes.fr/hal-01141849

R. Boitard, R. Cozot, D. Thoreau, and K. Bouatouch, Zonal brightness coherency for video tone mapping, Signal Processing: Image Communication, 2013.
DOI : 10.1016/j.image.2013.10.001

URL : https://hal.archives-ouvertes.fr/hal-01141841

A. Gruson, M. Ribardière, and R. Cozot, Eye-Centered Color Adaptation in Global Illumination, Computer Graphics Forum, vol. 32, pp. 111-120, 2013.
DOI : 10.1111/cgf.12218

URL : https://hal.archives-ouvertes.fr/hal-01141842

J. T. Kajiya, The rendering equation, ACM SIGGRAPH Computer Graphics, vol.20, issue.4, pp.143-150, 1986.
DOI : 10.1145/15886.15902

M. D. Fairchild, Color Appearance Models, 2005.

E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging : Acquisition, Display, and Image-Based Lighting (The Morgan Kaufmann Series in Computer Graphics), 2005.

F. Banterle, A. Artusi, K. Debattista, and A. Chalmers, Advanced High Dynamic Range Imaging: Theory and Practice, 2011.
DOI : 10.1201/b11373

S. N. Pattanaik, J. Tumblin, H. Yee, and D. P. Greenberg, Time-dependent visual adaptation for fast realistic image display, Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '00, pp. 47-54, 2000.

E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, Photographic tone reproduction for digital images, ACM Trans. Graph, vol.21, issue.3, pp.267-276, 2002.

R. Mantiuk, K. Myszkowski, and H. Seidel, A perceptual framework for contrast processing of high dynamic range images, ACM Transactions on Applied Perception, vol.3, issue.3, pp.286-308, 2006.
DOI : 10.1145/1166087.1166095

F. Drago, K. Myszkowski, T. Annen, and N. Chiba, Adaptive Logarithmic Mapping For Displaying High Contrast Scenes, Computer Graphics Forum, vol. 22, issue 3, pp. 419-426, 2003.

F. Durand and J. Dorsey, Fast bilateral filtering for the display of high-dynamic-range images, ACM Trans. Graph, vol.21, issue.3, pp.257-266, 2002.

R. Fattal, D. Lischinski, and M. Werman, Gradient domain high dynamic range compression, ACM Trans. Graph, vol.21, issue.3, pp.249-256, 2002.
DOI : 10.1145/566570.566573

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.164.867

J. J. McCann, Do humans discount the illuminant?, Human Vision and Electronic Imaging X, 2005.
DOI : 10.1117/12.594383

J. von Kries, Chromatic adaptation, pp. 109-119, 1970.

D. H. Brainard and B. A. Wandell, Asymmetric color matching: how color appearance depends on the illuminant, Journal of the Optical Society of America A, vol. 9, pp. 1433-1448, 1992.

E. J. Chichilnisky and B. A. Wandell, Photoreceptor sensitivity changes explain color appearance shifts induced by large uniform backgrounds in dichoptic matching, Vision Research, vol. 35, issue 2, pp. 239-254, 1995.
DOI : 10.1016/0042-6989(94)00122-3

Y. Nayatani, K. Takahama, and H. Sobagaki, Formulation of a Nonlinear Model of Chromatic Adaptation, Color Research & Application, vol. 6, issue 3, pp. 161-171, 1981.
DOI : 10.1002/col.5080060308

S. L. Guth, Model for color vision and light adaptation, Journal of the Optical Society of America A, vol.8, issue.6, pp.976-993, 1991.
DOI : 10.1364/JOSAA.8.000976

E. J. Breneman, Corresponding chromaticities for different states of adaptation to complex visual fields, Journal of the Optical Society of America A, vol. 4, issue 6, pp. 1115-1129, 1987.
DOI : 10.1364/JOSAA.4.001115

M. D. Fairchild, A model of incomplete chromatic adaptation, the 22nd Session of the CIE, pp.33-34, 1991.

M. D. Fairchild, Formulation and testing of an incomplete-chromatic-adaptation model, Color Research & Application, vol. 16, issue 4, pp. 243-250, 1991.
DOI : 10.1002/col.5080160406

M. Joly, Introduction à l'analyse de l'image, 2009.

J. T. Smith, Remarks on Rural Scenery: With Twenty Etchings of Cottages, from Nature, Nathaniel Smith, 1797.

A. Laszlo, Every Frame a Rembrandt : Art and Practice of Cinematography, 2000.

V. Wissler, Illuminated Pixels : The Why, What, and How of Digital Lighting, Cengage Learning, 2012.

C. B. DeMille, The Autobiography of Cecil B. DeMille, 1959.

A. Adams, The Negative : Exposure and Development, 1949.

A. Adams, Artificial Light Photography, 1956.

T. M. Sipos, Horror Film Aesthetics: Creating the Visual Language of Fear, 2010.

J. W. von Goethe, Theory of Colours, 2006.

M.-E. Chevreul, The Principles of Harmony and Contrast of Colours, and Their Applications to the Arts, 1855.

J. Itten, The Art of Color, 1960.

D. Cohen-Or, O. Sorkine, R. Gal, T. Leyvand, and Y.-Q. Xu, Color harmonization, ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH), vol. 25, pp. 624-630, 2006.
DOI : 10.1145/1179352.1141933

N. Sawant and N. J. Mitra, Color Harmonization for Videos, 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 576-582, 2008.
DOI : 10.1109/ICVGIP.2008.17

X. Huo and J. Tan, An Improved Method for Color Harmonization, 2009 2nd International Congress on Image and Signal Processing, 2009.
DOI : 10.1109/CISP.2009.5304408

L. Gruber, D. Kalkofen, and D. Schmalstieg, Color harmonization for Augmented Reality, 2010 IEEE International Symposium on Mixed and Augmented Reality, pp.227-228, 2010.
DOI : 10.1109/ISMAR.2010.5643580

A. Bosca, D. Bonino, M. Comerio, S. Grega, and F. Corno, A reusable 3D visualization component for the semantic web, Proceedings of the twelfth international conference on 3D web technology , Web3D '07, pp.89-96, 2007.
DOI : 10.1145/1229390.1229405

L. Sommaruga and N. Catenazzi, Curriculum visualization in 3D, Proceedings of the twelfth international conference on 3D web technology , Web3D '07, pp.177-180, 2007.
DOI : 10.1145/1229390.1229423

H. Halpin, D. Zielinski, R. Brady, and G. Kelly, Exploring Semantic Social Networks Using Virtual Reality, Proceedings of the 7th International Conference on The Semantic Web, p. 614, 2008.

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.415.1952

B. Bos, H. W. Lie, C. Lilley, and I. Jacobs, Cascading style sheets, level 2 CSS2 specification, 1998.

G. Klyne, J. J. Carroll, and B. McBride, Resource Description Framework (RDF): Concepts and Abstract Syntax, 2004.

A. R. Smith, Plants, fractals, and formal languages, Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '84, pp. 1-10, 1984.

M. Kay, XSL Transformations (XSLT) Version 2.0, 2007.

C. Leung and A. Salga, Enabling WebGL, Proceedings of the 19th international conference on World wide web, WWW '10, pp.1369-1370, 2010.
DOI : 10.1145/1772690.1772933

R. Arnaud and M. C. Barnes, COLLADA: Sailing the Gulf of 3D Digital Content Creation, 2006.
DOI : 10.1201/b10630

J. Hartman and J. Wernecke, The VRML 2.0 Handbook : The Official Guide to Constructing Virtual Worlds, 1996.

Y. I. H. Parish and P. Müller, Procedural modeling of cities, Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '01, pp. 301-308, 2001.

G. Whelan, G. Kelly, and H. Mccabe, Roll your own city, Proceedings of the 3rd international conference on Digital Interactive Media in Entertainment and Arts, DIMEA '08, pp.534-535, 2008.
DOI : 10.1145/1413634.1413742

W. Jarosz, D. Nowrouzezahrai, R. Thomas, P.-P. Sloan, and M. Zwicker, Progressive photon beams, ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH Asia), vol. 30, issue 6, 2011.

C. Knaus and M. Zwicker, Progressive photon mapping: A probabilistic approach, ACM Transactions on Graphics, vol. 30, issue 3, pp. 25:1-25:13, 2011.
DOI : 10.1145/1966394.1966404

T. Hachisuka, S. Ogaki, and H. W. Jensen, Progressive photon mapping, ACM Trans. Graph., vol. 27, issue 5, pp. 130:1-130:8, 2008.

H. W. Jensen and P. H. Christensen, Efficient simulation of light transport in scenes with participating media using photon maps, Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '98, pp. 311-320, 1998.

T. Hachisuka and H. W. Jensen, Robust adaptive photon tracing using photon path visibility, ACM Transactions on Graphics, vol. 30, issue 5, article 114, 2011.
DOI : 10.1145/2019627.2019633

W. Jarosz, M. Zwicker, and H. W. Jensen, The Beam Radiance Estimate for Volumetric Photon Mapping, Computer Graphics Forum (Proceedings of Eurographics 2008), pp. 557-566, 2008.
DOI : 10.1111/j.1467-8659.2008.01153.x

V. Havran, J. Bittner, and H.-P. Seidel, Ray maps for global illumination, ACM SIGGRAPH 2004 Sketches, SIGGRAPH '04, p. 77, 2004.

W. Jakob, Mitsuba renderer, 2010.

T. Luft, C. Colditz, and O. Deussen, Image enhancement by unsharp masking the depth buffer, ACM Transactions on Graphics, vol.25, issue.3, pp.1206-1213, 2006.
DOI : 10.1145/1141911.1142016

M. B. Hullin, J. Hanika, and W. Heidrich, Polynomial Optics: A Construction Kit for Efficient Ray-Tracing of Lens Systems, Computer Graphics Forum, vol. 31, 2012.
DOI : 10.1111/j.1467-8659.2012.03132.x

M. Potmesil and I. Chakravarty, A lens and aperture camera model for synthetic image generation, ACM SIGGRAPH Computer Graphics, vol.15, issue.3, pp.297-305, 1981.
DOI : 10.1145/965161.806818

L. Itti, Quantifying the contribution of low-level saliency to human eye movements in dynamic scenes, Visual Cognition, vol. 12, issue 6, pp. 1093-1123, 2005.

K. Cater, A. Chalmers, and G. Ward, Detail to attention : Exploiting visual tasks for selective rendering, Proceedings of the 14th Eurographics Workshop on Rendering, EGRW '03, p.270, 2003.

L. Itti, C. Koch, and E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, issue 11, pp. 1254-1259, 1998.

A. M. Treisman and G. Gelade, A feature-integration theory of attention, Cognitive Psychology, vol.12, issue.1, pp.97-136, 1980.
DOI : 10.1016/0010-0285(80)90005-5

S. Lee, G. Kim, and S. Choi, Real-time tracking of visually attended objects in virtual environments and its application to LOD, IEEE Transactions on Visualization and Computer Graphics, vol. 15, issue 1, pp. 6-19, 2009.

V. Navalpakkam and L. Itti, Modeling the influence of task on attention, Vision Research, vol.45, issue.2, pp.205-231, 2005.
DOI : 10.1016/j.visres.2004.07.042

P. Longhurst, K. Debattista, and A. Chalmers, A GPU based saliency map for high-fidelity selective rendering, Proceedings of the 4th international conference on Computer graphics, virtual reality, visualisation and interaction in Africa , Afrigaph '06, pp.21-29, 2006.
DOI : 10.1145/1108590.1108595

V. Sundstedt, E. Stavrakis, M. Wimmer, and E. Reinhard, A psychophysical study of fixation behavior in a computer game, Proceedings of the 5th symposium on Applied perception in graphics and visualization, APGV '08, pp.43-50, 2008.
DOI : 10.1145/1394281.1394288

URL : https://hal.archives-ouvertes.fr/inria-00606816

H. Pfister, M. Zwicker, J. Van-baar, and M. Gross, Surfels, Proceedings of the 27th annual conference on Computer graphics and interactive techniques , SIGGRAPH '00, pp.335-342, 2000.
DOI : 10.1145/344779.344936

R. Marroquim and A. Maximo, Introduction to GPU Programming with GLSL, 2009 Tutorials of the XXII Brazilian Symposium on Computer Graphics and Image Processing, pp.3-16, 2009.
DOI : 10.1109/SIBGRAPI-Tutorials.2009.9

B. A. Barsky, Vision-realistic rendering, Proceedings of the 1st Symposium on Applied perception in graphics and visualization , APGV '04, pp.73-81, 2004.
DOI : 10.1145/1012551.1012564

P. Rokita, Generating depth of-field effects in virtual reality applications, IEEE Computer Graphics and Applications, vol.16, issue.2, pp.18-21, 1996.
DOI : 10.1109/38.486676

J. Demers, Depth of Field: A Survey of Techniques, in GPU Gems, 2004.

A. Kenny, H. Koesling, and D. Delaney, A preliminary investigation into eye gaze data in a first person shooter game, Proceedings of European Conference on Modelling and Simulation, 2005.

N. Tatarchuk, G. Riguer, and J. Isidoro, Real-time depth of field simulation, in ShaderX2: Shader Programming Tips and Tricks with DirectX 9, 2003.

M. D. Tocci, C. Kiser, N. Tocci, and P. Sen, A versatile HDR video production system, ACM Trans. Graph., vol. 30, issue 4, article 41, 2011.

J. Kronander, S. Gustavson, G. Bonnet, and J. Unger, Unified HDR reconstruction from raw CFA data, IEEE International Conference on Computational Photography (ICCP), pp.1-9, 2013.
DOI : 10.1109/ICCPhot.2013.6528315

E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, Photographic tone reproduction for digital images, ACM Trans. Graph, vol.21, issue.3, pp.267-276, 2002.

Z. Farbman and D. Lischinski, Tonal stabilization of video, ACM Trans. Graph., vol. 30, issue 4, pp. 89:1-89:10, 2011.

R. Mantiuk, S. Daly, and L. Kerofsky, Display adaptive tone mapping, ACM Transactions on Graphics, vol. 27, issue 3, 2008.

S. Ramsey, I. Johnson, and C. Hansen, Adaptive temporal tone mapping, Computer Graphics and Imaging, pp. 3-7, 2004.

S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski, High dynamic range video, ACM Transactions on Graphics, vol. 22, issue 3, pp. 319-325, 2003.
DOI : 10.1145/882262.882270

C. Lee and C. Kim, Gradient Domain Tone Mapping of High Dynamic Range Videos, 2007 IEEE International Conference on Image Processing, p.461, 2007.
DOI : 10.1109/ICIP.2007.4379346

B. Guthier, S. Kopf, M. Eble, and W. Effelsberg, Flicker reduction in tone mapped high dynamic range video, Color Imaging XVI: Displaying, Processing, Hardcopy, and Applications, 2011.
DOI : 10.1117/12.872449

R. Boitard, K. Bouatouch, R. Cozot, D. Thoreau, and A. Gruson, Temporal coherency for video tone mapping, Applications of Digital Image Processing XXXV, 2012.
DOI : 10.1117/12.929600

URL : https://hal.archives-ouvertes.fr/hal-01142313

Y. Li, L. Sharan, and E. H. Adelson, Compressing and companding high dynamic range images with subband architectures, ACM Transactions on Graphics, vol. 24, issue 3, pp. 836-844, 2005.
DOI : 10.1145/1073204.1073271

A. Trémeau, S. Tominaga, and K. N. Plataniotis, Color in image and video processing: Most recent trends and future research directions, EURASIP Journal on Image and Video Processing, 2008.

K.-J. Yoon, Y. J. Choi, and I. S. Kweon, Dichromatic-based color constancy using dichromatic slope and dichromatic line space, IEEE International Conference on Image Processing 2005, pp. 960-963, 2005.
DOI : 10.1109/ICIP.2005.1530553

E. H. Land and J. J. McCann, Lightness and Retinex Theory, Journal of the Optical Society of America, vol. 61, issue 1, pp. 1-11, 1971.
DOI : 10.1364/JOSA.61.000001

J.-M. Morel, A. B. Petro, and C. Sbert, Fast implementation of color constancy algorithms, Color Imaging XIV: Displaying, Processing, Hardcopy, and Applications, p. 724106, 2009.
DOI : 10.1117/12.805474

J.-M. Morel, A. B. Petro, and C. Sbert, A PDE Formalization of Retinex Theory, IEEE Transactions on Image Processing, vol. 19, issue 11, pp. 2825-2837, 2010.
DOI : 10.1109/TIP.2010.2049239

E. H. Land, The Retinex Theory of Color Vision, Scientific American, vol.237, issue.6, pp.108-120, 1977.
DOI : 10.1038/scientificamerican1277-108

J. van de Weijer, C. Schmid, and J. Verbeek, Using high-level visual information for color constancy, IEEE International Conference on Computer Vision (ICCV), 2007.
URL : https://hal.archives-ouvertes.fr/inria-00321125

G. D. Finlayson and S. D. Hordley, Gamut Constrained Illuminant Estimation, International Journal of Computer Vision, vol. 67, issue 1, pp. 93-109, 2006.
DOI : 10.1007/s11263-006-4100-z

P. V. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp, Bayesian color constancy revisited, 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp.1-8, 2008.
DOI : 10.1109/CVPR.2008.4587765

V. Cardei, B. Funt, and K. Barnard, Estimating the scene illumination chromaticity by using a neural network, Journal of the Optical Society of America A, vol.19, issue.12, 2002.
DOI : 10.1364/JOSAA.19.002374

S. D. Hordley, Scene illuminant estimation: Past, present, and future, Color Research & Application, vol. 31, issue 4, pp. 303-314, 2006.
DOI : 10.1002/col.20226

G. D. Finlayson and E. Trezzi, Shades of Gray and Colour Constancy, Color Imaging Conference, pp. 37-41, 2004.

A. Gijsenij, T. Gevers, and J. van de Weijer, Generalized Gamut Mapping using Image Derivative Structures for Color Constancy, International Journal of Computer Vision, vol. 86, pp. 127-139, 2010.
DOI : 10.1007/s11263-008-0171-3

A. I. Ruppertsberg and M. Bloj, Reflecting on a room of one reflectance, Journal of Vision, vol.7, issue.13, 2007.
DOI : 10.1167/7.13.12

T. Kunkel and E. Reinhard, A neurophysiology-inspired steady-state color appearance model, Journal of the Optical Society of America A, vol.26, issue.4, pp.776-782, 2009.
DOI : 10.1364/JOSAA.26.000776

N. Moroney, M. D. Fairchild, R. W. G. Hunt, C. Li, M. R. Luo, et al., The CIECAM02 color appearance model, IS&T/SID Color Imaging Conference, pp. 23-27, 2002.

J. Kuang, G. M. Johnson, and M. D. Fairchild, iCAM06: A refined image appearance model for HDR image rendering, Journal of Visual Communication and Image Representation, 2007.

E. Reinhard, T. Pouli, T. Kunkel, B. Long, A. Ballestad, and G. Damberg, Calibrated image appearance reproduction, ACM Trans. Graph., vol. 31, issue 6, article 201, 2012.

G. Ward and E. Eydelberg-Vileshin, Picture perfect RGB rendering using spectral prefiltering and sharp color primaries, EGRW '02: Proceedings of the 13th Eurographics Workshop on Rendering, pp. 117-124, 2002.

L. Neumann, F. Castro, A. Neumann, and M. Sbert, Color appearance in multispectral radiosity, in G. Renner and L. Szirmay-Kalos, editors: Proceedings of the 2nd Hungarian Computer Graphics and Geometry Conference, pp. 183-194, 2003.

A. Wilkie and A. Weidlich, A robust illumination estimate for chromatic adaptation in rendered images, Eurographics Symposium on Rendering, 2009.

P. Machado and A. Cardoso, Computing Aesthetics, Proceedings of the Brazilian Symposium on Artificial Intelligence, SBIA'98, pp.219-229, 1998.
DOI : 10.1007/10692710_23

F. Hoenig, Defining computational aesthetics, Proceedings of the First Eurographics Conference on Computational Aesthetics in Graphics, Visualization and Imaging, Computational Aesthetics '05, pp. 13-18, 2005.

R. Datta, D. Joshi, J. Li, and J. Z. Wang, Studying Aesthetics in Photographic Images Using a Computational Approach, Proceedings of the 9th European Conference on Computer Vision -Volume Part III, ECCV'06, pp.288-301, 2006.
DOI : 10.1007/11744078_23

V. Rivotti, J. Proença, and J. Sousa, Composition principles for quality depiction and aesthetics, Proceedings of the Third Eurographics Conference on Computational Aesthetics in Graphics, Visualization and Imaging, Computational Aesthetics '07, pp. 37-44, 2007.

S. Dhar, V. Ordonez, and T. L. Berg, High level describable attributes for predicting aesthetics and interestingness, CVPR 2011, pp.1657-1664, 2011.
DOI : 10.1109/CVPR.2011.5995467

S. S. Khan and D. Vogel, Evaluating visual aesthetics in photographic portraiture, Proceedings of the Eighth Annual Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging, CAe '12, Eurographics Association, pp. 55-62, 2012.

J. Sicre, Classifying images by light aesthetics, 2013.
URL : https://hal.archives-ouvertes.fr/dumas-00854874

E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, Color transfer between images, IEEE Computer Graphics and Applications, vol.21, issue.4, pp.34-41, 2001.
DOI : 10.1109/38.946629

B. Wang, Y. Yu, and Y. Xu, Example-based image color and tone style enhancement, ACM Trans. Graph., vol. 30, issue 4, pp. 64:1-64:12, 2011.

A. O. Akyüz, K. Hadimli, M. Aydinlilar, and C. Bloch, Style-based tone mapping for HDR images, SIGGRAPH Asia 2013 Technical Briefs, SA '13, 2013.

N. Bonneel, K. Sunkavalli, S. Paris, and H. Pfister, Example-based video color grading, ACM Transactions on Graphics, vol. 32, issue 4, pp. 39:1-39:12, 2013.
DOI : 10.1145/2461912.2461939

R. Shacked and D. Lischinski, Automatic Lighting Design using a Perceptual Quality Metric, Computer Graphics Forum, vol.20, issue.3, 2001.
DOI : 10.1111/1467-8659.00514

V. Leon, Global illumination and image appearance, 2013.
URL : https://hal.archives-ouvertes.fr/dumas-00854976

List of figures and tables:

Rendering parameters: (right) image obtained with the default parameters; (left) image obtained after changing the parameters of the luminance range reduction operator, p. 30

The room and the film posters (the directors, the films), p. 43

The cinema room: the film posters (at the bottom of the screen), one poster per film, p. 44

The world (phenotype) built automatically from the film database, p. 44

RMSE curve (between the reference image and the different methods) as a function of time for the Breakfast hall scene, p. 53

The Breakfast hall scene: (top left) reference image; (right column) images obtained with the other methods after 10 h; close-ups of the images obtained with the different methods after 30 min and 10 h, and difference images with respect to the reference image, p. 54

The Kitchen scene: (top left) image obtained with PPT_metro (10 h); (right column) images obtained with the other methods after 10 h; close-ups of the images obtained with the different methods after 30 min and 10 h, p. 55

Depth-of-field blur: (left) depth blur without rotation of the sampling kernel; (middle) depth blur with rotation of the sampling kernel; (right) final image with depth-of-field blur, p. 64

Flickering in the sequence: example of three consecutive frames, the middle frame is brighter, p. 67

Alteration of contrast coherency: (top) HDR frames in false color (blue: low luminance, red: high luminance); (bottom) the corresponding LDR frames. The change in illumination conditions yields darker HDR frames, but this change is not preserved in the resulting LDR frames, p. 68

Alteration of object coherency: (top) HDR frames in false color (blue: low luminance, red: high luminance); (bottom) the corresponding LDR frames. The luminance of the lower part of the bridge varies in the LDR sequence although it is constant in the HDR sequence, p. 69

The Under Bridge HDR sequence: (left) LDR frames obtained with the tone mapping operator of Li [117]; (right) LDR frames with ZBC: temporal brightness coherency with the HDR sequence is maintained and contrast is preserved, p. 74

The Under Bridge HDR sequence: (left) LDR frames obtained with the tone mapping operator [109]; (right) LDR frames with ZBC: temporal brightness coherency is preserved despite the change in illumination conditions on the main vehicle, p. 75

Chromaticity diagram in the color space: the colored circles (red, green, blue) correspond to the adaptation color for the three camera positions and orientations

The figure on the left corresponds to the other method; the figure on the right shows the results of our method, p. 86

Rendering parameters: each method uses the same rendering time (last row); volume photons = number of photons emitted per pass in APA and PPT, p. 52