Towards Multimodal Content Representation

Conference paper, 2002

Harry Bunt, Laurent Romary

Abstract

Multimodal interfaces, combining the use of speech, graphics, gestures, and facial expressions in input and output, promise new possibilities for dealing with information in more effective and efficient ways, supporting for instance:
  • the understanding of possibly imprecise, partial, or ambiguous multimodal input;
  • the generation of coordinated, cohesive, and coherent multimodal presentations;
  • the management of multimodal interaction (e.g., task completion, adapting the interface, error prevention) by representing and exploiting models of the user, the domain, the task, the interactive context, and the media (e.g., text, audio, video).
The present document is intended to support the discussion on multimodal content representation, its possible objectives and basic constraints, and how the definition of a generic framework for multimodal content representation may be approached. It takes into account the results of the Dagstuhl workshop, in particular those of the informal working group on multimodal meaning representation that was active during the workshop (see http://www.dfki.de/~wahlster/Dagstuhl_Multi_Modality, Working Group 4).
Files

  • BuntRomary.pdf (70 KB)
  • TC37SC4Semantic.ppt (45 KB)

Origin: files produced by the author(s)

Dates and versions

inria-00100772, version 1 (23-09-2009)

Identifiers

  • HAL Id: inria-00100772, version 1
  • arXiv: 0909.4280

Cite

Harry Bunt, Laurent Romary. Towards Multimodal Content Representation. LREC Workshop on International Standards of Terminology and Language Resources Management, May 2002, Las Palmas, Spain. 7 p. ⟨inria-00100772⟩