
HEAR: An hybrid episodic-abstract speech recognizer

Sébastien Demange 1 Dirk van Compernolle 
1 PAROLE - Analysis, perception and recognition of speech
INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications
Abstract : This paper presents a new architecture for automatic continuous speech recognition called HEAR - Hybrid Episodic-Abstract speech Recognizer. HEAR relies on both parametric speech models (HMMs) and episodic memory. We propose an evaluation on the Wall Street Journal corpus, a standard continuous speech recognition task, and compare the results with a state-of-the-art HMM baseline. HEAR is shown to be a viable and competitive architecture. While HMMs have been studied and optimized for decades, their performance appears to be converging to a limit that remains below human performance. In contrast, episodic memory modeling for speech recognition as applied in HEAR offers the flexibility to enrich the recognizer with information that HMMs lack. This opportunity, together with future work, is discussed.
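The abstract contrasts parametric HMMs with episodic memory, i.e. matching incoming speech directly against stored exemplars. As an illustration only (the paper's actual matching procedure is not described in this record, and the function names below are hypothetical), a minimal episodic lookup via dynamic time warping over feature sequences might look like:

```python
import numpy as np

def dtw_distance(query, episode):
    """Dynamic-time-warping distance between two feature sequences
    (frames x dims): classic quadratic DP with Euclidean local cost."""
    n, m = len(query), len(episode)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(query[i - 1] - episode[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a query frame
                                 cost[i, j - 1],      # skip an episode frame
                                 cost[i - 1, j - 1])  # align the two frames
    return cost[n, m]

def episodic_label(query, memory):
    """Return the label of the stored episode closest to the query."""
    return min(memory, key=lambda item: dtw_distance(query, item[1]))[0]

# Toy episodic memory: (label, feature sequence) pairs.
rng = np.random.default_rng(0)
memory = [("yes", rng.normal(0.0, 1.0, (8, 3))),
          ("no",  rng.normal(5.0, 1.0, (6, 3)))]

# A noisy copy of the stored "yes" episode should match "yes".
query = memory[0][1] + rng.normal(0.0, 0.1, (8, 3))
print(episodic_label(query, memory))  # -> yes
```

In a hybrid system of the kind the abstract describes, such an exemplar score would be combined with HMM likelihoods rather than used alone; the combination scheme is a design choice of HEAR not reproduced here.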
Document type :
Conference papers
Contributor : Sébastien Demange
Submitted on : Wednesday, April 6, 2011 - 5:53:08 PM
Last modification on : Saturday, June 25, 2022 - 7:44:48 PM


  • HAL Id : inria-00583851, version 1



Sébastien Demange, Dirk van Compernolle. HEAR: An hybrid episodic-abstract speech recognizer. 10th Annual Conference of the International Speech Communication Association - Interspeech 2009, Sep 2009, Brighton, United Kingdom. pp. 3067-3070. ⟨inria-00583851⟩
