Conference papers

How to Compare TTS Systems: A New Subjective Evaluation Methodology Focused on Differences

Abstract : Subjective evaluation is a crucial issue in the speech processing community, especially in the speech synthesis field, regardless of the system used. When trying to assess the effectiveness of a proposed method, researchers usually conduct subjective evaluations on a small set of samples from the same domain, chosen randomly from a baseline system and from the proposed one. With random selection, samples with almost no perceptible difference are statistically likely to be evaluated; the global measure is smoothed, which may lead to judging the improvement as not significant. To address this methodological flaw, we propose to compare speech synthesis systems on thousands of generated samples from various domains and to focus subjective evaluations on the most relevant ones by computing a normalized alignment cost between sample pairs. This process has been successfully applied both in the HTS statistical framework and in the corpus-based approach. We conducted two perceptual experiments, generating more than 27,000 samples for each system under comparison. A comparison between tests involving the most different samples and tests involving randomly chosen samples clearly shows that the proposed approach […]
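The pair-selection idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each synthesized sample is represented as a sequence of per-frame feature vectors (e.g. MFCCs) and uses a dynamic-time-warping cost normalized by alignment path length as the "normalized alignment cost"; the exact features and cost used by the authors may differ.

```python
# Hypothetical sketch: rank pairs of synthesized samples (baseline vs.
# proposed system) by a normalized alignment cost, so subjective tests
# can focus on the most different pairs rather than random ones.

def dtw_cost(a, b):
    """DTW cost between two sequences of feature vectors, normalized by
    the number of alignment steps so long and short utterances compare
    fairly. Frames are plain lists of floats; distance is Euclidean."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j]: best accumulated cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    # steps[i][j]: number of alignment steps on that best path
    steps = [[0] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            # choose the cheapest predecessor (insertion, deletion, match)
            prev = min((cost[i - 1][j], steps[i - 1][j]),
                       (cost[i][j - 1], steps[i][j - 1]),
                       (cost[i - 1][j - 1], steps[i - 1][j - 1]))
            cost[i][j] = prev[0] + d
            steps[i][j] = prev[1] + 1
    return cost[n][m] / steps[n][m]

def most_different_pairs(pairs, k):
    """Rank (baseline_features, proposed_features) pairs by normalized
    cost, highest first, and keep the top k for the listening test."""
    ranked = sorted(pairs, key=lambda p: dtw_cost(p[0], p[1]), reverse=True)
    return ranked[:k]
```

Ranking thousands of generated pairs this way and keeping only the top-scoring ones is what concentrates the listening test on samples where the two systems actually diverge.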
Contributor : Damien Lolive
Submitted on : Monday, September 14, 2015 - 9:06:16 PM
Last modification on : Tuesday, October 19, 2021 - 11:58:58 PM


  • HAL Id : hal-01199082, version 1


Jonathan Chevelu, Damien Lolive, Sébastien Le Maguer, David Guennec. How to Compare TTS Systems: A New Subjective Evaluation Methodology Focused on Differences. Interspeech, Sep 2015, Dresden, Germany. ⟨hal-01199082⟩


