
A Comparison Between Deep Neural Nets and Kernel Acoustic Models for Speech Recognition

Abstract: We study large-scale kernel methods for acoustic modeling and compare them to DNNs on performance metrics related to both acoustic modeling and recognition. Measured by perplexity and frame-level classification accuracy, kernel-based acoustic models are as effective as their DNN counterparts. However, in terms of token error rate, DNN models can be significantly better. We find that this may be attributed to the DNNs' unique strength in reducing both the perplexity and the entropy of the predicted posterior probabilities. Motivated by this finding, we propose a new model-selection technique, entropy regularized perplexity. It noticeably improves the recognition performance of both types of models and reduces the gap between them. While demonstrated on Broadcast News, the technique could also be applicable to other tasks.
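The abstract does not give the exact form of the entropy regularized perplexity criterion; the following is a minimal sketch of one plausible formulation, assuming the criterion combines the log-perplexity on held-out frames with a λ-weighted average entropy of the predicted posteriors (the function names and the weighting scheme are illustrative assumptions, not the paper's definition):

```python
import math

def perplexity(posteriors, labels):
    # Frame-level perplexity: exp of the average negative log-probability
    # assigned to the reference label at each frame.
    nll = -sum(math.log(p[y]) for p, y in zip(posteriors, labels)) / len(labels)
    return math.exp(nll)

def mean_entropy(posteriors):
    # Average Shannon entropy (in nats) of the predicted posterior
    # distribution, taken over frames.
    ents = [-sum(pi * math.log(pi) for pi in p if pi > 0.0) for p in posteriors]
    return sum(ents) / len(ents)

def entropy_regularized_perplexity(posteriors, labels, lam=1.0):
    # Hypothetical selection criterion: log-perplexity penalized by the
    # average posterior entropy; a lower score would be preferred.
    return math.log(perplexity(posteriors, labels)) + lam * mean_entropy(posteriors)
```

Under this reading, a model whose posteriors are both accurate and sharp (low entropy) scores better than one that is accurate but diffuse, which matches the paper's observation that DNNs reduce both perplexity and posterior entropy.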
Contributor: Aurélien Bellet
Submitted on: Friday, June 10, 2016 - 5:20:53 PM
Last modified on: Thursday, January 20, 2022 - 4:12:32 PM




  • HAL Id: hal-01329772, version 1
  • arXiv: 1603.05800


Zhiyun Lu, Dong Guo, Alireza Bagheri Garakani, Kuan Liu, Avner May, et al.. A Comparison Between Deep Neural Nets and Kernel Acoustic Models for Speech Recognition. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016), Mar 2016, Shanghai, China. ⟨hal-01329772⟩


