A Comparison Between Deep Neural Nets and Kernel Acoustic Models for Speech Recognition

Abstract: We study large-scale kernel methods for acoustic modeling and compare them to DNNs on performance metrics related to both acoustic modeling and recognition. Measured by perplexity and frame-level classification accuracy, kernel-based acoustic models are as effective as their DNN counterparts. On token error rates, however, DNN models can be significantly better. We found that this gap can be attributed to the DNN's unique strength in reducing both the perplexity and the entropy of the predicted posterior probabilities. Motivated by this finding, we propose a new model-selection technique, entropy-regularized perplexity. It noticeably improves the recognition performance of both types of models and narrows the gap between them. While demonstrated on Broadcast News, the technique should also be applicable to other tasks.
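The abstract describes the proposed criterion only at a high level: a perplexity measure regularized by the entropy of the predicted posteriors, used to select models. The exact weighting is not given here, so the sketch below assumes a simple form (log-perplexity plus a weighted average posterior entropy, exponentiated); the function name and the hyperparameter `lam` are hypothetical, not from the paper.

```python
import numpy as np

def entropy_regularized_perplexity(probs, labels, lam=1.0):
    """Hypothetical sketch of an entropy-regularized perplexity score.

    probs:  (N, C) array of predicted posterior probabilities per frame
    labels: (N,) array of true class indices per frame
    lam:    assumed regularization weight trading off the two terms
    Lower scores are better.
    """
    eps = 1e-12  # avoid log(0)
    # Average cross-entropy of the true labels, i.e. log-perplexity.
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))
    # Average entropy of the predicted posterior distributions.
    ent = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Perplexity penalized by prediction entropy: confident, accurate
    # models score lower than equally accurate but diffuse ones.
    return np.exp(ce + lam * ent)
```

Under this assumed form, a model that puts sharp, correct posteriors on each frame scores lower than one that is merely accurate but uncertain, which matches the abstract's observation that DNNs reduce both perplexity and posterior entropy.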

Cited literature: 25 references

https://hal.inria.fr/hal-01329772
Contributor: Aurélien Bellet
Submitted on: Friday, June 10, 2016 - 5:20:53 PM
Last modification on: Friday, March 22, 2019 - 1:34:37 AM

File

icassp16.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01329772, version 1
  • arXiv: 1603.05800

Citation

Zhiyun Lu, Dong Guo, Alireza Bagheri Garakani, Kuan Liu, Avner May, et al. A Comparison Between Deep Neural Nets and Kernel Acoustic Models for Speech Recognition. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016), Mar 2016, Shanghai, China. ⟨hal-01329772⟩
