A Comparison Between Deep Neural Nets and Kernel Acoustic Models for Speech Recognition

Abstract: We study large-scale kernel methods for acoustic modeling and compare them to DNNs on performance metrics related to both acoustic modeling and recognition. Measured by perplexity and frame-level classification accuracy, kernel-based acoustic models are as effective as their DNN counterparts. On token error rates, however, DNN models can be significantly better. We find that this may be attributed to the DNN's unique strength in reducing both the perplexity and the entropy of the predicted posterior probabilities. Motivated by these findings, we propose a new model-selection technique, entropy regularized perplexity, which noticeably improves the recognition performance of both types of models and narrows the gap between them. While demonstrated on Broadcast News, the technique may also apply to other tasks.
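The abstract names entropy regularized perplexity as a model-selection criterion that accounts for both the perplexity and the entropy of the predicted posteriors. The sketch below is a hypothetical illustration, not the paper's exact formula: it assumes the score combines the average cross-entropy of the true labels (log-perplexity) with the average Shannon entropy of the predicted distributions, weighted by an assumed trade-off parameter `lam`.

```python
import numpy as np

def entropy_regularized_perplexity(posteriors, labels, lam=1.0):
    """Hypothetical model-selection score (lower is better).

    posteriors: (num_frames, num_classes) predicted posterior probabilities
    labels:     (num_frames,) integer frame labels
    lam:        assumed weight trading off perplexity against entropy
    """
    eps = 1e-12
    posteriors = np.clip(posteriors, eps, 1.0)
    # Average cross-entropy of the true labels, i.e. log-perplexity.
    log_ppl = -np.mean(np.log(posteriors[np.arange(len(labels)), labels]))
    # Average Shannon entropy of the predicted posterior distributions.
    ent = -np.mean(np.sum(posteriors * np.log(posteriors), axis=1))
    return log_ppl + lam * ent

# Usage: among candidate models/checkpoints, select the one with the
# lowest entropy-regularized score on held-out frames.
```

Under this assumed form, a model that is both accurate and confident (low entropy) scores better than one that is merely accurate, which matches the abstract's observation that DNNs reduce both perplexity and posterior entropy.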
Document type:
Conference paper
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016), Mar 2016, Shanghai, China. 〈http://www.icassp2016.org〉

Cited literature: 25 references

https://hal.inria.fr/hal-01329772
Contributor: Aurélien Bellet
Submitted on: Friday, June 10, 2016 - 17:20:53
Last modified on: Thursday, January 11, 2018 - 06:27:32

File

icassp16.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01329772, version 1
  • ARXIV : 1603.05800

Citation

Zhiyun Lu, Dong Guo, Alireza Bagheri Garakani, Kuan Liu, Avner May, et al. A Comparison Between Deep Neural Nets and Kernel Acoustic Models for Speech Recognition. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016), Mar 2016, Shanghai, China. 〈http://www.icassp2016.org〉. 〈hal-01329772〉

Metrics

Record views: 154
File downloads: 156