
A Comparative Re-Assessment of Feature Extractors for Deep Speaker Embeddings

Xuechen Liu (1), Md Sahidullah (1), Tomi Kinnunen (2)
(1) MULTISPEECH - Speech Modeling for Facilitating Oral-Based Communication, Inria Nancy - Grand Est, LORIA - NLPKD - Department of Natural Language Processing & Knowledge Discovery
Abstract : Modern automatic speaker verification relies largely on deep neural networks (DNNs) trained on mel-frequency cepstral coefficient (MFCC) features. While there are alternative feature extraction methods based on phase, prosody, and long-term temporal operations, they have not been extensively studied with DNN-based methods. We aim to fill this gap by providing an extensive re-assessment of 14 feature extractors on the VoxCeleb and SITW datasets. Our findings reveal that features equipped with techniques such as spectral centroids, the group delay function, and integrated noise suppression provide promising alternatives to MFCCs for deep speaker embedding extraction. Experimental results demonstrate up to 16.3% (VoxCeleb) and 25.1% (SITW) relative decrease in equal error rate (EER) over the baseline.
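To make the MFCC baseline front-end concrete, the sketch below shows a typical MFCC extraction pipeline for speaker verification. It is a minimal illustration assuming the librosa library and common settings (30 coefficients, 25 ms windows, 10 ms hops, cepstral mean normalization); the exact feature configuration used in the paper may differ.

    # Minimal MFCC front-end sketch (assumes librosa; settings are illustrative,
    # not necessarily those used in the paper).
    import librosa

    def extract_mfcc(wav_path, sr=16000, n_mfcc=30):
        """Return an (n_frames, n_mfcc) matrix of MFCCs for one utterance."""
        y, _ = librosa.load(wav_path, sr=sr)        # load and resample to 16 kHz
        mfcc = librosa.feature.mfcc(
            y=y, sr=sr, n_mfcc=n_mfcc,
            n_fft=400, hop_length=160)              # 25 ms window, 10 ms hop at 16 kHz
        # Per-utterance cepstral mean normalization, common before embedding training.
        mfcc -= mfcc.mean(axis=1, keepdims=True)
        return mfcc.T                               # frames along the first axis

    # Usage: feats = extract_mfcc("utterance.wav")  # feats.shape == (n_frames, 30)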
Document type : Conference papers

Cited literature : 44 references

https://hal.inria.fr/hal-02909105
Contributor : Xuechen Liu
Submitted on : Wednesday, July 29, 2020 - 9:42:26 PM
Last modification on : Tuesday, September 29, 2020 - 4:14:01 PM
Long-term archiving on : Tuesday, December 1, 2020 - 9:53:13 AM

File

xuechen_interspeech2020.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-02909105, version 1

Citation

Xuechen Liu, Md Sahidullah, Tomi Kinnunen. A Comparative Re-Assessment of Feature Extractors for Deep Speaker Embeddings. INTERSPEECH 2020, Oct 2020, Shanghai, China. ⟨hal-02909105⟩

Metrics

Record views : 68
File downloads : 192