Similarity encoding for learning with dirty categorical variables

Abstract: For statistical learning, categorical variables in a table are usually treated as discrete entities and encoded separately into feature vectors, e.g., with one-hot encoding. "Dirty", non-curated data gives rise to categorical variables with very high cardinality but also redundancy: several categories reflect the same entity. In databases, this issue is typically solved with a deduplication step. We show that a simple approach that exposes the redundancy to the learning algorithm brings significant gains. We study a generalization of one-hot encoding, similarity encoding, that builds feature vectors from similarities across categories. We perform a thorough empirical validation on non-curated tables, a problem seldom studied in machine learning. Results on seven real-world datasets show that similarity encoding brings significant gains in prediction in comparison with known encoding methods for categories or strings, notably one-hot encoding and bag of character n-grams. We draw practical recommendations for encoding dirty categories: 3-gram similarity appears to be a good choice to capture morphological resemblance. For very high cardinality, dimensionality reduction significantly reduces the computational cost with little loss in performance: random projections or choosing a subset of prototype categories still outperform classic encoding approaches.
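The idea sketched in the abstract can be illustrated with a minimal example: each category is encoded as a vector of its string similarities to a set of prototype categories, here using Jaccard similarity over character 3-grams. This is an illustrative sketch, not the authors' implementation; the prototype list and category strings below are made up.

```python
def ngrams(s, n=3):
    """Set of character n-grams of a string (n=3 by default)."""
    return {s[i:i + n] for i in range(max(len(s) - n + 1, 1))}

def similarity_encode(category, prototypes, n=3):
    """Encode one category as its n-gram Jaccard similarity to each prototype.

    One-hot encoding is the special case where the similarity is the
    exact-match indicator and the prototypes are all known categories.
    """
    grams = ngrams(category, n)
    vector = []
    for proto in prototypes:
        proto_grams = ngrams(proto, n)
        union = grams | proto_grams
        vector.append(len(grams & proto_grams) / len(union) if union else 0.0)
    return vector

# Hypothetical dirty job-title categories: a misspelled entry still gets
# a feature vector close to that of the clean category it reflects.
prototypes = ["police officer", "police sergeant", "firefighter"]
encoded = similarity_encode("police oficer", prototypes)
```

With exact matches the corresponding component is 1.0, while morphologically close strings (e.g. a misspelling) get a high but sub-unit similarity, so the redundancy among dirty categories is exposed to the learning algorithm rather than hidden behind distinct one-hot columns.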
Document type:
Journal article
Machine Learning, Springer Verlag, 2018, DOI: 10.1007/s10994-018-5724-2
Contributor: Patricio Cerda
Submitted on: Friday, June 1, 2018 - 16:46:03
Last modified on: Friday, June 22, 2018 - 01:20:42


Files produced by the author(s)



Patricio Cerda, Gaël Varoquaux, Balázs Kégl. Similarity encoding for learning with dirty categorical variables. Machine Learning, Springer Verlag, 2018, DOI: 10.1007/s10994-018-5724-2. HAL: hal-01806175.


